PCPs May Have a New Tool to Help Identify Autism in Young Children

Incorporating eye-tracking biomarkers into pediatric autism assessments may make identifying the condition easier, according to new findings published in JAMA Network Open.

Researchers created an artificial intelligence–based tool to help primary care clinicians and pediatricians spot potential cases of the neurological condition, according to Brandon Keehn, PhD, associate professor in the Department of Speech, Language, and Hearing Sciences at Purdue University in West Lafayette, Indiana, and an author of the study.

Most primary care clinicians do not receive specialized training in identifying autism, and around a third diagnose the condition with uncertainty, according to Dr. Keehn. The tool helps clinicians by combining their diagnosis and self-reported level of certainty with eye-tracking biomarkers. A clinical psychologist also assessed children, either confirming or refuting the earlier results.
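
The article does not describe how the tool weighs these inputs, so the following is purely a hypothetical sketch of how a classifier might combine a clinician’s call, their stated certainty, and biomarker scores; every weight, offset, and name here is invented for illustration and is not the study’s actual model.

```python
import math

def toy_autism_risk(clinician_dx: int, certainty: float,
                    biomarkers: list) -> float:
    """Hypothetical logistic combination of a clinician's working
    diagnosis (0 or 1), self-reported certainty (0-1), and six
    eye-tracking biomarker scores. All weights, the offset, and the
    feature scaling are invented for illustration; the article does
    not describe the study's actual model."""
    z = 1.5 * clinician_dx + 1.0 * certainty + 0.5 * sum(biomarkers)
    return 1.0 / (1.0 + math.exp(-(z - 2.0)))

# A clinician leaning toward a diagnosis with moderate certainty,
# plus six hypothetical biomarker scores:
print(toy_autism_risk(1, 0.6, [0.8, 0.2, 0.5, 0.0, 0.9, 0.4]))  # ~0.82
```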

The tool produced the same diagnosis as that from a psychologist in 90% of cases. When children were assessed using eye biomarkers alone, the diagnosis was aligned with that of a psychologist 77% of the time.

“This is the first step in demonstrating both that eye-tracking biomarkers are sensitive to autism and whether or not these biomarkers provide extra clinical information for primary care physicians to more accurately diagnose autism,” Dr. Keehn told this news organization.

The study took place between 2019 and 2022 and included 146 children between 14 and 48 months old who were treated at seven primary care practices in Indiana. Dr. Keehn and colleagues asked primary care clinicians to rate their level of certainty in their diagnosis.

During the biomarker test, toddlers watched cartoons while researchers tracked their eye movements. Six biomarkers included in the test were based on previous research linking eye movements to autism, according to Dr. Keehn.

These included whether toddlers looked more at images of people or geometric patterns and the speed and size of pupil dilation when exposed to bright light.

Most toddlers produced a positive result for autism in only one biomarker test. Dr. Keehn said this confirms that children should be tested for a variety of biomarkers because each patient’s condition manifests differently.

Dr. Keehn said his team is still a few steps away from determining how the model would work in a real clinical setting and that they are planning more research with a larger study population.

Alice Kuo, MD, a pediatrician specializing in autism at the University of California, Los Angeles (UCLA), said primary care clinicians should feel comfortable making an autism diagnosis.

“Any tool that helps them to do that can be useful, since wait times for a specialist can take years,” Dr. Kuo, also the director of the Autism Intervention Research Network on Physical Health at UCLA, said.

However, Dr. Kuo said she is concerned about the cases that were falsely identified as positive or negative.

“To be told your kid is autistic when he’s not, or to be told your kid is not when he clinically is, has huge ramifications,” she said.

The study was funded by the National Institute of Mental Health, the Riley Children’s Foundation, and the Indiana Clinical and Translational Sciences Institute. Dr. Keehn reported payments for workshops on the use of the Autism Diagnostic Observation Schedule.

A version of this article appeared on Medscape.com.

Ultraprocessed Foods May Be an Independent Risk Factor for Poor Brain Health

Consuming highly processed foods may be harmful to the aging brain, independent of other risk factors for adverse neurologic outcomes and adherence to recommended dietary patterns, new research suggests.

Observations from a large cohort of adults followed for more than 10 years suggested that eating more ultraprocessed foods (UPFs) may increase the risk for cognitive decline and stroke, while eating more unprocessed or minimally processed foods may lower the risk.

“The first key takeaway is that the type of food that we eat matters for brain health, but it’s equally important to think about how it’s made and handled when thinking about brain health,” said study investigator W. Taylor Kimberly, MD, PhD, with Massachusetts General Hospital in Boston.

“The second is that it’s not just all a bad news story because while increased consumption of ultra-processed foods is associated with a higher risk of cognitive impairment and stroke, unprocessed foods appear to be protective,” Dr. Kimberly added.

The study was published online on May 22 in Neurology.
 

Food Processing Matters

UPFs are highly manipulated, low in protein and fiber, and packed with added ingredients, including sugar, fat, and salt. Examples of UPFs are soft drinks, chips, chocolate, candy, ice cream, sweetened breakfast cereals, packaged soups, chicken nuggets, hot dogs, and fries.

Unprocessed or minimally processed foods include meats such as simple cuts of beef, pork, and chicken, and vegetables and fruits.

Research has shown associations between high UPF consumption and increased risk for metabolic and neurologic disorders.

As reported previously, in the ELSA-Brasil study, higher intake of UPFs was significantly associated with a faster rate of decline in executive and global cognitive function.

Yet, it’s unclear whether the extent of food processing contributes to the risk of adverse neurologic outcomes independent of dietary patterns.

Dr. Kimberly and colleagues examined the association of food processing levels with the risk for cognitive impairment and stroke in the long-running REGARDS study, a large prospective US cohort of Black and White adults aged 45 years and older.

Food processing levels were defined by the NOVA food classification system, which ranges from unprocessed or minimally processed foods (NOVA1) to UPFs (NOVA4). Dietary patterns were characterized based on food frequency questionnaires.

In the cognitive impairment cohort, 768 of the 14,175 adults who were free of cognitive impairment at baseline and underwent follow-up testing developed cognitive impairment.
 

Diet an Opportunity to Protect Brain Health

In multivariable Cox proportional hazards models adjusting for age, sex, high blood pressure, and other factors, a 10% increase in relative intake of UPFs was associated with a 16% higher risk for cognitive impairment (hazard ratio [HR], 1.16). Conversely, a higher intake of unprocessed or minimally processed foods correlated with a 12% lower risk for cognitive impairment (HR, 0.88).
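
As a side note on how such per-increment hazard ratios behave, here is a minimal sketch; the compounding relies on the log-linearity of a standard Cox model and uses the published point estimates purely for illustration, not as a reanalysis of the study data.

```python
import math

# Published point estimates: hazard ratio per 10-percentage-point
# increase in relative intake (cognitive impairment cohort). Under a
# log-linear Cox model, effects multiply across increments; this is
# an illustration, not a reanalysis of the study data.
hr_upf = 1.16       # ultraprocessed foods
hr_minimal = 0.88   # unprocessed or minimally processed foods

print(hr_upf ** 2)                         # ~1.35 for a 20-point increase
print(math.exp(3 * math.log(hr_minimal)))  # ~0.68 for a 30-point increase
```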

In the stroke cohort, 1108 of 20,243 adults without a history of stroke had a stroke during the follow-up.

In multivariable Cox models, greater intake of UPFs was associated with an 8% increased risk for stroke (HR, 1.08), while greater intake of unprocessed or minimally processed foods correlated with a 9% lower risk for stroke (HR, 0.91).

The effect of UPFs on stroke risk was greater among Black than among White adults (UPF-by-race interaction HR, 1.15).

The associations between UPFs and both cognitive impairment and stroke were independent of adherence to the Mediterranean diet, the Dietary Approaches to Stop Hypertension (DASH) diet, and the Mediterranean-DASH Intervention for Neurodegenerative Delay diet.

These results “highlight the possibility that we have the capacity to maintain our brain health and prevent poor brain health outcomes by focusing on unprocessed foods in the long term,” Dr. Kimberly said.

He cautioned that this was “an observational study and not an interventional study, so we can’t say with certainty that substituting ultra-processed foods with unprocessed foods will definitively improve brain health. That’s a clinical trial question that has not been done, but our results certainly are provocative.”

Consider UPFs in National Guidelines?

The coauthors of an accompanying editorial said the “robust” results from Kimberly and colleagues highlight the “significant role of food processing levels and their relationship with adverse neurologic outcomes, independent of conventional dietary patterns.”

Peipei Gao, MS, with Harvard T.H. Chan School of Public Health, and Zhendong Mei, PhD, with Harvard Medical School, both in Boston, noted that the mechanisms underlying the impact of UPFs on adverse neurologic outcomes “can be attributed not only to their nutritional profiles,” including poor nutrient composition and high glycemic load, “but also to the presence of additives including emulsifiers, colorants, sweeteners, and nitrates/nitrites, which have been associated with disruptions in the gut microbial ecosystem and inflammation.

“Understanding how food processing levels are associated with human health offers a fresh take on the saying ‘you are what you eat,’ ” the editorialists wrote.

This new study, they noted, adds to the evidence by highlighting the link between UPFs and brain health, independent of traditional dietary patterns, and “raises questions about whether considerations of UPFs should be included in dietary guidelines, as well as national and global public health policies for improving brain health.”

The editorialists called for large prospective population studies and randomized controlled trials to better understand the link between UPF consumption and brain health. “In addition, mechanistic studies are warranted to identify specific foods, detrimental processes, and additives that play a role in UPFs and their association with neurologic disorders,” they concluded.

Funding for the study was provided by the National Institute of Neurological Disorders and Stroke, the National Institute on Aging, National Institutes of Health, and Department of Health and Human Services. The authors and editorial writers had no relevant disclosures.

A version of this article appeared on Medscape.com.

Fluoride, Water, and Kids’ Brains: It’s Complicated

This transcript has been edited for clarity. 

I recently looked back at my folder full of these medical study commentaries, this weekly video series we call Impact Factor, and realized that I’ve been doing this for a long time. More than 400 articles, believe it or not. 

I’ve learned a lot in that time — about medicine, of course — but also about how people react to certain topics. If you’ve been with me this whole time, or even for just a chunk of it, you’ll know that I tend to take a measured approach to most topics. No one study is ever truly definitive, after all. But regardless of how even-keeled I may be, there are some topics that I just know in advance are going to be a bit divisive: studies about gun control; studies about vitamin D; and, of course, studies about fluoride.
 

Shall We Shake This Hornet’s Nest? 

The fluoridation of the US water system began in 1945 with the goal of reducing cavities in the population. The CDC named water fluoridation one of the 10 great public health achievements of the 20th century, along with such inarguable achievements as the recognition of tobacco as a health hazard.

But fluoridation has never been without its detractors. One problem is that the spectrum of beliefs about the potential harm of fluoridation is huge. On one end, you have science-based concerns such as the recognition that excessive fluoride intake can cause fluorosis and stain tooth enamel. I’ll note that the EPA regulates fluoride levels — there is a fair amount of naturally occurring fluoride in water tables around the world — to prevent this. And, of course, on the other end of the spectrum, you have beliefs that are essentially conspiracy theories: “They” add fluoride to the water supply to control us.

The challenge for me is that when one “side” of a scientific debate includes the crazy theories, it can be hard to discuss that whole spectrum, since there are those who will see evidence of any adverse fluoride effect as confirmation that the conspiracy theory is true. 

I can’t help this. So I’ll just say this up front: I am about to tell you about a study that shows some potential risk from fluoride exposure. I will tell you up front that there are some significant caveats to the study that call the results into question. And I will tell you up front that no one is controlling your mind, or my mind, with fluoride; they do it with social media.
 

Let’s Dive Into These Shark-Infested, Fluoridated Waters

We’re talking about the study, “Maternal Urinary Fluoride and Child Neurobehavior at Age 36 Months,” which appears in JAMA Network Open.

It’s a study of 229 mother-child pairs from the Los Angeles area. The moms had their urinary fluoride level measured once before 30 weeks of gestation. A neurobehavioral battery called the Preschool Child Behavior Checklist was administered to the children at age 36 months. 

The main thing you’ll hear about this study — in headlines, Facebook posts, and manifestos locked in drawers somewhere — is the primary result: A 0.68-mg/L increase in urinary fluoride in the mothers, about 25 percentile points, was associated with a doubling of the risk for neurobehavioral problems in their kids when they were 3 years old.

Yikes.
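
A quick arithmetic aside on that scaling, since a “doubling per 0.68 mg/L” is an odd increment to reason about: rescaling it to other increments assumes log-linearity, which is a convenience of the model rather than a finding. A minimal sketch:

```python
import math

# Reported association: roughly a doubling of the risk of clinically
# significant neurobehavioral problems per 0.68 mg/L of (dilution-
# adjusted) maternal urinary fluoride. Rescaling to other increments
# assumes log-linearity -- an illustration, not a claim from the paper.
beta_per_mg_l = math.log(2.0) / 0.68   # log-scale effect per 1 mg/L
print(math.exp(beta_per_mg_l))         # ~2.77 per 1 mg/L
print(math.exp(beta_per_mg_l * 0.34))  # ~1.41 per half the increment
```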

But this is not a randomized trial. Researchers didn’t randomly assign some women to have high fluoride intake and some women to have low fluoride intake. They knew that other factors that might lead to neurobehavioral problems could also lead to higher fluoride intake. They represent these factors in what’s known as a directed acyclic graph, as seen here, and account for them statistically using a regression equation.

[Figure: directed acyclic graph of these factors. Source: JAMA Network Open]

Not represented here are neighborhood characteristics. Los Angeles does not have uniformly fluoridated water, and neurobehavioral problems in kids are strongly linked to stressors in their environments. Fluoride level could be an innocent bystander.

[Figure: community water fluoridation in Los Angeles County. Source: Los Angeles County Department of Public Health]

I’m really just describing the classic issue of correlation versus causation here, the bane of all observational research and — let’s be honest — a bit of a crutch that allows us to disregard the results of studies we don’t like, provided the study wasn’t a randomized trial. 

But I have a deeper issue with this study than the old “failure to adjust for relevant confounders” thing, as important as that is.

The exposure of interest in this study is maternal urinary fluoride, as measured in a spot sample. It’s not often that I get to go deep on nephrology in this space, but let’s think about that for a second. Let’s assume for a moment that fluoride is toxic to the developing fetal brain, the main concern raised by the results of the study. How would that work? Presumably, mom would be ingesting fluoride from various sources (like the water supply), and that fluoride would get into her blood, and from her blood across the placenta to the baby’s blood, and into the baby’s brain.

Is Urinary Fluoride a Good Measure of Blood Fluoride?

It’s not great. Empirically, we have data that tell us that levels of urine fluoride are not all that similar to levels of serum fluoride. In 2014, a study investigated the correlation between urine and serum fluoride in a cohort of 60 schoolchildren and found a correlation coefficient of around 0.5. 

Why isn’t urine fluoride a great proxy for serum fluoride? The most obvious reason is urine concentration. Human urine osmolality can range from about 50 to 1200 mOsm/kg (a 24-fold difference) depending on hydration status. Over the course of 24 hours, for example, the amount of fluoride you put out in your urine may be fairly stable in relation to intake, but for a spot urine sample it would be wildly variable. The authors know this, of course, and so they divide the measured urine fluoride by the specific gravity of the urine to give a sort of “dilution adjusted” value. That’s what is actually used in this study. But specific gravity is, itself, an imperfect measure of how dilute the urine is.
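
For the curious, the specific-gravity correction described here is usually computed as shown below; the 1.020 reference value is a common convention and my assumption, as the study may anchor to a cohort mean instead.

```python
def sg_adjust(conc_mg_l: float, sg: float, sg_ref: float = 1.020) -> float:
    """Dilution-adjust a spot-urine analyte by specific gravity.

    Standard correction: C_adj = C * (sg_ref - 1) / (sg - 1).
    sg_ref = 1.020 is a common convention and an assumption here;
    the study may anchor to a different reference value."""
    return conc_mg_l * (sg_ref - 1.0) / (sg - 1.0)

# The same measured fluoride looks very different once hydration
# status is taken into account:
print(sg_adjust(0.8, sg=1.010))  # dilute urine -> 1.60 mg/L adjusted
print(sg_adjust(0.8, sg=1.030))  # concentrated -> ~0.53 mg/L adjusted
```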

This is something that comes up a lot in urinary biomarker research and it’s not that hard to get around. The best thing would be to just measure blood levels of fluoride. The second best option is 24-hour fluoride excretion. After that, the next best thing would be to adjust the spot concentration by other markers of urinary dilution — creatinine or osmolality — as sensitivity analyses. Any of these approaches would lend credence to the results of the study.

Urinary fluoride excretion is pH dependent. The more acidic the urine, the less fluoride is excreted. Many things — including, importantly, diet — affect urine pH. And it is not a stretch to think that diet may also affect the developing fetus. Neither urine pH nor dietary habits were accounted for in this study. 

So, here we are. We have an observational study suggesting a harm that may be associated with fluoride. There may be a causal link here, in which case we need further studies to weigh the harm against the more well-established public health benefit. Or, this is all correlation — an illusion created by the limitations of observational data, and the unique challenges of estimating intake from a single urine sample. In other words, this study has something for everyone, fluoride boosters and skeptics alike. Let the arguments begin. But, if possible, leave me out of it.
 

Dr. Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Conn. He has disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.

New Expert Guidance on Antiseizure Medication Use During Pregnancy

New expert guidance to help clinicians manage the treatment of patients with epilepsy during pregnancy has been released.

Issued by the American Academy of Neurology, the American Epilepsy Society, and the Society for Maternal-Fetal Medicine, the new practice guideline covers the use of antiseizure medications (ASMs) and folic acid supplementation before conception and during pregnancy.

“Most children born to people with epilepsy are healthy, but there is a small risk of pregnancy-related problems, partly due to seizures and partly due to the effects of antiseizure medications,” the guidelines’ lead author Alison M. Pack, MD, MPH, professor of neurology and chief of the Epilepsy and Sleep Division, Columbia University, New York City, said in a news release.

“This guideline provides recommendations regarding the effects of antiseizure medications and folic acid supplementation on malformations at birth and the development of children during pregnancy, so that doctors and people with epilepsy can determine which treatments may be best for them,” she added. 

The guideline was published online in Neurology.
 

Why Now? 

The new guideline updates the 2009 guidance on epilepsy management during pregnancy. Since then, Dr. Pack told this news organization, there has been a wealth of new data on differential effects of different ASMs — notably, lamotrigine and levetiracetam — the most commonly prescribed medications in this population.

“In this guideline, we were able to assess differential effects of different ASMs on outcomes of interest, including major congenital malformations [MCMs], perinatal outcomes, and neurodevelopmental outcomes. In addition, we looked at the effect of folic acid supplementation on each of these outcomes,” she said.

The overarching goals of care for patients are to “optimize health outcomes both for individuals and their future offspring,” the authors wrote. Shared decision-making, they add, gives patients a better understanding of the available treatment options and their potential risks, leading to choices that align with their personal values.

Clinicians should recommend ASMs that optimize seizure control and fetal outcomes, in the event of a pregnancy, at the earliest possible preconception time, the guideline authors note.

“Overall, treating clinicians need to balance treating the person with epilepsy to control convulsive seizures (generalized tonic-clonic seizures and focal-to-bilateral tonic-clonic seizures) to minimize potential risks to the birth parent and the possible risks of certain ASMs on the fetus if pregnancy occurs,” they wrote.

If a patient is already pregnant, the experts recommend that clinicians “exercise caution” in removing or replacing an ASM that controls convulsive seizures, even if it’s “not an optimal choice” for the fetus. 

In addition, they advise that ASM levels should be monitored throughout the pregnancy, guided by individual ASM pharmacokinetics and an individual patient’s clinical presentation. ASM dose, they note, should be adjusted during pregnancy in response to decreasing serum ASM levels or worsening seizure control.
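
To make the arithmetic of such level-guided adjustment concrete, here is a minimal sketch assuming roughly linear pharmacokinetics (approximately the case for lamotrigine and levetiracetam) and a patient-specific preconception target level. The guideline does not prescribe a formula, and nothing here is dosing guidance:

```python
def suggested_daily_dose(current_dose_mg: float,
                         current_level: float,
                         target_level: float) -> float:
    """Proportional dose adjustment for a linearly cleared ASM.

    Assumes serum concentration scales linearly with total daily dose,
    so restoring the preconception target level means scaling the dose
    by the ratio of target to current level. Illustrative only.
    """
    if current_level <= 0 or target_level <= 0:
        raise ValueError("serum levels must be positive")
    return current_dose_mg * (target_level / current_level)

# Hypothetical example: a lamotrigine level that fell from 6.0 to 3.0 mg/L
# on 400 mg/d suggests roughly doubling the total daily dose.
print(suggested_daily_dose(400, current_level=3.0, target_level=6.0))  # 800.0
```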

The authors point out that there are limited data on “pregnancy-related outcomes with respect to acetazolamide, eslicarbazepine, ethosuximide, lacosamide, nitrazepam, perampanel, piracetam, pregabalin, rufinamide, stiripentol, tiagabine, and vigabatrin.”

Patients should be informed that the birth prevalence of any major congenital malformation in the general population ranges between 2.4% and 2.9%.
 

If Feasible, Avoid Valproic Acid 

“One of the most important take-home messages is that valproic acid has the highest unadjusted birth prevalence of all major congenital malformations — 9.7% — and the highest unadjusted birth prevalence of neural tube defects at 1.4%,” Dr. Pack said. As a result, the guideline authors advise against using valproic acid, if clinically feasible.
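
Set against the general-population baseline quoted above, that figure implies roughly a three- to fourfold higher unadjusted prevalence (a back-of-the-envelope ratio, not an adjusted estimate from the guideline):

$$\frac{9.7\%}{2.9\%} \approx 3.3 \qquad\qquad \frac{9.7\%}{2.4\%} \approx 4.0$$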

Valproic acid also has the highest prevalence of negative neurodevelopmental outcomes, including a reduction in global IQ and an increased prevalence of autism spectrum disorder (ASD). Patients taking it should be counseled accordingly about these risks.

Clinicians should consider using lamotrigine, levetiracetam, or oxcarbazepine when appropriate. Most ASMs have a “defined therapeutic window” of serum concentrations for effective seizure control, and those concentrations may decrease during pregnancy, particularly with lamotrigine and levetiracetam, the authors note.

Phenobarbital, topiramate, and valproic acid should be avoided when clinically feasible because of the increased risk for cardiac malformations, oral clefts, and urogenital and renal malformations.

Fetal screening for major congenital malformations is recommended to enable early detection and timely intervention in patients treated with any ASM during pregnancy. Patients receiving phenobarbital during pregnancy should also undergo fetal cardiac screening.

Valproic acid and topiramate are also associated with infants born small for gestational age. Patients taking either drug should be monitored to enable early identification of fetal growth restriction, and children exposed to these medications in utero should be monitored during childhood to ensure they are meeting age-appropriate developmental milestones.

Folic acid taken during pregnancy can reduce the prevalence of negative neurodevelopmental outcomes, but not major congenital malformations, Dr. Pack noted. 

“Due to limited available data, we were unable to define an optimal dose of folic acid supplementation beyond at least 0.4 mg/d,” Dr. Pack said. “Future studies, preferably randomized clinical trials, are needed to better define the optimal dose.”

She emphasized that epilepsy is one of the most common neurologic disorders, and 1 in 5 of those affected are people of childbearing potential. Understanding the effects of ASMs on pregnancy outcomes is critical for physicians who manage these patients.
 

Uncertainty Remains 

Commenting for this news organization, Kimford Meador, MD, a professor in the Department of Neurology and Neurological Sciences at Stanford University School of Medicine, Stanford Neuroscience Health Center, Palo Alto, California, noted that the new guidelines reflect the gains in knowledge since 2009 and that the recommendations are “reasonable, based on available data.”

However, “one very important point is how much remains unknown,” said Dr. Meador, who was not involved in writing the current guideline. “Many ASMs have no data, and several have estimates based on small samples or a single observational study.” Thus, “the risks for the majority of ASMs are uncertain.”

Given that randomized trials “are not possible in this population, and that all observational studies are subject to residual confounding, a reliable signal across multiple studies in humans is required to be certain of findings,” he stated.

This practice guideline was developed with financial support from the American Academy of Neurology. Dr. Pack serves on the editorial board for the journal Epilepsy Currents, receives royalties from UpToDate, receives funding from the National Institutes of Health for serving as coinvestigator and site principal investigator for the Maternal Outcomes and Neurodevelopmental Effects of Antiepileptic Drugs (MONEAD) study, and receives funding from Bayer for serving as a co-investigator on a study on women with epilepsy initiating a progestin intrauterine device. One of Dr. Pack’s immediate family members has received personal compensation for serving as an employee of REGENEXBIO. The other authors’ disclosures are listed on the original paper. Dr. Meador has received research support from the National Institutes of Health, Veterans Administration, Eisai, Inc, and Suno Medtronic Navigation, Inc, and the Epilepsy Study Consortium pays Dr. Meador’s university for his research on the Human Epilepsy Project and consultant time related to Eisai, UCB Pharma, and Xenon.

A version of this article first appeared on Medscape.com.


Global Analysis Identifies Drugs Associated With SJS-TEN in Children

Article Type
Changed
Thu, 05/16/2024 - 11:28

 

TOPLINE:

Antiepileptic and anti-infectious agents were the most common drugs associated with Stevens-Johnson syndrome (SJS)/toxic epidermal necrolysis (TEN) in children in an analysis of a World Health Organization (WHO) database.

METHODOLOGY:

  • SJS and TEN are rare, life-threatening mucocutaneous reactions mainly associated with medications, but large pharmacovigilance studies of drugs associated with SJS-TEN in the pediatric population are still lacking.
  • Using the WHO’s pharmacovigilance database (VigiBase), which contains individual case safety reports from January 1967 to July 2022, researchers identified 7342 adverse drug reaction reports of SJS-TEN in children (younger than 18 years; median age, 9 years) across six continents. Median time to onset was 5 days, and 3.2% of cases were fatal.
  • They analyzed drugs reported as suspected treatments, and for each molecule, they performed a case–non-case study to assess a potential pharmacovigilance signal by computing the information component (IC).
  • A positive IC value suggests more frequent reporting of a specific drug–adverse reaction pair than chance would predict. A positive IC025 — the lower bound of the IC’s 95% credibility interval and a traditional threshold for statistical signal detection — is suggestive of a potential pharmacovigilance signal; see the sketch after this list.
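
For readers unfamiliar with the metric, the sketch below shows one common shrinkage formulation of the IC and a widely used closed-form approximation of IC025, following the approach described for VigiBase by Norén and colleagues. The exact variant used in this study is specified in the original paper, and the report counts here are made up:

```python
import math

def information_component(n_pair: int, n_drug: int,
                          n_reaction: int, n_total: int) -> tuple[float, float]:
    """Shrinkage IC and an approximate IC025 for one drug-reaction pair.

    n_pair:     reports mentioning both the drug and the reaction
    n_drug:     reports mentioning the drug (any reaction)
    n_reaction: reports mentioning the reaction (any drug)
    n_total:    all reports in the database
    """
    expected = n_drug * n_reaction / n_total  # count expected under independence
    ic = math.log2((n_pair + 0.5) / (expected + 0.5))  # +0.5 shrinks rare pairs
    # Widely used closed-form approximation of the lower 95% credibility bound
    ic025 = ic - 3.3 * (n_pair + 0.5) ** -0.5 - 2.0 * (n_pair + 0.5) ** -1.5
    return ic, ic025

# Made-up counts: 120 SJS-TEN reports for one drug out of 5,000 pediatric
# reports for that drug, against 7,342 SJS-TEN reports among 1,000,000 total.
ic, ic025 = information_component(120, 5_000, 7_342, 1_000_000)
print(f"IC = {ic:.2f}, IC025 = {ic025:.2f}")  # IC025 > 0 -> potential signal
```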

TAKEAWAY:

  • Overall, 165 drugs were associated with a diagnosis of SJS-TEN; antiepileptic and anti-infectious drugs were the most common drug classes represented.
  • The five most frequently reported drugs were carbamazepine (11.7%), lamotrigine (10.6%), sulfamethoxazole-trimethoprim (9%), acetaminophen (8.4%), and phenytoin (6.6%). The five drugs with the highest IC025 were lamotrigine, carbamazepine, phenobarbital, phenytoin, and nimesulide.
  • All antiepileptics, many antibiotic families, dapsone, antiretroviral drugs, some antifungal drugs, and nonsteroidal anti-inflammatory drugs were identified in reports; penicillins were the most frequently reported antibiotic family, and sulfonamides had the strongest pharmacovigilance signal.
  • Vaccines were not associated with significant signals.

IN PRACTICE:

The study provides an update on “the spectrum of drugs potentially associated with SJS-TEN in the pediatric population,” the authors concluded, and “underlines the importance of reporting to pharmacovigilance the suspicion of this severe side effect of drugs with the most precise and detailed clinical description possible.”

SOURCE:

The study, led by Pauline Bataille, MD, of the Department of Pediatric Dermatology, Hôpital Necker-Enfants Malades, Paris City University, France, was published online in the Journal of the European Academy of Dermatology and Venereology.

LIMITATIONS:

Limitations include the possibility that some cases could have had an infectious or idiopathic cause not related to a drug and the lack of detailed clinical data in the database.

DISCLOSURES:

This study did not receive any funding. The authors declared no conflict of interest.

A version of this article first appeared on Medscape.com.


Lecanemab’s Promise and Peril: Alzheimer’s Treatment Dilemma

Article Type
Changed
Wed, 05/15/2024 - 11:45

Clinicians interested in treating patients with symptoms of mild cognitive impairment or mild dementia should carefully weigh the potential benefits and harms of anti–amyloid-beta monoclonal antibody therapy, including the likelihood of side effects and the overall burden on the patient, according to researchers at the annual meeting of the American Geriatrics Society (AGS). 

Lecanemab (Leqembi) may help some patients by lowering the level of beta-amyloid protein in the brain. Results from a phase 3 trial presented at the conference showed participants with Alzheimer’s disease had a 27% slower progression of the disease compared with placebo.

But clinicians must weigh that advantage against risks and contraindications, according to Esther Oh, MD, PhD, an associate professor in the Division of Geriatric Medicine and Gerontology and co-director of the Johns Hopkins Memory and Alzheimer’s Treatment Center, Johns Hopkins University, Baltimore, Maryland, who spoke during a plenary session. Lecanemab gained accelerated approval by the US Food and Drug Administration in January 2023 and full approval in July 2023.

The results from CLARITY, an 18-month, multicenter, double-blind trial involving 1795 participants aged 50-90 years, showed that the difference between treatment and placebo did not meet the threshold for a minimum clinically important difference in patients with mild cognitive impairment or mild Alzheimer’s disease.

Even more concerning to Dr. Oh was the rate of amyloid-related imaging abnormalities, which can involve brain edema (12.6% of participants) and hemorrhage (17.3%). Almost 85% of these cases were asymptomatic. 

The risk for abnormalities indicates that thrombolytics are contraindicated for patients taking the drug, according to Dr. Oh. 

“Appropriate use recommendations exclude vitamin K antagonists such as warfarin, direct oral anticoagulants and heparin, although aspirin and other antiplatelet agents are allowed,” Dr. Oh said during the presentation.

Blood biomarkers, PET imaging, and levels of amyloid-beta proteins in cerebrospinal fluid are used to determine eligibility for lecanemab. However, tau biomarkers may signal underlying disease decades before symptoms appear, and some evidence indicates that the drug may be more effective in individuals with the low tau levels evident in earlier stages of disease. Tau can also be measured in cerebrospinal fluid; however, “we do not factor in tau protein as a biomarker for treatment eligibility, but this may become an important biomarker in the future,” Dr. Oh said.

Lecanemab is cost-prohibitive for many patients, with an annual price tag of $26,000. Treatment also requires regular intravenous infusions, PET imaging, lab work, multiple MRIs, and potentially an APOE4 serum test.

Medicare covers the majority of services, but patients are responsible for deductibles and copays, an estimated $7000 annually, according to Shari Ling, MD, deputy chief medical officer with the US Centers for Medicare & Medicaid Services, who also spoke during the session. This estimate does not account for supplemental coverage or other insurance such as Medicaid.
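
As a back-of-the-envelope illustration of how an annual out-of-pocket burden of that size can arise — assuming standard Medicare Part B 20% coinsurance and no supplemental coverage, since the session did not itemize the estimate — the drug alone would account for

$$0.20 \times \$26{,}000 = \$5{,}200,$$

with coinsurance on infusion visits, MRIs, PET imaging, and lab work plausibly making up the remainder of the roughly $7000 total.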

The Medicare population is growing more complex over time, Dr. Ling said. In 2021, 54% of beneficiaries had five or more comorbidities, which can affect eligibility for lecanemab. 

“Across the healthcare system, we are learning what is necessary for coordination of delivery, for evaluation of people who receive these treatments, and for the care that is not anticipated,” Dr. Ling noted.

Neither speaker reported any financial conflicts of interest.

A version of this article first appeared on Medscape.com.


Lower Urinary Tract Symptoms Associated With Poorer Cognition in Older Adults

Article Type
Changed
Tue, 05/14/2024 - 16:25

Lower urinary tract symptoms were significantly associated with lower scores on cognitive tests in older adults, based on data from approximately 10,000 individuals.

“We know that lower urinary tract symptoms are very common in aging men and women”; however, older adults often underreport symptoms and avoid seeking treatment, Belinda Williams, MD, of the University of Alabama at Birmingham, said in a presentation at the annual meeting of the American Geriatrics Society.

“Evidence also shows us that the incidence of lower urinary tract symptoms (LUTS) is higher in patients with dementia,” she said. However, the association between cognitive impairment and LUTS has not been well studied, she said.

To address this knowledge gap, Dr. Williams and colleagues reviewed data from older adults with and without LUTS who were enrolled in the REasons for Geographic and Racial Differences in Stroke (REGARDS) study, a cohort study including 30,239 Black or White adults aged 45 years and older who completed telephone or in-home assessments in 2003-2007 and in 2013-2017.

The study population included 6062 women and 4438 men who responded to questionnaires about LUTS and completed several cognitive tests via telephone in 2019-2010. The tests evaluated verbal fluency, executive function, and memory, and included the Six-Item Screener, Animal Naming, Letter F naming, and word list learning; lower scores indicated poorer cognitive performance.

Participants who met the criteria for LUTS were categorized as having mild, moderate, or severe symptoms.

The researchers controlled for age, race, education, income, and urban/rural setting in a multivariate analysis. The mean ages of the women and men were 69 years and 63 years, respectively; 41% and 32% were Black, 59% and 68% were White.
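
As an illustration of what such covariate adjustment typically looks like in code, here is a minimal sketch on synthetic data. The variable names, model family (ordinary least squares), and coding of LUTS severity are all assumptions for illustration; the presentation did not specify the exact model:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
# Synthetic stand-in for the cohort: a cognitive test score, a LUTS
# severity category, and the covariates named in the presentation.
df = pd.DataFrame({
    "luts": rng.choice(["none", "mild", "moderate", "severe"], n),
    "age": rng.integers(45, 90, n),
    "race": rng.choice(["Black", "White"], n),
    "education": rng.choice(["<HS", "HS", "college"], n),
    "income": rng.choice(["low", "mid", "high"], n),
    "urban_rural": rng.choice(["urban", "rural"], n),
})
df["word_list_score"] = 7 - (df["luts"] == "severe") + rng.normal(0, 1, n)

# Test score regressed on LUTS severity, adjusted for the listed covariates;
# the C() terms treat categorical variables as factors.
fit = smf.ols(
    "word_list_score ~ C(luts) + age + C(race) + C(education)"
    " + C(income) + C(urban_rural)",
    data=df,
).fit()
print(fit.params.filter(like="luts"))  # adjusted LUTS coefficients
```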

Overall, 70% of women and 62% of men reported LUTS; 6.2% and 8.2%, respectively, met criteria for cognitive impairment. The association between cognitive impairment and LUTS was statistically significant for all specific tests (P < .01), but not for the global cognitive domain tests.

Black men were more likely to report LUTS than White men, but LUTS reports were similar between Black and White women.

Moderate LUTS was the most common degree of severity for men and women (54% and 64%, respectively).

The most common symptom overall was pre-toilet leakage (urge urinary incontinence), reported by 94% of women and 91% of men. The next most common symptoms for men and women were nocturia and urgency.

“We found that, across the board, in all the cognitive tests, LUTS were associated with lower cognitive test scores,” Dr. Williams said in her presentation. Little difference was seen on the Six-Item Screener, she noted, but when the researchers reanalyzed the data using scores lower than 4 to indicate cognitive impairment, they found a significant association with LUTS.

The results, which showed that the presence of LUTS was consistently associated with lower test scores for verbal fluency, executive function, and memory, are applicable in clinical practice, Dr. Williams said.

“Recognizing the subtle changes in cognition among older adults with LUTS may impact treatment decisions,” she said. “For example, we can encourage and advise our patients to be physically and cognitively active and to avoid anticholinergic medications.”

Next steps for research include analyzing longitudinal changes in cognition among participants with and without LUTS, said Dr. Williams.

During a question-and-answer session, Dr. Williams agreed with a comment that incorporating cognitive screening strategies into LUTS clinical pathways might be helpful, such as conducting a baseline Montreal Cognitive Assessment (MoCA) in patients with LUTS. “Periodic repeat MoCAs thereafter can help assess decline in cognition,” she said.

The study was supported by the National Institute of Neurological Disorders and Stroke and the National Institute on Aging. The researchers had no financial conflicts to disclose.


High-Potency Cannabis Tied to Impaired Brain Development, Psychosis, Cannabis-Use Disorder

Article Type
Changed
Tue, 05/14/2024 - 13:08

It’s becoming clear that the adolescent brain is particularly vulnerable to cannabis, especially today’s higher-potency products, which put teens at risk for impaired brain development; mental health issues, including psychosis; and cannabis-use disorder (CUD). 

That was the message delivered by Yasmin Hurd, PhD, director of the Addiction Institute at Mount Sinai in New York, during a press briefing at the American Psychiatric Association (APA) 2024 annual meeting.

“We’re actually in historic times in that we now have highly concentrated, highly potent cannabis products that are administered in various routes,” Dr. Hurd told reporters. 

Tetrahydrocannabinol (THC) concentrations in cannabis products have increased over the years, from around 2%-4% to 15%-24% now, Dr. Hurd noted.

The impact of high-potency cannabis products and increased risk for CUD and mental health problems, particularly in adolescents, “must be taken seriously, especially in light of the current mental health crisis,” Dr. Hurd and colleagues wrote in a commentary on the developmental trajectory of CUD published simultaneously in the American Journal of Psychiatry.
 

Dramatic Increase in Teen Cannabis Use

A recent study from Oregon Health & Science University showed that adolescent cannabis abuse in the United States has increased dramatically, by about 245%, since 2000. 

“Drug abuse is often driven by what is in front of you,” Nora Volkow, MD, director of the National Institute on Drug Abuse, noted in an interview. 

“Right now, cannabis is widely available. So, guess what? Cannabis becomes the drug that people take. Nicotine is much harder to get. It is regulated to a much greater extent than cannabis, so fewer teenagers are consuming nicotine than are consuming cannabis,” Dr. Volkow said. 

Cannabis exposure during neurodevelopment has the potential to alter the endocannabinoid system, which, in turn, can affect the development of neural pathways that mediate reward; emotional regulation; and multiple cognitive domains including executive functioning and decision-making, learning, abstraction, and attention — all processes central to substance use disorder and other psychiatric disorders, Dr. Hurd said at the briefing.

Dr. Volkow said that cannabis use in adolescence and young adulthood is “very concerning because that’s also the age of risk for psychosis, particularly schizophrenia, with one study showing that use of cannabis in high doses can trigger psychotic episodes, particularly among young males.”

Dr. Hurd noted that not all young people who use cannabis develop CUD, “but a significant number do,” and large-scale studies have consistently reported two main factors associated with CUD risk.

The first is age: both earlier onset of use and more frequent use at a younger age raise risk. Those who start using cannabis before age 16 years are at the highest risk for CUD. The risk for CUD also increases significantly among youth who use cannabis at least weekly, with the highest prevalence among those who use it daily. One large study linked increased frequency of use with up to a 17-fold increased risk for CUD.

The second factor consistently associated with the risk for CUD is biologic sex, with CUD rates typically higher in male individuals.
 

Treatment Challenges

For young people who develop CUD, access to and uptake of treatment can be challenging.

“Given that the increased potency of cannabis and cannabinoid products is expected to increase CUD risk, it is disturbing that less than 10% of youth who meet the criteria for a substance use disorder, including CUD, receive treatment,” Dr. Hurd and colleagues point out in their commentary. 

Another challenge is that treatment strategies for CUD are currently limited and consist mainly of motivational enhancement and cognitive-behavioral therapies. 

“Clearly new treatment strategies are needed to address the mounting challenge of CUD risk in teens and young adults,” Dr. Hurd and colleagues wrote. 

Summing up, Dr. Hurd told reporters, “We now know that most psychiatric disorders have a developmental origin, and the adolescent time period is a critical window for cannabis use disorder risk.”

Yet, on a positive note, the “plasticity of the developing brain that makes it vulnerable to cannabis use disorder and psychiatric comorbidities also provides an opportunity for prevention and early intervention to change that trajectory,” Dr. Hurd said. 

The changing legal landscape of cannabis — the US Drug Enforcement Administration is moving forward with plans to move marijuana from a Schedule I to a Schedule III controlled substance under the Controlled Substances Act — makes addressing these risks all the more timely. 

“As states vie to leverage tax dollars from the growing cannabis industry, a significant portion of such funds must be used for early intervention/prevention strategies to reduce the impact of cannabis on the developing brain,” Dr. Hurd and colleagues wrote. 

This research was supported in part by the National Institute on Drug Abuse and the National Institutes of Health. Dr. Hurd and Dr. Volkow have no relevant disclosures. 

A version of this article appeared on Medscape.com.


FROM APA 2024

Widespread, Long-Held Practice in Dementia Called Into Question

Article Type
Changed
Tue, 05/14/2024 - 12:31

Hospitalized patients with dementia and dysphagia are often prescribed a “dysphagia diet,” made up of texture-modified foods and thickened liquids in an effort to reduce the risk for aspiration or other problems. However, a new study calls this widespread and long-held practice into question.

Investigators found no evidence that the use of thickened liquids reduced mortality or respiratory complications, such as pneumonia, aspiration, or choking, compared with thin-liquid diets in patients with Alzheimer’s disease and related dementias (ADRD) and dysphagia. Patients receiving thick liquids were less likely to be intubated, but they were actually more likely to have respiratory complications.

“When hospitalized patients with Alzheimer’s disease and related dementias are found to have dysphagia, our go-to solution is to use a thick liquid diet,” senior author Liron Sinvani, MD, with the Feinstein Institutes for Medical Research, Manhasset, New York, said in a news release.

“However, there is no concrete evidence that thick liquids improve health outcomes, and we also know that thick liquids can lead to decreased palatability, poor oral intake, dehydration, malnutrition, and worse quality of life,” added Dr. Sinvani, director of the geriatric hospitalist service at Northwell Health in New York.

The study was published online in JAMA Internal Medicine.
 

Challenging a Go-To Solution

The researchers compared outcomes in a propensity score-matched cohort of patients with ADRD and dysphagia (mean age, 86 years; 54% women) receiving mostly thick liquids versus thin liquids during their hospitalization. There were 4458 patients in each group.
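
As background on the design, here is a minimal Python sketch of one common way such a cohort is built: estimate each patient's propensity to receive the exposure with logistic regression, then greedily pair exposed patients with unexposed patients on that score. The covariates and data are hypothetical, and the study's actual matching procedure may differ.

```python
# Minimal sketch of 1:1 nearest-neighbor propensity score matching.
# Covariates and data are hypothetical, not from the study.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "age": rng.normal(86, 6, n),
    "female": rng.integers(0, 2, n),
    "comorbidity_score": rng.integers(0, 5, n),
    "thick_liquids": rng.integers(0, 2, n),  # 1 = thick-liquid diet
})

# Step 1: model each patient's probability of receiving thick liquids.
X = df[["age", "female", "comorbidity_score"]]
df["ps"] = LogisticRegression().fit(X, df["thick_liquids"]).predict_proba(X)[:, 1]

# Step 2: greedily pair each exposed patient with the nearest
# unmatched control on the propensity score (1:1, without replacement).
treated = df[df["thick_liquids"] == 1]
controls = df[df["thick_liquids"] == 0].copy()
pairs = []
for i, row in treated.iterrows():
    if controls.empty:
        break
    j = (controls["ps"] - row["ps"]).abs().idxmin()
    pairs.append((i, j))
    controls = controls.drop(j)

print(f"{len(pairs)} matched pairs")
```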

They found no significant difference in hospital mortality between the thick liquids and thin liquids groups (hazard ratio [HR], 0.92; P = .46).

Patients receiving thick liquids were less likely to require intubation (odds ratio [OR], 0.66; 95% CI, 0.54-0.80) but were more likely to develop respiratory complications (OR, 1.73; 95% CI, 1.56-1.91).
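
For readers who want the arithmetic behind figures like these, here is a minimal Python sketch of how an odds ratio and its Wald 95% CI fall out of a 2×2 table. The counts below are hypothetical stand-ins chosen for illustration, not the study's data.

```python
# How an odds ratio and Wald 95% CI are derived from a 2x2 table.
# The counts here are hypothetical, not the study's data.
import math

a, b = 120, 4338   # thick liquids: intubated, not intubated
c, d = 180, 4278   # thin liquids:  intubated, not intubated

or_ = (a * d) / (b * c)                           # odds ratio
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)      # SE of log(OR)
lo = math.exp(math.log(or_) - 1.96 * se_log_or)   # lower 95% bound
hi = math.exp(math.log(or_) + 1.96 * se_log_or)   # upper 95% bound
print(f"OR = {or_:.2f} (95% CI, {lo:.2f}-{hi:.2f})")
```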

The two groups did not differ significantly in terms of risk for dehydration, hospital length of stay, or rate of 30-day readmission.

“This cohort study emphasizes the need for prospective studies that evaluate whether thick liquids are associated with improved clinical outcomes in hospitalized patients with ADRD and dysphagia,” the authors wrote.

Because few patients received a Modified Barium Swallow Study at baseline, researchers were unable to confirm the presence of dysphagia or account for dysphagia severity and impairment. It’s possible that patients in the thick liquid group had more severe dysphagia than those in the thin liquid group.

Another limitation is that the type of dementia and severity were not characterized. Also, the study could not account for factors like oral hygiene, immune status, and diet adherence that could impact risks like aspiration pneumonia.
 

Theoretical Benefit, No Evidence

In an invited commentary on the study, Eric Widera, MD, with the University of California, San Francisco, noted that medicine is “littered with interventions that have become the standard of practice based on theoretical benefits without clinical evidence.”

One example is percutaneous endoscopic gastrostomy tubes for individuals with dysphagia and dementia.

“For decades, these tubes were regularly used in individuals with dementia on the assumption that bypassing the oropharyngeal route would decrease rates of aspiration and, therefore, decrease adverse outcomes like pressure ulcers, malnutrition, pneumonia, and death. However, similar to what we see with thickened liquids, evidence slowly built that this standard of practice was not evidence-based practice,” Dr. Widera wrote.

When thinking about thick liquid diets, Dr. Widera encouraged clinicians to “acknowledge the limitations of the evidence both for and against thickened-liquid diets.”

He also encouraged clinicians to “put yourself in the shoes of the patients who will be asked to adhere to this modified diet. For 12 hours, drink your tea, coffee, wine, and water as thickened liquids,” Dr. Widera suggested. “The goal is not to convince yourself never to prescribe thickened liquids, but rather to be mindful of how a thickened liquid diet affects patients’ liquid and food intake, how it changes the mouthfeel and taste of different drinks, and how it affects patients’ quality of life.”

Clinicians also should “proactively engage speech-language pathologists, but do not ask them if it is safe for a patient with dementia to eat or drink normally. Instead, ask what we can do to meet the patient’s goals and maintain quality of life given the current evidence base,” Dr. Widera wrote.

“For some, when the patient’s goals are focused on comfort, this may lead to a recommendation for thickened liquids if their use may resolve significant coughing distress after drinking thin liquids. Alternatively, even when the patient’s goals are focused on prolonging life, the risks of thickened liquids, including dehydration and decreased food and fluid intake, as well as the thin evidence for mortality improvement, will argue against their use,” Dr. Widera added.

Funding for the study was provided by grants from the National Institute on Aging and by the William S. Middleton Veterans Affairs Hospital, Madison, Wisconsin. Dr. Sinvani and Dr. Widera declared no relevant conflicts of interest.

A version of this article appeared on Medscape.com.


FROM JAMA INTERNAL MEDICINE

It Would Be Nice if Olive Oil Really Did Prevent Dementia

Article Type
Changed
Tue, 05/14/2024 - 10:03

This transcript has been edited for clarity.

As you all know by now, I’m always looking out for lifestyle changes that are both pleasurable and healthy. They are hard to find, especially when it comes to diet. My kids complain about this all the time: “When you say ‘healthy food,’ you just mean yucky food.” And yes, French fries are amazing, and no, we can’t have them three times a day.

So, when I saw an article claiming that olive oil reduces the risk for dementia, I was interested. I love olive oil; I cook with it all the time. But as is always the case in the world of nutritional epidemiology, we need to be careful. There are a lot of reasons to doubt the results of this study — and one reason to believe it’s true.

The study I’m talking about is “Consumption of Olive Oil and Diet Quality and Risk of Dementia-Related Death,” appearing in JAMA Network Open and following a well-trod formula in the nutritional epidemiology space.

Nearly 100,000 participants, all healthcare workers, filled out a food frequency questionnaire every 4 years with 130 questions touching on all aspects of diet: How often do you eat bananas, bacon, olive oil? Participants were followed for more than 20 years, and if they died, the cause of death was flagged as being dementia-related or not. Over that time frame there were around 38,000 deaths, of which 4751 were due to dementia.

The rest is just statistics. The authors show that those who reported consuming more olive oil were less likely to die from dementia — about 50% less likely, if you compare those who reported eating more than 7 grams of olive oil a day with those who reported eating none.
 

Is It What You Eat, or What You Don’t Eat?

And we could stop there if we wanted to; I’m sure big olive oil would be happy with that. Is there such a thing as “big olive oil”? But no, we need to dig deeper here because this study has the same problems as all nutritional epidemiology studies. Number one, no one is sitting around drinking small cups of olive oil. They consume it with other foods. And it was clear from the food frequency questionnaire that people who consumed more olive oil also consumed less red meat, more fruits and vegetables, more whole grains, more butter, and less margarine. And those are just the findings reported in the paper. I suspect that people who eat more olive oil also eat more tomatoes, for example, though data this granular aren’t shown. So, it can be really hard, in studies like this, to know for sure that it’s actually the olive oil that is helpful rather than some other constituent in the diet.

The flip side of that coin presents another issue. The food you eat is also a marker of the food you don’t eat. People who ate olive oil consumed less margarine, for example. At the time of this study, margarine was still adulterated with trans-fats, which a pretty solid evidence base suggests are really bad for your vascular system. So perhaps it’s not that olive oil is particularly good for you but that something else is bad for you. In other words, simply adding olive oil to your diet without changing anything else may not do anything.

The other major problem with studies of this sort is that people don’t consume food at random. The type of person who eats a lot of olive oil is simply different from the type of person who doesn’t. For one thing, olive oil is expensive. A 25-ounce bottle of olive oil is on sale at my local supermarket right now for $11.00. A similar-sized bottle of vegetable oil goes for $4.00.

Isn’t it interesting that food that costs more money tends to be associated with better health outcomes? (I’m looking at you, red wine.) Perhaps it’s not the food; perhaps it’s the money. We aren’t provided data on household income in this study, but we can see that the heavy olive oil users were less likely to be current smokers and they got more physical activity.

Now, the authors are aware of these limitations and do their best to account for them. In multivariable models, they adjust for other stuff in the diet, and even for income (sort of; they use census tract as a proxy for income, which is really a broad brush), and still find a significant though weakened association showing a protective effect of olive oil on dementia-related death. But still — adjustment is never perfect, and the small effect size here could definitely be due to residual confounding.
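
Adjustment of this kind is typically done with a Cox proportional hazards model. Below is a minimal sketch, assuming the Python lifelines package; the variable names and data are hypothetical stand-ins, not the study's, and the authors' actual model specification may differ.

```python
# Minimal sketch of multivariable adjustment like that described above:
# a Cox model of dementia-related death on olive oil intake plus
# confounders. Variable names and data are hypothetical.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "follow_up_years": rng.uniform(1, 28, n),
    "dementia_death": rng.integers(0, 2, n),      # 1 = dementia-related death
    "olive_oil_g_per_day": rng.uniform(0, 10, n),
    "smoker": rng.integers(0, 2, n),
    "physical_activity_hrs": rng.normal(3, 1, n),
    "diet_quality_score": rng.normal(0, 1, n),    # stand-in for the rest of the diet
})

# Every column other than duration and event enters as a covariate.
cph = CoxPHFitter()
cph.fit(df, duration_col="follow_up_years", event_col="dementia_death")
print(cph.hazard_ratios_)  # an HR below 1 for olive oil would suggest protection
```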
 

 

 

Evidence More Convincing

Now, I did tell you that there is one reason to believe that this study is true, but it’s not really from this study.

It’s from the PREDIMED randomized trial.

This is nutritional epidemiology I can get behind. In that trial, published in 2018, investigators in Spain randomized around 7500 participants to receive a liter of olive oil once a week, vs mixed nuts, vs small nonfood gifts, the idea being that if you have olive oil around, you’ll use it more. And people who were randomly assigned to get the olive oil had a 30% lower rate of cardiovascular events. A secondary analysis of that study found that the rate of development of mild cognitive impairment was 65% lower in those who were randomly assigned to olive oil. That’s an impressive result.

So, there might be something to this olive oil thing, but I’m not quite ready to add it to my “pleasurable things that are still good for you” list just yet. Though it does make me wonder: Can we make French fries in the stuff?
 

Dr. Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Conn. He has disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.

