Fluoride, Water, and Kids’ Brains: It’s Complicated

This transcript has been edited for clarity. 

I recently looked back at my folder full of these medical study commentaries, this weekly video series we call Impact Factor, and realized that I’ve been doing this for a long time. More than 400 articles, believe it or not. 

I’ve learned a lot in that time — about medicine, of course — but also about how people react to certain topics. If you’ve been with me this whole time, or even for just a chunk of it, you’ll know that I tend to take a measured approach to most topics. No one study is ever truly definitive, after all. But regardless of how even-keeled I may be, there are some topics that I just know in advance are going to be a bit divisive: studies about gun control; studies about vitamin D; and, of course, studies about fluoride.
 

Shall We Shake This Hornet’s Nest? 

The fluoridation of the US water system began in 1945 with the goal of reducing cavities in the population. The CDC named water fluoridation one of the 10 great public health achievements of the 20th century, along with such inarguable achievements as the recognition of tobacco as a health hazard.

But fluoridation has never been without its detractors. One problem is that the spectrum of beliefs about the potential harm of fluoridation is huge. On one end, you have science-based concerns such as the recognition that excessive fluoride intake can cause fluorosis and stain tooth enamel. I’ll note that the EPA regulates fluoride levels — there is a fair amount of naturally occurring fluoride in water tables around the world — to prevent this. And, of course, on the other end of the spectrum, you have beliefs that are essentially conspiracy theories: “They” add fluoride to the water supply to control us.

The challenge for me is that when one “side” of a scientific debate includes the crazy theories, it can be hard to discuss that whole spectrum, since there are those who will see evidence of any adverse fluoride effect as confirmation that the conspiracy theory is true. 

I can’t help this. So I’ll just say this up front: I am about to tell you about a study that shows some potential risk from fluoride exposure. I will tell you up front that there are some significant caveats to the study that call the results into question. And I will tell you up front that no one is controlling your mind, or my mind, with fluoride; they do it with social media.
 

Let’s Dive Into These Shark-Infested, Fluoridated Waters

We’re talking about the study, “Maternal Urinary Fluoride and Child Neurobehavior at Age 36 Months,” which appears in JAMA Network Open.

It’s a study of 229 mother-child pairs from the Los Angeles area. The moms had their urinary fluoride level measured once before 30 weeks of gestation. A neurobehavioral battery called the Preschool Child Behavior Checklist was administered to the children at age 36 months. 

The main thing you’ll hear about this study — in headlines, Facebook posts, and manifestos locked in drawers somewhere — is the primary result: A 0.68-mg/L increase in urinary fluoride in the mothers, about 25 percentile points, was associated with a doubling of the risk for neurobehavioral problems in their kids when they were 3 years old.
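A quick note on what that estimate means mechanically: in the logistic models typically used for this kind of dichotomized outcome, odds ratios rescale log-linearly with the exposure difference. Here is a minimal arithmetic sketch, with illustrative numbers rather than anything taken from the paper:

```python
# How a "doubling of odds per 0.68 mg/L" rescales to other exposure
# differences under a log-linear (logistic) dose-response assumption.
# Numbers are illustrative only.
import math

or_per_068 = 2.0   # reported: odds roughly double per 0.68 mg/L
delta = 0.34       # a hypothetical smaller exposure difference, in mg/L

or_for_delta = math.exp(math.log(or_per_068) * delta / 0.68)
print(round(or_for_delta, 2))  # ~1.41; half the exposure gap is not half the OR
```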

Yikes.

But this is not a randomized trial. Researchers didn’t randomly assign some women to have high fluoride intake and some women to have low fluoride intake. They knew that other factors that might lead to neurobehavioral problems could also lead to higher fluoride intake. They represent these factors in what’s known as a directed acyclic graph, as seen here, and account for them statistically using a regression equation.
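To make "account for them statistically" concrete, here is a minimal sketch of the kind of confounder-adjusted regression used in studies like this one. The file name, variable names, and covariate list are hypothetical stand-ins; the authors' actual model specification may differ.

```python
# A minimal sketch of confounder adjustment by regression. All names here
# are hypothetical; the study's actual variables and model may differ.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cohort.csv")  # hypothetical mother-child analysis file

# Outcome: child neurobehavioral score at 36 months.
# Exposure: dilution-adjusted maternal urinary fluoride.
# Covariates: stand-ins for the DAG-identified confounders.
model = smf.ols(
    "neuro_score ~ urinary_fluoride_sg + maternal_age + education + smoking",
    data=df,
).fit()
print(model.summary())  # the fluoride coefficient is the adjusted association
```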

[Figure: directed acyclic graph of the factors adjusted for in the analysis. Source: JAMA Network Open]


Not represented here are neighborhood characteristics. Los Angeles does not have uniformly fluoridated water, and neurobehavioral problems in kids are strongly linked to stressors in their environments. Fluoride level could be an innocent bystander.

[Figure: water fluoridation across Los Angeles County. Source: Los Angeles County Department of Public Health]


I’m really just describing the classic issue of correlation versus causation here, the bane of all observational research and — let’s be honest — a bit of a crutch that allows us to disregard the results of studies we don’t like, provided the study wasn’t a randomized trial. 

But I have a deeper issue with this study than the old “failure to adjust for relevant confounders” thing, as important as that is.

The exposure of interest in this study is maternal urinary fluoride, as measured in a spot sample. It’s not often that I get to go deep on nephrology in this space, but let’s think about that for a second. Let’s assume for a moment that fluoride is toxic to the developing fetal brain, the main concern raised by the results of the study. How would that work? Presumably, mom would be ingesting fluoride from various sources (like the water supply), and that fluoride would get into her blood, and from her blood across the placenta to the baby’s blood, and into the baby’s brain.

Is Urinary Fluoride a Good Measure of Blood Fluoride?

It’s not great. Empirically, we have data telling us that urine fluoride levels track serum fluoride levels only loosely. In 2014, a study investigated the correlation between urine and serum fluoride in a cohort of 60 schoolchildren and found a correlation coefficient of around 0.5; roughly speaking, that means serum levels explain only about 25% of the variance in urine levels.

Why isn’t urine fluoride a great proxy for serum fluoride? The most obvious reason is urine concentration itself. Human urine osmolality can range from about 50 to 1200 mOsm/kg (a 24-fold difference) depending on hydration status. Over the course of 24 hours, the amount of fluoride you put out in your urine may be fairly stable in relation to intake, but a spot urine sample will be wildly variable. The authors know this, of course, and so they divide the measured urine fluoride by the specific gravity of the urine to give a sort of “dilution adjusted” value. That’s what is actually used in this study. But specific gravity is, itself, an imperfect measure of how dilute the urine is. 
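For the curious, the specific-gravity correction is simple enough to sketch. This assumes the standard formula with a conventional reference specific gravity of 1.020; the study may have used a different reference value, such as the cohort mean.

```python
# A minimal sketch of specific-gravity dilution adjustment for spot urine.
# The reference SG of 1.020 is a common convention, assumed here.
def sg_adjust(fluoride_mg_l: float, sg: float, sg_ref: float = 1.020) -> float:
    """Rescale a spot-urine concentration to a reference dilution."""
    return fluoride_mg_l * (sg_ref - 1.0) / (sg - 1.0)

# Dilute urine (SG 1.010) gets adjusted upward:
print(sg_adjust(0.8, sg=1.010))  # ~1.6 mg/L
```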

This is something that comes up a lot in urinary biomarker research and it’s not that hard to get around. The best thing would be to just measure blood levels of fluoride. The second best option is 24-hour fluoride excretion. After that, the next best thing would be to adjust the spot concentration by other markers of urinary dilution — creatinine or osmolality — as sensitivity analyses. Any of these approaches would lend credence to the results of the study.
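The alternative dilution adjustments are just as simple to sketch; the reference osmolality below is an assumption for illustration, not a value from the study.

```python
# Hedged sketches of the creatinine and osmolality adjustments mentioned
# above, as might be run in sensitivity analyses. Units are illustrative.
def creatinine_adjust(fluoride_mg_l: float, creatinine_g_l: float) -> float:
    """Express fluoride per gram of creatinine (mg/g)."""
    return fluoride_mg_l / creatinine_g_l

def osmolality_adjust(fluoride_mg_l: float, osm_mosm_kg: float,
                      osm_ref: float = 500.0) -> float:
    """Rescale to an assumed reference osmolality of 500 mOsm/kg."""
    return fluoride_mg_l * osm_ref / osm_mosm_kg

print(creatinine_adjust(0.8, 1.2))    # ~0.67 mg/g creatinine
print(osmolality_adjust(0.8, 250.0))  # 1.6 mg/L at the reference dilution
```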

One more wrinkle: urinary fluoride excretion is pH dependent. The more acidic the urine, the less fluoride is excreted. Many things — including, importantly, diet — affect urine pH. And it is not a stretch to think that diet may also affect the developing fetus. Neither urine pH nor dietary habits were accounted for in this study. 

So, here we are. We have an observational study suggesting a harm that may be associated with fluoride. There may be a causal link here, in which case we need further studies to weigh the harm against the more well-established public health benefit. Or, this is all correlation — an illusion created by the limitations of observational data, and the unique challenges of estimating intake from a single urine sample. In other words, this study has something for everyone, fluoride boosters and skeptics alike. Let the arguments begin. But, if possible, leave me out of it.
 

Dr. Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Conn. He has disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.

New Expert Guidance on Antiseizure Medication Use During Pregnancy

New expert guidance to help clinicians manage the treatment of patients with epilepsy during pregnancy has been released.

Issued by the American Academy of Neurology, the American Epilepsy Society, and the Society for Maternal-Fetal Medicine, the new practice guideline covers the use of antiseizure medications (ASMs) and folic acid supplementation before conception and during pregnancy.

“Most children born to people with epilepsy are healthy, but there is a small risk of pregnancy-related problems, partly due to seizures and partly due to the effects of antiseizure medications,” the guidelines’ lead author Alison M. Pack, MD, MPH, professor of neurology and chief of the Epilepsy and Sleep Division, Columbia University, New York City, said in a news release.

“This guideline provides recommendations regarding the effects of antiseizure medications and folic acid supplementation on malformations at birth and the development of children during pregnancy, so that doctors and people with epilepsy can determine which treatments may be best for them,” she added. 

The guideline was published online in Neurology.
 

Why Now? 

The new guideline updates the 2009 guidance on epilepsy management during pregnancy. Since then, Dr. Pack told this news organization, there has been a wealth of new data on differential effects of different ASMs — notably, lamotrigine and levetiracetam — the most commonly prescribed medications in this population.

“In this guideline, we were able to assess differential effects of different ASMs on outcomes of interest, including major congenital malformations [MCMs], perinatal outcomes, and neurodevelopmental outcomes. In addition, we looked at the effect of folic acid supplementation on each of these outcomes,” she said.

The overarching goals of care are to “optimize health outcomes both for individuals and their future offspring,” the authors wrote. Shared decision-making, they add, gives patients a clearer understanding of the available treatment options and their potential risks, leading to treatment decisions that align with their personal values.

Clinicians should recommend, at the earliest possible preconception visit, ASMs that optimize both seizure control and fetal outcomes in the event of a pregnancy, the guideline authors note.

“Overall, treating clinicians need to balance treating the person with epilepsy to control convulsive seizures (generalized tonic-clonic seizures and focal-to-bilateral tonic-clonic seizures) to minimize potential risks to the birth parent and the possible risks of certain ASMs on the fetus if pregnancy occurs,” they wrote.

If a patient is already pregnant, the experts recommend that clinicians “exercise caution” in removing or replacing an ASM that controls convulsive seizures, even if it’s “not an optimal choice” for the fetus. 

In addition, they advise that ASM levels should be monitored throughout the pregnancy, guided by individual ASM pharmacokinetics and an individual patient’s clinical presentation. ASM dose, they note, should be adjusted during pregnancy in response to decreasing serum ASM levels or worsening seizure control.
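As a rough illustration of what concentration-guided dose adjustment looks like, here is a proportional sketch assuming linear pharmacokinetics (approximately true for lamotrigine and levetiracetam). It is illustrative arithmetic only; actual titration is an individualized clinical decision.

```python
# Proportional dose adjustment under an assumed linear dose-concentration
# relationship. Illustrative only; not a dosing tool.
def adjusted_dose(current_dose_mg: float, measured_mg_l: float,
                  target_mg_l: float) -> float:
    """Scale the dose to restore a pre-pregnancy target serum level."""
    return current_dose_mg * target_mg_l / measured_mg_l

# Example: a level that fell from a 6 mg/L target to 3 mg/L suggests
# roughly doubling the dose, pending clinical judgment.
print(adjusted_dose(300.0, measured_mg_l=3.0, target_mg_l=6.0))  # 600.0
```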

The authors point out that there are limited data on “pregnancy-related outcomes with respect to acetazolamide, eslicarbazepine, ethosuximide, lacosamide, nitrazepam, perampanel, piracetam, pregabalin, rufinamide, stiripentol, tiagabine, and vigabatrin.”

Patients should be informed that the birth prevalence of any major congenital malformation in the general population ranges between 2.4% and 2.9%.
 

If Feasible, Avoid Valproic Acid 

“One of the most important take-home messages is that valproic acid has the highest unadjusted birth prevalence of all major congenital malformations — 9.7% — and the highest unadjusted birth prevalence of neural tube defects at 1.4%,” Dr. Pack said. As a result, the guideline authors advise against using valproic acid, if clinically feasible.

Valproic acid also has the highest prevalence of negative neurodevelopmental outcomes, including a reduction in global IQ and an increased prevalence of autism spectrum disorder (ASD). Patients should be counseled accordingly about these risks.

Clinicians should consider using lamotrigine, levetiracetam, or oxcarbazepine when appropriate. Serum concentrations of most ASMs have a “defined therapeutic window” for effective seizure control, and those concentrations may decrease during pregnancy, particularly with lamotrigine and levetiracetam, the authors note.

Phenobarbital, topiramate, and valproic acid should be avoided, when clinically feasible, because of the increased risk for cardiac malformations, oral clefts, and urogenital and renal malformations.

Fetal screening for major congenital malformations is recommended to enable early detection and timely intervention in patients treated with any ASM during pregnancy. Patients receiving phenobarbital during pregnancy should also undergo fetal cardiac screening.

Valproic acid and topiramate are also associated with infants born small for gestational age. To enable early identification of fetal growth restriction, patients taking either drug should be monitored. In addition, children exposed to these medications in utero should be followed during childhood to ensure they are meeting age-appropriate developmental milestones. 

Folic acid taken during pregnancy can reduce the prevalence of negative neurodevelopment outcomes, but not major congenital malformations, Dr. Pack noted. 

“Due to limited available data, we were unable to define an optimal dose of folic acid supplementation beyond at least 0.4 mg/d,” Dr. Pack said. “Future studies, preferably randomized clinical trials, are needed to better define the optimal dose.”

She emphasized that epilepsy is one of the most common neurologic disorders, and 1 in 5 of those affected are people of childbearing potential. Understanding the effects of ASMs on pregnancy outcomes is critical for physicians who manage these patients.
 

Uncertainty Remains 

Commenting for this news organization, Kimford Meador, MD, a professor in the Department of Neurology and Neurological Sciences at Stanford University School of Medicine, Stanford Neuroscience Health Center, Palo Alto, California, noted that the new guidelines reflect the gains in knowledge since 2009 and that the recommendations are “reasonable, based on available data.”

However, “one very important point is how much remains unknown,” said Dr. Meador, who was not involved in writing the current guideline. “Many ASMs have no data, and several have estimates based on small samples or a single observational study.” Thus, “the risks for the majority of ASMs are uncertain.”

Given that randomized trials “are not possible in this population, and that all observational studies are subject to residual confounding, a reliable signal across multiple studies in humans is required to be certain of findings,” he stated.

This practice guideline was developed with financial support from the American Academy of Neurology. Dr. Pack serves on the editorial board for the journal Epilepsy Currents, receives royalties from UpToDate, receives funding from the National Institutes of Health for serving as coinvestigator and site principal investigator for the Maternal Outcomes and Neurodevelopmental Effects of Antiepileptic Drugs (MONEAD) study, and receives funding from Bayer for serving as a co-investigator on a study on women with epilepsy initiating a progestin intrauterine device. One of Dr. Pack’s immediate family members has received personal compensation for serving as an employee of REGENEXBIO. The other authors’ disclosures are listed on the original paper. Dr. Meador has received research support from the National Institutes of Health, Veterans Administration, Eisai, Inc, and Suno Medtronic Navigation, Inc, and the Epilepsy Study Consortium pays Dr. Meador’s university for his research on the Human Epilepsy Project and consultant time related to Eisai, UCB Pharma, and Xenon.

A version of this article first appeared on Medscape.com.

Global Analysis Identifies Drugs Associated With SJS-TEN in Children

TOPLINE:

Antiepileptic and anti-infectious agents were the most common drugs associated with Stevens-Johnson syndrome (SJS)/toxic epidermal necrolysis (TEN) in children in an analysis of a World Health Organization (WHO) database.

METHODOLOGY:

  • SJS and TEN are rare, life-threatening mucocutaneous reactions mainly associated with medications, but large pharmacovigilance studies of drugs associated with SJS-TEN in the pediatric population are still lacking.
  • Using the WHO’s pharmacovigilance database (VigiBase), which contains individual case safety reports from January 1967 to July 2022, researchers identified 7342 adverse drug reaction reports of SJS-TEN in children (younger than 18 years; median age, 9 years) across all six continents. Median onset was 5 days, and 3.2% of cases were fatal.
  • They analyzed drugs reported as suspected treatments, and for each molecule, they performed a case–non-case study to assess a potential pharmacovigilance signal by computing the information component (IC).
  • A positive IC value suggests more frequent reporting of a specific drug-adverse reaction pair than expected. A positive IC025, the lower bound of the 95% credibility interval and a traditional threshold for statistical signal detection, is suggestive of a potential pharmacovigilance signal (see the sketch after this list).
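For readers unfamiliar with disproportionality statistics, here is a minimal sketch of the information component with the common +0.5 shrinkage and a widely used approximation for IC025. The counts are hypothetical, and the exact VigiBase computation is more involved.

```python
# A minimal sketch of the information component (IC) for one drug-reaction
# pair, with hypothetical counts. The +0.5 shrinkage and the IC025
# approximation follow common pharmacovigilance practice.
import math

n_total = 30_000_000   # all reports in the database (hypothetical)
n_drug = 120_000       # reports mentioning the drug
n_reaction = 40_000    # reports mentioning SJS-TEN
n_both = 900           # reports mentioning both

n_expected = n_drug * n_reaction / n_total
ic = math.log2((n_both + 0.5) / (n_expected + 0.5))

# Approximate lower bound of the 95% credibility interval:
ic025 = ic - 3.3 * (n_both + 0.5) ** -0.5 - 2.0 * (n_both + 0.5) ** -1.5
print(round(ic, 2), round(ic025, 2))  # a positive IC025 suggests a signal
```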

TAKEAWAY:

  • Overall, 165 drugs were associated with a diagnosis of SJS-TEN; antiepileptic and anti-infectious drugs were the most common drug classes represented.
  • The five most frequently reported drugs were carbamazepine (11.7%), lamotrigine (10.6%), sulfamethoxazole-trimethoprim (9%), acetaminophen (8.4%), and phenytoin (6.6%). The five drugs with the highest IC025 were lamotrigine, carbamazepine, phenobarbital, phenytoin, and nimesulide.
  • All antiepileptics, many antibiotic families, dapsone, antiretroviral drugs, some antifungal drugs, and nonsteroidal anti-inflammatory drugs were identified in reports, with penicillins the most frequently reported antibiotic family and sulfonamides having the strongest pharmacovigilance signal.
  • Vaccines were not associated with significant signals.

IN PRACTICE:

The study provides an update on “the spectrum of drugs potentially associated with SJS-TEN in the pediatric population,” the authors concluded, and “underlines the importance of reporting to pharmacovigilance the suspicion of this severe side effect of drugs with the most precise and detailed clinical description possible.”

SOURCE:

The study, led by Pauline Bataille, MD, of the Department of Pediatric Dermatology, Hôpital Necker-Enfants Malades, Paris City University, France, was published online in the Journal of the European Academy of Dermatology and Venereology.

LIMITATIONS:

Limitations include the possibility that some cases could have had an infectious or idiopathic cause not related to a drug and the lack of detailed clinical data in the database.

DISCLOSURES:

This study did not receive any funding. The authors declared no conflict of interest.

A version of this article first appeared on Medscape.com.

Lecanemab’s Promise and Peril: Alzheimer’s Treatment Dilemma

Clinicians interested in treating patients with symptoms of mild cognitive impairment or mild dementia should carefully analyze the potential benefits and harms of monoclonal amyloid beta therapy, including likelihood of side effects and overall burden on the patient, according to researchers at the annual meeting of the American Geriatrics Society (AGS). 

Lecanemab (Leqembi) may help some patients by lowering the level of beta-amyloid protein in the brain. Results from a phase 3 trial presented at the conference showed participants with Alzheimer’s disease had a 27% slower progression of the disease compared with placebo.

But clinicians must weigh that advantage against risks and contraindications, according to Esther Oh, MD, PhD, an associate professor in the Division of Geriatric Medicine and Gerontology and co-director of the Johns Hopkins Memory and Alzheimer’s Treatment Center, Johns Hopkins University, Baltimore, Maryland, who spoke during a plenary session. Lecanemab gained accelerated approval by the US Food and Drug Administration in January 2023 and full approval in July 2023.

The results from CLARITY, an 18-month, multicenter, double-blind trial involving 1795 participants aged 50-90 years, showed that the treatment-placebo difference did not meet the threshold for a minimal clinically important difference (MCID) in mild cognitive impairment or mild Alzheimer’s disease.
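
To see where the 27% figure and the MCID concern come from, the arithmetic below uses the Clinical Dementia Rating-Sum of Boxes (CDR-SB) point estimates published for CLARITY AD; the two input numbers are assumptions drawn from the trial report, not from this presentation.

```python
# CDR-SB worsening over 18 months, per the published CLARITY AD results
# (assumed inputs; this presentation did not restate them)
placebo_change = 1.66     # points, placebo group
lecanemab_change = 1.21   # points, lecanemab group

absolute_difference = placebo_change - lecanemab_change  # 0.45 points
relative_slowing = absolute_difference / placebo_change  # ~0.27

print(f"{absolute_difference:.2f} points less worsening "
      f"({relative_slowing:.0%} slower progression)")
# Commonly cited MCID estimates for CDR-SB are roughly 1 point for MCI
# and 1-2 points for mild AD, so a 0.45-point difference falls short.
```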

Even more concerning to Dr. Oh was the rate of amyloid-related imaging abnormalities (ARIA), which can manifest as brain edema (ARIA-E, 12.6% of treated participants) or hemorrhage (ARIA-H, 17.3%). Almost 85% of cases were asymptomatic. 

The risk for abnormalities indicates that thrombolytics are contraindicated for patients taking the drug, according to Dr. Oh. 

“Appropriate use recommendations exclude vitamin K antagonists such as warfarin, direct oral anticoagulants and heparin, although aspirin and other antiplatelet agents are allowed,” Dr. Oh said during the presentation.

Blood biomarkers, PET imaging, and levels of amyloid-beta proteins in cerebrospinal fluid are used to determine eligibility for lecanemab. However, tau biomarkers may indicate signs of cognitive impairment decades prior to symptoms. Some evidence indicates that the drug may be more effective in individuals with low tau levels that are evident in earlier stages of disease. Tau can also be measured in cerebrospinal fluid; however, “we do not factor in tau protein as a biomarker for treatment eligibility, but this may become an important biomarker in the future,” Dr. Oh said.

Lecanemab is cost-prohibitive for many patients, with an annual price tag of $26,000. Treatment also requires intravenous infusions every 2 weeks, PET imaging, lab work, multiple MRIs, and potentially APOE4 testing.

Medicare covers the majority of services, but patients are responsible for deductibles and copays, an estimated $7000 annually, according to Shari Ling, MD, deputy chief medical officer with the US Centers for Medicare & Medicaid Services, who also spoke during the session. Supplemental or other insurance, such as Medicaid, is not included in this estimate.

The Medicare population is growing more complex over time, Dr. Ling said. In 2021, 54% of beneficiaries had five or more comorbidities, which can affect eligibility for lecanemab. 

“Across the healthcare system, we are learning what is necessary for coordination of delivery, for evaluation of people who receive these treatments, and for the care that is not anticipated,” Dr. Ling noted.

Neither speaker reported any financial conflicts of interest.

A version of this article first appeared on Medscape.com.

Lower Urinary Tract Symptoms Associated With Poorer Cognition in Older Adults

Lower urinary tract symptoms were significantly associated with lower scores on measures of cognitive impairment in older adults, based on data from approximately 10,000 individuals.

“We know that lower urinary tract symptoms are very common in aging men and women”; however, older adults often underreport symptoms and avoid seeking treatment, Belinda Williams, MD, of the University of Alabama at Birmingham, said in a presentation at the annual meeting of the American Geriatrics Society.

“Evidence also shows us that the incidence of lower urinary tract symptoms (LUTS) is higher in patients with dementia,” she said. However, the association between cognitive impairment and LUTS has not been well studied, she said.

To address this knowledge gap, Dr. Williams and colleagues reviewed data from older adults with and without LUTS who were enrolled in the REasons for Geographic and Racial Differences in Stroke (REGARDS) study, a cohort study including 30,239 Black or White adults aged 45 years and older who completed telephone or in-home assessments in 2003-2007 and in 2013-2017.

The study population included 6062 women and 4438 men who responded to questionnaires about LUTS and completed several cognitive tests via telephone in 2019-2020. The tests evaluated verbal fluency, executive function, and memory, and included the Six-Item Screener, Animal Naming, Letter F naming, and word list learning; lower scores indicated poorer cognitive performance.

Participants who met the criteria for LUTS were categorized as having mild, moderate, or severe symptoms.

The researchers controlled for age, race, education, income, and urban/rural setting in a multivariate analysis. The mean ages of the women and men were 69 years and 63 years, respectively; 41% and 32% were Black, 59% and 68% were White.
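
As a rough illustration of what such an adjusted analysis can look like, here is a minimal sketch of a logistic model of cognitive impairment on LUTS severity with the covariates named above; the file name and column names are hypothetical stand-ins, not the actual REGARDS variables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis file: one row per participant
df = pd.read_csv("luts_cognition.csv")

# Cognitive impairment (0/1) modeled on LUTS severity (none/mild/
# moderate/severe), adjusted for the covariates named in the article
model = smf.logit(
    "cog_impaired ~ C(luts_severity) + age + C(race) + C(education)"
    " + C(income) + C(urban_rural)",
    data=df,
).fit()

# Exponentiated coefficients are adjusted odds ratios
print(np.exp(model.params))
```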

Overall, 70% of women and 62% of men reported LUTS; 6.2% and 8.2%, respectively, met criteria for cognitive impairment. The association between cognitive impairment and LUTS was statistically significant for all specific tests (P < .01), but not for the global cognitive domain tests.

Black men were more likely to report LUTS than White men, but LUTS reports were similar between Black and White women.

Moderate LUTS was the most common degree of severity for men and women (54% and 64%, respectively).

The most common symptom overall was pre-toilet leakage (urge urinary incontinence), reported by 94% of women and 91% of men. The next most common symptoms for men and women were nocturia and urgency.

“We found that, across the board, in all the cognitive tests, LUTS were associated with lower cognitive test scores,” Dr. Williams said in her presentation. Little difference was seen on the Six-Item Screener, she noted, but when the researchers reanalyzed the data using scores lower than 4 to indicate cognitive impairment, they found a significant association with LUTS.

The results, which showed that LUTS were consistently associated with lower cognitive test scores for verbal fluency, executive function, and memory, are applicable in clinical practice, Dr. Williams said.

“Recognizing the subtle changes in cognition among older adults with LUTS may impact treatment decisions,” she said. “For example, we can encourage and advise our patients to be physically and cognitively active and to avoid anticholinergic medications.”

Next steps for research include analyzing longitudinal changes in cognition among participants with and without LUTS, said Dr. Williams.

During a question-and-answer session, Dr. Williams agreed with a comment that incorporating cognitive screening strategies into LUTS clinical pathways might be helpful, such as conducting a baseline Montreal Cognitive Assessment (MoCA) in patients with LUTS. “Periodic repeat MoCAs thereafter can help assess decline in cognition,” she said.

The study was supported by the National Institute of Neurological Disorders and Stroke and the National Institute on Aging. The researchers had no financial conflicts to disclose.


High-Potency Cannabis Tied to Impaired Brain Development, Psychosis, Cannabis-Use Disorder

It’s becoming clear that the adolescent brain is particularly vulnerable to cannabis, especially today’s higher-potency products, which put teens at risk for impaired brain development; mental health issues, including psychosis; and cannabis-use disorder (CUD). 

That was the message delivered by Yasmin Hurd, PhD, director of the Addiction Institute at Mount Sinai in New York, during a press briefing at the American Psychiatric Association (APA) 2024 annual meeting.

“We’re actually in historic times in that we now have highly concentrated, highly potent cannabis products that are administered in various routes,” Dr. Hurd told reporters. 

Tetrahydrocannabinol (THC) concentrations in cannabis products have increased over the years, from around 2%-4% to 15%-24% now, Dr. Hurd noted.

The impact of high-potency cannabis products, and the increased risk for CUD and mental health problems that accompanies them, particularly in adolescents, “must be taken seriously, especially in light of the current mental health crisis,” Dr. Hurd and colleagues wrote in a commentary on the developmental trajectory of CUD published simultaneously in the American Journal of Psychiatry.
 

Dramatic Increase in Teen Cannabis Use

A recent study from Oregon Health & Science University showed that adolescent cannabis abuse in the United States has increased dramatically, by about 245%, since 2000. 

“Drug abuse is often driven by what is in front of you,” Nora Volkow, MD, director of the National Institute on Drug Abuse, noted in an interview. 

“Right now, cannabis is widely available. So, guess what? Cannabis becomes the drug that people take. Nicotine is much harder to get. It is regulated to a much greater extent than cannabis, so fewer teenagers are consuming nicotine than are consuming cannabis,” Dr. Volkow said. 

Cannabis exposure during neurodevelopment has the potential to alter the endocannabinoid system, which in turn, can affect the development of neural pathways that mediate reward; emotional regulation; and multiple cognitive domains including executive functioning and decision-making, learning, abstraction, and attention — all processes central to substance use disorder and other psychiatric disorders, Dr. Hurd said at the briefing.

Dr. Volkow said that cannabis use in adolescence and young adulthood is “very concerning because that’s also the age of risk for psychosis, particularly schizophrenia, with one study showing that use of cannabis in high doses can trigger psychotic episodes, particularly among young males.”

Dr. Hurd noted that not all young people who use cannabis develop CUD, “but a significant number do,” and large-scale studies have consistently reported two main factors associated with CUD risk.

The first is age: Both earlier onset of use and more frequent use at a young age raise risk. Those who start using cannabis before age 16 years are at the highest risk for CUD. The risk for CUD also increases significantly among youth who use cannabis at least weekly, with the highest prevalence among youth who use cannabis daily. One large study linked increased frequency of use with up to a 17-fold increased risk for CUD.

The second factor consistently associated with the risk for CUD is biologic sex, with CUD rates typically higher in male individuals.
 

Treatment Challenges

For young people who develop CUD, access to and uptake of treatment can be challenging.

“Given that the increased potency of cannabis and cannabinoid products is expected to increase CUD risk, it is disturbing that less than 10% of youth who meet the criteria for a substance use disorder, including CUD, receive treatment,” Dr. Hurd and colleagues point out in their commentary. 

Another challenge is that treatment strategies for CUD are currently limited and consist mainly of motivational enhancement and cognitive-behavioral therapies. 

“Clearly new treatment strategies are needed to address the mounting challenge of CUD risk in teens and young adults,” Dr. Hurd and colleagues wrote. 

Summing up, Dr. Hurd told reporters, “We now know that most psychiatric disorders have a developmental origin, and the adolescent time period is a critical window for cannabis use disorder risk.”

Yet, on a positive note, the “plasticity of the developing brain that makes it vulnerable to cannabis use disorder and psychiatric comorbidities also provides an opportunity for prevention and early intervention to change that trajectory,” Dr. Hurd said. 

The changing legal landscape of cannabis — the US Drug Enforcement Administration is moving forward with plans to move marijuana from a Schedule I to a Schedule III controlled substance under the Controlled Substances Act — makes addressing these risks all the more timely. 

“As states vie to leverage tax dollars from the growing cannabis industry, a significant portion of such funds must be used for early intervention/prevention strategies to reduce the impact of cannabis on the developing brain,” Dr. Hurd and colleagues wrote. 

This research was supported in part by the National Institute on Drug Abuse and the National Institutes of Health. Dr. Hurd and Dr. Volkow have no relevant disclosures. 

A version of this article appeared on Medscape.com.

Widespread, Long-Held Practice in Dementia Called Into Question

Hospitalized patients with dementia and dysphagia are often prescribed a “dysphagia diet,” made up of texture-modified foods and thickened liquids in an effort to reduce the risk for aspiration or other problems. However, a new study calls this widespread and long-held practice into question.

Investigators found no evidence that the use of thickened liquids reduced mortality or respiratory complications, such as pneumonia, aspiration, or choking, compared with thin-liquid diets in patients with Alzheimer’s disease and related dementias (ADRD) and dysphagia. Patients receiving thick liquids were less likely to be intubated, but they were actually more likely to have respiratory complications.

“When hospitalized patients with Alzheimer’s disease and related dementias are found to have dysphagia, our go-to solution is to use a thick liquid diet,” senior author Liron Sinvani, MD, with the Feinstein Institutes for Medical Research, Manhasset, New York, said in a news release.

“However, there is no concrete evidence that thick liquids improve health outcomes, and we also know that thick liquids can lead to decreased palatability, poor oral intake, dehydration, malnutrition, and worse quality of life,” added Dr. Sinvani, director of the geriatric hospitalist service at Northwell Health in New York.

The study was published online in JAMA Internal Medicine.
 

Challenging a Go-To Solution

The researchers compared outcomes in a propensity score-matched cohort of patients with ADRD and dysphagia (mean age, 86 years; 54% women) receiving mostly thick liquids versus thin liquids during their hospitalization. There were 4458 patients in each group.
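
For orientation, here is a minimal sketch of 1:1 nearest-neighbor propensity-score matching of the kind described above; the file name, covariate list, numeric coding, and matching-with-replacement shortcut are all simplifying assumptions, not the study's actual procedure.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

# Hypothetical admissions data: one row per patient with ADRD and
# dysphagia; covariate columns assumed numeric-coded
df = pd.read_csv("adrd_admissions.csv")
covariates = ["age", "sex", "comorbidity_count", "illness_severity"]

# 1. Model each patient's propensity to receive thick liquids
ps = LogisticRegression(max_iter=1000).fit(df[covariates], df["thick_liquids"])
df["ps"] = ps.predict_proba(df[covariates])[:, 1]

# 2. Match each thick-liquids patient to the nearest thin-liquids
#    patient on the propensity score (with replacement, for brevity)
treated = df[df["thick_liquids"] == 1]
control = df[df["thick_liquids"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]])
_, idx = nn.kneighbors(treated[["ps"]])
matched = pd.concat([treated, control.iloc[idx.ravel()]])

# 3. Compare outcomes within the matched cohort
print(matched.groupby("thick_liquids")["hospital_mortality"].mean())
```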

They found no significant difference in hospital mortality between the thick liquids and thin liquids groups (hazard ratio [HR], 0.92; P = .46).

Patients receiving thick liquids were less likely to require intubation (odds ratio [OR], 0.66; 95% CI, 0.54-0.80) but were more likely to develop respiratory complications (OR, 1.73; 95% CI, 1.56-1.91).

The two groups did not differ significantly in terms of risk for dehydration, hospital length of stay, or rate of 30-day readmission.

“This cohort study emphasizes the need for prospective studies that evaluate whether thick liquids are associated with improved clinical outcomes in hospitalized patients with ADRD and dysphagia,” the authors wrote.

Because few patients received a Modified Barium Swallow Study at baseline, researchers were unable to confirm the presence of dysphagia or account for dysphagia severity and impairment. It’s possible that patients in the thick liquid group had more severe dysphagia than those in the thin liquid group.

Another limitation is that the type of dementia and severity were not characterized. Also, the study could not account for factors like oral hygiene, immune status, and diet adherence that could impact risks like aspiration pneumonia.
 

Theoretical Benefit, No Evidence

In an invited commentary on the study, Eric Widera, MD, of the University of California, San Francisco, noted that medicine is “littered with interventions that have become the standard of practice based on theoretical benefits without clinical evidence.”

One example is percutaneous endoscopic gastrostomy tubes for individuals with dysphagia and dementia.

“For decades, these tubes were regularly used in individuals with dementia on the assumption that bypassing the oropharyngeal route would decrease rates of aspiration and, therefore, decrease adverse outcomes like pressure ulcers, malnutrition, pneumonia, and death. However, similar to what we see with thickened liquids, evidence slowly built that this standard of practice was not evidence-based practice,” Dr. Widera wrote.

When thinking about thick liquid diets, Dr. Widera encouraged clinicians to “acknowledge the limitations of the evidence both for and against thickened-liquid diets.”

He also encouraged clinicians to “put yourself in the shoes of the patients who will be asked to adhere to this modified diet. For 12 hours, drink your tea, coffee, wine, and water as thickened liquids,” Dr. Widera suggested. “The goal is not to convince yourself never to prescribe thickened liquids, but rather to be mindful of how a thickened liquid diet affects patients’ liquid and food intake, how it changes the mouthfeel and taste of different drinks, and how it affects patients’ quality of life.”

Clinicians also should “proactively engage speech-language pathologists, but do not ask them if it is safe for a patient with dementia to eat or drink normally. Instead, ask what we can do to meet the patient’s goals and maintain quality of life given the current evidence base,” Dr. Widera wrote.

“For some, when the patient’s goals are focused on comfort, this may lead to a recommendation for thickened liquids if their use may resolve significant coughing distress after drinking thin liquids. Alternatively, even when the patient’s goals are focused on prolonging life, the risks of thickened liquids, including dehydration and decreased food and fluid intake, as well as the thin evidence for mortality improvement, will argue against their use,” Dr. Widera added.

Funding for the study was provided by grants from the National Institute on Aging and by the William S. Middleton Veterans Affairs Hospital, Madison, Wisconsin. Dr. Sinvani and Dr. Widera declared no relevant conflicts of interest.

A version of this article appeared on Medscape.com.

Publications
Topics
Sections

Hospitalized patients with dementia and dysphagia are often prescribed a “dysphagia diet,” made up of texture-modified foods and thickened liquids in an effort to reduce the risk for aspiration or other problems. However, a new study calls this widespread and long-held practice into question.

Investigators found no evidence that the use of thickened liquids reduced mortality or respiratory complications, such as pneumonia, aspiration, or choking, compared with thin-liquid diets in patients with Alzheimer’s disease and related dementias (ADRD) and dysphagia. Patients receiving thick liquids were less likely to be intubated, but they were actually more likely to have respiratory complications.

“When hospitalized patients with Alzheimer’s disease and related dementias are found to have dysphagia, our go-to solution is to use a thick liquid diet,” senior author Liron Sinvani, MD, with the Feinstein Institutes for Medical Research, Manhasset, New York, said in a news release.

“However, there is no concrete evidence that thick liquids improve health outcomes, and we also know that thick liquids can lead to decreased palatability, poor oral intake, dehydration, malnutrition, and worse quality of life,” added Dr. Sinvani, director of the geriatric hospitalist service at Northwell Health in New York.

The study was published online in JAMA Internal Medicine.
 

Challenging a Go-To Solution

The researchers compared outcomes in a propensity score-matched cohort of patients with ADRD and dysphagia (mean age, 86 years; 54% women) receiving mostly thick liquids versus thin liquids during their hospitalization. There were 4458 patients in each group.

They found no significant difference in hospital mortality between the thick liquids and thin liquids groups (hazard ratio [HR], 0.92; = .46).

Patients receiving thick liquids were less likely to require intubation (odds ratio [OR], 0.66; 95% CI, 0.54-0.80) but were more likely to develop respiratory complications (OR, 1.73; 95% CI, 1.56-1.91).

The two groups did not differ significantly in terms of risk for dehydration, hospital length of stay, or rate of 30-day readmission.

“This cohort study emphasizes the need for prospective studies that evaluate whether thick liquids are associated with improved clinical outcomes in hospitalized patients with ADRD and dysphagia,” the authors wrote.

Because few patients received a Modified Barium Swallow Study at baseline, researchers were unable to confirm the presence of dysphagia or account for dysphagia severity and impairment. It’s possible that patients in the thick liquid group had more severe dysphagia than those in the thin liquid group.

Another limitation is that the type of dementia and severity were not characterized. Also, the study could not account for factors like oral hygiene, immune status, and diet adherence that could impact risks like aspiration pneumonia.
 

Theoretical Benefit, No Evidence

In an invited commentary on the study, Eric Widera, MD, with University of California San Francisco, noted that medicine is “littered with interventions that have become the standard of practice based on theoretical benefits without clinical evidence”.

One example is percutaneous endoscopic gastrostomy tubes for individuals with dysphagia and dementia.

“For decades, these tubes were regularly used in individuals with dementia on the assumption that bypassing the oropharyngeal route would decrease rates of aspiration and, therefore, decrease adverse outcomes like pressure ulcers, malnutrition, pneumonia, and death. However, similar to what we see with thickened liquids, evidence slowly built that this standard of practice was not evidence-based practice,” Dr. Widera wrote.

When thinking about thick liquid diets, Dr. Widera encouraged clinicians to “acknowledge the limitations of the evidence both for and against thickened-liquid diets.”

He also encouraged clinicians to “put yourself in the shoes of the patients who will be asked to adhere to this modified diet. For 12 hours, drink your tea, coffee, wine, and water as thickened liquids,” Dr. Widera suggested. “The goal is not to convince yourself never to prescribe thickened liquids, but rather to be mindful of how a thickened liquid diet affects patients’ liquid and food intake, how it changes the mouthfeel and taste of different drinks, and how it affects patients’ quality of life.”

Clinicians also should “proactively engage speech-language pathologists, but do not ask them if it is safe for a patient with dementia to eat or drink normally. Instead, ask what we can do to meet the patient’s goals and maintain quality of life given the current evidence base,” Dr. Widera wrote.

“For some, when the patient’s goals are focused on comfort, this may lead to a recommendation for thickened liquids if their use may resolve significant coughing distress after drinking thin liquids. Alternatively, even when the patient’s goals are focused on prolonging life, the risks of thickened liquids, including dehydration and decreased food and fluid intake, as well as the thin evidence for mortality improvement, will argue against their use,” Dr. Widera added.

Funding for the study was provided by grants from the National Institute on Aging and by the William S. Middleton Veteran Affairs Hospital, Madison, Wisconsin. Dr. Sinvani and Dr. Widera declared no relevant conflicts of interest.

A version of this article appeared on Medscape.com .

Hospitalized patients with dementia and dysphagia are often prescribed a “dysphagia diet,” made up of texture-modified foods and thickened liquids in an effort to reduce the risk for aspiration or other problems. However, a new study calls this widespread and long-held practice into question.

Investigators found no evidence that the use of thickened liquids reduced mortality or respiratory complications, such as pneumonia, aspiration, or choking, compared with thin-liquid diets in patients with Alzheimer’s disease and related dementias (ADRD) and dysphagia. Patients receiving thick liquids were less likely to be intubated, but they were actually more likely to have respiratory complications.

“When hospitalized patients with Alzheimer’s disease and related dementias are found to have dysphagia, our go-to solution is to use a thick liquid diet,” senior author Liron Sinvani, MD, with the Feinstein Institutes for Medical Research, Manhasset, New York, said in a news release.

“However, there is no concrete evidence that thick liquids improve health outcomes, and we also know that thick liquids can lead to decreased palatability, poor oral intake, dehydration, malnutrition, and worse quality of life,” added Dr. Sinvani, director of the geriatric hospitalist service at Northwell Health in New York.

The study was published online in JAMA Internal Medicine.
 

Challenging a Go-To Solution

The researchers compared outcomes in a propensity score-matched cohort of patients with ADRD and dysphagia (mean age, 86 years; 54% women) receiving mostly thick liquids versus thin liquids during their hospitalization. There were 4458 patients in each group.

They found no significant difference in hospital mortality between the thick liquids and thin liquids groups (hazard ratio [HR], 0.92; = .46).

Patients receiving thick liquids were less likely to require intubation (odds ratio [OR], 0.66; 95% CI, 0.54-0.80) but were more likely to develop respiratory complications (OR, 1.73; 95% CI, 1.56-1.91).

The two groups did not differ significantly in terms of risk for dehydration, hospital length of stay, or rate of 30-day readmission.

“This cohort study emphasizes the need for prospective studies that evaluate whether thick liquids are associated with improved clinical outcomes in hospitalized patients with ADRD and dysphagia,” the authors wrote.

Because few patients received a Modified Barium Swallow Study at baseline, researchers were unable to confirm the presence of dysphagia or account for dysphagia severity and impairment. It’s possible that patients in the thick liquid group had more severe dysphagia than those in the thin liquid group.

Another limitation is that the type of dementia and severity were not characterized. Also, the study could not account for factors like oral hygiene, immune status, and diet adherence that could impact risks like aspiration pneumonia.
 

Theoretical Benefit, No Evidence

In an invited commentary on the study, Eric Widera, MD, with University of California San Francisco, noted that medicine is “littered with interventions that have become the standard of practice based on theoretical benefits without clinical evidence”.

One example is percutaneous endoscopic gastrostomy tubes for individuals with dysphagia and dementia.

“For decades, these tubes were regularly used in individuals with dementia on the assumption that bypassing the oropharyngeal route would decrease rates of aspiration and, therefore, decrease adverse outcomes like pressure ulcers, malnutrition, pneumonia, and death. However, similar to what we see with thickened liquids, evidence slowly built that this standard of practice was not evidence-based practice,” Dr. Widera wrote.

When thinking about thick liquid diets, Dr. Widera encouraged clinicians to “acknowledge the limitations of the evidence both for and against thickened-liquid diets.”

He also encouraged clinicians to “put yourself in the shoes of the patients who will be asked to adhere to this modified diet. For 12 hours, drink your tea, coffee, wine, and water as thickened liquids,” Dr. Widera suggested. “The goal is not to convince yourself never to prescribe thickened liquids, but rather to be mindful of how a thickened liquid diet affects patients’ liquid and food intake, how it changes the mouthfeel and taste of different drinks, and how it affects patients’ quality of life.”

Clinicians also should “proactively engage speech-language pathologists, but do not ask them if it is safe for a patient with dementia to eat or drink normally. Instead, ask what we can do to meet the patient’s goals and maintain quality of life given the current evidence base,” Dr. Widera wrote.

“For some, when the patient’s goals are focused on comfort, this may lead to a recommendation for thickened liquids if their use may resolve significant coughing distress after drinking thin liquids. Alternatively, even when the patient’s goals are focused on prolonging life, the risks of thickened liquids, including dehydration and decreased food and fluid intake, as well as the thin evidence for mortality improvement, will argue against their use,” Dr. Widera added.

Funding for the study was provided by grants from the National Institute on Aging and by the William S. Middleton Veterans Affairs Hospital, Madison, Wisconsin. Dr. Sinvani and Dr. Widera declared no relevant conflicts of interest.

A version of this article appeared on Medscape.com.


It Would Be Nice if Olive Oil Really Did Prevent Dementia

Article Type
Changed
Tue, 05/14/2024 - 10:03

This transcript has been edited for clarity.

As you all know by now, I’m always looking out for lifestyle changes that are both pleasurable and healthy. They are hard to find, especially when it comes to diet. My kids complain about this all the time: “When you say ‘healthy food,’ you just mean yucky food.” And yes, French fries are amazing, and no, we can’t have them three times a day.

So, when I saw an article claiming that olive oil reduces the risk for dementia, I was interested. I love olive oil; I cook with it all the time. But as is always the case in the world of nutritional epidemiology, we need to be careful. There are a lot of reasons to doubt the results of this study — and one reason to believe it’s true.

The study I’m talking about is “Consumption of Olive Oil and Diet Quality and Risk of Dementia-Related Death,” appearing in JAMA Network Open and following a well-trod formula in the nutritional epidemiology space.

Nearly 100,000 participants, all healthcare workers, filled out a food frequency questionnaire every 4 years with 130 questions touching on all aspects of diet: How often do you eat bananas, bacon, olive oil? Participants were followed for more than 20 years, and if they died, the cause of death was flagged as being dementia-related or not. Over that time frame there were around 38,000 deaths, of which 4751 were due to dementia.

The rest is just statistics. The authors show that those who reported consuming more olive oil were less likely to die from dementia — about 50% less likely, if you compare those who reported eating more than 7 grams of olive oil a day with those who reported eating none.
 

Is It What You Eat, or What You Don’t Eat?

And we could stop there if we wanted to; I’m sure big olive oil would be happy with that. Is there such a thing as “big olive oil”? But no, we need to dig deeper here because this study has the same problems as all nutritional epidemiology studies. Number one, no one is sitting around drinking small cups of olive oil. They consume it with other foods. And it was clear from the food frequency questionnaire that people who consumed more olive oil also consumed less red meat, more fruits and vegetables, more whole grains, more butter, and less margarine. And those are just the findings reported in the paper. I suspect that people who eat more olive oil also eat more tomatoes, for example, though data this granular aren’t shown. So, it can be really hard, in studies like this, to know for sure that it’s actually the olive oil that is helpful rather than some other constituent in the diet.

The flip side of that coin presents another issue. The food you eat is also a marker of the food you don’t eat. People who ate olive oil consumed less margarine, for example. At the time of this study, margarine was still adulterated with trans-fats, which a pretty solid evidence base suggests are really bad for your vascular system. So perhaps it’s not that olive oil is particularly good for you but that something else is bad for you. In other words, simply adding olive oil to your diet without changing anything else may not do anything.

The other major problem with studies of this sort is that people don’t consume food at random. The type of person who eats a lot of olive oil is simply different from the type of person who doesn’t. For one thing, olive oil is expensive. A 25-ounce bottle of olive oil is on sale at my local supermarket right now for $11.00. A similar-sized bottle of vegetable oil goes for $4.00.

Isn’t it interesting that food that costs more money tends to be associated with better health outcomes? (I’m looking at you, red wine.) Perhaps it’s not the food; perhaps it’s the money. We aren’t provided data on household income in this study, but we can see that the heavy olive oil users were less likely to be current smokers and they got more physical activity.

Now, the authors are aware of these limitations and do their best to account for them. In multivariable models, they adjust for other stuff in the diet, and even for income (sort of; they use census tract as a proxy for income, which is really a broad brush), and still find a significant though weakened association showing a protective effect of olive oil on dementia-related death. But still — adjustment is never perfect, and the small effect size here could definitely be due to residual confounding.
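
Residual confounding is easier to see in a toy simulation than in prose. In the sketch below, an unmeasured “wealth” variable drives both olive oil consumption and survival, the analyst only gets a noisy proxy for it (my stand-in for census tract income), and olive oil has no true effect at all. Every name and effect size here is invented for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 100_000

# Unmeasured confounder: wealth raises olive oil use AND independently lowers risk.
wealth = rng.normal(size=n)
olive_oil = (wealth + rng.normal(size=n) > 0).astype(float)
proxy = wealth + rng.normal(scale=2.0, size=n)  # noisy proxy, like census tract income

# Outcome depends on wealth only; olive oil has NO true effect in this simulation.
p = 1 / (1 + np.exp(-(-3.0 - 0.5 * wealth)))
death = rng.binomial(1, p)

for label, covs in [("unadjusted", [olive_oil]),
                    ("adjusted for proxy", [olive_oil, proxy])]:
    X = sm.add_constant(np.column_stack(covs))
    fit = sm.Logit(death, X).fit(disp=0)
    print(f"{label}: OR for olive oil = {np.exp(fit.params[1]):.2f}")

# Typical output: the unadjusted OR is well below 1 (olive oil looks protective),
# and adjusting for the noisy proxy moves it toward 1 but not all the way:
# residual confounding in miniature.
```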

Evidence More Convincing

Now, I did tell you that there is one reason to believe that this study is true, but it’s not really from this study.

It’s from the PREDIMED randomized trial.

This is nutritional epidemiology I can get behind. In PREDIMED, published in 2018, investigators in Spain randomized around 7500 participants at high cardiovascular risk to receive a liter of olive oil each week, mixed nuts, or small nonfood gifts, the idea being that if you have olive oil around, you’ll use it more. And people who were randomly assigned to get the olive oil had a 30% lower rate of cardiovascular events. A secondary analysis of that study found that the rate of development of mild cognitive impairment was 65% lower in those who were randomly assigned to olive oil. That’s an impressive result.

So, there might be something to this olive oil thing, but I’m not quite ready to add it to my “pleasurable things that are still good for you” list just yet. Though it does make me wonder: Can we make French fries in the stuff?
 

Dr. Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Conn. He has disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.


Inappropriate Medication Use Persists in Older Adults With Dementia

Article Type
Changed
Mon, 05/13/2024 - 16:46

Medications that could have a negative effect on cognition are often used by older adults with dementia, according to survey data representing approximately 13 million individuals, presented at the annual meeting of the American Geriatrics Society.

Classes of medications including anticholinergics, antipsychotics, benzodiazepines, and non-benzodiazepine sedatives (Z drugs) have been identified as potentially inappropriate medications (PIMs) in patients with dementia, according to The American Geriatrics Society Beers Criteria for Potentially Inappropriate Medication Use in Older Adults.

The medications that could worsen dementia or cognition are known as CogPIMs, said presenting author Caroline M. Mak, a doctor of pharmacy candidate at the University at Buffalo School of Pharmacy and Pharmaceutical Sciences, New York.

Previous research has characterized the prevalence of use of CogPIMs, but data connecting use of CogPIMs and healthcare use are lacking, Ms. Mak said.

Ms. Mak and colleagues conducted a cross-sectional analysis of data from 2011 to 2015 from the Medical Expenditure Panel Survey (MEPS), a national survey with data on medication and healthcare use. The researchers included survey respondents older than 65 years with dementia, representing approximately 13 million individuals after survey weighting.

Exposure to CogPIMs was defined as filling a prescription for one or more of the CogPIMs during the study period. Population estimates of the prevalence of use of the CogPIMs were created using survey-weighted procedures, and prevalence trends were assessed using the Cochran-Armitage test.
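
The Cochran-Armitage trend test is simple enough to compute directly, so here is a short sketch of the textbook version. The yearly counts are made up for illustration and are not the MEPS estimates, which would also require the survey weights the authors applied.

```python
import numpy as np
from scipy.stats import norm

def cochran_armitage(users, totals, scores=None):
    """Two-sided Cochran-Armitage test for a linear trend in proportions."""
    r = np.asarray(users, float)    # e.g., benzodiazepine users per survey year
    n = np.asarray(totals, float)   # respondents per survey year
    s = np.arange(len(r), dtype=float) if scores is None else np.asarray(scores, float)

    p_bar = r.sum() / n.sum()                    # overall proportion across years
    t = np.sum(s * (r - n * p_bar))              # trend statistic
    var = p_bar * (1 - p_bar) * (np.sum(n * s**2) - np.sum(n * s)**2 / n.sum())
    z = t / np.sqrt(var)
    return z, 2 * norm.sf(abs(z))

# Hypothetical counts for 2011-2015 (NOT the study's data):
z, p = cochran_armitage(users=[89, 110, 130, 150, 164], totals=[1000] * 5)
print(f"z = {z:.2f}, p = {p:.4f}")  # a steadily rising proportion yields a small p
```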

Overall, the prevalence was 15.9%, 11.5%, 7.5%, and 3.8% for use of benzodiazepines, anticholinergics, antipsychotics, and Z drugs, respectively, during the study period.

Of these, benzodiazepines showed a significant trend with an increase in prevalence from 8.9% in 2011 to 16.4% in 2015 (P = .02).

The odds of hospitalization were more than twice as high in individuals who reported using Z drugs (odds ratio [OR], 2.57; P = .02), based on logistic regression. In addition, exposure to antipsychotics was significantly associated with an increased rate of hospitalization (incidence rate ratio [IRR], 1.51; P = .02), based on a binomial model.

The findings were limited by several factors including the cross-sectional design, reliance on self-reports, and the lack of more recent data.

However, the results show that CogPIMs are often used by older adults with dementia, and antipsychotics and Z drugs could be targets for interventions to prevent harm from medication interactions and side effects, the researchers concluded.
 

Findings Highlight Need for Drug Awareness

The current study is important because of the expansion in the aging population and an increase in the number of patients with dementia, Ms. Mak said in an interview. “In both our older population and dementia patients, there are certain medication considerations that we need to take into account, and certain drugs that should be avoided if possible,” she said. Clinicians have been trying to use the Beers criteria to reduce potential medication harm, she noted. “One group of investigators (Hilmer et al.) has proposed a narrower focus on anticholinergic and sedative/hypnotic medication in the Drug Burden Index (DBI); the CogPIMs are a subset of both approaches (Beers and DBI) and represent a collection of medications that pose potential risks to our patients,” said Ms. Mak.

Continued reassessment is needed on appropriateness of anticholinergics, Z drugs, benzodiazepines, and antipsychotics in older patients with dementia, she added.

“Even though the only group to have a significant increase in prevalence [of use] was the benzodiazepine group, we didn’t see a decrease in any of the other groups,” said Ms. Mak. The current research provides a benchmark for CogPIMs use that can be monitored in the future for increases or, ideally, decreases, she said.
 

Part of a Bigger Picture

The current study is part of the work of Team Alice, a national deprescribing group affiliated with the University at Buffalo that was inspired by the tragic death of Alice Brennan, triggered by preventable medication harm, Ms. Mak said in an interview. “Team Alice consists of an array of academic, primary care, health plan, and regional health information partners that have designed patient-driven interventions to reduce medication harm, especially within primary care settings,” she said. “Their mission is to save people like Alice by pursuing multiple strategies to deprescribe unsafe medication, reduce harm, and foster successful aging. By characterizing the use of CogPIMs, we can design better intervention strategies,” she said.

Although Ms. Mak was not surprised that benzodiazepines emerged as the most commonly used drug group, she was surprised by the increase in their use during the study period.

“Unfortunately, our dataset was not rich enough to include reasons for this increase,” she said. In practice, “I have seen patients getting short-term, as needed, prescriptions for a benzodiazepine to address the anxiety and/or insomnia after the loss of a loved one; this may account for a small proportion of benzodiazepine use that appears to be inappropriate because of a lack of associated appropriate diagnosis,” she noted.

Also, the findings of increased hospitalization associated with Z drugs raise concerns, Ms. Mak said. Although the findings are consistent with other research, they illustrate the need for further investigation to identify strategies to prevent this harm, she said. “Not finding associations with hospitalization related to benzodiazepines or anticholinergics was a mild surprise,” Ms. Mak said in an interview. “However, while we know that these drugs can have a negative effect on older people, the effects may not have been severe enough to result in hospitalizations,” she said.

Looking ahead, Ms. Mak said she would like to see the study rerun with a more current data set, especially with regard to benzodiazepines and antipsychotics.
 

Seek Strategies to Reduce Medication Use

The current study was notable for its community-based population and attention to hospitalizations, Shelly Gray, PharmD, a professor of pharmacy at the University of Washington School of Pharmacy, said in an interview.

“Most studies examining potentially inappropriate medications that may impair cognition have been conducted in nursing homes, while this study focuses on community dwelling older adults where most people with dementia live,” said Dr. Gray, who served as a moderator for the session in which the study was presented.

In addition, “A unique aspect of this study was to examine how these medications are related to hospitalizations,” she said.

Given recent efforts to reduce use of potentially inappropriate medications in people with dementia, the increase in prevalence of use over the study period was surprising, especially for benzodiazepines, said Dr. Gray.

In clinical practice, “health care providers should continue to look for opportunities to deprescribe medications that may worsen cognition in people with dementia,” she said. However, more research is needed to examine trends in the years beyond 2015 for a more contemporary picture of medication use in this population, she noted.

The study received no outside funding. The researchers and Dr. Gray had no financial conflicts to disclose.


COVID Vaccines and New-Onset Seizures: New Data

Article Type
Changed
Fri, 05/10/2024 - 11:31

There is no association between the SARS-CoV-2 vaccine and the risk for new-onset seizure, data from a new meta-analysis of six randomized, placebo-controlled clinical trials (RCTs) showed.

Results of the pooled analysis, which included 63,500 individuals who received a SARS-CoV-2 vaccine and 55,000 who received placebo, showed no significant difference between the two groups with respect to new-onset seizures at 28- or 43-day follow-up.

“Regarding new-onset seizures in the general population, there was no statistically significant difference in risk for seizure incidence among vaccinated individuals vs placebo recipients, according to our meta-analysis,” wrote the investigators, led by Ali Rafati, MD, MPH, of Iran University of Medical Sciences in Tehran.

The findings were published online in JAMA Neurology.

Mixed Results

Results from previous research have been mixed regarding the link between the SARS-CoV-2 vaccination and new-onset seizures, with some showing an association.

To learn more about the possible association between the vaccines and new-onset seizures, the researchers conducted a literature review and identified six RCTs that measured adverse events following SARS-CoV-2 vaccinations (including messenger RNA, viral vector, and inactivated virus) vs placebo or other vaccines.

While five of the studies defined new-onset seizures according to the Medical Dictionary for Regulatory Activities, trial investigators in the sixth RCT assessed and determined new-onset seizures in participants.

Participants received two vaccine doses 28 days apart in five of the RCTs and a single dose in the sixth.

The research team searched the data for new-onset seizure in the 28 days following one or both COVID vaccinations.

No Link Found

After comparing the incidence of new-onset seizure between the 63,500 vaccine recipients (nine new-onset seizures, 0.014%) and 55,000 placebo recipients (one new-onset seizure, 0.002%), investigators found no significant difference between the two groups (odds ratio [OR], 2.70; 95% CI, 0.76-9.57; P = .12).
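
To see how a pooled odds ratio behaves when events are this rare, here is a minimal sketch of fixed-effect (inverse-variance) pooling of 2x2 tables with the standard Haldane-Anscombe correction for zero cells. The per-trial counts are invented for illustration, and this is not necessarily the exact estimator the authors used.

```python
import numpy as np

def pooled_or(tables):
    """Fixed-effect (inverse-variance) pooling of 2x2 tables on the log-OR scale.
    Each table is (events_vax, n_vax, events_placebo, n_placebo).
    A 0.5 continuity correction handles zero cells (common with rare seizures)."""
    log_ors, weights = [], []
    for a, n1, c, n0 in tables:
        b, d = n1 - a, n0 - c
        a, b, c, d = (x + 0.5 for x in (a, b, c, d))  # Haldane-Anscombe correction
        log_ors.append(np.log(a * d / (b * c)))
        weights.append(1 / (1/a + 1/b + 1/c + 1/d))   # inverse Woolf variance

    log_ors, weights = np.array(log_ors), np.array(weights)
    pooled = np.sum(weights * log_ors) / np.sum(weights)
    se = np.sqrt(1 / np.sum(weights))
    return np.exp([pooled, pooled - 1.96 * se, pooled + 1.96 * se])  # OR, 95% CI

# Hypothetical per-trial counts (NOT the actual trials' data):
print(pooled_or([(3, 20000, 0, 18000), (4, 25000, 1, 22000), (2, 18500, 0, 15000)]))
```

With only a handful of events spread across trials, the confidence interval stays wide, which is why a point estimate above 1 can still be statistically indistinguishable from no effect.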

Investigators also sliced the data several ways to see whether it would yield different results. When they analyzed the data by vaccine platform (viral vector) and age group (children), they still observed no significant differences in new-onset seizures.

The researchers also searched for data beyond the month following the injection to encompass the entire blinded phase, so they analyzed the results of three RCTs that reported adverse events up to 162 days after the vaccine.

After pooling the results from the three studies, investigators again found no statistically significant difference between the vaccine and placebo groups in new-onset seizures (OR, 2.31; 95% CI, 0.86-3.23; P > .99).

Study limitations included missing information on vaccine doses and on risk factors for the development of seizures. Also, because the RCTs included in the meta-analysis were conducted at different times, the SARS-CoV-2 vaccines may have differed in composition and efficacy.

“The global vaccination drive against SARS-CoV-2 has been a monumental effort in combating the pandemic. SARS-CoV-2 vaccinations that are now available appear safe and appropriate,” the authors wrote.

There were no study funding sources or disclosures reported.

A version of this article appeared on Medscape.com.
