TBI Significantly Increases Mortality Rate Among Veterans With Epilepsy
Traumatic brain injury (TBI) significantly increases the mortality rate among veterans with epilepsy, according to recent research published in Epilepsia.
In a retrospective cohort study, Ali Roghani, PhD, of the division of epidemiology at the University of Utah School of Medicine in Salt Lake City, and colleagues evaluated 938,890 veterans who served in the US military after the September 11 attacks and received care between 2000 and 2019 in the Defense Health Agency and the Veterans Health Administration. Overall, 27,436 veterans met criteria for a diagnosis of epilepsy, 264,890 had received a diagnosis of TBI, and the remaining patients had neither epilepsy nor TBI.
Among veterans without epilepsy, 248,714 had a TBI diagnosis. In the group with epilepsy, 10,358 veterans experienced a TBI before their epilepsy diagnosis, 1598 were diagnosed with a TBI within 6 months of epilepsy, and 4310 had a TBI more than 6 months after an epilepsy diagnosis. The researchers assessed all-cause mortality in each group, calculating cumulative mortality rates compared with the group of veterans who had no TBI and no epilepsy diagnosis.
Dr. Roghani and colleagues found a significantly higher mortality rate among veterans who developed epilepsy compared with a control group with neither epilepsy nor TBI (6.26% vs. 1.12%; P < .01); a majority of the veterans who died were White (67.4%) men (89.9%). Compared with deceased veterans, nondeceased veterans were significantly more likely to have a history of deployment (70.7% vs. 64.8%; P < .001), were less likely to have served in the Army (52.2% vs. 55.0%; P < .001), and were more likely to have reached the rank of officer or warrant officer (8.1% vs. 7.6%; P = .014).
There were also significant differences in clinical characteristics between nondeceased and deceased veterans, including a higher rate of substance abuse disorder, smoking history, cardiovascular disease, stroke, transient ischemic attack, cancer, liver disease, kidney disease, or other injury as well as overdose, suicidal ideation, and homelessness. “Most clinical conditions were significantly different between deceased and nondeceased in part due to the large cohort size,” the researchers said.
After performing Cox regression analyses, the researchers found a higher mortality risk, compared with veterans who had neither epilepsy nor a TBI, among those who developed a TBI within 6 months of an epilepsy diagnosis (hazard ratio [HR], 5.02; 95% CI, 4.21-5.99), those who had a TBI prior to epilepsy (HR, 4.25; 95% CI, 3.89-4.58), those who had epilepsy alone (HR, 4.00; 95% CI, 3.67-4.36), those who had a TBI more than 6 months after an epilepsy diagnosis (HR, 2.49; 95% CI, 2.17-2.85), and those who had a TBI alone (HR, 1.30; 95% CI, 1.25-1.36).
“The temporal relationship with TBI that occurred within 6 months after epilepsy diagnosis may suggest an increased vulnerability to accidents, severe injuries, or TBI resulting from seizures, potentially elevating mortality risk,” Dr. Roghani and colleagues wrote.
The researchers said the results “raise concerns” about the subgroup of patients who are diagnosed with epilepsy close to experiencing a TBI.
“Our results provide information regarding the temporal relationship between epilepsy and TBI regarding mortality in a cohort of post-9/11 veterans, which highlights the need for enhanced primary prevention, such as more access to health care among people with epilepsy and TBI,” they said. “Given the rising incidence of TBI in both the military and civilian populations, these findings suggest close monitoring might be crucial to develop effective prevention strategies for long-term complications, particularly [post-traumatic epilepsy].”
Reevaluating the Treatment of Epilepsy
Juliann Paolicchi, MD, a neurologist and member of the epilepsy team at Northwell Health in New York, who was not involved with the study, said in an interview that TBIs have been studied more closely since the beginning of conflicts in the Middle East, particularly in Iraq and Afghanistan, where “newer artillery causes more diffuse traumatic injury to the brain and the body than the effects of more typical weaponry.”
The study by Roghani and colleagues “is groundbreaking in that it looks at the connection and timing of these two disruptive forces, epilepsy and TBI, on the brain,” she said. “The study reveals that timing is everything: The combination of two disrupting circuitry effects in proximity can have a deadly effect. The summation is greater than either alone in veterans, and has significant effects on the brain’s ability to sustain the functions that keep us alive.”
The 6 months following either a diagnosis of epilepsy or TBI is “crucial,” Dr. Paolicchi noted. “Military and private citizens should be closely monitored during this period, and the results suggest they should refrain from activities that could predispose to further brain injury.”
In addition, current standards for treatment of epilepsy may need to be reevaluated, she said. “Patients are not always treated with a seizure medication after a first seizure, but perhaps, especially in patients at higher risk for brain injury such as the military and athletes, that policy warrants further examination.”
The findings by Roghani and colleagues may also extend to other groups, such as athletes after a concussion, patients after motor vehicle accidents, and infants with traumatic brain injury, Dr. Paolicchi said. “The results suggest a reexamining of the proximity [of TBI] and epilepsy in these and other areas,” she noted.
The authors reported personal and institutional relationships in the form of research support and other financial compensation from AbbVie, Biohaven, CURE, Department of Defense, Department of Veterans Affairs (VA), Eisai, Engage, National Institutes of Health, Sanofi, SCS Consulting, Sunovion, and UCB. This study was supported by funding from the Department of Defense, VA Health Systems, and the VA HSR&D Informatics, Decision Enhancement, and Analytic Sciences Center of Innovation. Dr. Paolicchi reports no relevant conflicts of interest.
FROM EPILEPSIA
New Parkinson’s Disease Gene Discovered
HELSINKI, FINLAND — A new Parkinson’s disease gene has been identified, a discovery that experts believe will have important clinical implications in the not-too-distant future.
A variant in PSMF1, a proteasome regulator, was identified in 15 families from 13 countries around the world, with 22 affected individuals.
“Our findings unequivocally link defective PSMF1 to early-onset PD and neurodegeneration and suggest mitochondrial dysfunction as a mechanistic contributor,” study investigator Francesca Magrinelli, MD, PhD, of University College London (UCL) Queen Square Institute of Neurology, London, told delegates at the 2024 Congress of the European Academy of Neurology.
“These families were ethnically diverse, and in all of them, the variant in PSMF1 correlated with the neurologic phenotype. We know this is very clear cut — the genotype/phenotype correlation — with the patients carrying the missense mutation having ‘mild’ symptoms, while those with the progressive loss-of-function variant had the most severe phenotype,” she noted.
Managing Patient Expectations
Those “mildly” affected had early-onset Parkinson’s disease starting between the second and fifth decades of life, with pyramidal tract signs, dysphasia, psychiatric comorbidity, and early levodopa-induced dyskinesia.
In those with the intermediate type, Parkinson’s disease symptoms start in childhood and include, among other things, global hypokinesia, developmental delay, cerebellar signs, and in some, associated epilepsy.
In most cases, there was evidence on brain MRI of a hypoplasia of the corpus callosum, Dr. Magrinelli said. In the most severely affected individuals, there was perinatal lethality with neurologic manifestations.
While it may seem that the genetics of Parkinson’s disease is an academic exercise for the most part, it won’t be too much longer before it yields practical information that will inform how patients are treated, said Parkinson’s disease expert Christine Klein, MD, of the Institute of Neurogenetics and Department of Neurology, University of Lübeck, Lübeck, Germany.
The genetics of Parkinson’s disease are complicated, even within a single family. So, it’s very important to assess the pathogenicity of different variants, Dr. Klein noted.
“I am sure that you have all had a Parkinson’s disease [gene] panel back, and it says, ‘variant of uncertain significance.’ This is the worst thing that can happen. The lab does not know what it means. You don’t know what it means, and you don’t know what to tell the patient. So how do you get around this?”
Dr. Klein said that before conducting any genetic testing, clinicians should inform the patient that they may have a genetic variant of uncertain significance. It doesn’t solve the problem, but it does help physicians manage patient expectations.
Clinical Relevance on the Way?
While all of the identified variants that predict Parkinson’s disease, which, in addition to PSMF1, include the well-established LRRK2 and GBA1, may seem to look the same, this is not true when patient history is taken into account, said Dr. Klein.
For example, age-of-onset of Parkinson’s disease can differ between identified variants, and this has led to “a paradigm change” whereby a purely genetic finding is called a disease.
This first occurred in Huntington’s disease, when researchers gave individuals at high genetic risk of developing the illness, but who currently had no clinical symptoms, the label of “Stage Zero disease.”
This is important to note “because if we get to the stage of having drugs that can slow down, or even prevent, progression to Parkinson’s disease, then it will be key to have patients we know are going to develop it to participate in clinical trials for such agents,” said Dr. Klein.
She cited the example of a family that she recently encountered that had genetic test results that showed variants of unknown significance, so Dr. Klein had the family’s samples sent to a specialized lab in Dundee, Scotland, for further analysis.
“The biochemists found that this variant was indeed pathogenic, and kinase-activating, so this is very helpful and very important because there are now clinical trials in Parkinson’s disease with kinase inhibitors,” she noted.
“If you think there is something else [over and above the finding of uncertain significance] in your Parkinson’s disease panel, and you are not happy with the genetic report, send it somewhere else,” Dr. Klein advised.
“We will see a lot more patients with genetic Parkinson’s disease in the future,” she predicted, while citing two recent preliminary clinical trials that have shown some promise in terms of neuroprotection in patients with early Parkinson’s disease.
“It remains to be seen whether there will be light at the end of the tunnel,” she said, but it may soon be possible to find treatments that delay, or even prevent, Parkinson’s disease onset.
Dr. Magrinelli reported receiving speaker’s honoraria from MJFF Edmond J. Safra Clinical Research Fellowship in Movement Disorders (Class of 2023), MJFF Edmond J. Safra Movement Disorders Research Career Development Award 2023 (Grant ID MJFF-023893), American Parkinson Disease Association (Research Grant 2024), and the David Blank Charitable Foundation. Dr. Klein reported being a medical advisor to Retromer Therapeutics, Takeda, and Centogene and speakers’ honoraria from Desitin and Bial.
A version of this article first appeared on Medscape.com.
HELSINKI, FINLAND — , a discovery that experts believe will have important clinical implications in the not-too-distant future.
A variant in PMSF1, a proteasome regulator, was identified in 15 families from 13 countries around the world, with 22 affected individuals.
“These families were ethnically diverse, and in all of them, the variant in PMSF1 correlated with the neurologic phenotype. We know this is very clear cut — the genotype/phenotype correlation — with the patients carrying the missense mutation having ‘mild’ symptoms, while those with the progressive loss-of-function variant had the most severe phenotype,” she noted.
“Our findings unequivocally link defective PSMF1 to early-onset PD and neurodegeneration and suggest mitochondrial dysfunction as a mechanistic contributor,” study investigator Francesca Magrinelli, MD, PhD, of University College London (UCL) Queen Square Institute of Neurology, UCL, London, told delegates at the 2024 Congress of the European Academy of Neurology.
Managing Patient Expectations
Those “mildly” affected had an early-onset Parkinson’s disease starting between the second and fifth decade of life with pyramidal tract signs, dysphasia, psychiatric comorbidity, and early levodopa-induced dyskinesia.
In those with the intermediate type, Parkinson’s disease symptoms start in childhood and include, among other things, global hypokinesia, developmental delay, cerebellar signs, and in some, associated epilepsy.
In most cases, there was evidence on brain MRI of a hypoplasia of the corpus callosum, Dr. Magrinelli said. In the most severely affected individuals, there was perinatal lethality with neurologic manifestations.
While it may seem that the genetics of Parkinson’s disease is an academic exercise for the most part, it won’t be too much longer before it yields practical information that will inform how patients are treated, said Parkinson’s disease expert Christine Klein, MD, of the Institute of Neurogenetics and Department of Neurology, University of Lübeck, Helsinki, Finland.
The genetics of Parkinson’s disease are complicated, even within a single family. So, it’s very important to assess the pathogenicity of different variants, Dr. Klein noted.
“I am sure that you have all had a Parkinson’s disease [gene] panel back, and it says, ‘variant of uncertain significance.’ This is the worst thing that can happen. The lab does not know what it means. You don’t know what it means, and you don’t know what to tell the patient. So how do you get around this?”
Dr. Klein said that before conducting any genetic testing, clinicians should inform the patient that they may have a genetic variant of uncertain significance. It doesn’t solve the problem, but it does help physicians manage patient expectations.
Clinical Relevance on the Way?
While it may seem that all of the identified variants that predict Parkinson’s disease which, in addition to PSMF1, include the well-established LRRK2 and GBA1, may look the same, this is not true when patient history is taken into account, said Dr. Klein.
For example, age-of-onset of Parkinson’s disease can differ between identified variants, and this has led to “a paradigm change” whereby a purely genetic finding is called a disease.
This first occurred in Huntington’s disease, when researchers gave individuals at high genetic risk of developing the illness, but who currently had no clinical symptoms, the label of “Stage Zero disease.”
This is important to note “because if we get to the stage of having drugs that can slow down, or even prevent, progression to Parkinson’s disease, then it will be key to have patients we know are going to develop it to participate in clinical trials for such agents,” said Dr. Klein.
She cited the example of a family that she recently encountered that had genetic test results that showed variants of unknown significance, so Dr. Klein had the family’s samples sent to a specialized lab in Dundee, Scotland, for further analysis.
“The biochemists found that this variant was indeed pathogenic, and kinase-activating, so this is very helpful and very important because there are now clinical trials in Parkinson’s disease with kinase inhibitors,” she noted.
“If you think there is something else [over and above the finding of uncertain significance] in your Parkinson’s disease panel, and you are not happy with the genetic report, send it somewhere else,” Dr. Klein advised.
“We will see a lot more patients with genetic Parkinson’s disease in the future,” she predicted, while citing two recent preliminary clinical trials that have shown some promise in terms of neuroprotection in patients with early Parkinson’s disease.
“It remains to be seen whether there will be light at the end of the tunnel,” she said, but it may soon be possible to find treatments that delay, or even prevent, Parkinson’s disease onset.
Dr. Magrinelli reported receiving speaker’s honoraria from MJFF Edmond J. Safra Clinical Research Fellowship in Movement Disorders (Class of 2023), MJFF Edmond J. Safra Movement Disorders Research Career Development Award 2023 (Grant ID MJFF-023893), American Parkinson Disease Association (Research Grant 2024), and the David Blank Charitable Foundation. Dr. Klein reported being a medical advisor to Retromer Therapeutics, Takeda, and Centogene and speakers’ honoraria from Desitin and Bial.
A version of this article first appeared on Medscape.com.
HELSINKI, FINLAND — , a discovery that experts believe will have important clinical implications in the not-too-distant future.
A variant in PMSF1, a proteasome regulator, was identified in 15 families from 13 countries around the world, with 22 affected individuals.
“These families were ethnically diverse, and in all of them, the variant in PMSF1 correlated with the neurologic phenotype. We know this is very clear cut — the genotype/phenotype correlation — with the patients carrying the missense mutation having ‘mild’ symptoms, while those with the progressive loss-of-function variant had the most severe phenotype,” she noted.
“Our findings unequivocally link defective PSMF1 to early-onset PD and neurodegeneration and suggest mitochondrial dysfunction as a mechanistic contributor,” study investigator Francesca Magrinelli, MD, PhD, of University College London (UCL) Queen Square Institute of Neurology, UCL, London, told delegates at the 2024 Congress of the European Academy of Neurology.
Managing Patient Expectations
Those “mildly” affected had an early-onset Parkinson’s disease starting between the second and fifth decade of life with pyramidal tract signs, dysphasia, psychiatric comorbidity, and early levodopa-induced dyskinesia.
In those with the intermediate type, Parkinson’s disease symptoms start in childhood and include, among other things, global hypokinesia, developmental delay, cerebellar signs, and in some, associated epilepsy.
In most cases, there was evidence on brain MRI of a hypoplasia of the corpus callosum, Dr. Magrinelli said. In the most severely affected individuals, there was perinatal lethality with neurologic manifestations.
While it may seem that the genetics of Parkinson’s disease is an academic exercise for the most part, it won’t be too much longer before it yields practical information that will inform how patients are treated, said Parkinson’s disease expert Christine Klein, MD, of the Institute of Neurogenetics and Department of Neurology, University of Lübeck, Helsinki, Finland.
The genetics of Parkinson’s disease are complicated, even within a single family. So, it’s very important to assess the pathogenicity of different variants, Dr. Klein noted.
“I am sure that you have all had a Parkinson’s disease [gene] panel back, and it says, ‘variant of uncertain significance.’ This is the worst thing that can happen. The lab does not know what it means. You don’t know what it means, and you don’t know what to tell the patient. So how do you get around this?”
Dr. Klein said that before conducting any genetic testing, clinicians should inform the patient that they may have a genetic variant of uncertain significance. It doesn’t solve the problem, but it does help physicians manage patient expectations.
Clinical Relevance on the Way?
While the identified variants that predict Parkinson’s disease, which in addition to PSMF1 include the well-established LRRK2 and GBA1, may look the same, this is not true when patient history is taken into account, said Dr. Klein.
For example, age of onset of Parkinson’s disease can differ between identified variants, and this has led to “a paradigm change” whereby a purely genetic finding is called a disease.
This first occurred in Huntington’s disease, when researchers labeled individuals at high genetic risk of developing the illness, but with no current clinical symptoms, as having “Stage Zero disease.”
This is important to note “because if we get to the stage of having drugs that can slow down, or even prevent, progression to Parkinson’s disease, then it will be key to have patients we know are going to develop it to participate in clinical trials for such agents,” said Dr. Klein.
She cited the example of a family that she recently encountered that had genetic test results that showed variants of unknown significance, so Dr. Klein had the family’s samples sent to a specialized lab in Dundee, Scotland, for further analysis.
“The biochemists found that this variant was indeed pathogenic, and kinase-activating, so this is very helpful and very important because there are now clinical trials in Parkinson’s disease with kinase inhibitors,” she noted.
“If you think there is something else [over and above the finding of uncertain significance] in your Parkinson’s disease panel, and you are not happy with the genetic report, send it somewhere else,” Dr. Klein advised.
“We will see a lot more patients with genetic Parkinson’s disease in the future,” she predicted, while citing two recent preliminary clinical trials that have shown some promise in terms of neuroprotection in patients with early Parkinson’s disease.
“It remains to be seen whether there will be light at the end of the tunnel,” she said, but it may soon be possible to find treatments that delay, or even prevent, Parkinson’s disease onset.
Dr. Magrinelli reported receiving speaker’s honoraria from MJFF Edmond J. Safra Clinical Research Fellowship in Movement Disorders (Class of 2023), MJFF Edmond J. Safra Movement Disorders Research Career Development Award 2023 (Grant ID MJFF-023893), American Parkinson Disease Association (Research Grant 2024), and the David Blank Charitable Foundation. Dr. Klein reported being a medical advisor to Retromer Therapeutics, Takeda, and Centogene and speakers’ honoraria from Desitin and Bial.
A version of this article first appeared on Medscape.com.
FROM EAN 2024
Night Owl or Lark? The Answer May Affect Cognition
Whether a person is a night owl or an early bird may affect cognitive performance, new research suggests.
“Rather than just being personal preferences, these chronotypes could impact our cognitive function,” said study investigator Raha West, MBChB, with Imperial College London, London, England, in a statement.
But the researchers also urged caution when interpreting the findings.
“It’s important to note that this doesn’t mean all morning people have worse cognitive performance. The findings reflect an overall trend where the majority might lean toward better cognition in the evening types,” Dr. West added.
In addition, across the board, getting the recommended 7-9 hours of nightly sleep was best for cognitive function, and sleeping for less than 7 or more than 9 hours had detrimental effects on brain function regardless of whether an individual was a night owl or lark.
The study was published online in BMJ Public Health.
A UK Biobank Cohort Study
The findings are based on a cross-sectional analysis of 26,820 adults aged 53-86 years from the UK Biobank database, who were categorized into two cohorts.
Cohort 1 had 10,067 participants (56% women) who completed four cognitive tests measuring fluid intelligence/reasoning, pairs matching, reaction time, and prospective memory. Cohort 2 had 16,753 participants (56% women) who completed two cognitive assessments (pairs matching and reaction time).
Participants self-reported sleep duration, chronotype, and quality. Cognitive test scores were evaluated against sleep parameters and health and lifestyle factors including sex, age, vascular and cardiac conditions, diabetes, alcohol use, smoking habits, and body mass index.
The results revealed a positive association between normal sleep duration (7-9 hours) and cognitive scores in Cohort 1 (beta, 0.0567), while extended sleep duration negatively affected scores in both Cohorts 1 and 2 (beta, –0.188 and –0.2619, respectively).
An individual’s preference for evening or morning activity correlated strongly with their test scores. In particular, night owls consistently performed better on cognitive tests than early birds.
“While understanding and working with your natural sleep tendencies is essential, it’s equally important to remember to get just enough sleep, not too long or too short,” Dr. West noted. “This is crucial for keeping your brain healthy and functioning at its best.”
Contrary to some previous findings, the study did not find a significant relationship between sleep, sleepiness/insomnia, and cognitive performance. This may be because specific aspects of insomnia, such as severity and chronicity, as well as comorbid conditions need to be considered, the investigators wrote.
They added that age and diabetes consistently emerged as negative predictors of cognitive functioning across both cohorts, in line with previous research.
Limitations of the study include the cross-sectional design, which limits causal inferences; the possibility of residual confounding; and reliance on self-reported sleep data.
Also, the study did not adjust for educational attainment, a factor potentially influential on cognitive performance and sleep patterns, because of incomplete data. The study also did not factor in depression and social isolation, which have been shown to increase the risk for cognitive decline.
No Real-World Implications
Several outside experts offered their perspective on the study in a statement from the UK nonprofit Science Media Centre.
The study provides “interesting insights” into the difference in memory and thinking in people who identify themselves as a “morning” or “evening” person, Jacqui Hanley, PhD, with Alzheimer’s Research UK, said in the statement.
However, without a detailed picture of what is going on in the brain, it’s not clear whether being a morning or evening person affects memory and thinking or whether a decline in cognition is causing changes to sleeping patterns, Dr. Hanley added.
Roi Cohen Kadosh, PhD, CPsychol, professor of cognitive neuroscience, University of Surrey, Guildford, England, cautioned that there are “multiple potential reasons” for these associations.
“Therefore, there are no implications in my view for the real world. I fear that the general public will not be able to understand that and will change their sleep pattern, while this study does not give any evidence that this will lead to any benefit,” Dr. Cohen Kadosh said.
Jessica Chelekis, PhD, MBA, a sleep expert from Brunel University London, Uxbridge, England, said that the “main takeaway should be that the cultural belief that early risers are more productive than ‘night owls’ does not hold up to scientific scrutiny.”
“While everyone should aim to get good-quality sleep each night, we should also try to be aware of what time of day we are at our (cognitive) best and work in ways that suit us. Night owls, in particular, should not be shamed into fitting a stereotype that favors an ‘early to bed, early to rise’ practice,” Dr. Chelekis said.
Funding for the study was provided by the Korea Institute of Oriental Medicine in collaboration with Imperial College London. Dr. Hanley, Dr. Cohen Kadosh, and Dr. Chelekis have no relevant disclosures.
A version of this article first appeared on Medscape.com.
FROM BMJ PUBLIC HEALTH
Change in Clinical Definition of Parkinson’s Triggers Debate
Parkinson’s disease (PD) and dementia with Lewy bodies are currently defined by clinical features, which can be heterogeneous and do not capture the presymptomatic phase of neurodegeneration.
Recent advances have enabled the detection of misfolded and aggregated alpha-synuclein protein (synucleinopathy) — a key pathologic feature of these diseases — allowing for earlier and more accurate diagnosis. This has led two international research groups to propose a major shift from a clinical to a biological definition of the disease.
Both groups emphasized the detection of alpha-synuclein through recently developed seed amplification assays as a key diagnostic and staging tool, although they differ in their approaches and criteria.
NSD-ISS
Neuronal alpha-synuclein disease (NSD) is defined by the presence during life of pathologic neuronal alpha-synuclein (S, the first biological anchor) in cerebrospinal fluid (CSF), regardless of the presence of any specific clinical syndrome. Individuals with pathologic neuronal alpha-synuclein aggregates are at a high risk for dopaminergic neuronal dysfunction (D, the second key biological anchor).
Tanya Simuni, MD, and colleagues also proposed the NSD integrated staging system (NSD-ISS), rooted in the S and D biological anchors coupled with the degree of functional impairment caused by clinical signs or symptoms.
Stages 0-1 occur without signs or symptoms and are defined by the presence of pathogenic variants in the SNCA gene (stage 0), S alone (stage 1A), or S and D (stage 1B).
The presence of clinical manifestations marks the transition to stage 2 and beyond, with stage 2 characterized by subtle signs or symptoms but without functional impairment. Stages 2B-6 require both S and D and stage-specific increases in functional impairment.
“An advantage of the NSD-ISS will be to reduce heterogeneity in clinical trials by requiring biological consistency within the study cohort rather than identifying study participants on the basis of clinical criteria for Parkinson’s disease and dementia with Lewy bodies,” Dr. Simuni and colleagues pointed out in a position paper describing the NSD-ISS published online earlier this year in The Lancet Neurology.
The NSD-ISS will “evolve to include the incorporation of data-driven definitions of stage-specific functional anchors and additional biomarkers as they emerge and are validated.”
For now, the NSD-ISS is intended for research use only and not in the clinic.
The SynNeurGe Research Diagnostic Criteria
Separately, a team led by Anthony Lang, MD, with the Krembil Brain Institute at Toronto Western Hospital, Toronto, Ontario, Canada, proposed the SynNeurGe biological classification of PD.
Described in a companion paper published online in The Lancet Neurology, their “S-N-G” classification emphasizes the important interactions among three biological factors that contribute to disease: the presence or absence of pathologic alpha-synuclein (S) in tissues or CSF, evidence of underlying neurodegeneration (N) defined by neuroimaging procedures, and documentation of pathogenic gene variants (G) that cause or strongly predispose to PD.
These three components link to a clinical component, defined either by a single high-specificity clinical feature or by multiple lower-specificity clinical features.
As with the NSD-ISS, the SynNeurGe model is intended for research purposes only and is not ready for immediate application in the clinic.
Both groups acknowledged the need for studies to test and validate the proposed classification systems.
Caveats, Cautionary Notes
Adopting a biological definition of PD would represent a major shift for the field and has prompted considerable discussion and healthy debate.
Commenting for this news organization, James Beck, PhD, chief scientific officer at the Parkinson’s Foundation, said the principle behind the proposed classifications is where “the field needs to go.”
“Right now, people with Parkinson’s take too long to get a confirmed diagnosis of their disease, and despite best efforts, clinicians can get it wrong, not diagnosing people or maybe misdiagnosing people,” Dr. Beck said. “Moving to a biological basis, where we have better certainty, is going to be really important.”
Beck noted that the NSD-ISS “goes all in on alpha-synuclein,” which does play a big role in PD, but added, “I don’t know if I want to declare a winner after the first heat. There are other biomarkers that are coming to fruition but still need validation, and alpha-synuclein may be just one of many to help determine whether someone has Parkinson’s disease or not.”
Un Kang, MD, director of translational research at the Fresco Institute for Parkinson’s & Movement Disorders at NYU Langone Health, New York City, told this news organization that alpha-synuclein has “very high diagnostic accuracy” but cautioned that the adoption of a biological definition for PD would not usurp a clinical diagnosis.
“We need both,” Dr. Kang said. “But knowing the underlying pathology is important for earlier diagnosis and testing of potential therapies to treat the molecular pathology. If a patient doesn’t have abnormal synuclein, you may be treating the wrong disease.”
The coauthors of a recent JAMA Neurology perspective said the biological definitions are “exciting,” but noted there is “wisdom” in tapping the brakes when attempting to establish a biological definition and classification system for PD.
“Although these two proposals represent significant steps forward, a sprint toward the finish line may not be wise,” wrote Njideka U. Okubadejo, MD, with University of Lagos, Nigeria; Joseph Jankovic, MD, with Baylor College of Medicine, Houston; and Michael S. Okun, MD, with University of Florida Health, Gainesville, Florida.
“A process that embraces inclusivity and weaves in evolving technological advancements will be important. Who benefits if implementation of a biologically based staging system for PD is hurried?” they continued.
The proposals rely heavily on alpha-synuclein assays, they noted, which currently require subjective interpretation and lack extensive validation. They also worry that the need for expensive and, in some regions, unattainable biological fluids (CSF) or imaging studies (dopamine transporter scan) may limit global access to both PD trials and future therapeutics.
They also worry about retiring the name Parkinson’s disease.
“Beyond the historical importance of the term Parkinson disease, any classification that proposes abandoning the two words in either clinical or research descriptions could have unintended global repercussions,” Dr. Okubadejo, Dr. Jankovic, and Dr. Okun cautioned.
Dr. Beck told this news organization he’s spoken to clinicians at meetings about this and “no one really likes the idea” of retiring the term Parkinson’s disease.
Frederick Ketchum, MD, and Nathaniel Chin, MD, with University of Wisconsin–Madison, worry about the “lived” experience of the asymptomatic patient after receiving a biological diagnosis.
“Biological diagnosis might enable effective prognostication and treatment in the future but will substantially change the experience of illness for patients now as new frameworks are slowly adopted and knowledge is gained,” they said in a correspondence in The Lancet Neurology.
“Understanding and addressing this lived experience remains a core task for health professionals and must be made central as we begin an era in which neurological diseases are redefined on a biological basis,” Dr. Ketchum and Dr. Chin advised.
A complete list of agencies that supported this work and author disclosures are available with the original articles. Dr. Beck and Dr. Kang had no relevant disclosures.
A version of this article first appeared on Medscape.com.
Both groups emphasized the detection of alpha-synuclein through recently developed seed amplification assays as a key diagnostic and staging tool, although they differ in their approaches and criteria.
NSD-ISS
Neuronal alpha-synuclein disease (NSD) is defined by the presence during life of pathologic neuronal alpha-synuclein (S, the first biological anchor) in cerebrospinal fluid (CSF), regardless of the presence of any specific clinical syndrome. Individuals with pathologic neuronal alpha-synuclein aggregates are at a high risk for dopaminergic neuronal dysfunction (D, the second key biological anchor).
Dr. Simuni and colleagues also proposed the NSD integrated staging system (NSD-ISS) rooted in the S and D biological anchors coupled with the degree of functional impairment caused by clinical signs or symptoms.
Stages 0-1 occur without signs or symptoms and are defined by the presence of pathogenic variants in the SNCA gene (stage 0), S alone (stage 1A), or S and D (stage 1B).
The presence of clinical manifestations marks the transition to stage 2 and beyond, with stage 2 characterized by subtle signs or symptoms but without functional impairment. Stages 2B-6 require both S and D and stage-specific increases in functional impairment.
“An advantage of the NSD-ISS will be to reduce heterogeneity in clinical trials by requiring biological consistency within the study cohort rather than identifying study participants on the basis of clinical criteria for Parkinson’s disease and dementia with Lewy bodies,” Dr. Simuni and colleagues pointed out in a position paper describing the NSD-ISS published online earlier this year in The Lancet Neurology.
The NSD-ISS will “evolve to include the incorporation of data-driven definitions of stage-specific functional anchors and additional biomarkers as they emerge and are validated.”
For now, the NSD-ISS is intended for research use only and not in the clinic.
The SynNeurGe Research Diagnostic Criteria
Separately, a team led by Anthony Lang, MD, with the Krembil Brain Institute at Toronto Western Hospital, Toronto, Ontario, Canada, proposed the SynNeurGe biological classification of PD.
Described in a companion paper published online in The Lancet Neurology, their “S-N-G” classification emphasizes the important interactions between three biological factors that contribute to disease: the presence or absence of pathologic alpha-synuclein (S) in tissues or CSF, evidence of underlying neurodegeneration (N) defined by neuroimaging procedures, and the documentation of pathogenic gene variants (G) that cause or strongly predispose to PD.
These three components link to a clinical component, defined either by a single high-specificity clinical feature or by multiple lower-specificity clinical features.
As with the NSD-ISS, the SynNeurGe model is intended for research purposes only and is not ready for immediate application in the clinic.
Both groups acknowledged the need for studies to test and validate the proposed classification systems.
Caveats, Cautionary Notes
Adopting a biological definition of PD would represent a major shift for the field and has prompted considerable discussion and healthy debate.
Commenting for this news organization, James Beck, PhD, chief scientific officer at the Parkinson’s Foundation, said the principle behind the proposed classifications is where “the field needs to go.”
“Right now, people with Parkinson’s take too long to get a confirmed diagnosis of their disease, and despite best efforts, clinicians can get it wrong, not diagnosing people or maybe misdiagnosing people,” Dr. Beck said. “Moving to a biological basis, where we have better certainty, is going to be really important.”
Beck noted that the NSD-ISS “goes all in on alpha-synuclein,” which does play a big role in PD, but added, “I don’t know if I want to declare a winner after the first heat. There are other biomarkers that are coming to fruition but still need validation, and alpha-synuclein may be just one of many to help determine whether someone has Parkinson’s disease or not.”
Un Kang, MD, director of translational research at the Fresco Institute for Parkinson’s & Movement Disorders at NYU Langone Health, New York City, told this news organization that alpha-synuclein has “very high diagnostic accuracy” but cautioned that the adoption of a biological definition for PD would not usurp a clinical diagnosis.
“We need both,” Dr. Kang said. “But knowing the underlying pathology is important for earlier diagnosis and testing of potential therapies to treat the molecular pathology. If a patient doesn’t have abnormal synuclein, you may be treating the wrong disease.”
The coauthors of a recent JAMA Neurology perspective said the biological definitions are “exciting,” but there is “wisdom” in tapping the brakes when attempting to establish a biological definition and classification system for PD.
“Although these two proposals represent significant steps forward, a sprint toward the finish line may not be wise,” wrote Njideka U. Okubadejo, MD, with University of Lagos, Nigeria; Joseph Jankovic, MD, with Baylor College of Medicine, Houston; and Michael S. Okun, MD, with University of Florida Health, Gainesville, Florida.
“A process that embraces inclusivity and weaves in evolving technological advancements will be important. Who benefits if implementation of a biologically based staging system for PD is hurried?” they continued.
The proposals rely heavily on alpha-synuclein assays, they noted, which currently require subjective interpretation and lack extensive validation. They also worry that the need for expensive and, in some regions, unattainable biological fluids (CSF) or imaging studies (dopamine transporter scan) may limit global access to both PD trials and future therapeutics.
They also worry about retiring the name Parkinson’s disease.
“Beyond the historical importance of the term Parkinson disease, any classification that proposes abandoning the two words in either clinical or research descriptions could have unintended global repercussions,” Dr. Okubadejo, Dr. Jankovic, and Dr. Okun cautioned.
Dr. Beck told this news organization he’s spoken to clinicians at meetings about this and “no one really likes the idea” of retiring the term Parkinson’s disease.
Frederick Ketchum, MD, and Nathaniel Chin, MD, with University of Wisconsin–Madison, worry about the “lived” experience of the asymptomatic patient after receiving a biological diagnosis.
“Biological diagnosis might enable effective prognostication and treatment in the future but will substantially change the experience of illness for patients now as new frameworks are slowly adopted and knowledge is gained,” they said in a correspondence in The Lancet Neurology.
“Understanding and addressing this lived experience remains a core task for health professionals and must be made central as we begin an era in which neurological diseases are redefined on a biological basis,” Dr. Ketchum and Dr. Chin advised.
A complete list of agencies that supported this work and author disclosures are available with the original articles. Dr. Beck and Dr. Kang had no relevant disclosures.
A version of this article first appeared on Medscape.com.
Managing Agitation in Alzheimer’s Disease: Five Things to Know
Agitation is a neuropsychiatric symptom in patients with Alzheimer’s disease (AD), the most common form of dementia. The prevalence of this symptom is about 40%-65%, with the higher end of the range applying to patients who have moderate to severe dementia. The DICE approach is a collaborative process for managing behavioral symptoms in dementia, wherein the caregiver describes the behaviors, the provider investigates the etiology, the provider and caregiver create a treatment plan, and the provider evaluates the outcome of the interventions. We use this widely adopted approach as the framework for discussing recent advances in the management of agitation.
Here are five things to know about managing agitation in AD.
1. There is a new operational definition for agitation in dementia.
Agitation in dementia is a syndrome that encompasses specific behaviors across all dementia types. The 2023 operational definition of agitation in dementia by the International Psychogeriatric Association (IPA) includes three domains: excessive motor activity (including pacing, rocking, restlessness, and performing repetitious mannerisms), verbal aggression (including using profanity, screaming, and shouting), and physical aggression (including interpersonal aggression and mishandling or destruction of property). These behaviors must be persistent or recurrent for at least 2 weeks or represent a dramatic change from the person’s baseline behavior, must be associated with excessive distress or disability beyond what is caused by the cognitive impairment itself, and result in significant impairment in at least one of the three specified functional domains. Behavioral symptoms in dementia frequently co-occur, which affects treatment and prognosis. For instance, the risk for stroke associated with antipsychotic treatments appears to be higher in dementia-related psychosis without agitation than in agitation alone or in psychosis with agitation. Therefore, the use of a rating scale such as the Neuropsychiatric Inventory–Questionnaire (NPI-Q), which takes 5 minutes or less to administer, is recommended to identify and track behavioral symptoms and caregiver distress.
2. The etiology of agitation in dementia may be multifactorial.
It is important in every case to identify all underlying etiologies so that presumed causal and/or exacerbating factors are not inadvertently missed. Agitation may be a means of communicating distress owing to unmet needs or a patient-environment mismatch (function-focused approach) or may be a direct consequence of the dementia itself (behavioral-symptom approach). These approaches are not mutually exclusive. A patient can present with agitation as a direct consequence of dementia and inadequately treated pain concurrently.
The new IPA definition specifies several exclusion criteria for agitation in dementia, including underlying medical conditions, delirium, substance use, and suboptimal care conditions. It is especially crucial to accurately identify delirium because dementia is an independent risk factor for delirium, which in turn may accelerate the progression of cognitive and functional decline. Even subsyndromal delirium in older adults leads to a higher 3-year mortality rate that is comparable to that seen in delirium. Older adults with acute-onset agitation in the context of dementia should undergo a comprehensive assessment for delirium, as agitation may be the only indication of a serious underlying medical condition.
3. Nonpharmacologic interventions should be used whenever possible.
The wider adoption of nonpharmacologic interventions in clinical practice has been greatly limited by the heterogeneity in study protocols, including in selection of participants, in the types of dementias included, and in defining and applying the intervention strategies. Nevertheless, there is general consensus that individualized behavioral strategies that build on the patients’ interests and preserved abilities are more effective, at least in the short term. Patients best suited for these interventions are those with less cognitive decline, better communication skills, less impairment in activities of daily living, and higher responsiveness. A systematic review of systematic reviews found music therapy to be the most effective intervention for reducing agitation and aggression in dementia, along with behavioral management techniques when supervised by healthcare professionals. On the other hand, physical restraints are best avoided: their use in hospitalized patients has been associated with longer stays, higher costs, and lower odds of being discharged to home, and in long-term care patients with longer stays and an increased risk for medical complications and functional decline.
4. Antidepressants are not all equally safe or efficacious in managing agitation.
In a network meta-analysis that looked at the effects of several antidepressants on agitation in dementia, citalopram had just under 95% probability of efficacy and was the only antidepressant that was significantly more efficacious than placebo. In the multicenter CitAD trial, citalopram was efficacious and well tolerated for the treatment of agitation in AD, but the mean dose of citalopram used, 30 mg/d, was higher than the maximum dose of 20 mg/d recommended by the US Food and Drug Administration (FDA) in those aged 60 years or above. The optimal candidates for citalopram were those under the age of 85 with mild to moderate AD and mild to moderate nonpsychotic agitation, and it took up to 9 weeks for it to be fully effective. Due to the risk for dose-dependent QTc prolongation with citalopram, a baseline ECG must be done, and a second ECG is recommended if a clinical decision is made to exceed the recommended maximum daily dose. In the CitAD trial, 66% of patients in the citalopram arm received cholinesterase inhibitors concurrently while 44% received memantine, so these symptomatic treatments for AD should not be stopped solely for initiating a citalopram trial.
The antiagitation effect of citalopram may well be a class effect of all selective serotonin reuptake inhibitors (SSRIs), given that there is also evidence favoring the use of sertraline and escitalopram. The S-CitAD trial, the first large, randomized controlled study of escitalopram for the treatment of agitation in dementia, is expected to announce its top-line results sometime this year. However, not all antidepressant classes appear to be equally efficacious or safe. In the large, 12-week randomized placebo-controlled trial SYMBAD, mirtazapine was not only ineffective in treating nonpsychotic agitation in AD but was also associated with a higher mortality rate that just missed statistical significance. Trazodone is also often used for treating agitation, but there is insufficient evidence regarding efficacy and a high probability of adverse effects, even at low doses.
5. Antipsychotics may be effective drugs for treating severe dementia-related agitation.
The CATIE-AD study found that the small beneficial effects of antipsychotics for treating agitation and psychosis in AD were offset by their adverse effects and high discontinuation rates, and the FDA-imposed boxed warnings in 2005 and 2008 cautioned against the use of both first- and second-generation antipsychotics to manage dementia-related psychosis owing to an increased risk for death. Subsequently, the quest for safer and more effective alternatives culminated in the FDA approval of brexpiprazole in 2023 for the treatment of agitation in AD, but the black box warning was left in place. Three randomized controlled trials found brexpiprazole to be relatively safe, with statistically significant improvement in agitation. It was especially efficacious for severe agitation, but there is controversy about whether such improvement is clinically meaningful and whether brexpiprazole is truly superior to other antipsychotics for treating dementia-related agitation. As in the previously mentioned citalopram studies, most patients in the brexpiprazole studies received the drug as an add-on to memantine and/or a cholinesterase inhibitor, and it was proven effective over a period of up to 12 weeks across the three trials. Regarding other antipsychotics, aripiprazole and risperidone have been shown to be effective in treating agitation in patients with mixed dementia, but risperidone has also been associated with the highest risk for strokes (about 80% probability). Unfortunately, an unintended consequence of the boxed warnings on antipsychotics has been an increase in off-label substitution of psychotropic drugs with unproven efficacy and a questionable safety profile, such as valproic acid preparations, that have been linked to an increased short-term risk for accelerated brain volume loss and rapid cognitive decline, as well as a higher risk for mortality.
Lisa M. Wise, assistant professor, Psychiatry, at Oregon Health & Science University, and staff psychiatrist, Department of Psychiatry, Portland VA Medical Center, Portland, Oregon, and Vimal M. Aga, adjunct assistant professor, Department of Neurology, Oregon Health & Science University, and geriatric psychiatrist, Layton Aging and Alzheimer’s Disease Center, Portland, Oregon, have disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
5. Antipsychotics may be effective drugs for treating severe dementia-related agitation.
The CATIE-AD study found that the small beneficial effects of antipsychotics for treating agitation and psychosis in AD were offset by their adverse effects and high discontinuation rates, and the FDA-imposed boxed warnings in 2005 and 2008 cautioned against the use of both first- and second-generation antipsychotics to manage dementia-related psychosis owing to an increased risk for death. Subsequently, the quest for safer and more effective alternatives culminated in the FDA approval of brexpiprazole in 2023 for the treatment of agitation in AD, but the boxed warning was left in place. Three randomized controlled trials found brexpiprazole to be relatively safe, with statistically significant improvement in agitation. It was especially efficacious for severe agitation, but there is controversy about whether such improvement is clinically meaningful and whether brexpiprazole is truly superior to other antipsychotics for treating dementia-related agitation. As in the previously mentioned citalopram studies, most patients in the brexpiprazole studies received the drug as an add-on to memantine and/or a cholinesterase inhibitor, and it was proven effective over a period of up to 12 weeks across the three trials. Regarding other antipsychotics, aripiprazole and risperidone have been shown to be effective in treating agitation in patients with mixed dementia, but risperidone has also been associated with the highest risk for strokes (about 80% probability). Unfortunately, an unintended consequence of the boxed warnings on antipsychotics has been an increase in off-label substitution of psychotropic drugs with unproven efficacy and a questionable safety profile, such as valproic acid preparations, which have been linked to an increased short-term risk for accelerated brain volume loss and rapid cognitive decline, as well as a higher risk for mortality.
Lisa M. Wise, assistant professor, Psychiatry, at Oregon Health & Science University, and staff psychiatrist, Department of Psychiatry, Portland VA Medical Center, Portland, Oregon, and Vimal M. Aga, adjunct assistant professor, Department of Neurology, Oregon Health & Science University, and geriatric psychiatrist, Layton Aging and Alzheimer’s Disease Center, Portland, Oregon, have disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
Gut Biomarkers Accurately Flag Autism Spectrum Disorder
Gut microbiome signatures spanning bacteria, archaea, fungi, and viruses can accurately distinguish children with autism spectrum disorder (ASD) from neurotypical children, new research shows.
The findings could form the basis for development of a noninvasive diagnostic test for ASD and also provide novel therapeutic targets, wrote investigators, led by Siew C. Ng, MBBS, PhD, with the Microbiota I-Center (MagIC), the Chinese University of Hong Kong.
Their study was published online in Nature Microbiology.
Beyond Bacteria
The gut microbiome has been shown to play a central role in modulating the gut-brain axis, potentially influencing the development of ASD.
However, most studies in ASD have focused on the bacterial component of the microbiome. Whether nonbacterial microorganisms (such as gut archaea, fungi, and viruses) or function of the gut microbiome are altered in ASD remains unclear.
To investigate, the researchers performed metagenomic sequencing on fecal samples from 1627 boys and girls aged 1-13 years with and without ASD from five cohorts in China.
After controlling for diet, medication, and comorbidity, they identified 14 archaea, 51 bacteria, 7 fungi, 18 viruses, 27 microbial genes, and 12 metabolic pathways that were altered in children with ASD.
Machine-learning models using single-kingdom panels (archaea, bacteria, fungi, viruses) achieved area under the curve (AUC) values ranging from 0.68 to 0.87 in differentiating children with ASD from neurotypical control children.
A model based on a panel of 31 multikingdom and functional markers achieved “high predictive value” for ASD, achieving an AUC of 0.91, with comparable performance among boys and girls.
“The reproducible performance of the models across ages, sexes, and cohorts highlights their potential as promising diagnostic tools for ASD,” the investigators wrote.
They also noted that the accuracy of the model was largely driven by the biosynthesis pathways of ubiquinol-7 and thiamine diphosphate, which were less abundant in children with ASD, and may serve as therapeutic targets.
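An AUC of 0.91 has a concrete interpretation: roughly 91% of the time, a randomly chosen ASD sample receives a higher model score than a randomly chosen control. A minimal sketch of that interpretation using synthetic scores (all values hypothetical, not study data; a mean separation of about 1.9 SD between groups yields an AUC near 0.91):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical classifier scores: ASD samples shifted higher than controls.
asd_scores = rng.normal(1.9, 1.0, 500)
control_scores = rng.normal(0.0, 1.0, 500)

def auc(pos, neg):
    """AUC = P(random positive scores above random negative); ties count half."""
    pos = np.asarray(pos)[:, None]          # shape (n_pos, 1) for broadcasting
    neg = np.asarray(neg)
    wins = (pos > neg).sum() + 0.5 * (pos == neg).sum()
    return wins / (pos.size * neg.size)

print(f"AUC ~ {auc(asd_scores, control_scores):.2f}")
```

The same pairwise-comparison definition underlies the reported single-kingdom (0.68-0.87) and multikingdom (0.91) figures, whatever model produced the scores.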
‘Exciting’ Possibilities
“This study broadens our understanding by including fungi, archaea, and viruses, where previous studies have largely focused on the role of gut bacteria in autism,” Bhismadev Chakrabarti, PhD, research director of the Centre for Autism at the University of Reading, United Kingdom, said in a statement from the nonprofit UK Science Media Centre.
“The results are broadly in line with previous studies that show reduced microbial diversity in autistic individuals. It also examines one of the largest samples seen in a study like this, which further strengthens the results,” Dr. Chakrabarti added.
He said this research may provide “new ways of detecting autism, if microbial markers turn out to strengthen the ability of genetic and behavioral tests to detect autism. A future platform that can combine genetic, microbial, and simple behavioral assessments could help address the detection gap.
“One limitation of this data is that it cannot assess any causal role for the microbiota in the development of autism,” Dr. Chakrabarti noted.
This study was supported by InnoHK, the Government of Hong Kong, Special Administrative Region of the People’s Republic of China, The D. H. Chen Foundation, and the New Cornerstone Science Foundation through the New Cornerstone Investigator Program. Dr. Ng has served as an advisory board member for Pfizer, Ferring, Janssen, and AbbVie; has received honoraria as a speaker for Ferring, Tillotts, Menarini, Janssen, AbbVie, and Takeda; is a scientific cofounder and shareholder of GenieBiome; receives patent royalties through her affiliated institutions; and is named as a co-inventor of patent applications that cover the therapeutic and diagnostic use of microbiome. Dr. Chakrabarti has no relevant conflicts of interest.
A version of this article first appeared on Medscape.com.
FROM NATURE MICROBIOLOGY
Retinal Issues Rise After Cataract Surgery
TOPLINE:
The incidence of new retinal tears and detachments after cataract surgery in patients with previously treated phakic retinal tears is relatively high, occurring in nearly one out of every 18 eyes within a year of surgery, with younger men being particularly vulnerable.
METHODOLOGY:
- Researchers conducted a retrospective review of 12,109 phakic eyes treated for retinal tears with laser photocoagulation or cryotherapy between April 1, 2012, and May 31, 2023.
- Cataract surgery was subsequently performed in a total of 1039 (8.6%) eyes during the follow-up period, with 713 eyes of 660 patients meeting the inclusion criteria.
- The mean duration of follow-up after the primary treatment of phakic retinal tears and after cataract surgery was 56.6 and 34.8 months, respectively.
- The primary outcome measures were the incidence of retinal tears or detachments following cataract surgery; secondary outcomes were the risk factors for a diagnosis of retinal tears or detachments and visual and anatomic results.
TAKEAWAY:
- The overall incidence of a retinal tear or detachment following cataract surgery was 7.3% during the follow-up period, with a 1-year incidence of 5.6%.
- The factors significantly associated with the risk for retinal tear or detachment after surgery included younger age (odds ratio [OR], 1.03; P = .028) and male gender (OR, 2.06; P = .022).
- Visual acuity significantly worsened at the time of diagnosis of retinal detachment after cataract surgery (median log of the minimal angle of resolution, 0.18; P = .009).
- At 3 months, 80.6% of cases achieved anatomical success after a single surgery to repair retinal detachment following cataract surgery, with a 100% final reattachment rate.
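Two of the figures above can be sanity-checked with simple arithmetic (illustrative calculations, not from the paper): the TOPLINE's "nearly one in 18 eyes" follows from the 5.6% one-year incidence, and a median logMAR of 0.18 corresponds roughly to Snellen 20/30 under the standard conversion:

```python
# A 1-year incidence of 5.6% is roughly 1 event per 17.9 eyes ("nearly 1 in 18").
one_year_incidence = 0.056
eyes_per_event = 1 / one_year_incidence
print(f"1 in {eyes_per_event:.1f} eyes")          # prints: 1 in 17.9 eyes

# logMAR 0.18 -> Snellen denominator = 20 * 10**logMAR (approximate conversion).
logmar = 0.18
snellen_denominator = 20 * 10 ** logmar
print(f"about 20/{snellen_denominator:.0f}")      # prints: about 20/30
```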
IN PRACTICE:
“It is essential to conduct a thorough preoperative assessment and to maintain [a] high level of suspicion for additional retinal breaks,” the authors wrote. “Educating patients on warning symptoms is crucial for early detection and treatment of [retinal detachment], thereby helping to prevent further complications,” they added.
SOURCE:
The study was led by Bita Momenaei, MD, from the Wills Eye Hospital of Thomas Jefferson University in Philadelphia. It was published online in Ophthalmology.
LIMITATIONS:
The retrospective nature of the study limits firm conclusions about the risk factors for retinal tear or detachment after cataract surgery. Some diagnosed tears might have been pre-existing but became visible post-surgery due to improved clarity. The incidence data on retinal tear or detachment were limited to patients who returned for follow-up at the facility, potentially underestimating true incidence rates.
DISCLOSURES:
The study was supported by the J. Arch McNamara, MD, Fund for Retina Research and Education. Some of the authors declared serving as consultants or receiving research grants from various pharmaceutical and medical device companies.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication.
A version of this article first appeared on Medscape.com.
Neck Pain in Migraine Is Common, Linked to More Disability
Neck pain is common among people with migraine and is linked to greater disability, an international, prospective, cross-sectional study finds.
Of 51,969 respondents with headache over the past year, the 27.9% with migraine were more likely to have neck pain than those with non-migraine headache (68.3% vs 36.1%, respectively, P < .001), reported Richard B. Lipton, MD, professor of neurology at Albert Einstein College of Medicine, New York City, and colleagues in Headache.
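For a rough sense of effect size, those prevalences imply an unadjusted odds ratio of about 3.8 for neck pain in migraine versus non-migraine headache (a back-of-envelope calculation from the reported percentages; the study itself reports prevalences, not this odds ratio):

```python
# Reported neck-pain prevalences: migraine vs non-migraine headache.
p_migraine, p_other = 0.683, 0.361
odds_migraine = p_migraine / (1 - p_migraine)   # ~2.15
odds_other = p_other / (1 - p_other)            # ~0.56
unadjusted_or = odds_migraine / odds_other
print(f"unadjusted OR ~ {unadjusted_or:.1f}")   # prints: unadjusted OR ~ 3.8
```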
Compared with other patients with migraine, those who also have neck pain have “greater disability, more psychiatric comorbidities, more allodynia, diminished quality of life, decreased work productivity, and reduced response to treatment,” Dr. Lipton said in an interview. “If patients don’t report [neck pain], it is probably worth asking about. And when patients have both migraine and neck pain, they may merit increased therapeutic attention.”
As Dr. Lipton noted, clinicians have long known that neck pain is common in migraine, although it’s been unclear how the two conditions are connected. “One possibility is that the neck pain is actually a manifestation of the migraine headache. Another possibility is that the neck pain is an independent factor unrelated to migraine headaches: Many people have migraine and cervical spine disease. And the third possibility is that neck pain may be an exacerbating factor, that cervical spine disease may make the migraine worse.”
Referred pain is a potential factor too, he said.
Assessing Migraine, Neck Pain, and Disability
The new study sought to better understand the role of neck pain in migraine, Dr. Lipton said.
For the CaMEO-I study, researchers surveyed 51,969 adults with headache via the Internet in Canada, France, Germany, Japan, the United Kingdom, and the United States from 2021 to 2022. Most of the 37,477 patients with non-migraine headache were considered to have tension-type headaches.
Among the 14,492 patients with migraine, demographics were statistically similar between those with and without neck pain (mean age, 40.7 vs 42.1 years; 68.4% vs 72.5% female; mean BMI, 26.0 vs 26.4, respectively).
Among patients in the US, 71.4% of patients with migraine reported neck pain versus 35.9% of those with non-migraine headaches. In Canada, the numbers were 69.5% and 37.5%, respectively.
Among all patients with migraine, moderate-to-severe disability was more common among those with neck pain than those without neck pain (47.7% vs 28.9%, respectively, P < .001). Those with both migraine and neck pain had more symptom burden (P < .001), and 28.4% said neck pain was their most bothersome symptom. They also had a higher number of symptoms (P < .001).
Several conditions were more common among patients with migraine who reported neck pain versus those who didn’t (depression, 40.2% vs 28.2%; anxiety, 41.2% vs 29.2%; and allodynia, 54.0% vs 36.6%, respectively, all P < .001). Those with neck pain were also more likely to have “poor acute treatment optimization” (61.1% vs 53.3%, respectively, P < .001).
Researchers noted limitations such as the use of self-reported data, the potential for selection bias, limitations regarding survey questions, and an inability to determine causation.
Clinical Messages
The findings suggest that patients with both migraine and neck pain have greater activation of second-order neurons in the trigeminocervical complex, Dr. Lipton said.
He added that neck pain is often part of the migraine prodrome or the migraine attack itself, suggesting that it’s “part and parcel of the migraine attack.” However, neck pain may have another cause — such as degenerative disease of the neck — if it’s not directly connected to migraine, he added.
As for clinical messages from the study, “it’s quite likely that the neck pain is a primary manifestation of migraine. Migraine may well be the explanation in the absence of a reason to look further,” Dr. Lipton said.
If neck pain heralds a migraine, treating the prodrome with CGRP receptor antagonists (“gepants”) can be helpful, he said. He highlighted other preventive options include beta blockers, anti-epilepsy drugs, and monoclonal antibodies. There’s also anecdotal support for using botulinum toxin A in patients with chronic migraine and neck pain, he said.
In an interview, Mayo Clinic Arizona associate professor of neurology Rashmi B. Halker Singh, MD, who’s familiar with the study but did not take part in it, praised the research. The findings “help us to better understand the impact of living with neck pain if you are somebody with migraine,” she said. “It alerts us that we need to be more aggressive in how we manage that in patients.”
The study also emphasizes the importance of preventive medication in appropriate patients with migraine, especially those with neck pain who may be living with greater disability, she said. “About 13% of people with migraine are on a preventive medication, but about 40% are eligible. That’s an area where we have a big gap.”
Dr. Halker Singh added that non-medication strategies such as acupuncture and physical therapy can be helpful.
AbbVie funded the study. Dr. Lipton reports support for the study from AbbVie; research support paid to his institution from the Czap Foundation, National Headache Foundation, National Institutes of Health, S&L Marx Foundation, and US Food and Drug Administration; and personal fees from AbbVie/Allergan, American Academy of Neurology, American Headache Society, Amgen, Biohaven, Biovision, Boston, Dr. Reddy’s (Promius), electroCore, Eli Lilly, GlaxoSmithKline, Grifols, Lundbeck (Alder), Merck, Pernix, Pfizer, Teva, Vector, and Vedanta Research. He holds stock/options in Axon, Biohaven, CoolTech, and Manistee. Other authors report various disclosures.
Dr. Halker Singh is deputy editor of Headache, where the study was published, but wasn’t aware of it until it was published.
, an international, prospective, cross-sectional study finds.
Of 51,969 respondents with headache over the past year, the 27.9% with migraine were more likely to have neck pain than those with non-migraine headache (68.3% vs 36.1%, respectively, P < .001), reported Richard B. Lipton, MD, professor of neurology at Albert Einstein College of Medicine, New York City, and colleagues in Headache.
Compared with other patients with migraine, those who also have neck pain have “greater disability, more psychiatric comorbidities, more allodynia, diminished quality of life, decreased work productivity, and reduced response to treatment,” Dr. Lipton said in an interview. “If patients don’t report [neck pain], it is probably worth asking about. And when patients have both migraine and neck pain, they may merit increased therapeutic attention.”
As Dr. Lipton noted, clinicians have long known that neck pain is common in migraine, although it’s been unclear how the two conditions are connected. “One possibility is that the neck pain is actually a manifestation of the migraine headache. Another possibility is that the neck pain is an independent factor unrelated to migraine headaches: Many people have migraine and cervical spine disease. And the third possibility is that neck pain may be an exacerbating factor, that cervical spine disease may make the migraine worse.”
Referred pain is a potential factor too, he said.
Assessing Migraine, Neck Pain, and Disability
The new study sought to better understand the role of neck pain in migraine, Dr. Lipton said.
For the CaMEO-I study, researchers surveyed 51,969 adults with headache via the Internet in Canada, France, Germany, Japan, the United Kingdom, and the United States in 2021-2022. Most of the 37,477 patients with non-migraine headaches were considered to have tension headaches.
Among the 14,492 patients with migraine, demographics were statistically similar between those who did and did not have neck pain (average age, 40.7 and 42.1 years; 68.4% and 72.5% female; and average BMI, 26.0 and 26.4, respectively).
Among patients in the US, 71.4% of patients with migraine reported neck pain versus 35.9% of those with non-migraine headaches. In Canada, the numbers were 69.5% and 37.5%, respectively.
Among all patients with migraine, moderate-to-severe disability was more common among those with neck pain than those without neck pain (47.7% vs 28.9%, respectively, P < .001). Those with both migraine and neck pain had more symptom burden (P < .001), and 28.4% said neck pain was their most bothersome symptom. They also had a higher number of symptoms (P < .001).
Several conditions were more common among patients with migraine who reported neck pain versus those who didn’t (depression, 40.2% vs 28.2%; anxiety, 41.2% vs 29.2%; and allodynia, 54.0% vs 36.6%, respectively, all P < .001). Those with neck pain were also more likely to have “poor acute treatment optimization” (61.1% vs 53.3%, respectively, P < .001).
Researchers noted limitations such as the use of self-reported data, the potential for selection bias, limitations regarding survey questions, and an inability to determine causation.
Clinical Messages
The findings suggest that patients with both migraine and neck pain have greater activation of second-order neurons in the trigeminocervical complex, Dr. Lipton said.
He added that neck pain is often part of the migraine prodrome or the migraine attack itself, suggesting that it’s “part and parcel of the migraine attack.” However, neck pain may have another cause — such as degenerative disease of the neck — if it’s not directly connected to migraine, he added.
As for clinical messages from the study, “it’s quite likely that the neck pain is a primary manifestation of migraine. Migraine may well be the explanation in the absence of a reason to look further,” Dr. Lipton said.
If neck pain heralds a migraine, treating the prodrome with CGRP receptor antagonists (“gepants”) can be helpful, he said. He noted that other preventive options include beta blockers, antiepilepsy drugs, and monoclonal antibodies. There’s also anecdotal support for using botulinum toxin A in patients with chronic migraine and neck pain, he said.
In an interview, Mayo Clinic Arizona associate professor of neurology Rashmi B. Halker Singh, MD, who’s familiar with the study but did not take part in it, praised the research. The findings “help us to better understand the impact of living with neck pain if you are somebody with migraine,” she said. “It alerts us that we need to be more aggressive in how we manage that in patients.”
The study also emphasizes the importance of preventive medication in appropriate patients with migraine, especially those with neck pain who may be living with greater disability, she said. “About 13% of people with migraine are on a preventive medication, but about 40% are eligible. That’s an area where we have a big gap.”
Dr. Halker Singh added that non-medication strategies such as acupuncture and physical therapy can be helpful.
AbbVie funded the study. Dr. Lipton reports support for the study from AbbVie; research support paid to his institution from the Czap Foundation, National Headache Foundation, National Institutes of Health, S&L Marx Foundation, and US Food and Drug Administration; and personal fees from AbbVie/Allergan, American Academy of Neurology, American Headache Society, Amgen, Biohaven, Biovision, Boston, Dr. Reddy’s (Promius), electroCore, Eli Lilly, GlaxoSmithKline, Grifols, Lundbeck (Alder), Merck, Pernix, Pfizer, Teva, Vector, and Vedanta Research. He holds stock/options in Axon, Biohaven, CoolTech, and Manistee. Other authors report various disclosures.
Dr. Halker Singh is deputy editor of Headache, where the study was published, but wasn’t aware of it until it was published.
FROM HEADACHE
Is Anxiety a Prodromal Feature of Parkinson’s Disease?
People who develop anxiety after age 50 years may have double the risk of a subsequent Parkinson’s disease diagnosis, new research suggested.
Investigators drew on 10-year data from a primary care registry to compare almost 110,000 patients who developed anxiety after the age of 50 years with close to 900,000 matched controls without anxiety.
After adjusting for a variety of sociodemographic, lifestyle, psychiatric, and neurological factors, they found that the risk of developing Parkinson’s disease was double in those with anxiety, compared with controls.
“Anxiety is known to be a feature of the early stages of Parkinson’s disease, but prior to our study, the prospective risk of Parkinson’s in those over the age of 50 with new-onset anxiety was unknown,” colead author Juan Bazo Alvarez, a senior research fellow in the Division of Epidemiology and Health at University College London, London, England, said in a news release.
The study was published online in the British Journal of General Practice.
The presence of anxiety is increased in prodromal Parkinson’s disease, but the prospective risk for Parkinson’s disease in those aged 50 years or older with new-onset anxiety was largely unknown.
Investigators analyzed data from a large UK primary care dataset that includes all people aged between 50 and 99 years who were registered with a participating practice from Jan. 1, 2008, to Dec. 31, 2018.
They identified 109,435 people (35% men) with more than one anxiety record in the database but no previous record of anxiety for 1 year or more and 878,256 people (37% men) with no history of anxiety (control group).
Features of Parkinson’s disease such as sleep problems, depression, tremor, and impaired balance were then tracked from the point of the anxiety diagnosis until 1 year before the Parkinson’s disease diagnosis.
Among those with anxiety, 331 developed Parkinson’s disease during the follow-up period, with a median time to diagnosis of 4.9 years after the first recorded episode of anxiety.
The incidence of Parkinson’s disease was 1.2 per 1000 person-years (95% CI, 0.92-1.13) in those with anxiety versus 0.49 (95% CI, 0.47-0.52) in those without anxiety.
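An incidence rate of this kind is simply cases divided by person-years of follow-up, scaled to 1000. The registry's person-year totals are not given in the article, so the figure below is an assumption back-derived from the reported case count and rate; the sketch only illustrates the formula:

```python
# Back-of-envelope check of the reported incidence rate.
# ASSUMPTION: person-years are inferred from cases / (rate per 1000 py),
# since the article does not report follow-up totals directly.
cases_anxiety = 331          # Parkinson's diagnoses in the anxiety group
rate_reported = 1.2          # per 1000 person-years
person_years = cases_anxiety / rate_reported * 1000  # implied ~275,833 py

def incidence_per_1000(cases: int, py: float) -> float:
    """Incidence rate per 1000 person-years of follow-up."""
    return cases / py * 1000

print(round(incidence_per_1000(cases_anxiety, person_years), 2))
```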
After adjustment for age, sex, social deprivation, lifestyle factors, severe mental illness, head trauma, and dementia, the risk for Parkinson’s disease was double in those with anxiety, compared with the non-anxiety group (hazard ratio, 2.1; 95% CI, 1.9-2.4).
Individuals without anxiety also developed Parkinson’s disease later than those with anxiety.
The researchers identified specific symptoms that were associated with later development of Parkinson’s disease in those with anxiety, including depression, sleep disturbance, fatigue, and cognitive impairment, among other symptoms.
“The results suggest that there is a strong association between anxiety and diagnosis of Parkinson’s disease in patients aged over 50 years who present with a new diagnosis of anxiety,” the authors wrote. “This provides evidence for anxiety as a prodromal presentation of Parkinson’s disease.”
Future research “should explore anxiety in relation to other prodromal symptoms and how this symptom complex is associated with the incidence of Parkinson’s disease,” the researchers wrote. Doing so “may lead to earlier diagnosis and better management of Parkinson’s disease.”
This study was funded by the European Union. Specific authors received funding from the National Institute for Health and Care Research and the Alzheimer’s Society Clinical Training Fellowship program. The authors declared no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM THE BRITISH JOURNAL OF GENERAL PRACTICE
Benzos Are Hard on the Brain, But Do They Raise Dementia Risk?
A study of more than 5000 older adults found that benzodiazepine use was associated with an accelerated reduction in the volume of the hippocampus and amygdala — brain regions involved in memory and mood regulation. However, benzodiazepine use overall was not associated with an increased risk for dementia.
The findings suggest that benzodiazepine use “may have subtle, long-term impact on brain health,” lead investigator Frank Wolters, MD, PhD, with Erasmus University Medical Center, Rotterdam, the Netherlands, and colleagues wrote.
The study was published online in BMC Medicine.
Conflicting Evidence
Benzodiazepines are commonly prescribed in older adults for anxiety and sleep disorders. Though the short-term cognitive side effects are well documented, the long-term impact on neurodegeneration and dementia risk remains unclear. Some studies have linked benzodiazepine use to an increased risk for dementia, whereas others have not.
Dr. Wolters and colleagues assessed the effect of benzodiazepine use on long-term dementia risk and on imaging markers of neurodegeneration in 5443 cognitively healthy adults (mean age, 71 years; 57% women) from the population-based Rotterdam Study.
Benzodiazepine use between 1991 and 2008 was determined using pharmacy dispensing records, and dementia incidence was determined from medical records.
Half of the participants had used benzodiazepines at any time in the 15 years before baseline (2005-2008); 47% used anxiolytics, 20% used sedative-hypnotics, 34% used both, and 13% were still using the drugs at the baseline assessment.
During an average follow-up of 11 years, 13% of participants developed dementia.
Overall, use of benzodiazepines was not associated with dementia risk, compared with never-use (hazard ratio [HR], 1.06), irrespective of cumulative dose.
The risk for dementia was somewhat higher with any use of anxiolytics than with sedative-hypnotics (HR, 1.17 vs HR, 0.92), although neither was statistically significant. The highest risk estimates were observed for high cumulative dose of anxiolytics (HR, 1.33).
Sensitivity analyses of the two most commonly used anxiolytics found no differences in risk between use of short half-life oxazepam and long half-life diazepam (HR, 1.01 and HR, 1.06, respectively, for ever-use, compared with never-use for oxazepam and diazepam).
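A hazard ratio compares the instantaneous risk in users versus never-users, so an HR of 1.33 corresponds to a 33% higher hazard and an HR of 0.92 to an 8% lower one. A minimal sketch of that arithmetic, using the point estimates quoted above (none of which reached statistical significance per the article):

```python
# Translating the reported hazard ratios into percent differences in hazard
# versus never-use. Illustrative arithmetic only; these point estimates were
# not statistically significant in the study.
hazard_ratios = {
    "any benzodiazepine": 1.06,
    "anxiolytics": 1.17,
    "sedative-hypnotics": 0.92,
    "high-dose anxiolytics": 1.33,
}

for exposure, hr in hazard_ratios.items():
    change = (hr - 1) * 100  # % difference in instantaneous dementia risk
    direction = "higher" if change > 0 else "lower"
    print(f"{exposure}: {abs(change):.0f}% {direction} hazard vs never-use")
```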
Brain Atrophy
The researchers investigated potential associations between benzodiazepine use and brain volumes using brain MRI from 4836 participants.
They found that current use of a benzodiazepine at baseline was significantly associated with lower total brain volume — as well as lower hippocampus, amygdala, and thalamus volume cross-sectionally — and with accelerated volume loss of the hippocampus and, to a lesser extent, amygdala longitudinally.
Imaging findings did not differ by type of benzodiazepine used or cumulative dose.
“Given the availability of effective alternative pharmacological and nonpharmacological treatments for anxiety and sleep problems, it is important to carefully consider the necessity of prolonged benzodiazepine use in light of potential detrimental effects on brain health,” the authors wrote.
Risks Go Beyond the Brain
Commenting on the study, Shaheen Lakhan, MD, PhD, a neurologist and researcher based in Miami, Florida, noted that “chronic benzodiazepine use may reduce neuroplasticity, potentially interfering with the brain’s ability to form new connections and adapt.
“Long-term use can lead to down-regulation of GABA receptors, altering the brain’s natural inhibitory mechanisms and potentially contributing to tolerance and withdrawal symptoms. Prolonged use can also disrupt the balance of various neurotransmitter systems beyond just GABA, potentially affecting mood, cognition, and overall brain function,” said Dr. Lakhan, who was not involved in the study.
“While the literature is mixed on chronic benzodiazepine use and dementia risk, prolonged use has consistently been associated with accelerated volume loss in certain brain regions, particularly the hippocampus and amygdala,” which are responsible for memory, learning, and emotional regulation, he noted.
“Beyond cognitive impairments and brain volume loss, chronic benzodiazepine use is associated with tolerance and dependence, potential for abuse, interactions with other drugs, and increased fall risk, especially in older adults,” Dr. Lakhan added.
Current guidelines discourage long-term use of benzodiazepines because of risk for psychological and physical dependence; falls; and cognitive impairment, especially in older adults. Nevertheless, research shows that 30%-40% of older benzodiazepine users stay on the medication beyond the recommended period of several weeks.
Donovan T. Maust, MD, Department of Psychiatry, University of Michigan Medical School, Ann Arbor, said in an interview these new findings are consistent with other recently published observational research that suggest benzodiazepine use is not linked to dementia risk.
“I realize that such meta-analyses that find a positive relationship between benzodiazepines and dementia are out there, but they include older, less rigorous studies,” said Dr. Maust, who was not part of the new study. “In my opinion, the jury is not still out on this topic. However, there are plenty of other reasons to avoid them — and in particular, starting them — in older adults, most notably the increased risk of fall injury as well as increased overdose risk when taken along with opioids.”
A version of this article first appeared on Medscape.com.
The study of more than 5000 older adults found that benzodiazepine use was associated with an accelerated reduction in the volume of the hippocampus and amygdala — brain regions involved in memory and mood regulation. However, benzodiazepine use overall was not associated with an increased risk for dementia.
The findings suggest that benzodiazepine use “may have subtle, long-term impact on brain health,” lead investigator Frank Wolters, MD, PhD, with Erasmus University Medical Center, Rotterdam, the Netherlands, and colleagues wrote.
The study was published online in BMC Medicine.
Conflicting Evidence
Benzodiazepines are commonly prescribed in older adults for anxiety and sleep disorders. Though the short-term cognitive side effects are well documented, the long-term impact on neurodegeneration and dementia risk remains unclear. Some studies have linked benzodiazepine use to an increased risk for dementia, whereas others have not.
Dr. Wolters and colleagues assessed the effect of benzodiazepine use on long-term dementia risk and on imaging markers of neurodegeneration in 5443 cognitively healthy adults (mean age, 71 years; 57% women) from the population-based Rotterdam Study.
Benzodiazepine use between 1991 and 2008 was determined using pharmacy dispensing records, and dementia incidence was determined from medical records.
Half of the participants had used benzodiazepines at any time in the 15 years before baseline (2005-2008); 47% used anxiolytics, 20% used sedative-hypnotics, 34% used both, and 13% were still using the drugs at the baseline assessment.
During an average follow-up of 11 years, 13% of participants developed dementia.
Overall, use of benzodiazepines was not associated with dementia risk, compared with never-use (hazard ratio [HR], 1.06), irrespective of cumulative dose.
The risk for dementia was somewhat higher with any use of anxiolytics than with sedative-hypnotics (HR, 1.17 vs HR, 0.92), although neither association was statistically significant. The highest risk estimates were observed for a high cumulative dose of anxiolytics (HR, 1.33).
Sensitivity analyses of the two most commonly used anxiolytics found no differences in risk between use of short half-life oxazepam and long half-life diazepam (HR, 1.01 and HR, 1.06, respectively, for ever-use, compared with never-use for oxazepam and diazepam).
Brain Atrophy
The researchers investigated potential associations between benzodiazepine use and brain volumes using brain MRI data from 4836 participants.
They found that current use of a benzodiazepine at baseline was significantly associated with lower total brain volume — as well as lower hippocampus, amygdala, and thalamus volume cross-sectionally — and with accelerated volume loss of the hippocampus and, to a lesser extent, amygdala longitudinally.
Imaging findings did not differ by type of benzodiazepine used or cumulative dose.
“Given the availability of effective alternative pharmacological and nonpharmacological treatments for anxiety and sleep problems, it is important to carefully consider the necessity of prolonged benzodiazepine use in light of potential detrimental effects on brain health,” the authors wrote.
Risks Go Beyond the Brain
Commenting on the study, Shaheen Lakhan, MD, PhD, a neurologist and researcher based in Miami, Florida, noted that “chronic benzodiazepine use may reduce neuroplasticity, potentially interfering with the brain’s ability to form new connections and adapt.
“Long-term use can lead to down-regulation of GABA receptors, altering the brain’s natural inhibitory mechanisms and potentially contributing to tolerance and withdrawal symptoms. Prolonged use can also disrupt the balance of various neurotransmitter systems beyond just GABA, potentially affecting mood, cognition, and overall brain function,” said Dr. Lakhan, who was not involved in the study.
“While the literature is mixed on chronic benzodiazepine use and dementia risk, prolonged use has consistently been associated with accelerated volume loss in certain brain regions, particularly the hippocampus and amygdala,” which are responsible for memory, learning, and emotional regulation, he noted.
“Beyond cognitive impairments and brain volume loss, chronic benzodiazepine use is associated with tolerance and dependence, potential for abuse, interactions with other drugs, and increased fall risk, especially in older adults,” Dr. Lakhan added.
Current guidelines discourage long-term use of benzodiazepines because of risk for psychological and physical dependence; falls; and cognitive impairment, especially in older adults. Nevertheless, research shows that 30%-40% of older benzodiazepine users stay on the medication beyond the recommended period of several weeks.
Donovan T. Maust, MD, Department of Psychiatry, University of Michigan Medical School, Ann Arbor, said in an interview that these new findings are consistent with other recently published observational research suggesting that benzodiazepine use is not linked to dementia risk.
“I realize that such meta-analyses that find a positive relationship between benzodiazepines and dementia are out there, but they include older, less rigorous studies,” said Dr. Maust, who was not part of the new study. “In my opinion, the jury is not still out on this topic. However, there are plenty of other reasons to avoid them — and in particular, starting them — in older adults, most notably the increased risk of fall injury as well as increased overdose risk when taken along with opioids.”
A version of this article first appeared on Medscape.com.
FROM BMC MEDICINE