Cognition and MS


Cognitive changes related to multiple sclerosis (MS) were first mentioned by Jean-Martin Charcot in 1877; however, it is only within the past 25 to 30 years that cognitive impairment in MS has received significant clinical study. Despite a growing body of research, formal screening of cognitive function is not always part of routine MS clinical care.

Q) How common are cognitive symptoms in MS?

Cognitive changes affect up to 65% of patients in MS clinic samples and about one-third of pediatric MS patients.1 Cognitive deficits occur in all MS disease courses, including clinically isolated syndrome, although they are most prevalent in secondary progressive and primary progressive disease.1 Cognitive changes have even been observed in radiologically isolated syndrome, in which MRI findings consistent with MS are present without any neurologic symptoms or signs.2

Q) What cognitive domains are affected in MS?

Strong correlations have been demonstrated between cognitive impairment and MRI findings, including whole-brain atrophy and, to some degree, overall white matter lesion burden. Cognitive changes also result from damage to specific areas, including the deep gray matter, corpus callosum, cerebral cortex, and mesial temporal lobe.3-5

The type and severity of cognitive deficits vary widely among people with MS. However, difficulties with information processing speed and short-term memory are the symptoms most commonly seen in this population. Problems with processing speed interfere with new learning and affect memory and executive function. Other domains that can be affected include complex attention, verbal fluency, and visuospatial perception.1

Q) Are cognitive symptoms in MS progressive?

Not everyone with cognitive symptoms related to MS will show progressive changes. However, in a longitudinal study, increasing age and degree of physical disability were predictive of worsening cognitive symptoms. Also, people who demonstrate early cognitive symptoms may experience greater worsening.6

Q) What impact do cognitive symptoms have?

Changes in cognition are a common cause of performance problems in the workplace and thus significantly affect a person’s ability to maintain employment. Impaired cognition is a primary cause of early departure from the workforce and has significant implications for self-image and self-esteem.7

Furthermore, cognitive symptoms can reduce adherence to medications. They can also negatively affect daily life through increased risk for motor vehicle accidents, difficulties with routine household tasks, and significant challenges to relationships (particularly but not exclusively those with caregivers).

Q) How are cognitive symptoms assessed?

Several screening tools take very little time to administer and can be used in the clinic setting. The Symbol Digit Modalities Test (SDMT; www.wpspublish.com/store/p/2955/sdmt-symbol-digit-modalities-test) is validated in MS and takes approximately 90 seconds to complete. This screening instrument is proprietary and has a small fee associated with its use.8

Other possible causes of cognitive dysfunction should be investigated as well. These include a review of current medications (such as anticholinergics, benzodiazepines, other sedatives, cannabis, topiramate, and opioids) and consideration of other diseases and conditions that may contribute to or cause cognitive impairment, including vascular conditions, metabolic deficiencies, infection, tumor, substance abuse, early dementia, and hypothyroidism.

Should cognitive problems be identified—either through the history, during the clinic visit, or via screening tests—more formal testing, usually performed by a neuropsychologist, may be useful in identifying the domains of function that are impaired. This information can help to identify and implement appropriate compensatory strategies, plan cognitive rehabilitation interventions, and (in the United States) assist the individual to obtain Social Security disability benefits.

Q) How are cognitive symptoms managed?

Multiple clinical trials of cognitive rehabilitation strategies have demonstrated the efficacy of computer-based programs in improving new learning, short-term memory, processing speed, and attention.9 Cognitive rehabilitation programs should be administered and/or supervised by a health care professional who is knowledgeable about both MS and cognitive rehabilitation. Professionals such as neuropsychologists, occupational therapists, and speech-language pathologists often direct cognitive training programs.

Medications that stimulate the central nervous system have been used to improve mental alertness. However, clinical trials are few and have yielded mixed results.

In clinical trials, physical exercise has been shown to improve processing speed. More research is needed to determine which type of exercise is most beneficial and how much improvement in cognitive function can be expected.

SUMMARY

Cognitive function can be negatively affected by MS, and changes in cognition in turn affect activities of daily living, including employment and relationships. The National MS Society recommends regular screening of cognition using validated tools such as the SDMT. Additional testing is warranted for individuals who report cognitive difficulties at home or work or who score below controls on screening tests. Cognitive rehabilitation may help some individuals improve their cognitive function. More research is needed to identify additional cognitive training techniques, better understand the role of physical exercise, and identify medications that may help maintain cognitive function.

References

1. Amato MP, Zipoli V, Portaccio E. Cognitive changes in multiple sclerosis. Expert Rev Neurother. 2008;8(10):1585-1596.
2. Labiano-Fontcuberta A, Martínez-Ginés ML, Aladro Y, et al. A comparison study of cognitive deficits in radiologically and clinically isolated syndromes. Mult Scler. 2016;22(2):250-253.
3. Benedict RH, Ramasamy D, Munschauer F, et al. Memory impairment in multiple sclerosis: correlation with deep grey matter and mesial temporal atrophy. J Neurol Neurosurg Psychiatry. 2009;80(2):201-206.
4. Rocca MA, Amato MP, De Stefano N, et al; MAGNIMS Study Group. Clinical and imaging assessment of cognitive dysfunction in multiple sclerosis. Lancet Neurol. 2015;14(3):302-317.
5. Rovaris M, Comi G, Filippi M. MRI markers of destructive pathology in multiple sclerosis-related cognitive dysfunction. J Neurol Sci. 2006;245(1-2):111-116.
6. Johnen A, Landmeyer NC, Bürkner PC, et al. Distinct cognitive impairments in different disease courses of multiple sclerosis: a systematic review and meta-analysis. Neurosci Biobehav Rev. 2017;83:568-578.
7. Rao SM, Leo GJ, Ellington L, et al. Cognitive dysfunction in multiple sclerosis. II. Impact on employment and social functioning. Neurology. 1991;41(5):692-696.
8. Parmenter BA, Weinstock-Guttman B, Garg N, et al. Screening for cognitive impairment in multiple sclerosis using the symbol digit modalities test. Mult Scler. 2007;13(1):52-57.
9. Goverover Y, Chiaravalloti ND, O’Brien AR, DeLuca J. Evidenced-based cognitive rehabilitation for persons with multiple sclerosis: an updated review of the literature from 2007 to 2016. Arch Phys Med Rehabil. 2018;99(2):390-407.

Author and Disclosure Information

MS Consult is edited by Colleen J. Harris, MN, NP, MSCN, Nurse Practitioner/Manager of the Multiple Sclerosis Clinic at Foothills Medical Centre in Calgary, Alberta, Canada, and Bryan Walker, MHS, PA-C, who is in the Department of Neurology, Division of MS and Neuroimmunology, at Duke University Medical Center in Durham, North Carolina.

 

Kathleen Costello is Associate Vice President of Healthcare Access at the National MS Society.


How seizure prediction may benefit patients with epilepsy


For people with epilepsy, “the sudden and apparently unpredictable nature of seizures is one of the most disabling aspects of having the disorder,” said Michael Privitera, MD. Reliable seizure forecasts could help patients stay safe, improve their quality of life, and create intervention opportunities to prevent seizures.


If a patient knew that “tomorrow will be a dangerous day” with a 50% chance of having a seizure, the patient could avoid hazardous activities, try to reduce stress, or increase supervision to reduce the risk of sudden, unexpected death in epilepsy, said Dr. Privitera, professor of neurology and director of the epilepsy center at the University of Cincinnati Gardner Neuroscience Institute. Physicians might be able to intervene during high-risk periods by altering antiepileptic drug regimens.

Evidence suggests that seizure prediction is possible today and that advances in wearable devices and analysis of chronic EEG recordings likely will improve the ability to predict seizures, Dr. Privitera said at the annual meeting of the American Epilepsy Society. Studies have found that some patients can predict the likelihood of seizures in the next 24 hours better than chance. In the future, algorithms that incorporate variables such as pulse, stress, mood, electrodermal activity, circadian rhythms, and EEG may further refine seizure prediction.

A complex picture

One problem with predicting seizures is that “you can have substantial changes in the seizure tendency, but not have a seizure,” Dr. Privitera said. Stress, alcohol, and missed medications, for example, may affect the seizure threshold. “They may be additive, and it may be when those things all hit at once that a seizure happens.”

Many patients report prodromal or premonitory symptoms before a seizure. “Most of us as clinicians will say, ‘Well, maybe you have some inkling, but I don’t think you’re really able to predict it,’ ” Dr. Privitera said.

Sheryl R. Haut, MD, professor of neurology at the Albert Einstein College of Medicine, New York, and her colleagues prospectively looked at patient self-prediction in 2007 (Neurology. 2007 Jan 23;68[4]:262-6). The investigators followed 74 people with epilepsy who completed a daily diary in which they predicted the likelihood of a seizure occurring in the next 24 hours. Their analysis included approximately 15,000 diary days and 1,400 seizure days.

A subset of participants, about 20%, was significantly better than chance at predicting when a seizure would happen. If a patient in this subgroup said that a seizure was extremely likely, then a seizure occurred approximately 37% of the time. If a patient predicted that a seizure was extremely unlikely, there was about a 10% chance of having a seizure.

“This was a pretty substantial difference,” Dr. Privitera said. Combining patients’ predictions with their self-reported stress levels seemed to yield the most accurate predictions.
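To put those self-prediction numbers in perspective, the sketch below shows how such diary data can be summarized as a relative risk and an odds ratio. This is a minimal illustration with hypothetical counts, chosen only so the seizure rates match the approximately 37% and 10% figures reported above.

```python
# Minimal sketch: summarizing diary-based seizure self-prediction.
# Counts are hypothetical, chosen to match the ~37% and ~10% rates above.

likely_days, likely_seizures = 100, 37      # days rated "extremely likely"
unlikely_days, unlikely_seizures = 100, 10  # days rated "extremely unlikely"

p_likely = likely_seizures / likely_days
p_unlikely = unlikely_seizures / unlikely_days

relative_risk = p_likely / p_unlikely
odds_ratio = (p_likely / (1 - p_likely)) / (p_unlikely / (1 - p_unlikely))

print(f"seizure rate on 'extremely likely' days:   {p_likely:.0%}")
print(f"seizure rate on 'extremely unlikely' days: {p_unlikely:.0%}")
print(f"relative risk: {relative_risk:.1f}")   # ~3.7 with these counts
print(f"odds ratio:    {odds_ratio:.2f}")      # ~5.3 with these counts
```

With these made-up counts, a day rated “extremely likely” carries roughly 3.7 times the seizure risk of a day rated “extremely unlikely”; the odds ratios reported in the studies themselves come from the actual diary data.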
 

Stress and the SMILE study

About 90% of people with epilepsy identify at least one seizure precipitant, and the most commonly cited trigger is stress. When Dr. Privitera and his colleagues surveyed patients in their clinic, 82% identified stress as a trigger (Epilepsy Behav. 2014 Dec;41:74-7). More than half of these patients had used some form of stress reduction, such as exercise, yoga, or meditation; 88% of those patients thought that stress reduction helped their seizures.

 

 

Underlying anxiety was the only difference between patients who thought that their seizures were triggered by stress and those who did not. Patients who did not think that stress triggered their seizures had significantly lower scores on the Generalized Anxiety Disorder–7 scale.

Subsequently, Dr. Haut, Dr. Privitera, and colleagues conducted the Stress Management Intervention for Living with Epilepsy (SMILE) study, a prospective, controlled trial assessing the efficacy of a stress reduction intervention for reducing seizures, as well as measuring seizure self-prediction (Neurology. 2018 Mar 13;90[11]:e963-70). The researchers randomized patients to a progressive muscle relaxation intervention or to a control group; patients in the control group wrote down their activities for the day.

Patients posted diary entries twice daily into a smartphone, reporting stress levels and mood-related variables. As in Dr. Haut’s earlier study, patients predicted whether having a seizure was extremely unlikely, unlikely, neutral, likely, or extremely likely. Mood and stress variables (such as feeling unpleasant or pleasant, relaxed or stressed, and not worried or extremely worried) were rated on a visual analog scale from 0 to 100.

The trial included participants who had at least two seizures per month and any seizure trigger. Medications were kept stable throughout the study. During a 2-month baseline, patients tracked their seizures and stress levels. During the 3-month treatment period, patients received the active or control intervention.

In all, 64 subjects completed the study, making all diary entries on 94% of days. In the active-treatment group, median seizure frequency decreased by 29%, compared with a 25% decrease in the control group. However, the difference between the groups was not statistically significant. Although the 25% reduction in the control group probably is partly attributable to the placebo effect, part of the decrease may be related to a mindfulness effect from completing the diary, Dr. Privitera said.

The active-treatment group had a statistically significant reduction in self-reported stress, compared with the control group, but this decrease did not correlate with seizure reduction. Changes in anxiety levels also did not correlate with seizures.

“It does not disprove the [stress] hypothesis, but it does tell us that there is more going on with stress and seizure triggers than just patients’ self-reported stress,” Dr. Privitera said.
 

Patients’ predictions

The seizure prediction findings in SMILE were similar to those of Dr. Haut’s earlier study. Among the 10 highest predictors out of the 64 participants, “when they said that a seizure was extremely likely, they were 8.36 times more likely to have a seizure than when they said a seizure was extremely unlikely,” Dr. Privitera said.

Many patients seemed to increase their predicted seizure probabilities in the days after having a seizure. In addition, feeling sad, nervous, worried, tense, or stressed significantly increased the likelihood that a patient would predict that a seizure was coming. However, these feelings were “not very accurate [for predicting] actual seizures,” he said. “Some people are better predictors, but really the basis of that prediction remains to be seen. One of my hypotheses is that some of these people may actually be responding to subclinical EEG changes.”

Together, these self-prediction studies include data from 4,500 seizures and 26,000 diary entries and show that “there is some information in patient self-report that can help us in understanding how to predict and when to predict seizures,” Dr. Privitera said.

 

 

Incorporating cardiac, EEG, and other variables

Various other factors may warrant inclusion in a seizure forecasting system. A new vagus nerve stimulation system responds to heart rate changes that occur at seizure onset. And for decades, researchers have studied the potential for EEG readings to predict seizures. A 2008 analysis of 47 reports concluded that limited progress had been made in predicting a seizure from interictal EEG (Epilepsy Behav. 2008 Jan;12[1]:128-35). Now, however, long-term intracranial recordings are providing new and important information about EEG patterns.

Whereas early studies examined EEG recordings from epilepsy monitoring units – when patients may have been sleep deprived, had medications removed, or recently undergone surgery – chronic intracranial recordings from devices such as the RNS (responsive neurostimulation) System have allowed researchers to look long term at EEG changes that are more representative of patients’ typical EEG patterns.

The RNS System detects interictal spikes and seizure discharges and then provides an electrical stimulation to stop seizures. “When you look at these recordings, there are a lot more electrographic seizures than clinical seizures that trigger these stimulations,” said Dr. Privitera. “If you look at somebody with a typical RNS, they may have 100 stimulations in a day and no clinical seizures. There are lots and lots of subclinical electrographic bursts – and not just spikes, but things that look like short electrographic seizures – that occur throughout the day.”

A handheld device

Researchers in Melbourne designed a system that uses implanted electrodes to provide chronic recordings (Lancet Neurol. 2013 Jun;12[6]:563-71). An algorithm then learned to predict the likelihood of a seizure from the patient’s data as the system recorded over time. The system could indicate when a seizure was likely by displaying a light on a handheld device. Patients were recorded for between 6 months and 3 years.
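As a rough illustration of the advisory concept, the sketch below maps a forecast seizure probability to a small set of warning levels, much as an indicator light might. The function and thresholds are hypothetical and are not taken from the Melbourne system, whose algorithm was learned from each patient’s own recordings.

```python
# Minimal sketch: turning a forecast probability into an advisory level,
# as a handheld indicator might. Thresholds here are hypothetical.

def advisory_state(p_seizure: float, low: float = 0.15, high: float = 0.50) -> str:
    """Map a forecast probability of seizure to an advisory level."""
    if p_seizure >= high:
        return "high"      # e.g., avoid hazardous activities, increase supervision
    if p_seizure >= low:
        return "moderate"  # heightened awareness
    return "low"

for p in (0.05, 0.30, 0.70):
    print(f"P(seizure) = {p:.2f} -> advisory: {advisory_state(p)}")
```

In practice, any such thresholds would have to be tuned per patient, since seizure patterns differ markedly from person to person.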

“There was a statistically significant ability to predict when seizures were happening,” Dr. Privitera said. “There is information in long-term intracranial recordings in many of these people that will help allow us to do a better prediction than what we are able to do right now, which is essentially not much.”

This research suggests that pooling data across patients may not be an effective seizure prediction strategy because different epilepsy types have different patterns. In addition, an individual’s patterns may differ from a group’s patterns. Complicating matters, individual patients may have multiple seizure types with different onset mechanisms.

“Another important lesson is that false positives in a deterministic sense may not represent false positives in a probabilistic sense,” Dr. Privitera said. “That is, when the seizure prediction program – whether it is the diary or the intracranial EEG or anything else – says the threshold changed, but you did not have a seizure, it does not mean that your prediction system was wrong. If the seizure tendency is going up … and your system says the seizure tendency went up, but all you are measuring is actual seizures, it looks like it is a false positive prediction of seizures. But in fact it is a true positive prediction of the seizure tendency changing but not necessarily reaching seizure threshold.”
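One way to make this distinction concrete is to score forecasts as probabilities, using a proper scoring rule such as the Brier score, rather than as binary hits and misses. The sketch below uses hypothetical forecasts and outcomes: the day a deterministic tally counts as a false positive may still have been a correct call that seizure tendency was elevated.

```python
# Minimal sketch: probabilistic vs. deterministic evaluation of seizure
# forecasts. Forecasts and outcomes are hypothetical.

forecasts = [0.7, 0.6, 0.1, 0.2, 0.8, 0.1]  # forecast P(seizure) for each day
outcomes  = [1,   0,   0,   0,   1,   0]    # 1 = seizure occurred that day

# Brier score: mean squared error of the probabilities (lower is better).
brier = sum((p - y) ** 2 for p, y in zip(forecasts, outcomes)) / len(forecasts)

# A deterministic tally (threshold at 0.5) calls day 2 a "false positive,"
# even though its 0.6 forecast may correctly reflect a raised seizure
# tendency that simply did not cross the seizure threshold.
false_positives = sum(p >= 0.5 and y == 0 for p, y in zip(forecasts, outcomes))

print(f"Brier score: {brier:.3f}")
print(f"deterministic false positives: {false_positives}")
```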

 

 

Multiday patterns

Recent research shows that “we are just at the start,” Dr. Privitera said. “There are patterns underlying seizure frequency that … we are only beginning to be able to look at because of these chronic recordings.”

Baud et al. analyzed interictal epileptiform activity and seizures in patients who have had responsive neurostimulators for as long as 10 years (Nat Commun. 2018 Jan 8;9[1]:88). “What they found was that interictal spikes and rhythmic discharges oscillate with circadian and multiday periods that differ from person to person,” Dr. Privitera said. “There were multiday periodicities, most commonly in the 20- to 30-day duration, that were relatively stable over periods of time that lasted up to years.”
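For a sense of how such multiday rhythms can be detected, the sketch below runs a simple periodogram over synthetic daily interictal spike counts that contain a built-in 26-day cycle. The data are made up for illustration; the published analyses applied more careful methods to real chronic recordings.

```python
# Minimal sketch: finding a multiday cycle in daily interictal spike counts
# with a periodogram. Synthetic data with a built-in 26-day rhythm.

import numpy as np

rng = np.random.default_rng(0)
days = np.arange(365)
spikes = 100 + 20 * np.sin(2 * np.pi * days / 26) + rng.normal(0, 5, days.size)

detrended = spikes - spikes.mean()
power = np.abs(np.fft.rfft(detrended)) ** 2
freqs = np.fft.rfftfreq(days.size, d=1.0)   # cycles per day

periods = 1.0 / freqs[1:]                   # skip the zero-frequency term
dominant = periods[np.argmax(power[1:])]
print(f"dominant period: {dominant:.1f} days")  # ~26 days for these data
```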

Researchers knew that seizures in women of childbearing age can cluster in association with the menstrual cycle, but similar cycles also were seen in men. In addition, the researchers found that seizures “occur preferentially during the rising phase of these multiday interictal rhythms,” which has implications for seizure forecasts, Dr. Privitera noted.
 

Stress biomarkers and wearables

Future seizure prediction methods may incorporate other biomarkers, such as stress hormones. A researcher at the University of Cincinnati, Jason Heikenfeld, PhD, is conducting research with a sensor that sticks to the wrist and measures sweat content, Dr. Privitera said. The technology originally was developed to measure sodium and potassium in sweat, but Dr. Privitera’s group has been working with him to measure cortisol, which may be a biomarker for stress and be useful for seizure prediction.

“Multivariate models are needed. We have lots of different ways that we can look at seizure prediction, and most likely the most accurate seizure prediction programs will incorporate multiple different areas,” Dr. Privitera said. “Seizure forecasting is possible. We can do it now. We can probably do it better than chance in many patients. ... It is important because changes in seizure likelihood could lead to pharmacologic or device or behavioral interventions that may help prevent seizures.”

Dr. Privitera reported conducting contracted research for Greenwich and SK Life Science and receiving consulting fees from Upsher-Smith and Astellas.

SOURCE: Privitera M. AES 2018, Judith Hoyer Lecture in Epilepsy.

Meeting/Event
Issue
Neurology Reviews- 27(2)
Publications
Topics
Page Number
24-25
Sections
Meeting/Event
Meeting/Event

 

– For people with epilepsy, “the sudden and apparently unpredictable nature of seizures is one of the most disabling aspects of having the disorder,” said Michael Privitera, MD. Reliable seizure forecasts could help patients stay safe, improve their quality of life, and create intervention opportunities to prevent seizures.

Dr. Michael Privitera

If a patient knew that “tomorrow will be a dangerous day” with a 50% chance of having a seizure, the patient could avoid hazardous activities, try to reduce stress, or increase supervision to reduce the risk of sudden, unexpected death in epilepsy, said Dr. Privitera, professor of neurology and director of the epilepsy center at the University of Cincinnati Gardner Neuroscience Institute. Physicians might be able to intervene during high-risk periods by altering antiepileptic drug regimens.

Evidence suggests that seizure prediction is possible today and that advances in wearable devices and analysis of chronic EEG recordings likely will improve the ability to predict seizures, Dr. Privitera said at the annual meeting of the American Epilepsy Society. Studies have found that some patients can predict the likelihood of seizures in the next 24 hours better than chance. In the future, algorithms that incorporate variables such as pulse, stress, mood, electrodermal activity, circadian rhythms, and EEG may further refine seizure prediction.

A complex picture

One problem with predicting seizures is that “you can have substantial changes in the seizure tendency, but not have a seizure,” Dr. Privitera said. Stress, alcohol, and missed medications, for example, may affect the seizure threshold. “They may be additive, and it may be when those things all hit at once that a seizure happens.”

Many patients report prodromal or premonitory symptoms before a seizure. “Most of us as clinicians will say, ‘Well, maybe you have some inkling, but I don’t think you’re really able to predict it,’ ” Dr. Privitera said.

Sheryl R. Haut, MD, professor of neurology at the Albert Einstein College of Medicine, New York, and her colleagues prospectively looked at patient self-prediction in 2007 (Neurology. 2007 Jan 23;68[4]:262-6). The investigators followed 74 people with epilepsy who completed a daily diary in which they predicted the likelihood of a seizure occurring in the next 24 hours. Their analysis included approximately 15,000 diary days and 1,400 seizure days.

A subset of participants, about 20%, was significantly better than chance at predicting when a seizure would happen. If a patient in this subgroup said that a seizure was extremely likely, then a seizure occurred approximately 37% of the time. If a patient predicted that a seizure was extremely unlikely, there was about a 10% chance of having a seizure.

“This was a pretty substantial difference,” Dr. Privitera said. Combining patients’ predictions with their self-reported stress levels seemed to yield the most accurate predictions.
 

Stress and the SMILE study

About 90% of people with epilepsy identify at least one seizure precipitant, and the most commonly cited trigger is stress. When Dr. Privitera and his colleagues surveyed patients in their clinic, 82% identified stress as a trigger (Epilepsy Behav. 2014 Dec;41:74-7). More than half of these patients had used some form of stress reduction, such as exercise, yoga, or meditation; 88% of those patients thought that stress reduction helped their seizures.

 

 

Underlying anxiety was the only difference between patients who thought that their seizures were triggered by stress and those who did not. Patients who did not think that stress triggered their seizures had significantly lower scores on the Generalized Anxiety Disorders–7.

Subsequently, Dr. Haut, Dr. Privitera, and colleagues conducted the Stress Management Intervention for Living with Epilepsy (SMILE) study, a prospective, controlled trial assessing the efficacy of a stress reduction intervention for reducing seizures, as well as measuring seizure self-prediction (Neurology. 2018 Mar 13;90[11]:e963-70). The researchers randomized patients to a progressive muscle relaxation intervention or to a control group; patients in the control group wrote down their activities for the day.

Patients posted diary entries twice daily into a smartphone, reporting stress levels and mood-related variables. As in Dr. Haut’s earlier study, patients predicted whether having a seizure was extremely unlikely, unlikely, neutral, likely, or extremely likely. Mood and stress variables (such as feeling unpleasant or pleasant, relaxed or stressed, and not worried or extremely worried) were ranked on a visual analog scale from 0 to 100.

The trial included participants who had at least two seizures per month and any seizure trigger. Medications were kept stable throughout the study. During a 2-month baseline, patients tracked their seizures and stress levels. During the 3-month treatment period, patients received the active or control intervention.

In all, 64 subjects completed the study, completing all diary entries on 94% of the days. In the active-treatment group, median seizure frequency decreased by 29%, compared with a 25% decrease in the control group. However, the difference between the groups was not statistically significant. Although the 25% reduction in the control group probably is partly attributable to the placebo effect, part of the decrease may be related to a mindfulness effect from completing the diary, Dr. Privitera said.

The active-treatment group had a statistically significant reduction in self-reported stress, compared with the control group, but this decrease did not correlate with seizure reduction. Changes in anxiety levels also did not correlate with seizures.

“It does not disprove the [stress] hypothesis, but it does tell us that there is more going on with stress and seizure triggers than just patients’ self-reported stress,” Dr. Privitera said.
 

Patients’ predictions

The seizure prediction findings in SMILE were similar to those of Dr. Haut’s earlier study. Among the 10 highest predictors out of the 64 participants, “when they said that a seizure was extremely likely, they were 8.36 times more likely to have a seizure than when they said a seizure was extremely unlikely,” Dr. Privitera said.

Many patients seemed to increase their predicted seizure probabilities in the days after having a seizure. In addition, feeling sad, nervous, worried, tense, or stressed significantly increased the likelihood that a patient would predict that a seizure was coming. However, these feelings were “not very accurate [for predicting] actual seizures,” he said. “Some people are better predictors, but really the basis of that prediction remains to be seen. One of my hypotheses is that some of these people may actually be responding to subclinical EEG changes.”

Together, these self-prediction studies include data from 4,500 seizures and 26,000 diary entries and show that “there is some information in patient self-report that can help us in understanding how to predict and when to predict seizures,” Dr. Privitera said.

 

 

Incorporating cardiac, EEG, and other variables

Various other factors may warrant inclusion in a seizure forecasting system. A new vagus nerve stimulation system responds to heart rate changes that occur at seizure onset. And for decades, researchers have studied the potential for EEG readings to predict seizures. A 2008 analysis of 47 reports concluded that limited progress had been made in predicting a seizure from interictal EEG (Epilepsy Behav. 2008 Jan;12[1]:128-35). Now, however, long-term intracranial recordings are providing new and important information about EEG patterns.

Whereas early studies examined EEG recordings from epilepsy monitoring units – when patients may have been sleep deprived, had medications removed, or recently undergone surgery – chronic intracranial recordings from devices such as the RNS (responsive neurostimulation) System have allowed researchers to look long term at EEG changes that are more representative of patients’ typical EEG patterns.

The RNS System detects interictal spikes and seizure discharges and then provides an electrical stimulation to stop seizures. “When you look at these recordings, there are a lot more electrographic seizures than clinical seizures that trigger these stimulations,” said Dr. Privitera. “If you look at somebody with a typical RNS, they may have 100 stimulations in a day and no clinical seizures. There are lots and lots of subclinical electrographic bursts – and not just spikes, but things that look like short electrographic seizures – that occur throughout the day.”

A handheld device

Researchers in Melbourne designed a system that uses implanted electrodes to provide chronic recordings (Lancet Neurol. 2013 Jun;12[6]:563-71). An algorithm then learned to predict the likelihood of a seizure from the patient’s data as the system recorded over time. The system could indicate when a seizure was likely by displaying a light on a handheld device. Patients were recorded for between 6 months and 3 years.

“There was a statistically significant ability to predict when seizures were happening,” Dr. Privitera said. “There is information in long-term intracranial recordings in many of these people that will help allow us to do a better prediction than what we are able to do right now, which is essentially not much.”

This research suggests that pooling data across patients may not be an effective seizure prediction strategy because different epilepsy types have different patterns. In addition, an individual’s patterns may differ from a group’s patterns. Complicating matters, individual patients may have multiple seizure types with different onset mechanisms.

“Another important lesson is that false positives in a deterministic sense may not represent false positives in a probabilistic sense,” Dr. Privitera said. “That is, when the seizure prediction program – whether it is the diary or the intracranial EEG or anything else – says the threshold changed, but you did not have a seizure, it does not mean that your prediction system was wrong. If the seizure tendency is going up … and your system says the seizure tendency went up, but all you are measuring is actual seizures, it looks like it is a false positive prediction of seizures. But in fact it is a true positive prediction of the seizure tendency changing but not necessarily reaching seizure threshold.”

 

 

Multiday patterns

Recent research shows that “we are just at the start,” Dr. Privitera said. “There are patterns underlying seizure frequency that … we are only beginning to be able to look at because of these chronic recordings.”

Baud et al. analyzed interictal epileptiform activity and seizures in patients who have had responsive neurostimulators for as long as 10 years (Nat Commun. 2018 Jan 8;9[1]:88). “What they found was that interictal spikes and rhythmic discharges oscillate with circadian and multiday periods that differ from person to person,” Dr. Privitera said. “There were multiday periodicities, most commonly in the 20- to 30-day duration, that were relatively stable over periods of time that lasted up to years.”

Researchers knew that seizures in women of childbearing age can cluster in association with the menstrual cycle, but similar cycles also were seen in men. In addition, the researchers found that seizures “occur preferentially during the rising phase of these multiday interictal rhythms,” which has implications for seizure forecasts, Dr. Privitera noted.
 

Stress biomarkers and wearables

Future seizure prediction methods may incorporate other biomarkers, such as stress hormones. A researcher at the University of Cincinnati, Jason Heikenfeld, PhD, is conducting research with a sensor that sticks to the wrist and measures sweat content, Dr. Privitera said. The technology originally was developed to measure sodium and potassium in sweat, but Dr. Privitera’s group has been working with him to measure cortisol, which may be a biomarker for stress and be useful for seizure prediction.

“Multivariate models are needed. We have lots of different ways that we can look at seizure prediction, and most likely the most accurate seizure prediction programs will incorporate multiple different areas,” Dr. Privitera said. “Seizure forecasting is possible. We can do it now. We can probably do it better than chance in many patients. ... It is important because changes in seizure likelihood could lead to pharmacologic or device or behavioral interventions that may help prevent seizures.”

Dr. Privitera reported conducting contracted research for Greenwich and SK Life Science and receiving consulting fees from Upsher-Smith and Astellas.

SOURCE: Privitera M. AES 2018, Judith Hoyer Lecture in Epilepsy.

 

– For people with epilepsy, “the sudden and apparently unpredictable nature of seizures is one of the most disabling aspects of having the disorder,” said Michael Privitera, MD. Reliable seizure forecasts could help patients stay safe, improve their quality of life, and create intervention opportunities to prevent seizures.

Dr. Michael Privitera

If a patient knew that “tomorrow will be a dangerous day” with a 50% chance of having a seizure, the patient could avoid hazardous activities, try to reduce stress, or increase supervision to reduce the risk of sudden, unexpected death in epilepsy, said Dr. Privitera, professor of neurology and director of the epilepsy center at the University of Cincinnati Gardner Neuroscience Institute. Physicians might be able to intervene during high-risk periods by altering antiepileptic drug regimens.

Evidence suggests that seizure prediction is possible today and that advances in wearable devices and analysis of chronic EEG recordings likely will improve the ability to predict seizures, Dr. Privitera said at the annual meeting of the American Epilepsy Society. Studies have found that some patients can predict the likelihood of seizures in the next 24 hours better than chance. In the future, algorithms that incorporate variables such as pulse, stress, mood, electrodermal activity, circadian rhythms, and EEG may further refine seizure prediction.

A complex picture

One problem with predicting seizures is that “you can have substantial changes in the seizure tendency, but not have a seizure,” Dr. Privitera said. Stress, alcohol, and missed medications, for example, may affect the seizure threshold. “They may be additive, and it may be when those things all hit at once that a seizure happens.”

Many patients report prodromal or premonitory symptoms before a seizure. “Most of us as clinicians will say, ‘Well, maybe you have some inkling, but I don’t think you’re really able to predict it,’ ” Dr. Privitera said.

Sheryl R. Haut, MD, professor of neurology at the Albert Einstein College of Medicine, New York, and her colleagues prospectively looked at patient self-prediction in 2007 (Neurology. 2007 Jan 23;68[4]:262-6). The investigators followed 74 people with epilepsy who completed a daily diary in which they predicted the likelihood of a seizure occurring in the next 24 hours. Their analysis included approximately 15,000 diary days and 1,400 seizure days.

A subset of participants, about 20%, was significantly better than chance at predicting when a seizure would happen. If a patient in this subgroup said that a seizure was extremely likely, then a seizure occurred approximately 37% of the time. If a patient predicted that a seizure was extremely unlikely, there was about a 10% chance of having a seizure.

“This was a pretty substantial difference,” Dr. Privitera said. Combining patients’ predictions with their self-reported stress levels seemed to yield the most accurate predictions.
 

Stress and the SMILE study

About 90% of people with epilepsy identify at least one seizure precipitant, and the most commonly cited trigger is stress. When Dr. Privitera and his colleagues surveyed patients in their clinic, 82% identified stress as a trigger (Epilepsy Behav. 2014 Dec;41:74-7). More than half of these patients had used some form of stress reduction, such as exercise, yoga, or meditation; 88% of those patients thought that stress reduction helped their seizures.

 

 

Underlying anxiety was the only difference between patients who thought that their seizures were triggered by stress and those who did not. Patients who did not think that stress triggered their seizures had significantly lower scores on the Generalized Anxiety Disorders–7.

Subsequently, Dr. Haut, Dr. Privitera, and colleagues conducted the Stress Management Intervention for Living with Epilepsy (SMILE) study, a prospective, controlled trial assessing the efficacy of a stress reduction intervention for reducing seizures, as well as measuring seizure self-prediction (Neurology. 2018 Mar 13;90[11]:e963-70). The researchers randomized patients to a progressive muscle relaxation intervention or to a control group; patients in the control group wrote down their activities for the day.

Patients posted diary entries twice daily into a smartphone, reporting stress levels and mood-related variables. As in Dr. Haut’s earlier study, patients predicted whether having a seizure was extremely unlikely, unlikely, neutral, likely, or extremely likely. Mood and stress variables (such as feeling unpleasant or pleasant, relaxed or stressed, and not worried or extremely worried) were ranked on a visual analog scale from 0 to 100.

The trial included participants who had at least two seizures per month and any seizure trigger. Medications were kept stable throughout the study. During a 2-month baseline, patients tracked their seizures and stress levels. During the 3-month treatment period, patients received the active or control intervention.

In all, 64 subjects completed the study, completing all diary entries on 94% of the days. In the active-treatment group, median seizure frequency decreased by 29%, compared with a 25% decrease in the control group. However, the difference between the groups was not statistically significant. Although the 25% reduction in the control group probably is partly attributable to the placebo effect, part of the decrease may be related to a mindfulness effect from completing the diary, Dr. Privitera said.

The active-treatment group had a statistically significant reduction in self-reported stress, compared with the control group, but this decrease did not correlate with seizure reduction. Changes in anxiety levels also did not correlate with seizures.

“It does not disprove the [stress] hypothesis, but it does tell us that there is more going on with stress and seizure triggers than just patients’ self-reported stress,” Dr. Privitera said.
 

Patients’ predictions

The seizure prediction findings in SMILE were similar to those of Dr. Haut’s earlier study. Among the 10 highest predictors out of the 64 participants, “when they said that a seizure was extremely likely, they were 8.36 times more likely to have a seizure than when they said a seizure was extremely unlikely,” Dr. Privitera said.

Many patients seemed to increase their predicted seizure probabilities in the days after having a seizure. In addition, feeling sad, nervous, worried, tense, or stressed significantly increased the likelihood that a patient would predict that a seizure was coming. However, these feelings were “not very accurate [for predicting] actual seizures,” he said. “Some people are better predictors, but really the basis of that prediction remains to be seen. One of my hypotheses is that some of these people may actually be responding to subclinical EEG changes.”

Together, these self-prediction studies include data from 4,500 seizures and 26,000 diary entries and show that “there is some information in patient self-report that can help us in understanding how to predict and when to predict seizures,” Dr. Privitera said.

Incorporating cardiac, EEG, and other variables

Various other factors may warrant inclusion in a seizure forecasting system. A new vagus nerve stimulation system responds to heart rate changes that occur at seizure onset. And for decades, researchers have studied the potential for EEG readings to predict seizures. A 2008 analysis of 47 reports concluded that limited progress had been made in predicting a seizure from interictal EEG (Epilepsy Behav. 2008 Jan;12[1]:128-35). Now, however, long-term intracranial recordings are providing new and important information about EEG patterns.

Whereas early studies examined EEG recordings from epilepsy monitoring units – when patients may have been sleep deprived, had medications removed, or recently undergone surgery – chronic intracranial recordings from devices such as the RNS (responsive neurostimulation) System have allowed researchers to look long term at EEG changes that are more representative of patients’ typical EEG patterns.

The RNS System detects interictal spikes and seizure discharges and then provides an electrical stimulation to stop seizures. “When you look at these recordings, there are a lot more electrographic seizures than clinical seizures that trigger these stimulations,” said Dr. Privitera. “If you look at somebody with a typical RNS, they may have 100 stimulations in a day and no clinical seizures. There are lots and lots of subclinical electrographic bursts – and not just spikes, but things that look like short electrographic seizures – that occur throughout the day.”

A handheld device

Researchers in Melbourne designed a system that uses implanted electrodes to provide chronic recordings (Lancet Neurol. 2013 Jun;12[6]:563-71). An algorithm then learned to predict the likelihood of a seizure from the patient’s data as the system recorded over time. The system could indicate when a seizure was likely by displaying a light on a handheld device. Patients were recorded for between 6 months and 3 years.
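As a rough illustration of how such an advisory system can work, the sketch below trains a per-patient classifier on recorded features and maps its predicted probability to a warning level. The features, model choice, and thresholds are assumptions for illustration, not the published algorithm.

    # Illustrative sketch of a seizure-likelihood advisory, in the spirit of the
    # Melbourne system: a model trained on the patient's own intracranial EEG
    # features emits a probability that is mapped to a light on a handheld
    # device. Features, model, and thresholds here are assumptions.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))  # e.g., spike rate, line length, signal energy
    y = (X[:, 0] + 0.5 * rng.normal(size=500) > 1.2).astype(int)

    model = LogisticRegression().fit(X, y)

    def advisory_light(features, low=0.1, high=0.4):
        """Map predicted seizure probability to an advisory level."""
        p = model.predict_proba(features.reshape(1, -1))[0, 1]
        return "high" if p >= high else "moderate" if p >= low else "low"

    print(advisory_light(np.array([2.0, 0.0, 0.0])))  # likely 'high'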

“There was a statistically significant ability to predict when seizures were happening,” Dr. Privitera said. “There is information in long-term intracranial recordings in many of these people that will help allow us to do a better prediction than what we are able to do right now, which is essentially not much.”

This research suggests that pooling data across patients may not be an effective seizure prediction strategy because different epilepsy types have different patterns. In addition, an individual’s patterns may differ from a group’s patterns. Complicating matters, individual patients may have multiple seizure types with different onset mechanisms.

“Another important lesson is that false positives in a deterministic sense may not represent false positives in a probabilistic sense,” Dr. Privitera said. “That is, when the seizure prediction program – whether it is the diary or the intracranial EEG or anything else – says the threshold changed, but you did not have a seizure, it does not mean that your prediction system was wrong. If the seizure tendency is going up … and your system says the seizure tendency went up, but all you are measuring is actual seizures, it looks like it is a false positive prediction of seizures. But in fact it is a true positive prediction of the seizure tendency changing but not necessarily reaching seizure threshold.”

 

 

Multiday patterns

Recent research shows that “we are just at the start,” Dr. Privitera said. “There are patterns underlying seizure frequency that … we are only beginning to be able to look at because of these chronic recordings.”

Baud et al. analyzed interictal epileptiform activity and seizures in patients who have had responsive neurostimulators for as long as 10 years (Nat Commun. 2018 Jan 8;9[1]:88). “What they found was that interictal spikes and rhythmic discharges oscillate with circadian and multiday periods that differ from person to person,” Dr. Privitera said. “There were multiday periodicities, most commonly in the 20- to 30-day duration, that were relatively stable over periods of time that lasted up to years.”
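A simple way to see such a multiday rhythm is a periodogram of daily interictal event counts; the synthetic example below plants a 26-day cycle and recovers it. This is only a schematic stand-in for the circular-statistics methods used in the published analysis.

    # Synthetic illustration: recovering a multiday cycle from daily interictal
    # spike counts with a periodogram. The published work used years of
    # recordings and circular statistics; this only shows the basic idea.
    import numpy as np
    from scipy.signal import periodogram

    rng = np.random.default_rng(1)
    days = np.arange(365)
    counts = 100 + 20 * np.sin(2 * np.pi * days / 26) + rng.normal(0, 5, 365)

    freqs, power = periodogram(counts - counts.mean(), fs=1.0)  # 1 sample/day
    dominant = freqs[np.argmax(power)]
    print(f"dominant period ~ {1 / dominant:.1f} days")  # ~26 days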

Researchers knew that seizures in women of childbearing age can cluster in association with the menstrual cycle, but similar cycles also were seen in men. In addition, the researchers found that seizures “occur preferentially during the rising phase of these multiday interictal rhythms,” which has implications for seizure forecasts, Dr. Privitera noted.
 

Stress biomarkers and wearables

Future seizure prediction methods may incorporate other biomarkers, such as stress hormones. A researcher at the University of Cincinnati, Jason Heikenfeld, PhD, is conducting research with a sensor that sticks to the wrist and measures sweat content, Dr. Privitera said. The technology originally was developed to measure sodium and potassium in sweat, but Dr. Privitera’s group has been working with him to measure cortisol, which may be a biomarker for stress and be useful for seizure prediction.

“Multivariate models are needed. We have lots of different ways that we can look at seizure prediction, and most likely the most accurate seizure prediction programs will incorporate multiple different areas,” Dr. Privitera said. “Seizure forecasting is possible. We can do it now. We can probably do it better than chance in many patients. ... It is important because changes in seizure likelihood could lead to pharmacologic or device or behavioral interventions that may help prevent seizures.”

Dr. Privitera reported conducting contracted research for Greenwich and SK Life Science and receiving consulting fees from Upsher-Smith and Astellas.

SOURCE: Privitera M. AES 2018, Judith Hoyer Lecture in Epilepsy.


Frailty may affect the expression of dementia

Results suggest strategies for delaying dementia onset

Among people of the same age, the degree of frailty influences the association between Alzheimer’s disease pathology and Alzheimer’s dementia, according to research published online ahead of print Jan. 17 in Lancet Neurology. Data suggest that frailty reduces the threshold for Alzheimer’s disease pathology to cause cognitive decline. Frailty also may contribute to other mechanisms that cause dementia, such as inflammation and immunosenescence, said the investigators.


“While more research is needed, given that frailty is potentially reversible, it is possible that helping people to maintain function and independence in later life could reduce both dementia risk and the severity of debilitating symptoms common in this disease,” said Professor Kenneth Rockwood, MD, of the Nova Scotia Health Authority and Dalhousie University in Halifax, N.S., in a press release.
 

More susceptible to dementia?

The presence of amyloid plaques and neurofibrillary tangles is not a sufficient condition for the clinical expression of dementia. Some patients with a high degree of Alzheimer’s disease pathology have no apparent cognitive decline. Other factors therefore may modify the relationship between pathology and dementia.

Most people who develop Alzheimer’s disease dementia are older than 65 years, and many of these patients are frail. Frailty is understood as a decreased physiologic reserve and an increased risk for adverse health outcomes. Dr. Rockwood and his colleagues hypothesized that frailty moderates the clinical expression of dementia in relation to Alzheimer’s disease pathology.

To test their hypothesis, the investigators performed a cross-sectional analysis of data from the Rush Memory and Aging Project, which collects clinical and pathologic data from adults older than 59 years without dementia at baseline who live in Illinois. Since 1997, participants have undergone annual clinical and neuropsychological evaluations, and the cohort has been followed for 21 years. For their analysis, Dr. Rockwood and his colleagues included participants without dementia or with Alzheimer’s dementia at their last clinical assessment. Eligible participants had died, and complete autopsy data were available for them.

The researchers measured Alzheimer’s disease pathology using a summary measure of neurofibrillary tangles and neuritic and diffuse plaques. Clinical diagnoses of Alzheimer’s dementia were based on clinician consensus. Dr. Rockwood and his colleagues retrospectively created a 41-item frailty index from variables (e.g., symptoms, signs, comorbidities, and function) that were obtained at each clinical evaluation.
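The index described is a standard deficit-accumulation measure: the proportion of measured items on which a person shows a deficit. A minimal sketch, with hypothetical item codings:

    # Minimal sketch of a 41-item deficit-accumulation frailty index: each item
    # is coded 0 (deficit absent) to 1 (deficit present), and the index is the
    # sum divided by the number of items, yielding a score from 0 to 1.
    def frailty_index(item_scores, n_items=41):
        if len(item_scores) != n_items:
            raise ValueError("expected one coded value per item")
        return sum(item_scores) / n_items

    # 17 of 41 deficits gives ~0.41 -- near the threshold commonly used to
    # separate moderate from severe frailty.
    print(round(frailty_index([1.0] * 17 + [0.0] * 24), 2))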

Logistic regression and moderation modeling allowed the investigators to evaluate relationships between Alzheimer’s disease pathology, frailty, and Alzheimer’s dementia. Dr. Rockwood and his colleagues adjusted all analyses for age, sex, and education.

In all, 456 participants were included in the analysis. The sample’s mean age at death was 89.7 years, and 69% of participants were women. At participants’ last clinical assessment, 242 (53%) had possible or probable Alzheimer’s dementia.

The sample’s mean frailty index was 0.42. The median frailty index was 0.41, a value similar to the threshold commonly used to distinguish between moderate and severe frailty. People with high frailty index scores (i.e., 0.41 or greater) were older, had lower Mini-Mental State Examination scores, were more likely to have a diagnosis of dementia, and had a higher Braak stage than those with moderate or low frailty index scores.

Significant interaction between frailty and Alzheimer’s disease

After the investigators adjusted for age, sex, and education, frailty (odds ratio, 1.76) and Alzheimer’s disease pathology (OR, 4.81) were independently associated with Alzheimer’s dementia. When the investigators added frailty to the model for the relationship between Alzheimer’s disease pathology and Alzheimer’s dementia, the model fit improved. They found a significant interaction between frailty and Alzheimer’s disease pathology (OR, 0.73). People with a low amount of frailty were better able to tolerate Alzheimer’s disease pathology, and people with higher amounts of frailty were more likely to have more Alzheimer’s disease pathology and clinical dementia.
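For readers unfamiliar with moderation modeling, the sketch below fits a logistic regression of dementia status on frailty, pathology, and their product term, with the covariates named in the study. The data are simulated, so the coefficients will not reproduce the published odds ratios.

    # Hedged sketch of the moderation analysis described: logistic regression of
    # dementia on frailty, pathology, and their interaction, adjusted for age,
    # sex, and education. Simulated data; results will not match the paper.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(42)
    n = 456
    df = pd.DataFrame({
        "frailty": rng.uniform(0.1, 0.7, n),   # frailty index, 0-1
        "pathology": rng.normal(0.0, 1.0, n),  # summary pathology measure
        "age": rng.normal(89.7, 6.0, n),
        "female": rng.integers(0, 2, n),
        "education": rng.normal(14.0, 3.0, n),
    })
    lin = 4 * df.frailty + 1.5 * df.pathology - 2 * df.frailty * df.pathology - 2
    df["dementia"] = (rng.random(n) < 1 / (1 + np.exp(-lin))).astype(int)

    fit = smf.logit("dementia ~ frailty * pathology + age + female + education",
                    data=df).fit(disp=0)
    print(np.exp(fit.params))  # odds ratios; 'frailty:pathology' is the interaction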

One of the study’s limitations is that it is a secondary analysis, according to Dr. Rockwood and his colleagues. In addition, frailty was measured close to participants’ time of death, and the measurements may thus reflect terminal decline. Participant deaths resulting from causes other than those related to dementia might have confounded the results. Finally, the sample came entirely from people living in retirement homes in Illinois, which might have introduced bias. Future research should use a population-based sample, said the authors.

Frailty could be a basis for risk stratification and could inform the management and treatment of older adults, said Dr. Rockwood and his colleagues. The study results have “the potential to improve our understanding of disease expression, explain failures in pharmacologic treatment, and aid in the development of more appropriate therapeutic targets, approaches, and measurements of success,” they concluded.

The study had no source of funding. The authors reported receiving fees and grants from DGI Clinical, GlaxoSmithKline, Pfizer, and Sanofi. Authors also received support from governmental bodies such as the National Institutes of Health and the Canadian Institutes of Health Research.

SOURCE: Wallace LMK et al. Lancet Neurol. 2019;18:177-84.


The results of the study by Rockwood and colleagues confirm the strong links between frailty and Alzheimer’s disease and other dementias, said Francesco Panza, MD, PhD, of the University of Bari (Italy) Aldo Moro, and his colleagues in an accompanying editorial.

Frailty is primary or preclinical when it is not directly associated with a specific disease or when the patient has no substantial disability. Frailty is considered secondary or clinical when it is associated with known comorbidities (e.g., cardiovascular disease or depression). “This distinction is central in identifying frailty phenotypes with the potential to predict and prevent dementia, using novel models of risk that introduce modifiable factors,” wrote Dr. Panza and his colleagues.

“In light of current knowledge on the cognitive frailty phenotype, secondary preventive strategies for cognitive impairment and physical frailty can be suggested,” they added. “For instance, individualized multidomain interventions can target physical, nutritional, cognitive, and psychological domains that might delay the progression to overt dementia and secondary occurrence of adverse health-related outcomes, such as disability, hospitalization, and mortality.”

Dr. Panza, Madia Lozupone, MD, PhD, and Giancarlo Logroscino, MD, PhD, are affiliated with the neurodegenerative disease unit in the department of basic medicine, neuroscience, and sense organs at the University of Bari (Italy) Aldo Moro. The above remarks come from an editorial that these authors wrote to accompany the study by Rockwood et al. The authors declared no competing interests.

Vitals

 

Key clinical point: Frailty modifies the association between Alzheimer’s disease pathology and Alzheimer’s dementia.

Major finding: Frailty index score (odds ratio, 1.76) is independently associated with dementia status.

Study details: A cross-sectional analysis of 456 deceased participants in the Rush Memory and Aging Project.

Disclosures: The study had no outside funding.

Source: Wallace LMK et al. Lancet Neurol. 2019;18:177-84.
 


Obesity paradox applies to post-stroke mortality


Overweight and obese military veterans who experienced an in-hospital stroke had a lower 30-day and 1-year all-cause mortality than did those who were normal weight in a large national study, Lauren Costa reported at the American Heart Association scientific sessions.

Underweight patients had a significantly increased mortality risk, added Ms. Costa of the VA Boston Healthcare System.

It’s yet another instance of what is known as the obesity paradox, which has also been described in patients with heart failure, acute coronary syndrome, MI, chronic obstructive pulmonary disease, and other conditions.

Ms. Costa presented a retrospective study of 26,267 patients in the Veterans Health Administration database who had a first stroke in-hospital during 2002-2012. There were subsequently 14,166 deaths, including 2,473 within the first 30 days and 5,854 in the first year post stroke.

Each patient’s body mass index was calculated based on the average of all BMI measurements obtained 1-24 months prior to the stroke. The analysis of the relationship between BMI and poststroke mortality included extensive statistical adjustment for potential confounders, including age, sex, smoking, cancer, dementia, peripheral artery disease, diabetes, coronary heart disease, atrial fibrillation, chronic kidney disease, use of statins, and antihypertensive therapy.
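As a small illustration of that preprocessing step, the sketch below averages a patient's pre-stroke BMI readings and assigns a grouping. The study used eight BMI categories, but only the normal-weight reference band (22.5 to less than 25) is specified in the report, so the other cut points here are placeholders.

    # Illustration of the BMI preprocessing described. Only the normal-weight
    # reference band (22.5 to <25 kg/m2) is specified in the report; the other
    # cut points below are placeholders, not the study's eight categories.
    def mean_bmi(measurements):
        """Average of BMI readings taken 1-24 months before the stroke."""
        return sum(measurements) / len(measurements)

    def bmi_group(bmi):
        if 22.5 <= bmi < 25:
            return "normal weight (reference)"
        if bmi >= 40:
            return "class III obesity"
        return "other grouping (placeholder cut points)"

    bmi = mean_bmi([27.1, 28.0, 27.6])
    print(round(bmi, 1), "->", bmi_group(bmi))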

Breaking down the study population into eight BMI categories, Ms. Costa found that the adjusted risk of 30-day all-cause mortality post stroke was reduced by 22%-38% in patients in the overweight or obese groupings, compared with the reference population with a normal-weight BMI of 22.5 to less than 25 kg/m2.

One-year, all-cause mortality showed the same pattern of BMI-based significant differences.

Of deaths within 30 days post stroke, 34% were stroke-related. In an analysis restricted to that group, the evidence of an obesity paradox was attenuated. Indeed, the only BMI group with an adjusted 30-day stroke-related mortality significantly different from that of the normal-weight reference group was the class III obesity group, defined as a BMI of 40 or more; the risk in these patients was reduced by 45%.

The obesity paradox remains a controversial issue among epidemiologists. The increased mortality associated with being underweight among patients with diseases where the obesity paradox has been documented is widely thought to be caused by frailty and/or an underlying illness not adjusted for in analyses. But the mechanism for the reduced mortality risk in overweight and obese patients seen in the VA stroke study and other studies remains unknown despite much speculation.

Ms. Costa reported having no financial conflicts regarding her study, which was supported by the Department of Veterans Affairs.
 

SOURCE: Costa L. Circulation. 2018;138(suppl 1): Abstract 14288.

Vitals

Key clinical point: Heavier stroke patients have lower 30-day and 1-year all-cause mortality.

Major finding: The 30-day stroke-related mortality rate after in-hospital stroke was reduced by 45% in VA patients with Class III obesity.

Study details: This retrospective study looked at the relationship between body mass index and post-stroke mortality in more than 26,000 veterans who had an inpatient stroke, with extensive adjustments made for potential confounders.

Disclosures: The presenter reported having no financial conflicts regarding the study, which was sponsored by the Department of Veterans Affairs.

Source: Costa L. Circulation. 2018;138(suppl 1): Abstract 14288.


FDA approves generic version of vigabatrin


 

The Food and Drug Administration has approved the first generic version of vigabatrin (Sabril) 500-mg tablets. The drug is approved for the adjunctive treatment of focal seizures in patients aged 10 years and older who have not had an adequate response to other therapies.

The approval was granted to Teva Pharmaceuticals.

An FDA announcement noted that the agency has prioritized the approval of generic versions of drugs to improve access to treatments and to lower drug costs. Vigabatrin had been included on an FDA list of off-patent, off-exclusivity branded drugs without approved generics. The approval of generic vigabatrin “demonstrates that there is an open pathway to approving products like this one,” said FDA Commissioner Scott Gottlieb, MD.

The label for vigabatrin tablets includes a boxed warning for permanent vision loss. The generic vigabatrin tablets are part of a single shared-system Risk Evaluation and Mitigation Strategy (REMS) program with other drug products containing vigabatrin.

The most common side effects associated with vigabatrin tablets include dizziness, fatigue, sleepiness, involuntary eye movement, tremor, blurred vision, memory impairment, weight gain, joint pain, upper respiratory tract infection, aggression, double vision, abnormal coordination, and a confused state. Serious side effects associated with vigabatrin tablets include permanent vision loss and risk of suicidal thoughts or actions.
 


Prioritize oral route for inpatient opioids with subcutaneous route as alternative


Clinical question: Can adoption of a local opioid standard of practice for hospitalized patients reduce intravenous and overall opioid exposure while providing effective pain control?

Background: Inpatient use of intravenous opioids may be excessive, considering that oral opioids may provide more consistent pain control with less risk of adverse effects. If oral treatment is not possible, subcutaneous administration of opioids is an effective and possibly less addictive alternative to the intravenous route.

Study design: Historical control pilot study.

Setting: Single adult general medicine unit in an urban academic medical center.


Synopsis: A 6-month historical period with 287 patients was compared with a 3-month intervention period with 127 patients. The intervention consisted of a clinical practice standard that was presented to medical and nursing staff via didactic sessions and email. The standard recommended the oral route for opioids in patients tolerating oral intake and endorsed subcutaneous over intravenous administration.

Intravenous doses decreased by 84% (0.06 vs. 0.39 doses/patient-day; P less than .001), the daily rate of patients receiving any parenteral opioid decreased by 57% (6% vs. 14%; P less than .001), and the mean daily overall morphine-milligram equivalents decreased by 31% (6.30 vs. 9.11). Pain scores were unchanged for hospital days 1 through 3 but were significantly improved on day 4 (P = .004) and day 5 (P = .009).
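The percentage decreases follow directly from the reported before-and-after values; a quick arithmetic check:

    # Arithmetic check of the reported decreases, using the study's values.
    def pct_decrease(before, after):
        return 100 * (before - after) / before

    print(round(pct_decrease(0.39, 0.06)))  # IV doses/patient-day: ~85 (reported as 84%,
                                            # likely from unrounded dose counts)
    print(round(pct_decrease(14, 6)))       # patients on parenteral opioids: 57
    print(round(pct_decrease(9.11, 6.30)))  # daily morphine-milligram equivalents: ~31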

Limitations of this study include the small number of patients on one unit, in one institution, with one clinician group. Attractive features of the intervention include its scalability and potential for augmentation via additional processes such as EHR changes, prescribing restrictions, and pharmacy monitoring.

Bottom line: A standard of practice intervention with peer-to-peer education was associated with decreased intravenous opioid exposure, decreased total opioid exposure, and effective pain control.



Citation: Ackerman AL et al. Association of an opioid standard of practice intervention with intravenous opioid exposure in hospitalized patients. JAMA Intern Med. 2018;178(6):759-63.



Dr. Wanner is director, hospital medicine section, and associate chief, division of general internal medicine, University of Utah, Salt Lake City.


DMTs, stem cell transplants both reduce disease progression in MS

Although effective, DMTs and HSCT entail risks

 

Disease-modifying therapies give patients with relapsing-remitting multiple sclerosis a lower risk of developing secondary progressive disease, according to findings from two studies published online Jan. 15 in JAMA. In selected patients with highly active disease, nonmyeloablative hematopoietic stem cell transplantation may lower that risk even further.


The first study found that interferon-beta, glatiramer acetate (Copaxone), fingolimod (Gilenya), natalizumab (Tysabri), and alemtuzumab (Lemtrada) are associated with a lower risk of conversion to secondary progressive MS, compared with no treatment. Initial treatment with the newer therapies provided a greater risk reduction, compared with initial treatment with interferon-beta or glatiramer acetate.

The second study, described as “the first randomized trial of HSCT [nonmyeloablative hematopoietic stem cell transplantation] in patients with relapsing-remitting MS,” suggests that HSCT prolongs the time to disease progression, compared with disease-modifying therapies (DMTs). It also suggests that HSCT can lead to clinical improvement.
 

DMTs reduced risk of conversion to secondary progressive MS

Few previous studies have examined the association between DMTs and the risk of conversion from relapsing-remitting MS to secondary progressive MS. Those that have analyzed this association have not used a validated definition of secondary progressive MS. J. William L. Brown, MD, of the University of Cambridge, England, and his colleagues used a validated definition of secondary progressive MS that was published in 2016 to investigate how DMTs affect the rate of conversion, compared with no treatment. The researchers also compared the risk reduction provided by fingolimod, alemtuzumab, or natalizumab with that provided by interferon-beta or glatiramer acetate.

Dr. Brown and his colleagues analyzed prospectively collected clinical data from an international observational cohort study called MSBase. Eligible participants had relapsing-remitting MS, the complete MSBase minimum data set, at least one Expanded Disability Status Scale (EDSS) score recorded within 6 months before baseline, and at least two EDSS scores recorded after baseline. Participants initiated a DMT or began clinical monitoring during 1988-2012. The population had a minimum follow-up duration of 4 years. Patients who stopped their initial therapy within 6 months and those participating in clinical trials were excluded.

The primary outcome was conversion to secondary progressive MS. Dr. Brown and his colleagues defined this outcome as an EDSS increase of 1 point for participants with a baseline EDSS score of 5.5 or less and as an increase of 0.5 points for participants with a baseline EDSS score higher than 5.5. This increase had to occur in the absence of relapses and be confirmed at a subsequent visit 3 or fewer months later. In addition, the increased EDSS score had to be 4 or more.
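Expressed as a rule, that definition looks like the sketch below; confirmation at a follow-up visit and relapse-freedom are passed in as flags for simplicity.

    # Sketch of the conversion definition as described: an EDSS increase of
    # 1 point (0.5 if baseline EDSS > 5.5), occurring in the absence of
    # relapses, confirmed at a visit 3 or fewer months later, with the
    # increased EDSS score at least 4.
    def converted_to_spms(baseline_edss, new_edss, confirmed, relapse_free):
        required = 1.0 if baseline_edss <= 5.5 else 0.5
        return (confirmed and relapse_free
                and new_edss - baseline_edss >= required
                and new_edss >= 4.0)

    print(converted_to_spms(3.5, 4.5, True, True))  # True
    print(converted_to_spms(2.0, 3.0, True, True))  # False: new EDSS below 4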

After excluding ineligible participants, the investigators matched 1,555 patients from 68 centers in 21 countries. Each therapy analyzed was associated with reduced risk of converting to secondary progressive MS, compared with no treatment. The hazard ratios for conversion were 0.71 for interferon-beta or glatiramer acetate, 0.37 for fingolimod, 0.61 for natalizumab, and 0.52 for alemtuzumab, compared with no treatment.

Treatment with interferon-beta or glatiramer acetate within 5 years of disease onset was associated with a reduced risk of conversion (HR, 0.77), compared with treatment later than 5 years after disease onset. Similarly, patients who escalated treatment from interferon-beta or glatiramer acetate to any of the other three DMTs within 5 years of disease onset had a significantly lower risk of conversion (HR, 0.76) than did those who escalated later. Furthermore, initial treatment with fingolimod, alemtuzumab, or natalizumab was associated with a significantly reduced risk of conversion (HR, 0.66), compared with initial treatment with interferon-beta or glatiramer acetate.

One of the study’s limitations is its observational design, which precludes the determination of causality, Dr. Brown and his colleagues said. In addition, functional score subcomponents of the EDSS were unavailable, which prevented the researchers from using the definition of secondary progressive MS with the best combination of sensitivity, specificity, and accuracy. Some analyses were limited by small numbers of patients, and the study did not evaluate the risks associated with DMTs. Nevertheless, “these findings, considered along with these therapies’ risks, may help inform decisions about DMT selection,” the authors concluded.

Financial support for this study was provided by the National Health and Medical Research Council of Australia and the University of Melbourne. Dr. Brown received a Next Generation Fellowship funded by the Grand Charity of the Freemasons and an MSBase 2017 Fellowship. Alemtuzumab studies conducted in Cambridge were supported by the National Institute for Health Research Cambridge Biomedical Research Centre and the MS Society UK.

HSCT delayed disease progression

In a previous case series, Richard K. Burt, MD, of Northwestern University in Chicago, and his colleagues found that patients with relapsing-remitting MS who underwent nonmyeloablative HSCT had neurologic improvement and a 70% likelihood of having a 4-year period of disease remission. Dr. Burt and his colleagues undertook the MS international stem cell transplant trial to compare the effects of nonmyeloablative HSCT with those of continued DMT treatment on disease progression in participants with highly active relapsing-remitting MS.

The researchers enrolled 110 participants at four international centers into their open-label trial. Eligible participants had two or more clinical relapses or one relapse and at least one gadolinium-enhancing lesion at a separate time within the previous 12 months, despite DMT treatment. The investigators also required participants to have an EDSS score between 2.0 and 6.0. Patients with primary or secondary progressive MS were excluded.

Dr. Burt and his colleagues randomized participants to receive HSCT or an approved DMT that was more effective than, or in a different class from, the one they were receiving at baseline. Ocrelizumab (Ocrevus) was not administered during the study because it had not yet been approved. The investigators excluded alemtuzumab because of its association with persistent lymphopenia and autoimmune disorders. After 1 year of treatment, patients receiving a DMT who had disability progression could cross over to the HSCT arm. Patients randomized to HSCT stopped taking their usual DMT.

Time to disease progression was the study’s primary endpoint. The investigators defined disease progression as an increase in EDSS score of at least 1 point on two evaluations 6 months apart after at least 1 year of treatment. The increase was required to result from MS. The neurologist who recorded participants’ EDSS evaluations was blinded to treatment group assignment.
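
A similarly hedged sketch of this endpoint appears below; the function signature and the months-based bookkeeping are assumptions made for illustration, not the trial's actual code.

# Illustrative check for the progression endpoint described above;
# argument names and units (months) are hypothetical.
def disease_progressed(baseline_edss, edss_first, edss_second,
                       months_between, months_on_treatment,
                       attributable_to_ms=True):
    # Both evaluations, taken 6 months apart after at least 1 year of
    # treatment, must show an EDSS increase of at least 1 point due to MS.
    return (months_on_treatment >= 12
            and months_between >= 6
            and edss_first - baseline_edss >= 1.0
            and edss_second - baseline_edss >= 1.0
            and attributable_to_ms)

print(disease_progressed(3.5, 4.5, 5.0, months_between=6,
                         months_on_treatment=18))  # True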

The researchers randomized 55 patients to each study arm. Approximately 66% of participants were women, and the sample’s mean age was 36 years. There were no significant baseline differences between groups on demographic, clinical, or imaging characteristics. Three patients in the HSCT group were withdrawn from the study, and four in the DMT group were lost to follow-up after seeking HSCT at outside facilities.

Three patients in the HSCT group and 34 patients in the DMT group had disease progression. Mean follow-up duration was 2.8 years. The investigators could not calculate the median time to progression in the HSCT group because too few events occurred. Median time to progression was 24 months in the DMT group (HR, 0.07). During the first year, mean EDSS scores decreased (indicating improvement) from 3.38 to 2.36 in the HSCT group. Mean EDSS scores increased from 3.31 to 3.98 in the DMT group. No participants died, and no patients who received HSCT developed nonhematopoietic grade 4 toxicities.

“To our knowledge, this is the first randomized trial of HSCT in patients with relapsing-remitting MS,” Dr. Burt and his colleagues said. Although observational studies have found similar EDSS improvements following HSCT, “this degree of improvement has not been demonstrated in pharmaceutical trials even with more intensive DMT such as alemtuzumab,” they concluded.

The Danhakl Family Foundation, the Cumming Foundation, the McNamara Purcell Foundation, Morgan Stanley, and the National Institute for Health Research Sheffield Clinical Research Facility provided financial support for this study. No pharmaceutical companies supported the study.

SOURCES: Brown JWL et al. JAMA. 2019;321(2):175-87. doi: 10.1001/jama.2018.20588; and Burt RK et al. JAMA. 2019;321(2):165-74. doi: 10.1001/jama.2018.18743.

Although effective, DMTs and HSCT entail risks

The study by Brown et al. provides evidence that disease-modifying therapies (DMTs) slow the appearance of persistent disabilities in patients with multiple sclerosis (MS), Harold Atkins, MD, wrote in an accompanying editorial (JAMA. 2019 Jan 15;321[2]:153-4). Although DMTs may suppress clinical signs of disease activity for long periods in some patients, these therapies slow MS rather than halt it. DMTs require long-term administration and may cause intolerable side effects that impair patients’ quality of life. These therapies also may result in complications such as severe depression or progressive multifocal leukoencephalopathy.

“The study by Burt et al. ... provides a rigorous indication that HSCT [hematopoietic stem cell transplantation] can be an effective treatment for selected patients with MS,” Dr. Atkins said. Treating physicians, however, have concerns about this procedure, which is resource-intensive and “requires specialized medical and nursing expertise and dedicated hospital infrastructure to minimize its risks.” Many patients in the study had moderate to severe acute toxicity following treatment, and patient selection thus requires caution.

An important limitation of the study is that participants did not have access to alemtuzumab or ocrelizumab, which arguably are the most effective DMTs, Dr. Atkins said. The study began in 2005, when fewer DMTs were available. “The inclusion of patients who were less than optimally treated in the DMT group needs to be considered when interpreting the results of this study,” Dr. Atkins said.

Furthermore, Burt and colleagues studied patients with highly active MS, but “only a small proportion of the MS patient population exhibits this degree of activity,” he added. The results therefore may not be generalizable. Nevertheless, “even with the limitations of the trial, the results support a role for HSCT delivered at centers that are experienced in the clinical care of patients with highly active MS,” Dr. Atkins concluded.

Dr. Atkins is affiliated with the Ottawa Hospital Blood and Marrow Transplant Program at the University of Ottawa in Ontario. He reported no conflicts of interest.


 



Know the red flags for synaptic autoimmune psychosis


 

Consider the possibility of an autoantibody-related etiology in all cases of first-onset psychosis, Josep Dalmau, MD, PhD, urged at the annual congress of the European College of Neuropsychopharmacology.


“There are patients in our clinics all of us – neurologists and psychiatrists – are missing. These patients are believed to have psychiatric presentations, but they do not. They are autoimmune,” said Dr. Dalmau, professor of neurology at the University of Barcelona.

Dr. Dalmau urged psychiatrists to become familiar with the red flags suggestive of synaptic autoimmunity as the underlying cause of first-episode, out-of-the-blue psychosis.

“If you have a patient with a classical presentation of schizophrenia or bipolar disorder, you probably won’t find antibodies,” according to the neurologist.

It’s important to have a high index of suspicion, because anti–NMDA receptor encephalitis is treatable with immunotherapy. And firm evidence shows that earlier recognition and treatment lead to improved outcomes. Also, the disorder is refractory to antipsychotics; indeed, antipsychotic agents make affected patients much worse, even to the point of developing something akin to neuroleptic malignant syndrome.

Manifestations of anti–NMDA receptor encephalitis follow a characteristic pattern, beginning with a prodromal flulike phase lasting several days to a week. This is followed by acute-onset bizarre behavioral changes, irritability, and psychosis with delusions and/or hallucinations, often progressing to catatonia. After 1-4 weeks of this, florid neurologic symptoms usually appear, including seizures, abnormal movements, autonomic dysregulation, and hypoventilation requiring prolonged ICU support for weeks to months. This is followed by a prolonged recovery phase lasting 5-24 months, marked by deficits in executive function and working memory, impulsivity, and disinhibition. Remarkably, the patient typically has no memory of the illness.

In one large series of patients with confirmed anti–NMDA receptor encephalitis reported by Dr. Dalmau and coinvestigators, psychiatric symptoms occurred in isolation without subsequent neurologic involvement in just 4% of cases (JAMA Neurol. 2013 Sep 1;70[9]:1133-9).

Dr. Dalmau was senior author of an international cohort study including 577 patients with anti–NMDA receptor encephalitis with serial follow-up for 24 months. The study provided an unprecedented picture of the epidemiology and clinical features of the disorder.

“It’s a disease predominantly of women and young people,” he observed.

Indeed, the median age of the study population was 21 years, and 37% of subjects were less than 18 years of age. Roughly 80% of patients were female and most of them had a benign ovarian teratoma, which played a key role in their neuropsychiatric disease (Lancet Neurol. 2013 Feb;12[2]:157-65). These benign tumors express the NMDA receptor in ectopic nerve tissue, triggering a systemic immune response.

One or more relapses – again treatable via immunotherapy – occurred in 12% of patients during 24 months of follow-up.



When a red flag suggestive of synaptic autoimmunity is present, it’s important to obtain a cerebrospinal fluid (CSF) sample for analysis, along with an EEG and/or brain MRI.

“I don’t know if you as psychiatrists are set up to do spinal taps in all persons with first presentation of psychosis, but this would be my suggestion. It’s extremely useful in this situation,” Dr. Dalmau said.

The vast majority of patients with anti–NMDA receptor encephalitis have CSF pleocytosis with a mild lymphocytic predominance. The MRI is abnormal in about 35% of cases. EEG abnormalities are common but nonspecific. The diagnosis is confirmed by identification of anti–NMDA receptor antibodies in the CSF.

First-line therapy is corticosteroids, intravenous immunoglobulin, and/or plasma exchange to remove the pathogenic antibodies, along with resection of the tumor if present. These treatments are effective in almost half of affected patients. When they’re not, the second-line options are rituximab (Rituxan) and cyclophosphamide, alone or combined.

Antibodies to the NMDA receptor are far and away the most common cause of synaptic autoimmunity-induced psychosis, but other targets of autoimmunity have been documented as well, including the alpha-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) receptor, contactin-associated protein-like 2 (CASPR2), and neurexin-3-alpha.

Dr. Dalmau and various collaborators continue to advance the understanding of this novel category of neuropsychiatric disease. They have developed a simple 5-point score, known as the NEOS score, that predicts 1-year functional status in patients with anti–NMDA receptor encephalitis (Neurology. 2018 Dec 21. doi: 10.1212/WNL.0000000000006783). He and his colleagues have also recently shown in a prospective study that herpes simplex encephalitis can result in an autoimmune encephalitis, with NMDA receptor antibodies present in most cases (Lancet Neurol. 2018 Sep;17[9]:760-72).

Dr. Dalmau’s research is supported by the U.S. National Institute of Neurological Disorders and Stroke, the Spanish Ministry of Health, and Spanish research foundations. He reported receiving royalties from the use of several neuronal antibody tests.


 



Tic disorders are associated with obesity and diabetes


 

Tourette syndrome and chronic tic disorder are associated with a “substantial risk” of metabolic and cardiovascular disorders such as obesity, type 2 diabetes mellitus (T2DM), and circulatory system diseases, according to a study published online Jan. 14 in JAMA Neurology.

The movement disorders are associated with cardiometabolic problems “even after taking into account a number of covariates and shared familial confounders and excluding relevant psychiatric comorbidities,” the researchers wrote. “The results highlight the importance of carefully monitoring cardiometabolic health in patients with Tourette syndrome or chronic tic disorder across the lifespan, particularly in those with comorbid attention-deficit/hyperactivity disorder (ADHD).”

Gustaf Brander, a researcher in the department of clinical neuroscience at Karolinska Institutet in Stockholm, and his colleagues conducted a longitudinal population-based cohort study of individuals living in Sweden between Jan. 1, 1973, and Dec. 31, 2013. The researchers assessed outcomes for patients with previously validated diagnoses of Tourette syndrome or chronic tic disorder in the Swedish National Patient Register. Main outcomes included obesity, dyslipidemia, hypertension, T2DM, and cardiovascular diseases, including ischemic heart diseases, arrhythmia, cerebrovascular diseases, transient ischemic attack, and arteriosclerosis. In addition, the researchers identified families with full siblings discordant for Tourette syndrome or chronic tic disorder.

Of the more than 14 million individuals in the cohort, 7,804 (76.4% male; median age at first diagnosis, 13.3 years) had a diagnosis of Tourette syndrome or chronic tic disorder in specialist care. Furthermore, the cohort included 5,141 families with full siblings who were discordant for these disorders.

Individuals with Tourette syndrome or chronic tic disorder had a higher risk for any metabolic or cardiovascular disorder, compared with the general population (hazard ratio adjusted by sex and birth year [aHR], 1.99) and sibling controls (aHR, 1.37). Specifically, individuals with Tourette syndrome or chronic tic disorder had higher risks for obesity (aHR, 2.76), T2DM (aHR, 1.67), and circulatory system diseases (aHR, 1.76).
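
Adjusted hazard ratios of this kind are typically estimated with a Cox proportional hazards model that includes the exposure alongside the adjustment covariates. The sketch below, using the Python lifelines package on an invented toy data set, shows the general shape of such an analysis; it is not the authors' code, and the column names are assumptions.

# Hedged illustration of estimating a sex- and birth year-adjusted hazard
# ratio with a Cox model; all data below are invented for the example.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "years_followed":  [12.0, 30.5, 8.2, 25.0, 14.3, 40.1, 22.7, 9.5],
    "metabolic_event": [1, 0, 1, 1, 0, 0, 1, 0],   # 1 = disorder diagnosed
    "tic_disorder":    [1, 0, 1, 0, 1, 0, 0, 1],   # exposure of interest
    "male":            [1, 1, 0, 0, 1, 0, 1, 0],   # adjustment covariate
    "birth_year":      [1985, 1972, 1990, 1960, 1988, 1979, 1995, 1983],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years_followed", event_col="metabolic_event")
# exp(coef) on tic_disorder is the hazard ratio adjusted for the
# other covariates (sex and birth year)
print(cph.summary["exp(coef)"])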

The increased risk of any cardiometabolic disorder was significantly greater for males than it was for females (aHRs, 2.13 vs. 1.79), as was the risk of obesity (aHRs, 3.24 vs. 1.97).

The increased risk for cardiometabolic disorders in this patient population was evident by age 8 years. Exclusion of those patients with comorbid ADHD reduced but did not eliminate the risk (aHR, 1.52). The exclusion of other comorbidities did not significantly affect the results. Among patients with Tourette syndrome or chronic tic disorder, those who had received antipsychotic treatment for more than 1 year were significantly less likely to have metabolic and cardiovascular disorders, compared with patients not taking antipsychotic medication. This association may be related to “greater medical vigilance” and “should not be taken as evidence that antipsychotics are free from cardiometabolic adverse effects,” the authors noted.

The study was supported by a research grant from Tourettes Action. In addition, authors reported support from the Swedish Research Council and a Karolinska Institutet PhD stipend. Two authors disclosed personal fees from publishers, and one author disclosed grants and other funding from Shire.

SOURCE: Brander G et al. JAMA Neurol. 2019 Jan 14. doi: 10.1001/jamaneurol.2018.4279.


 


Issue
Neurology Reviews- 27(3)
Issue
Neurology Reviews- 27(3)
Page Number
38
Page Number
38
Publications
Publications
Topics
Article Type
Sections
Article Source

FROM JAMA NEUROLOGY

Disallow All Ads
Content Gating
No Gating (article Unlocked/Free)
Alternative CME
Vitals

 

Key clinical point: Monitor cardiometabolic health in patients with Tourette syndrome or chronic tic disorder.

Major finding: Patients with Tourette syndrome or chronic tic disorder have a higher risk of metabolic or cardiovascular disorders, compared with the general population (adjusted hazard ratio, 1.99) and sibling controls (adjusted hazard ratio, 1.37).

Study details: A Swedish longitudinal, population-based cohort study of 7,804 individuals with Tourette syndrome or chronic tic disorder.

Disclosures: The study was supported by a research grant from Tourettes Action. Authors reported support from the Swedish Research Council and a Karolinska Institutet PhD stipend. Two authors disclosed personal fees from publishers, and one author disclosed grants and other funding from Shire.

Source: Brander G et al. JAMA Neurol. 2019 Jan 14. doi: 10.1001/jamaneurol.2018.4279.

Population-level rate of SUDEP may have decreased

The population-level rate of sudden unexpected death in epilepsy (SUDEP) may have decreased over time, according to data described at the annual meeting of the American Epilepsy Society. Whether this decrease resulted from an improved understanding of SUDEP risk or from a focus on risk-reduction strategies is unknown, said Daniel Friedman, MD, associate professor of neurology at NYU Langone Health in New York.

In addition, rates of SUDEP differ across populations according to socioeconomic status. Differences in access to care are a potential, but unconfirmed, explanation for this association, said Dr. Friedman. Another possible explanation is that confounders such as mental health disorders, substance abuse, and insufficient social support affect individuals’ ability to manage their disorder.

Dr. Friedman and colleagues initially examined SUDEP rates over time in a cohort of patients who received vagus nerve stimulator (VNS) implantation for drug-resistant epilepsy. They analyzed data for 40,443 patients who underwent surgery during 1988-2012. The age-adjusted SUDEP rate per 1,000 person-years of follow-up decreased significantly, from 2.47 in years 1-2 to 1.68 in years 3-10. “There was no control group, so we couldn’t necessarily attribute the SUDEP rate reduction to the intervention,” said Dr. Friedman. A study by Tomson et al of patients with epilepsy who received VNS implantation reported similar findings.
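
As a point of reference, a crude rate per 1,000 person-years is simple division, as sketched below; the event counts and follow-up totals are invented (chosen only so the output lands near the reported figures), and the study's age adjustment is not reproduced.

```python
# Illustrative arithmetic only: crude rates per 1,000 person-years
# for two follow-up windows. All numbers are invented.
def rate_per_1000_py(events: int, person_years: float) -> float:
    """Crude event rate per 1,000 person-years of follow-up."""
    return 1000 * events / person_years

early = rate_per_1000_py(events=180, person_years=72_900)   # e.g., years 1-2
late = rate_per_1000_py(events=410, person_years=244_000)   # e.g., years 3-10
print(f"{early:.2f} vs. {late:.2f} per 1,000 person-years")
```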

The literature on the mechanisms of SUDEP and on reducing SUDEP risk has grown in recent years. Neurologists have advocated for greater disclosure of SUDEP risk to patients, as well as better risk counseling. Dr. Friedman and his colleagues decided to investigate whether these factors have affected the risk of SUDEP during the past decade.

They retrospectively examined data for people whose deaths had been investigated at medical examiners’ offices in New York City, San Diego County, and Maryland. They focused on decedents for whom epilepsy or seizure was listed as a cause of or contributor to death or as a comorbid condition on the death certificate. They reviewed all available reports, including investigator notes, autopsy reports, and medical records. Next, Dr. Friedman and his colleagues calculated the annual SUDEP rate as a proportion of the general population, estimated using annual Census and American Community Survey data. They used the Mann-Kendall test to analyze trends in the SUDEP rate during 2009-2015.
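
The Mann-Kendall statistic itself is straightforward to compute. The sketch below, which is not the investigators' code, applies the no-ties form of the test to an invented series of annual rates.

```python
# Minimal Mann-Kendall trend test on an invented series of annual rates.
import numpy as np
from scipy.stats import norm

rates = np.array([1.20, 1.15, 1.10, 1.02, 0.95, 0.90, 0.84])  # hypothetical

n = len(rates)
# S counts increasing minus decreasing pairs (i < j).
s = sum(np.sign(rates[j] - rates[i]) for i in range(n - 1) for j in range(i + 1, n))
var_s = n * (n - 1) * (2 * n + 5) / 18                    # variance of S, no tied values
z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0  # continuity correction
p = 2 * (1 - norm.cdf(abs(z)))                            # two-sided p-value
print(f"S = {int(s)}, z = {z:.2f}, p = {p:.4f}")          # negative S => downward trend
```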

Of 1,466 deaths in people with epilepsy during this period, 1,124 were classified as definite SUDEP, probable SUDEP, or near SUDEP. Approximately 63% of SUDEP cases were male, and 45% were African-American. The mean age at death was 38 years.

Dr. Friedman’s group found a significant decrease in the overall incidence of SUDEP in the total population during 2009-2015. When they examined the three regions separately, they found decreases in SUDEP incidence in New York City and Maryland, but not in San Diego County. They found no difference in SUDEP rates by season or by day of the week.

In a subsequent analysis, Dr. Friedman and his colleagues adjudicated all deaths related to seizure and epilepsy in the three regions during 2009-2010 and 2014-2015 and identified all cases of definite and probable SUDEP. The estimated rate of SUDEP decreased by about 36% from the first period to the second period. SUDEP rates as a proportion of the total population in those regions also declined.

The investigators also examined differences in estimated SUDEP rates in the United States according to median household income. In New York, the zip codes with the highest SUDEP rates tended to have the lowest median household incomes. The zip codes in the lowest quartile of household income had a SUDEP rate more than twice as high as that in the zip codes in the highest income quartile. This association held true both in 2009-2010 and in 2014-2015.
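
The shape of such a quartile comparison can be sketched with pandas, as below; the zip code incomes, death counts, and populations are invented and illustrate only the form of the calculation.

```python
# Hypothetical zip-code-level data; all values are invented for illustration.
import pandas as pd

zips = pd.DataFrame({
    "median_income": [33_000, 38_000, 41_000, 52_000, 58_000, 67_000, 95_000, 120_000],
    "sudep_deaths":  [10, 9, 8, 5, 4, 4, 2, 1],
    "population":    [72_000, 80_000, 70_000, 75_000, 88_000, 90_000, 85_000, 95_000],
})
# Bin zip codes into household-income quartiles, then pool deaths and population.
zips["income_q"] = pd.qcut(zips["median_income"], 4, labels=["Q1", "Q2", "Q3", "Q4"])
by_q = zips.groupby("income_q", observed=True)[["sudep_deaths", "population"]].sum()
by_q["rate_per_100k"] = 1e5 * by_q["sudep_deaths"] / by_q["population"]
print(by_q["rate_per_100k"])
print(by_q.loc["Q1", "rate_per_100k"] / by_q.loc["Q4", "rate_per_100k"])  # Q1:Q4 ratio
```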

Dr. Friedman and colleagues received funding from Finding a Cure for Epilepsy and Seizures, which is affiliated with the NYU Comprehensive Epilepsy Center and NYU Langone Health.

SOURCE: Cihan E et al. AES 2018, Abstract 2.419.

Vitals

Key clinical point: Data indicate a decline over time in the incidence of SUDEP.

Major finding: The incidence of SUDEP declined by 36% from 2009-2010 to 2014-2015.

Study details: A retrospective analysis of medical examiner data on 1,466 deaths in people with epilepsy.

Disclosures: Finding a Cure for Epilepsy and Seizures provided funding for the study.

Source: Cihan E et al. AES 2018, Abstract 2.419.
