Which Factors Predict Conversion From CIS to MS?
Age at onset, exposure to disease-modifying drugs (DMDs), disability at onset, MRI criteria, and oligoclonal bands affect the likelihood of conversion from clinically isolated syndrome (CIS) to multiple sclerosis (MS), according to research published in the September issue of Multiple Sclerosis Journal. The study results corroborate and augment previous research into factors that predict clinical conversion, said the authors.
In addition, the researchers developed and validated a nomogram intended to predict individualized risk of relapse after CIS. The nomogram’s estimated probabilities of conversion to MS had high concordance with observed rates of conversion.
An Analysis of Prospective Data
Tim Spelman, MBBS
Dr. Spelman and colleagues used Cox proportional hazards regression to examine the association between previously identified predictors and time to first relapse after CIS. They used their baseline-adjusted model to create a nomogram to predict conversion to clinically definite MS.
Drug Exposure Reduced Risk
Approximately 43% of participants initiated intramuscular interferon β-1a, 34% initiated subcutaneous interferon β-1a, 18% initiated interferon β-1b, and 14% initiated glatiramer acetate. In all, 1,953 patients (59%) had a relapse during a median follow-up of 1.92 years.
Older age at CIS was associated with a 10% reduction in the risk of clinical conversion. Every one-point increase in baseline EDSS score was associated with 1.16 times the rate of subsequent conversion. Compared with the optic pathway, first symptom location in the brainstem was associated with 1.17 times the rate of second attack, and first symptom location in the supratentorial region was associated with 1.29 times the rate of second attack. Any exposure to DMD during follow-up was associated with a 42% rate reduction in time to first relapse, compared with nonexposure.
CSF-restricted oligoclonal bands were associated with 1.52 times the rate of relapse, compared with the absence of oligoclonal bands. Having at least one T1 gadolinium-enhancing lesion was associated with 1.24 times the rate of relapse. Having three or more periventricular lesions was associated with 1.68 times the rate of relapse, compared with no lesions. Having at least one infratentorial lesion and having at least one juxtacortical lesion on brain MRI were each associated with 1.21 times the rate of first post-CIS relapse, compared with no lesions.
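Because a Cox model is multiplicative on the hazard scale, the published hazard ratios can be combined to sketch the relative relapse rate for a hypothetical patient. This is only an illustration: it uses the figures reported above, but it is not the authors' validated nomogram, and treating the factors as independent is an assumption.

```python
# Illustrative only: combine published hazard ratios multiplicatively,
# as a Cox model does on the hazard scale. This is NOT the authors'
# validated nomogram; it assumes the reported effects are independent.
hazard_ratios = {
    "CSF-restricted oligoclonal bands": 1.52,
    ">=3 periventricular lesions": 1.68,
    "brainstem first symptom": 1.17,
}

combined = 1.0
for factor, hr in hazard_ratios.items():
    combined *= hr

# Relative rate of first post-CIS relapse for a patient with all three
# factors, versus a patient in the reference category for each.
print(round(combined, 2))  # ~2.99
```

A nomogram performs essentially this combination graphically, mapping each factor to a point score and the total score to a predicted probability.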
“This multinational, prospective study represents the largest post-CIS cohort reported to date,” said Dr. Spelman. “Identification of patient, disease, and examination factors associated with higher probability of second attack in clinical practice may enable clinicians to flag patients that could benefit from more intensive follow-up and consideration of early DMD treatment intervention, [thus] facilitating more favorable patient outcomes.”
—Erik Greb
Suggested Reading
Spelman T, Meyniel C, Rojas JI, et al. Quantifying risk of early relapse in patients with first demyelinating events: Prediction in clinical practice. Mult Scler. 2017;23(10):1346-1357.
How Does Gender Influence Perceived Health in Older People With MS?
Older men adapt more poorly to aging with multiple sclerosis (MS), compared with older women, according to research published in the July–August issue of International Journal of MS Care. Health and lifestyle behaviors may put older men with MS at greater risk of health decline, said the authors. Older women, however, appear to have more confidence in their ability to cope with challenges and control the course of their disease.
Healthy Aging With MS
Improved longevity in patients with MS has increased interest in understanding factors associated with healthy aging. Previous studies suggested that factors such as depression, disability, decreased levels of social support, and unemployment predict health-related quality of life in MS.
Two studies examining sex differences in health-related quality of life in young to middle-aged patients with MS found that the association between disability and health-related quality of life was stronger in men than in women. No studies, however, have examined sex differences in health perception among older people with MS, according to the authors.
Analysis of a Canadian Postal Survey
To determine whether older women and men with MS have different health and lifestyle behaviors and whether there are sex differences in contributors to perceived health, Dr. Ploughman and colleagues analyzed data from the Canadian Survey of Health, Lifestyle, and Aging With MS. This cross-sectional study included Canadians older than 55 who had had MS for at least 20 years. Of 921 people contacted, 743 (577 women) returned the mailed questionnaire.
The questionnaire asked about biologic factors (eg, comorbid conditions, years since MS diagnosis), symptoms (eg, depression, anxiety, fatigue, and stress), function (eg, disability and participation), and individual and environmental factors (eg, socioeconomic status, education, and social or health support). Researchers used multiple regression analysis to build explanatory models of health perception.
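A minimal sketch of the kind of explanatory multiple-regression model the analysis describes, fit here on synthetic data. The predictor names echo the survey domains, but the data and coefficients are hypothetical; the study's actual dataset is not reproduced.

```python
import numpy as np

# Sketch of an explanatory multiple-regression model of perceived
# health. Synthetic data and hypothetical coefficients only; this is
# not the study's dataset or fitted model.
rng = np.random.default_rng(42)
n = 300
depression = rng.normal(size=n)  # standardized symptom scores (hypothetical)
fatigue = rng.normal(size=n)
resilience = rng.normal(size=n)

# Hypothetical outcome: perceived health driven mostly by depression,
# echoing the finding that depression was the strongest predictor.
health = (-0.6 * depression - 0.2 * fatigue + 0.3 * resilience
          + rng.normal(scale=0.1, size=n))

# Ordinary least squares via the normal equations (column of ones = intercept).
X = np.column_stack([np.ones(n), depression, fatigue, resilience])
coef, *_ = np.linalg.lstsq(X, health, rcond=None)
print(coef.round(2))  # intercept, then one slope per predictor
```

The fitted slopes recover the simulated effects; in the study, the analogous standardized coefficients are what identify depression as the dominant contributor.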
Older Men With MS Were Less Resilient
Investigators found no differences between men and women in disability, age, years of education, or years since MS diagnosis. Older men had lower perceived health and lower resilience, and they participated less in life roles than did older women.
In addition, men had more depressive symptoms, and women reported more anxiety. Women also reported higher adherence to a healthy diet (ie, one high in fruits and vegetables and low in meat). Men consumed more alcohol weekly.
Depression was the strongest predictor of health perception in women and men. Other contributors included household participation, fatigue, resilience, and disability in women and physical activity, financial flexibility, and alcohol use in men.
More research is necessary to examine healthy aging in the oldest people with MS, such as octogenarians, said the authors.
—Erica Tricarico
Suggested Reading
Ploughman M, Collins K, Wallack EM, et al. Women’s and men’s differing experiences of health, lifestyle, and aging with multiple sclerosis. Int J MS Care. 2017;19(4):165-171.
Combination of Rivaroxaban and Aspirin Improves Cardiovascular Outcomes
Compared with aspirin alone, a regimen of rivaroxaban plus aspirin is associated with better cardiovascular outcomes among patients with stable atherosclerotic vascular disease, according to research published online August 27 in the New England Journal of Medicine. Although the combination increases the risk of major bleeding events, it has greater net clinical benefit than aspirin alone, said the investigators.
COMPASS: An International Trial
“Efforts to improve aspirin have focused primarily on combining aspirin with another antiplatelet drug or replacing aspirin with another antiplatelet drug, but this [tactic] has had only limited success,” said Dr. Eikelboom. He and his colleagues conducted the Cardiovascular Outcomes for People Using Anticoagulation Strategies (COMPASS) trial, a double-blind study to evaluate whether rivaroxaban, a selective direct factor Xa inhibitor, either alone or in combination with aspirin, would be more effective than aspirin alone for secondary cardiovascular prevention.
The study took place at 602 centers in 33 countries. Eligible patients met the criteria for coronary artery disease, peripheral arterial disease, or both. Among the exclusion criteria were high bleeding risk, recent stroke or previous hemorrhagic or lacunar stroke, severe heart failure, and advanced stable kidney disease. During a run-in phase, participants received a rivaroxaban-matched placebo twice daily and aspirin (100 mg/day). Participants who adhered to this regimen were randomized in equal groups to rivaroxaban (2.5 mg bid) plus aspirin (100 mg/day), rivaroxaban (5 mg bid) plus placebo once daily, or aspirin (100 mg/day) plus placebo twice daily.
The primary efficacy outcome was the composite of cardiovascular death, stroke, or myocardial infarction. The main safety outcome was a modification of the International Society on Thrombosis and Hemostasis criteria for major bleeding. The investigators intended to continue the trial until at least 2,200 participants had a confirmed primary efficacy outcome. They planned formal interim analyses of efficacy for when 50% and 75% of primary efficacy events had occurred.
Study Was Stopped Early for Efficacy
The investigators enrolled 27,395 participants into the trial. The population’s mean age was 68.2 years, and 22.0% of participants were women. The mean systolic blood pressure was 136 mm Hg, the mean diastolic blood pressure was 78 mm Hg, and the mean total cholesterol level was 4.2 mmol/L. Having observed a consistent difference in the primary efficacy outcome in favor of rivaroxaban plus aspirin, the independent data and safety monitoring board recommended early termination of the study at the first formal interim analysis for efficacy.
The rate of primary outcome events was 4.1% (379 patients) in the rivaroxaban-plus-aspirin group, 4.9% (448 patients) in the rivaroxaban group, and 5.4% (496 patients) in the aspirin group. Compared with aspirin alone, rivaroxaban plus aspirin reduced the risk of the primary outcome by 24%. Rivaroxaban alone reduced the risk of the primary outcome by 10%, compared with aspirin alone, but this result was not statistically significant.
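The 24% figure comes from the trial's hazard-ratio analysis, which accounts for time at risk, but a back-of-envelope check using the crude event rates above tracks it closely. The number-needed-to-treat shown below is derived arithmetic, not a result reported in the article.

```python
# Back-of-envelope check on the reported effect size, using the crude
# event rates (the trial's 24% comes from a hazard ratio; this simpler
# arithmetic closely tracks it). The NNT is derived, not reported.
rate_combo = 0.041    # rivaroxaban plus aspirin
rate_aspirin = 0.054  # aspirin alone

# Crude relative risk reduction versus aspirin alone.
rrr = 1 - rate_combo / rate_aspirin
print(round(rrr * 100))  # ~24 (%)

# Number needed to treat with the combination instead of aspirin alone
# to prevent one primary outcome event over the follow-up period.
nnt = 1 / (rate_aspirin - rate_combo)
print(round(nnt))  # ~77
```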
The rate of major bleeding events was 3.1% (288 patients) in the rivaroxaban-plus-aspirin group and 1.9% (170 patients) in the aspirin-alone group. Compared with aspirin alone, rivaroxaban plus aspirin increased the risk of major bleeding by 70%. Most of the excess major bleeding occurred in the gastrointestinal tract. The researchers saw no significant between-group difference in the rates of fatal bleeding, intracranial bleeding, or symptomatic bleeding into a critical organ. The rate of serious adverse events was 7.9% (721 patients) in the rivaroxaban-plus-aspirin group, 7.7% (702 patients) in the rivaroxaban group, and 7.3% (662 patients) in the aspirin group.
The risk of the net-clinical-benefit outcome (ie, a composite of cardiovascular death, stroke, myocardial infarction, fatal bleeding, or symptomatic bleeding into a critical organ) was 20% lower with rivaroxaban plus aspirin than with aspirin alone. The risk of the net-clinical-benefit outcome was not significantly lower with rivaroxaban alone than with aspirin alone.
Could Practice Guidelines Change?
Although the rate of stroke was lower among patients receiving rivaroxaban plus aspirin than among patients receiving aspirin alone, the researchers found no statistically significant difference between groups in the rate of myocardial infarction, said Eugene Braunwald, MD, Professor of Cardiovascular Medicine at Brigham and Women’s Hospital in Boston, in an accompanying editorial. Nevertheless, “this trial represents an important step forward in thrombocardiology, and it is likely to change practice guidelines,” he added.
Future clinical investigation in this field could pursue several paths. For example, a head-to-head comparison between aspirin plus a second antiplatelet drug and a low dose of a factor Xa inhibitor could be of great interest, said Dr. Braunwald. “Perhaps substituting a P2Y12 inhibitor or thrombin-receptor antagonist for aspirin, together with a very low dose of a factor Xa inhibitor, might lead to even greater efficacy by reducing myocardial infarction,” he added. Also, different subgroups of patients with stable ischemic heart disease may respond differently to these various drug combinations, and these different responses could enable a personalized approach to patients with stable ischemic heart disease, he concluded.
—Erik Greb
Suggested Reading
Braunwald E. An important step for thrombocardiology. N Engl J Med. 2017 Aug 27 [Epub ahead of print].
Eikelboom JW, Connolly SJ, Bosch J, et al. Rivaroxaban with or without aspirin in stable cardiovascular disease. N Engl J Med. 2017 Aug 27 [Epub ahead of print].
Compared with aspirin alone, a regimen of rivaroxaban plus aspirin is associated with better cardiovascular outcomes among patients with stable atherosclerotic vascular disease, according to research published August 27 in the New England Journal of Medicine. Although the combination increases the risk of major bleeding events, it has greater net clinical benefit than aspirin alone, said the investigators.
COMPASS: An International Trial
“Efforts to improve aspirin have focused primarily on combining aspirin with another antiplatelet drug or replacing aspirin with another antiplatelet drug, but this [tactic] has had only limited success,” said Dr. Eikelboom. He and his colleagues conducted the Cardiovascular Outcomes for People Using Anticoagulation Strategies (COMPASS) trial, a double-blind study to evaluate whether rivaroxaban, a selective direct factor Xa inhibitor, either alone or in combination with aspirin, would be more effective than aspirin alone for secondary cardiovascular prevention.
The study took place at 602 centers in 33 countries. Eligible patients met the criteria for coronary artery disease, peripheral arterial disease, or both. Among the exclusion criteria were high bleeding risk, recent stroke or previous hemorrhagic or lacunar stroke, severe heart failure, and advanced stable kidney disease. During a run-in phase, participants received a rivaroxaban-matched placebo twice daily and aspirin (100 mg/day). Participants who adhered to this regimen were randomized in equal groups to rivaroxaban (2.5 mg bid) plus aspirin (100 mg/day), rivaroxaban (5 mg bid) plus placebo once daily, or aspirin (100 mg/day) plus placebo twice daily.
The primary efficacy outcome was the composite of cardiovascular death, stroke, or myocardial infarction. The main safety outcome was a modification of the International Society on Thrombosis and Hemostasis criteria for major bleeding. The investigators intended to continue the trial until at least 2,200 participants had a confirmed primary efficacy outcome. They planned formal interim analyses of efficacy for when 50% and 75% of primary efficacy events had occurred.
Study Was Stopped Early for Efficacy
The investigators enrolled 27,395 participants into the trial. The population’s mean age was 68.2, and 22.0% of participants were women. The mean systolic blood pressure was 136 mm Hg, the mean diastolic blood pressure was 78 mm Hg, and the mean total cholesterol level was 4.2 mmol/L. Having observed a consistent difference in the primary efficacy outcome in favor of rivaroxaban plus aspirin, the independent data and safety monitoring board recommended early termination of the study at the first formal interim analysis for efficacy.
The rate of primary outcome events was 4.1% (379 patients) in the rivaroxaban-plus-aspirin group, 4.9% (448 patients) in the rivaroxaban group, and 5.4% (496 patients) in the aspirin group. Compared with aspirin alone, rivaroxaban plus aspirin reduced the risk of the primary outcome by 24%. Rivaroxaban alone reduced the risk of the primary outcome by 10%, compared with aspirin alone, but this result was not statistically significant.
The rate of major bleeding events was 3.1% (288 patients) in the rivaroxaban-plus-aspirin group and 1.9% (170 patients) in the aspirin-alone group. Compared with aspirin alone, rivaroxaban plus aspirin increased the risk of major bleeding by 70%. Most of the excess major bleeding occurred in the gastrointestinal tract. The researchers saw no significant between-group difference in the rates of fatal bleeding, intracranial bleeding, or symptomatic bleeding into a critical organ. The rate of serious adverse events was 7.9% (721 patients) in the rivaroxaban-plus-aspirin group, 7.7% (702 patients) in the rivaroxaban group, and 7.3% (662 patients) in the aspirin group.
The risk of the net-clinical-benefit outcome (ie, a composite of cardiovascular death, stroke, myocardial infarction, fatal bleeding, or symptomatic bleeding into a critical organ) was 20% lower with rivaroxaban plus aspirin than with aspirin alone. The risk of the net-clinical-benefit outcome was not significantly lower with rivaroxaban alone than with aspirin alone.
Could Practice Guidelines Change?
Although the rate of stroke was lower among patients receiving rivaroxaban plus aspirin than among patients receiving aspirin alone, the researchers found no statistically significant difference between groups in the rate of myocardial infarction, said Eugene Braunwald, MD, Professor of Cardiovascular Medicine at Brigham and Women’s Hospital in Boston, in an accompanying editorial. Nevertheless, “this trial represents an important step forward in thrombocardiology, and it is likely to change practice guidelines,” he added.
Future clinical investigation in this field could pursue several paths. For example, a head-to-head comparison between aspirin plus a second antiplatelet drug and a low dose of a factor Xa inhibitor could be of great interest, said Dr. Braunwald. “Perhaps substituting a P2Y12 inhibitor or thrombin-receptor antagonist for aspirin, together with a very low dose of a factor Xa inhibitor, might lead to even greater efficacy by reducing myocardial infarction,” he added. Also, different subgroups of patients with stable ischemic heart disease may respond differently to these various drug combinations, and these different responses could enable a personalized approach to patients with stable ischemic heart disease, he concluded.
—Erik Greb
Suggested Reading
Braunwald E. An important step for thrombocardiology. N Engl J Med. 2017 Aug 27 [Epub ahead of print].
Eikelboom JW, Connolly SJ, Bosch J, et al. Rivaroxaban with or without aspirin in stable cardiovascular disease. N Engl J Med. 2017 Aug 27 [Epub ahead of print].
Compared with aspirin alone, a regimen of rivaroxaban plus aspirin is associated with better cardiovascular outcomes among patients with stable atherosclerotic vascular disease, according to research published August 27 in the New England Journal of Medicine. Although the combination increases the risk of major bleeding events, it has greater net clinical benefit than aspirin alone, said the investigators.
COMPASS: An International Trial
“Efforts to improve aspirin have focused primarily on combining aspirin with another antiplatelet drug or replacing aspirin with another antiplatelet drug, but this [tactic] has had only limited success,” said Dr. Eikelboom. He and his colleagues conducted the Cardiovascular Outcomes for People Using Anticoagulation Strategies (COMPASS) trial, a double-blind study to evaluate whether rivaroxaban, a selective direct factor Xa inhibitor, either alone or in combination with aspirin, would be more effective than aspirin alone for secondary cardiovascular prevention.
The study took place at 602 centers in 33 countries. Eligible patients met the criteria for coronary artery disease, peripheral arterial disease, or both. Among the exclusion criteria were high bleeding risk, recent stroke or previous hemorrhagic or lacunar stroke, severe heart failure, and advanced stable kidney disease. During a run-in phase, participants received a rivaroxaban-matched placebo twice daily and aspirin (100 mg/day). Participants who adhered to this regimen were randomized in equal groups to rivaroxaban (2.5 mg bid) plus aspirin (100 mg/day), rivaroxaban (5 mg bid) plus placebo once daily, or aspirin (100 mg/day) plus placebo twice daily.
The primary efficacy outcome was the composite of cardiovascular death, stroke, or myocardial infarction. The main safety outcome was a modification of the International Society on Thrombosis and Hemostasis criteria for major bleeding. The investigators intended to continue the trial until at least 2,200 participants had a confirmed primary efficacy outcome. They planned formal interim analyses of efficacy for when 50% and 75% of primary efficacy events had occurred.
Study Was Stopped Early for Efficacy
The investigators enrolled 27,395 participants into the trial. The population’s mean age was 68.2 years, and 22.0% of participants were women. The mean systolic blood pressure was 136 mm Hg, the mean diastolic blood pressure was 78 mm Hg, and the mean total cholesterol level was 4.2 mmol/L. Having observed a consistent difference in the primary efficacy outcome in favor of rivaroxaban plus aspirin, the independent data and safety monitoring board recommended early termination of the study at the first formal interim analysis for efficacy.
The rate of primary outcome events was 4.1% (379 patients) in the rivaroxaban-plus-aspirin group, 4.9% (448 patients) in the rivaroxaban group, and 5.4% (496 patients) in the aspirin group. Compared with aspirin alone, rivaroxaban plus aspirin reduced the risk of the primary outcome by 24%. Rivaroxaban alone reduced the risk of the primary outcome by 10%, compared with aspirin alone, but this result was not statistically significant.
The rate of major bleeding events was 3.1% (288 patients) in the rivaroxaban-plus-aspirin group and 1.9% (170 patients) in the aspirin-alone group. Compared with aspirin alone, rivaroxaban plus aspirin increased the risk of major bleeding by 70%. Most of the excess major bleeding occurred in the gastrointestinal tract. The researchers saw no significant between-group difference in the rates of fatal bleeding, intracranial bleeding, or symptomatic bleeding into a critical organ. The rate of serious adverse events was 7.9% (721 patients) in the rivaroxaban-plus-aspirin group, 7.7% (702 patients) in the rivaroxaban group, and 7.3% (662 patients) in the aspirin group.
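The reported 24% efficacy reduction and 70% bleeding increase are hazard ratios from the trial’s time-to-event models, but the crude event rates above offer a rough cross-check. A minimal sketch (illustrative only; crude proportions ignore follow-up time, so they differ slightly from the published hazard-ratio figures):

```python
def relative_change(rate_treated, rate_control):
    """Crude relative change in risk between two event rates.

    Negative values indicate a lower risk in the treated group,
    positive values a higher risk.
    """
    return rate_treated / rate_control - 1.0

# Primary efficacy outcome: rivaroxaban plus aspirin vs aspirin alone
efficacy = relative_change(0.041, 0.054)  # roughly -0.24, i.e. ~24% lower

# Major bleeding: rivaroxaban plus aspirin vs aspirin alone
bleeding = relative_change(0.031, 0.019)  # roughly +0.63 crude

print(f"efficacy: {efficacy:+.2f}, bleeding: {bleeding:+.2f}")
```

The crude bleeding excess (about 63%) is close to, but not identical to, the published 70% because the hazard ratio accounts for differing times at risk.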
The risk of the net-clinical-benefit outcome (ie, a composite of cardiovascular death, stroke, myocardial infarction, fatal bleeding, or symptomatic bleeding into a critical organ) was 20% lower with rivaroxaban plus aspirin than with aspirin alone. The risk of the net-clinical-benefit outcome was not significantly lower with rivaroxaban alone than with aspirin alone.
Could Practice Guidelines Change?
Although the rate of stroke was lower among patients receiving rivaroxaban plus aspirin than among patients receiving aspirin alone, the researchers found no statistically significant difference between groups in the rate of myocardial infarction, said Eugene Braunwald, MD, Professor of Cardiovascular Medicine at Brigham and Women’s Hospital in Boston, in an accompanying editorial. Nevertheless, “this trial represents an important step forward in thrombocardiology, and it is likely to change practice guidelines,” he added.
Future clinical investigation in this field could pursue several paths. For example, a head-to-head comparison between aspirin plus a second antiplatelet drug and a low dose of a factor Xa inhibitor could be of great interest, said Dr. Braunwald. “Perhaps substituting a P2Y12 inhibitor or thrombin-receptor antagonist for aspirin, together with a very low dose of a factor Xa inhibitor, might lead to even greater efficacy by reducing myocardial infarction,” he added. Also, different subgroups of patients with stable ischemic heart disease may respond differently to these various drug combinations, and these different responses could enable a personalized approach to patients with stable ischemic heart disease, he concluded.
—Erik Greb
Suggested Reading
Braunwald E. An important step for thrombocardiology. N Engl J Med. 2017 Aug 27 [Epub ahead of print].
Eikelboom JW, Connolly SJ, Bosch J, et al. Rivaroxaban with or without aspirin in stable cardiovascular disease. N Engl J Med. 2017 Aug 27 [Epub ahead of print].
Exenatide Aids Motor Function in Parkinson’s Disease
Exenatide improves motor function in patients with Parkinson’s disease, according to research published online ahead of print August 3 in Lancet. The improvements may persist for months after treatment exposure.
Exenatide, an analogue of glucagon-like peptide-1, is used to treat type 2 diabetes. In rodent models of Parkinson’s disease, exenatide had neuroprotective effects and improved motor performance, behavior, learning, and memory. The drug also provided motor and cognitive benefits in a proof-of-concept study including patients with Parkinson’s disease.
Active Group Was Slightly Older
Dilan Athauda, MBBS, Senior Clinical Research Associate at University College London, and colleagues conducted a double-blind study to assess exenatide’s potential disease-modifying effects. At screening for study entry, patients with idiopathic Parkinson’s disease underwent physical and neurologic examinations, assessments of mood and cognition, and blood sampling. The investigators randomized eligible participants to subcutaneous injections of exenatide (2 mg) or placebo once weekly for 48 weeks. Participants continued to take their regular medications. Investigators examined patients in an off-medication state and collected blood and urine at baseline and weeks 12, 24, 36, and 48. Study drugs were withdrawn after 48 weeks, and the final follow-up visit was at week 60.
The primary outcome was change in Movement Disorders Society Unified Parkinson’s Disease Rating Scale (MDS-UPDRS) part 3 score at 60 weeks. Secondary outcomes included differences between exenatide and placebo in each subsection of the MDS-UPDRS in the on-medication state, and the Mattis Dementia Rating Scale at weeks 48 and 60.
In all, 62 participants were randomized. Patients assigned to exenatide were slightly older, had higher baseline MDS-UPDRS part 3 scores, and had a lower levodopa-equivalent dose than did controls. Average age was about 62 in the exenatide group and about 58 among controls. About 26% of the population was female, and the mean disease duration at baseline was 6.4 years. Approximately 97% of participants were in Hoehn and Yahr stages 1 to 2 at baseline.
Exenatide Yielded Motor Improvement
At week 60, off-medication MDS-UPDRS part 3 scores had worsened by 2.1 points in the placebo group and improved by 1.0 point in the exenatide group, yielding a significant adjusted difference of –3.5 points. At week 48, scores among controls had deteriorated by 1.7 points, and those in the exenatide group had improved by 2.3 points, resulting in a significant adjusted between-group difference of –4.3 points.
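The arithmetic behind these differences is straightforward: a worsening in one group and an improvement in the other are combined into a single between-group contrast. A brief sketch (the published values of –3.5 and –4.3 points are covariate-adjusted, so they differ slightly from these raw differences):

```python
# Change in off-medication MDS-UPDRS part 3 score (positive = worsening).
# Negative between-group differences favor exenatide.
change_placebo_w60, change_exenatide_w60 = +2.1, -1.0
change_placebo_w48, change_exenatide_w48 = +1.7, -2.3

# Crude (unadjusted) between-group differences; the trial's reported
# figures come from an adjusted model, not this simple subtraction.
crude_w60 = change_exenatide_w60 - change_placebo_w60  # about -3.1 points
crude_w48 = change_exenatide_w48 - change_placebo_w48  # about -4.0 points

print(f"week 60: {crude_w60:+.1f}, week 48: {crude_w48:+.1f}")
```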
On-medication scores on MDS-UPDRS parts 1 through 4 did not differ significantly between groups at weeks 48 or 60. The researchers also did not observe a significant difference between groups in Mattis Dementia Rating Scale score at those time points. The frequency of adverse events was similar between groups.
“Exenatide could have a longer-lasting effect on disease severity beyond conventional drug effects on dopaminergic receptors,” said the researchers. “Whether exenatide affects the underlying disease pathophysiology or simply induces long-lasting symptomatic effects is uncertain. Exenatide represents a major new avenue for investigation in Parkinson’s disease, and effects on everyday symptoms should be examined in longer-term trials.”
—Erik Greb
Suggested Reading
Athauda D, Maclagan K, Skene SS, et al. Exenatide once weekly versus placebo in Parkinson’s disease: a randomised, double-blind, placebo-controlled trial. Lancet. 2017 Aug 3 [Epub ahead of print].
Are Bladder Dysfunction and Falls Related in MS?
Urinary urgency with incontinence is associated with recurrent falls in people with relapsing-remitting multiple sclerosis (MS) with mild to moderate disability, according to data published in the July–August issue of International Journal of MS Care. Urinary urgency with incontinence often responds to physical, behavioral, and pharmaceutical interventions, and neurologists should ask patients with MS about bladder symptoms and fall history, according to the authors.
Bladder dysfunction and falls are highly prevalent among people with MS, and bladder dysfunction is associated with falls in older adults. Studies of the association between bladder dysfunction and falls in people with MS, however, are limited and have produced mixed results. Jaime E. Zelaya, PhD, a doctoral student at Oregon Health and Science University in Portland, and colleagues conducted a longitudinal observational cohort study to clarify the possible association between baseline urinary symptoms and future falls.
Participants Prospectively Recorded Falls
The investigators recruited participants from outpatient MS clinics in the Veterans Affairs Portland Health Care System, Oregon Health and Science University MS clinics, and the surrounding community. Eligible participants had a diagnosis of relapsing-remitting MS, mild to moderate MS-related disability, and no relapse within 30 days of baseline. Patients with another condition that affected their balance or gait were excluded from the study.
At baseline, Dr. Zelaya and colleagues asked participants whether they had urinary incontinence, urinary frequency, or urinary urgency. Participants then prospectively recorded their number of falls each day using fall calendars. They were asked to return their calendars to the investigators at the end of each month. The researchers defined four patient categories based on the number of falls during three months. Recurrent fallers fell two or more times, nonrecurrent fallers fell once or not at all, fallers had one fall or more, and nonfallers did not fall. The investigators analyzed the data using age, sex, and disability as potential confounders.
Most Patients Fell at Least Once
The final analysis included 51 participants (37 women). Mean age was 40, and median Expanded Disability Status Scale (EDSS) score was 3.0. In all, 15 participants (29%) were recurrent fallers, and 36 (71%) were nonrecurrent fallers. Furthermore, 32 (63%) participants were fallers, and 19 (37%) were nonfallers.
Urinary dysfunction was more prevalent in fallers and recurrent fallers than in nonrecurrent fallers or nonfallers. In the adjusted analyses, urinary urgency with incontinence was significantly associated with recurrent falls (odds ratio [OR], 57.57). The researchers did not find a significant association between urinary urgency without incontinence and recurrent falls, or between urinary frequency and recurrent falls. They also did not find significant associations between urinary urgency with incontinence, urinary urgency without incontinence, or urinary frequency and sustaining one or more falls.
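The reported odds ratio of 57.57 came from a model adjusted for age, sex, and disability, so it cannot be reproduced from raw counts. To illustrate what an odds ratio of this kind measures in a cohort of this size, here is a sketch using a hypothetical 2×2 table (the counts below are invented for illustration and are not the study’s data):

```python
def odds_ratio(exposed_cases, exposed_noncases,
               unexposed_cases, unexposed_noncases):
    """Unadjusted odds ratio from a 2x2 table (cross-product ratio)."""
    return (exposed_cases * unexposed_noncases) / (
        exposed_noncases * unexposed_cases)

# Hypothetical counts only -- NOT the study's actual data.
# Exposure: urinary urgency with incontinence; outcome: recurrent falls.
or_example = odds_ratio(10, 5, 2, 34)  # (10*34)/(5*2) = 34.0

print(f"illustrative unadjusted OR: {or_example:.1f}")
```

With only 51 participants, small cell counts like these can produce very large odds ratios with wide confidence intervals, which is one reason such estimates warrant cautious interpretation.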
The high prevalence of falls and bladder dysfunction in this population and previous studies “suggests that both falls and bladder dysfunction are common, early, and persistent symptoms in MS,” said the authors. The findings suggest that fall-prevention programs “should particularly be considered for reducing fall risk in recurrent fallers, and that such programs should include strategies for managing urinary urgency with incontinence,” they concluded.
—Erik Greb
Suggested Reading
Zelaya JE, Murchison C, Cameron M. Associations between bladder dysfunction and falls in people with relapsing-remitting multiple sclerosis. Int J MS Care. 2017;19(4):184-190.
How Does Cognitive Demand Affect Mobility in MS?
Patients with multiple sclerosis (MS) with an Expanded Disability Status Scale (EDSS) score between 4 and 6 have significantly slower times on the Timed Up and Go (TUG) test with the addition of a simple cognitive task, according to research published in the July–August issue of International Journal of MS Care. This reduction in performance “might have implications for a person’s more complex everyday activities,” the researchers said.
Patients with MS may develop cognitive impairment (eg, reduced processing speed or working memory), but standard cognitive assessments overlook how cognitive function affects mobility. To assess how the addition of a cognitive task affects mobility in patients with MS, George H. Kraft, MD, Emeritus Alvord Professor of MS Research at the University of Washington in Seattle, and colleagues conducted a study that included 52 adults with MS and 57 healthy controls. Participants had a mean age of about 47, and most were women.
The participants completed three versions of the TUG test: the standard test, the test while reciting the alphabet, and the test while subtracting by threes from a given number. Times to complete the tests were compared between controls and three groups of participants with MS—those with an EDSS score of 0–3.5 (n = 26), those with an EDSS score of 4.0–5.5 (n = 11), and those with an EDSS score of 6 (n = 15).
Overall mean times for the four groups were 8.0, 8.2, 11.1, and 11.6 seconds, respectively. Controls did not differ from people with MS without mobility problems (ie, those with an EDSS score of 0–3.5), but did differ from the other two groups.
“Individuals with MS and no mobility problems have ... very little increase in time due to the addition of cognitive tasks to the TUG test. The two more severe groups perform similarly to each other, with a steeper increase in time to perform the test when the cognitive demand increases,” the researchers said. “Although we cannot automatically generalize the results to more complex everyday activities, such as walking or driving a car while talking on a cell phone, the reduction in performance is an important issue that should be discussed with the patient and his or her caregiver.”
—Jake Remaly
Suggested Reading
Ciol MA, Matsuda PN, Khurana SR, et al. Effect of cognitive demand on functional mobility in ambulatory individuals with multiple sclerosis. Int J MS Care. 2017;19(4):217-224.
Multifaceted Intervention Improves Anticoagulant Use in Patients With Atrial Fibrillation
A customized educational intervention significantly increases the use of oral anticoagulants among patients with atrial fibrillation at risk for stroke, according to data published online ahead of print August 28 in Lancet. The intervention also appears to reduce the risk of stroke.
“If this intervention could be broadly applied, which we believe is possible, the public health implications would be substantial,” said Christopher B. Granger, MD, Professor of Medicine at Duke University School of Medicine in Durham, North Carolina. “More than 33 million people worldwide have atrial fibrillation, which is a leading cause of stroke. Improving adherence to anticoagulation therapy would be a lifesaver.”
Intervention Included Education and Monitoring
Dr. Granger and colleagues conducted IMPACT-AF—a prospective, cluster-randomized, controlled trial—to evaluate the effect of a multifaceted educational intervention on the use of oral anticoagulation in patients with atrial fibrillation, compared with usual care. The study was conducted in Argentina, Brazil, China, India, and Romania. Eligible patients were 18 or older, had atrial fibrillation not resulting from reversible causes, and had an indication for oral anticoagulation. Patients with an absolute contraindication to oral anticoagulation, those with a mechanical prosthetic valve, and those who were not able to have one year of follow-up were excluded.
Clusters (ie, sites) in each country were paired and randomized 1:1 to receive an educational intervention or usual care. The intervention’s two components were education and regular monitoring. Education was provided through various media (eg, web-based materials, videos, and guideline recommendations) to patients, their families, and healthcare providers. It was customized for each country and described the benefits, risks, and costs of anticoagulant therapies. The monitoring component, which included feedback, was designed to promote anticoagulant initiation among appropriate candidates who were not being treated, prevent discontinuation among treated patients, and improve adherence. The investigators collected data at baseline, six months, and 12 months at all sites. The intervention sites had additional telephone calls or patient visits at one month, three months, and nine months.
The primary end point was the change in the proportion of patients treated with oral anticoagulants from baseline to one year. Key secondary end points included the proportion of patients who were on oral anticoagulation at baseline, six months, and 12 months; and the proportion of patients who were not on oral anticoagulation at baseline, but were on this therapy at six months and 12 months. Other secondary clinical outcomes included all-cause death, stroke, transient ischemic attack, and major bleeding.
Anticoagulant Use Increased
In all, researchers enrolled 2,281 participants at 48 clusters. Mean age was approximately 70, and about 47% of participants were women. Five patients (three in the intervention group) were lost to follow-up after baseline. The median follow-up duration was 12 months. Age, sex, educational level, and socioeconomic factors were well balanced between the two groups. The intervention group, however, had a higher proportion of patients with permanent atrial fibrillation, history of major bleeding, systemic embolism, and uncontrolled hypertension, and a lower proportion of patients with rheumatic valvular heart disease, heart failure or left ventricular dysfunction, vascular disease, and previous myocardial infarction, than the control group.
The proportion of patients on oral anticoagulation increased from 68% at baseline to 80% at one year in the intervention group and from 64% to 67% in the control group. The absolute between-group difference in the change in oral anticoagulation use was 9.1 percentage points. This result yielded an odds ratio of 3.28, representing the proportional increase in anticoagulation use from baseline to one year in the intervention group, compared with the control group. The effect size in favor of the intervention was consistent across prespecified subgroups. The primary outcome result was consistent in all five countries.
Furthermore, 95% of patients who were on oral anticoagulants at baseline in the intervention group and 94% of patients in the control group continued taking oral anticoagulants at one year. For patients who were not on oral anticoagulants at baseline, 48% in the intervention group and 18% in the control group were on oral anticoagulants at one year. This result yielded an odds ratio of 4.60, representing the proportional increase in anticoagulation use from baseline to one year in the intervention group, compared with the control group.
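The relationship between the reported proportions and an odds ratio can be illustrated with simple arithmetic. The sketch below is a rough, unadjusted calculation from the published figures (48% vs 18% of patients newly on anticoagulants at one year); the trial's reported odds ratio of 4.60 comes from a model accounting for clustering and baseline covariates, so the numbers differ.

```python
def odds(proportion):
    """Convert a proportion to odds: p / (1 - p)."""
    return proportion / (1 - proportion)

def odds_ratio(p_intervention, p_control):
    """Unadjusted odds ratio comparing two proportions."""
    return odds(p_intervention) / odds(p_control)

# Patients not on anticoagulants at baseline who were on them at one year:
# 48% in the intervention group vs 18% in the control group.
print(round(odds_ratio(0.48, 0.18), 2))  # 4.21 (unadjusted; trial reports 4.60 adjusted)
```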
“Our study also found a reduction in strokes in the intervention group, compared with the control group,” said Renato D. Lopes, MD, PhD, Professor of Medicine at Duke University School of Medicine and principal investigator for Brazil. The hazard ratio for stroke was 0.48 in the intervention group, compared with the control group. “While this was a secondary outcome, it highlights the potential benefit of improved anticoagulation care,” he added.
The number needed to treat was 100: 100 patients would have to be exposed to the intervention for one year to prevent one stroke. The rates of all-cause death and the composite of stroke, systemic embolism, or major bleeding did not differ between the intervention and control groups.
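The number needed to treat follows directly from the absolute risk reduction (NNT = 1/ARR): an NNT of 100 implies an absolute reduction of about one percentage point in one-year stroke risk. A minimal sketch of the arithmetic:

```python
def nnt(absolute_risk_reduction):
    """Number of patients who must receive the intervention to prevent one event."""
    return 1 / absolute_risk_reduction

# An NNT of 100 corresponds to a 1-percentage-point absolute risk reduction.
print(nnt(0.01))  # 100.0
```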
Intervention Might Help Patients Worldwide
The majority of participants were using vitamin K antagonists at baseline, and data suggest that non-vitamin K antagonists have advantages (eg, reduced intracerebral bleeds) over this class of oral anticoagulants. Approximately 8% of participants in the intervention group had switched from vitamin K antagonists to non-vitamin K antagonists at one year, while participants in the control group did not change their medication. “A similar educational approach could be used for all anticoagulant classes with even greater benefit to patients,” said Michael D. Ezekowitz, MD, a cardiologist at Lankenau Medical Center in Wynnewood, Pennsylvania, and Anthony P. Kent, MD, a resident at Bridgeport Hospital in Bridgeport, Connecticut, in an accompanying editorial.
The study was limited to five countries and relied on a technological intervention, which might hinder the generalization of the results to broader clinical practice, said Drs. Ezekowitz and Kent. Nevertheless, “we are confident that the impact of IMPACT-AF will benefit patients with atrial fibrillation worldwide,” they concluded.
—Erik Greb
Suggested Reading
Ezekowitz MD, Kent AP. The impact of IMPACT-AF. Lancet. 2017 Aug 28 [Epub ahead of print].
Vinereanu D, Lopes RD, Bahit MC, et al. A multifaceted intervention to improve treatment with oral anticoagulants in atrial fibrillation (IMPACT-AF): an international, cluster-randomised trial. Lancet. 2017 Aug 28 [Epub ahead of print].
Pembrolizumab and Nivolumab May Cause Neurologic Events
Approximately 3% of patients developed adverse neurologic events within 12 months of receiving nivolumab or pembrolizumab, according to the results of a single-center retrospective study published online ahead of print September 5 in JAMA Neurology.
These syndromes included myopathy, axonal thoracolumbar polyradiculopathy, severe demyelinating length-dependent peripheral neuropathy with axonal loss, asymmetric vasculitic neuropathy, cerebellar ataxia, autoimmune retinopathy, bilateral internuclear ophthalmoplegia, and headache, said Justin C. Kao, MBChB, a neurologist at Mayo Clinic in Rochester, Minnesota, and his coinvestigators.
Nivolumab and pembrolizumab are anti-programmed death–1 (PD-1) antibodies. In response to an increase in reports of neurologic events associated with anti–PD-1 therapy, the investigators searched the Mayo Cancer Pharmacy Database and identified 347 patients treated with pembrolizumab or nivolumab between 2014 and 2016. Ten patients (2.9%, two women) developed neurologic complications within 12 months of anti–PD-1 exposure. The median age was 71. None of their neurologic symptoms could be attributed directly to other treatments or to metastatic disease. Median modified Rankin Scale (mRS) score was 2.5. Symptom severity peaked between one day and more than three months after the start of anti–PD-1 treatment.
Stopping anti–PD-1 treatment and starting high-dose corticosteroids led to neurologic improvements (median mRS score, 2). A patient with necrotizing myopathy who had been receiving pembrolizumab for stage 4 melanoma developed extraocular, bulbar, and respiratory muscle weakness. These symptoms worsened over three weeks and did not respond to prednisone (80 mg/day) or to three sessions of plasmapheresis. The patient subsequently died.
If a patient on anti–PD-1 therapy develops neurologic symptoms, clinicians should promptly stop treatment and pursue a full workup, including electrodiagnostic studies and consideration of muscle or nerve biopsy to clarify the underlying pathophysiologic mechanisms, the researchers said. “If the clinical examination demonstrates severe clinical deficits at onset or worsens despite medication discontinuation, additional immune suppressant treatment should be considered,” they said.
—Amy Karon
Suggested Reading
Kao JC, Liao B, Markovic SN, et al. Neurological complications associated with anti-programmed death 1 (PD-1) antibodies. JAMA Neurol. 2017 Sep 5 [Epub ahead of print].
Blood Test Accurately Detects GLUT1 Deficiency Syndrome
A blood test can detect glucose transporter type 1 (GLUT1) deficiency syndrome accurately and rapidly, according to a brief communication published in the July issue of Annals of Neurology. The new test has a diagnostic rate comparable to that of CSF glucose and may be more cost-effective than the combination of lumbar puncture and genetic testing, according to the researchers.
To confirm the diagnosis in a patient whose phenotype suggests GLUT1 deficiency syndrome, neurologists traditionally measure CSF glucose concentration and perform SLC2A1 molecular analysis. Lumbar puncture requires fasting and may entail complications, however, and analysis of the coding regions of SLC2A1 can be tedious and may fail to identify variants.
Analysis of Red Blood Cells
Domitille Gras, MD, a neurologist at Robert-Debré University Hospital in Paris, and colleagues tested a novel diagnostic method based on flow cytometry analysis of red blood cells. For their proof-of-concept study, the researchers enrolled 30 patients (13 females) between ages 2 and 50 with GLUT1 deficiency syndrome. They also enrolled 18 patients (six females) with paroxysmal movement disorders attributed to genetic defects other than in SLC2A1. Finally, the investigators examined 346 healthy controls.
For all participants, Dr. Gras and colleagues measured CSF glucose concentration, performed SLC2A1 molecular analysis, and used flow cytometry to analyze GLUT1 surface expression on circulating red blood cells. To perform the latter method, researchers who were blinded to patients’ disease condition collected at least 0.5 mL of nonfasted venous blood from each participant. Results were available within 24 hours.
Age Did Not Affect Test Results
GLUT1 expression on red blood cells varied by 15% among healthy controls. In 23 (78%) of the patients with GLUT1 deficiency syndrome, the blood test detected a decrease in GLUT1 expression of at least 20%. Dr. Gras and colleagues saw no overlap between the test results of patients and those of controls. The new test detected three patients with GLUT1 deficiency syndrome who had a CSF glucose concentration greater than 2.2 mM, which is the most commonly used cutoff. Two patients with a presentation suggestive of GLUT1 deficiency syndrome and low CSF glucose and lactate, but no SLC2A1 mutation, had abnormal blood test results.
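The cutoff described in the study, a decrease in GLUT1 expression of at least 20%, can be expressed as a simple decision rule. The sketch below is illustrative only: the function names and the use of a control-mean reference value are assumptions for the example, not the study's actual analysis pipeline.

```python
def relative_decrease(patient_expression, control_mean_expression):
    """Fractional drop in red-blood-cell GLUT1 surface expression
    relative to a control reference value (assumed here to be the
    control mean, e.g. from flow cytometry fluorescence intensity)."""
    return 1 - patient_expression / control_mean_expression

def flags_glut1_deficiency(patient_expression, control_mean_expression,
                           threshold=0.20):
    """Flag a sample when expression falls at least `threshold` (20%)
    below the control reference, per the cutoff the study describes."""
    return relative_decrease(patient_expression, control_mean_expression) >= threshold

print(flags_glut1_deficiency(70.0, 100.0))  # True: a 30% decrease exceeds the cutoff
print(flags_glut1_deficiency(90.0, 100.0))  # False: 10% is within control variation
```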
The blood test detected GLUT1 deficiency syndrome regardless of the patient’s age and disease severity. Patients not detected by the test may have mutations that mildly affect glucose uptake, but not GLUT1 expression, or may have a GLUT1 deficiency restricted to the brain. Most patients were analyzed at least twice, and blood test results were consistent for each patient.
Measuring GLUT1 at the surface of red blood cells could avoid diagnostic delays that currently are considerable, said Dr. Gras and colleagues. “Although more studies are required to establish the diagnostic gain of the red blood cell test on a larger cohort, such a simple diagnostic test, readily available in clinical practice, ought to greatly enlarge the screening of GLUT1 deficiency syndrome in any patient, child, or adult presenting with cognitive impairment, epilepsy, ataxia and/or dystonia, or paroxysmal movement disorder,” they concluded.
—Erik Greb
Suggested Reading
Gras D, Cousin C, Kappeler C, et al. A simple blood test expedites the diagnosis of glucose transporter type 1 deficiency syndrome. Ann Neurol. 2017;82(1):133-138.
Can Today’s Stress Level Predict Tomorrow’s Migraine Attack?
Neurologists and patients may be able to predict migraine attacks using a model based on the level of stress from daily hassles, according to research published in the July issue of Headache. The model was well calibrated, but its forecasts were based on participants’ base rates of headache, said the authors. With additional adjustments, the model could enable patients to treat migraine attacks pre-emptively.
Although headache disorders are common, it remains unclear what triggers a migraine attack. Patients have identified many possible triggers, including perceived stress. In people with episodic migraine and chronic migraine, perceived stress is associated with the onset of headache. Researchers previously had not provided evidence that any of the potential triggers could predict a migraine attack, however.
Electronic Diaries Captured Headache Frequency
Timothy T. Houle, PhD, Associate Professor of Anesthesia at Massachusetts General Hospital in Boston, and colleagues conducted the prospective Headache Prediction Study to examine precipitating factors of migraine headache. They recruited participants with episodic migraine who had more than two headache attacks and between four and 14 headache days per month. Secondary headache disorder and change in the nature of headache symptoms in the previous six weeks were among the exclusion criteria.
Participants completed morning and evening diary entries daily using electronic systems. In the entries, the participants recorded headaches, headache characteristics, and abortive medications used since the last entry. Participants used the Daily Stress Inventory to assess stress in their evening diary entries. Using these assessments, the investigators examined the frequency of stressors, the sum of the stress impact ratings, and the average stress impact ratings. The primary analysis was the prediction of a future headache attack based on current levels of stress and headache.
Potential for New Treatment Strategies
Dr. Houle and colleagues enrolled 100 participants between September 2009 and May 2014. Five participants dropped out. Approximately 91% of participants were female, and 87% were Caucasian. Mean age was 40. The 95 participants contributed 4,626 days of diary data. In all, 431 diary entries were missing or unavailable for analysis. Participants had a headache attack on approximately 39% of days. Days that preceded a headache were associated with greater stress than days that did not precede a headache.
After estimating a series of models, the researchers found that a generalized linear mixed-effects model using either the frequency of stressful events or the perceived intensity of stressful events fit the data well. The forecasting model had “promising predictive utility” in the training sample and in a validation sample, said the authors. The model had good calibration between forecast probabilities and observed headache frequencies, but had low levels of resolution, meaning that “the forecast probabilities are close to the individual’s long-run average,” said Dr. Houle.
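The distinction between calibration and resolution can be made concrete with the Murphy decomposition of the Brier score; this is a generic illustration of the two quantities, not the authors' actual analysis. Forecast probabilities are binned; calibration error measures how far each bin's mean forecast sits from its observed headache rate, while resolution measures how far bin-level observed rates spread away from the base rate. A well-calibrated model whose forecasts hover near the individual's long-run average, as described above, has low resolution.

```python
from collections import defaultdict


def reliability_resolution(forecasts, outcomes, n_bins=10):
    """Decompose binary forecasts into calibration error and resolution.

    forecasts: probabilities in [0, 1]; outcomes: 0/1 headache indicators.
    Returns (reliability, resolution): lower reliability = better calibrated;
    higher resolution = forecasts discriminate high- from low-risk days.
    """
    bins = defaultdict(list)
    for p, y in zip(forecasts, outcomes):
        k = min(int(p * n_bins), n_bins - 1)  # assign forecast to a bin
        bins[k].append((p, y))
    n = len(forecasts)
    base_rate = sum(outcomes) / n
    reliability = resolution = 0.0
    for members in bins.values():
        nk = len(members)
        p_bar = sum(p for p, _ in members) / nk  # mean forecast in bin
        o_bar = sum(y for _, y in members) / nk  # observed rate in bin
        reliability += nk / n * (p_bar - o_bar) ** 2
        resolution += nk / n * (o_bar - base_rate) ** 2
    return reliability, resolution
```

A model that always forecasts the base rate scores zero on both terms: perfectly calibrated, but with no resolution to separate headache days from headache-free days.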
“This appears to be the first evidence that individual headache attacks can be forecast within an individual sufferer, and this finding creates substantial opportunities for additional treatment strategies if the forecasting model can be refined,” said Dr. Houle. “A forecasting model could be used to enhance pharmacologic treatment opportunities, reduce anxiety about the unpredictability of attacks, increase locus-of-control beliefs, and lead to increased self-efficacy assessments about the self-management of migraine attacks.” Neurologists should consider the investigators’ stress model a first step toward headache prediction, and not a final model for widespread clinical use, he added.
Complexities Need Consideration
These data are “fascinating,” but neurologists should consider several complexities as they develop methods for the short-term prevention of predictable migraine, said Richard B. Lipton, MD, Edwin S. Lowe Chair in Neurology at Albert Einstein College of Medicine in New York and Director of the Montefiore Headache Center, and colleagues in an accompanying editorial. First, they must distinguish group-level and within-person analyses of attack predictors. Trigger factors vary from person to person, and within-person analysis may be crucial to prediction and prevention, said Dr. Lipton. Second, in addition to stress, other trigger factors such as premonitory features, self-prediction, and biomarkers also may aid in forecasting attacks. Finally, researchers can measure and model predictors of impending attacks in various ways (eg, lead–lag effects and cumulative effects).
“Houle et al have set the stage for short-term prediction of headaches in persons with migraine as a potential foundation for short-term preventive therapies,” said Dr. Lipton. “To realize the potential of these approaches, we must refine the art of headache forecasting and then test targeted interventions in carefully selected patients.”
—Erik Greb
Suggested Reading
Houle TT, Turner DP, Golding AN, et al. Forecasting individual headache attacks using perceived stress: Development of a multivariable prediction model for persons with episodic migraine. Headache. 2017;57(7):1041-1050.
Lipton RB, Pavlovic JM, Buse DC. Why migraine forecasting matters. Headache. 2017;57(7):1023-1025.