Spinal cord stimulation restores poststroke arm, hand function in two patients
The results provide “promising, albeit preliminary, evidence that spinal cord stimulation could be an assistive as well as a restorative approach for upper-limb recovery after stroke,” wrote first author Marc P. Powell, PhD, of Reach Neuro Inc., Pittsburgh, and colleagues.
The findings were published online in Nature Medicine.
Top cause of paralysis
“Stroke is the largest cause of paralysis in the world,” with nearly three-quarters of patients with stroke experiencing lasting deficits in motor control of their arm and hand, co–senior study author Marco Capogrosso, PhD, assistant professor of neurological surgery at the University of Pittsburgh, said during a press briefing.
Stroke can disrupt communication between the brain and the spinal cord, leading to motor deficits in the arm and hand. However, below the lesion, the spinal circuits that control movement remain intact and could be targeted to restore function, Dr. Capogrosso noted.
Spinal cord stimulation has shown promise in promoting long-lasting recovery of leg motor function in patients with spinal cord injury; but until now, it’s been largely unexplored for upper-limb recovery.
In this “first-in-human” study, the investigators percutaneously implanted two linear leads in the dorsolateral epidural space targeting neural circuits that control arm and hand muscles in two patients.
One of the patients was a woman (age, 31 years) who had experienced a right thalamic hemorrhagic stroke secondary to a cavernous malformation 9 years before enrolling in the pilot study.
The other patient was a woman (age, 47 years) who experienced a right ischemic middle cerebral artery (MCA) stroke secondary to a right carotid dissection, resulting in a large MCA territory infarct 3 years before entering the study.
In both patients, continuous stimulation of the targeted neural circuits led to significant and immediate improvement in arm and hand strength and dexterity. This enabled the patients to perform movements that they couldn’t perform without spinal cord stimulation.
The process also enabled fine motor skills, such as opening a lock and using utensils to eat independently – tasks that the younger woman had not been able to do for 9 years.
“Perhaps even more interesting, we found that after a few weeks of use, some of these improvements endure when the stimulation is switched off, indicating exciting avenues for the future of stroke therapies,” Dr. Capogrosso said in a news release.
No serious adverse events were reported.
‘Easily translated’
Dr. Capogrosso said that, thanks to years of preclinical research, the investigators have developed a practical, easy-to-use stimulation protocol adapting existing clinical technologies that “could be easily translated to the hospital and quickly moved from the lab to the clinic.”
The researchers noted, however, that further studies in larger cohorts will be required to validate the safety and efficacy of this approach.
They are currently working with more patients with stroke to fine-tune placement of the leads and stimulation protocol, as well as determine which patients are best suited for the approach.
“Creating effective neurorehabilitation solutions for people affected by movement impairment after stroke is becoming ever more urgent,” co–senior author Elvira Pirondini, PhD, assistant professor of physical medicine and rehabilitation at the University of Pittsburgh, said in the release.
“Even mild deficits resulting from a stroke can isolate people from social and professional lives and become very debilitating, with motor impairments in the arm and hand being especially taxing and impeding simple daily activities, such as writing, eating, and getting dressed,” she added.
This research was funded by the National Institutes of Health BRAIN Initiative, with additional research support provided by the Department of Neurological Surgery and the Department of Physical Medicine and Rehabilitation at Pitt, and the Department of Mechanical Engineering and the Neuroscience Institute at Carnegie Mellon University. Three investigators have financial interests in Reach Neuro, which has an interest in the technology being evaluated in this study.
A version of this article first appeared on Medscape.com.
FROM NATURE MEDICINE
‘Quick, affordable’ test helps predict CGRP response for migraine
Pretreatment testing for cephalic allodynia during the nonictal phase of migraine may help distinguish responders from nonresponders to anti-CGRP therapy, new research suggests.
The ictal phase refers to “sensitization occurring during a time when central trigeminovascular neurons receive massive nociceptive input from active meningeal nociceptors,” whereas the nonictal phase refers to “sensitization occurring during a time when central trigeminovascular neurons receive no or subliminal nociceptive input from meningeal nociceptors,” investigators noted.
In an observational, open-label cohort study, pretreatment nonictal cephalic allodynia identified galcanezumab responders with nearly 80% accuracy, and it identified nonresponders with nearly 85% accuracy.
“Detection of nonictal allodynia with a simplified paradigm of Quantitative Sensory Testing (QST) may provide a quick, affordable, noninvasive, and patient-friendly way to prospectively distinguish between responders and nonresponders to the prophylactic treatment of chronic and high-frequency episodic migraine with drugs that reduce CGRP signaling,” Sait Ashina, MD, of Beth Israel Deaconess Medical Center and Harvard Medical School, both in Boston, and colleagues wrote.
The findings were published online in Cephalalgia.
Immediate clinical relevance
Investigator Rami Burstein, PhD, also with Beth Israel Deaconess Medical Center and Harvard Medical School, developed the concept of predicting response to anti-CGRP treatment by testing for the presence or absence of nonictal cephalic allodynia in collaboration with the company CGRP Diagnostics.
In 43 anti–CGRP-naive patients with migraine, the researchers used a simplified QST algorithm to determine the presence/absence of cephalic or extracephalic allodynia during the nonictal phase of migraine – defined as the period from less than 12 hours after a migraine attack to less than 12 hours before the next attack.
Patients were considered to have allodynia if heat pain thresholds were between 32° C and 40° C, if cold pain thresholds were between 32° C and 20° C, or if the mechanical pain threshold was less than 60 g.
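The classification rule above can be sketched as a small function. This is a minimal illustration of the thresholds as reported in the text, not the study's actual software; the function name and argument names are hypothetical:

```python
def has_allodynia(heat_pain_c: float, cold_pain_c: float, mechanical_pain_g: float) -> bool:
    """Classify a patient as allodynic using the QST cutoffs described in the text.

    A patient is considered allodynic if ANY criterion is met:
    - heat pain threshold between 32 and 40 deg C (abnormally low threshold),
    - cold pain threshold between 20 and 32 deg C (abnormally high threshold),
    - mechanical pain threshold below 60 g.
    """
    heat = 32.0 <= heat_pain_c <= 40.0
    cold = 20.0 <= cold_pain_c <= 32.0
    mechanical = mechanical_pain_g < 60.0
    return heat or cold or mechanical
```

For example, a patient with a heat pain threshold of 38° C would meet the first criterion regardless of the other two measurements.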
Using these strict criteria, pretreatment nonictal cephalic allodynia was a statistically significant predictor of response to anti-CGRP therapy. It was present in 84% of the 19 nonresponders and was absent in 79% of the 24 responders, for an overall accuracy rate of 86% (P < .0001).
Nonictal cephalic allodynia was “consistently” predictive of response for patients with chronic migraine as well as for those with high-frequency episodic migraine, the researchers reported.
In contrast, they noted that assessing nonictal extracephalic allodynia with QST missed nearly 50% of the patients with allodynia among the nonresponders (accuracy rate of 42%) and added little to the assessment of allodynia among the responders.
Mark Hasleton, PhD, CEO of CGRP Diagnostics, said in an interview that the study shows it’s possible to determine response to anti-CGRP therapy and to prescribe these medications to patients who are most likely to respond.
Dr. Hasleton, who was not personally involved with the current study, noted that pretreatment testing for nonictal cephalic allodynia may also allow for earlier prescription of anti-CGRP therapy and potentially dispense without the need for the current trial-and-error approach to prescribing. He noted that if one anti-CGRP fails the patient, it is highly likely that others will also fail.
Given the “very high correlation of the presence of nonictal cephalic allodynia in responders to galcanezumab, our recommendation would be to routinely pretest all potential anti-CGRP candidates prior to prescription,” he said.
End of trial-and-error prescribing
In a comment, Shaheen Lakhan, MD, a neurologist and researcher in Boston, said this research is “very noteworthy, moving us one step closer to predictive, precision medicine and away from the practice of trial-and-error prescribing.
“The trial-and-error approach to migraine management is daunting. These are very costly therapies, and when they don’t work, there is continued tremendous suffering and loss of quality of life for patients,” said Dr. Lakhan, who was not involved in the study.
He added that the failure of drugs to benefit individual patients “may lead to distrust of the health care provider” and to the system as a whole, which in turn could lead to less access to care for other conditions or for preventive measures.
“I envision a time when these predictive measures collectively (interictal allodynia, as in this study, plus biobehavioral data) will assist us neurologists in appropriately selecting migraine therapies,” Dr. Lakhan said.
“Beyond that, we will eventually test new therapies not in cells, animals, and even humans but in silico. In the very near future, we will have solutions tailored to not people suffering a disease but to you – an individual with a unique genetic, protein, physical, developmental, psychological, and behavioral makeup,” he added.
The study was funded in part by Eli Lilly, the National Institutes of Health, and the anesthesia department at Beth Israel Deaconess Medical Center. Galcanezumab was provided by Eli Lilly. Dr. Lakhan reported no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM CEPHALALGIA
Adult brains contain millions of ‘silent synapses’
Adult brains contain millions of immature, inactive connections known as "silent synapses," according to neuroscientists from the Massachusetts Institute of Technology.
What to know:
- An estimated 30% of all synapses in the brain’s cortex are silent and become active to allow the adult brain to continually form new memories and leave existing conventional synapses unmodified.
- Silent synapses are looking for new connections, and when important new information is presented, connections between the relevant neurons are strengthened to allow the brain to remember new things.
- Using the silent synapses for the new memories does not overwrite the important memories stored in more mature synapses, which are harder to change.
- The brain’s neurons display a wide range of plasticity mechanisms that account for how brains can efficiently learn new things and retain them in long-term memory.
- Flexibility of synapses is critical for acquiring new information, and stability is required to retain important information, enabling one to more easily adjust and change behaviors and habits or incorporate new information.
This is a summary of the article, "Filopodia Are a Structural Substrate for Silent Synapses in Adult Neocortex," published in Nature Nov. 30, 2022. The full article can be found at nature.com.
A version of this article first appeared on Medscape.com.
according to neuroscientists from the Massachusetts Institute of Technology.
What to know:
- An estimated 30% of all synapses in the brain’s cortex are silent and become active to allow the adult brain to continually form new memories and leave existing conventional synapses unmodified.
- Silent synapses are looking for new connections, and when important new information is presented, connections between the relevant neurons are strengthened to allow the brain to remember new things.
- Using the silent synapses for the new memories does not overwrite the important memories stored in more mature synapses, which are harder to change.
- The brain’s neurons display a wide range of plasticity mechanisms that account for how brains can efficiently learn new things and retain them in long-term memory.
- Flexibility of synapses is critical for acquiring new information, and stability is required to retain important information, enabling one to more easily adjust and change behaviors and habits or incorporate new information.
This is a summary of the article, “Filopodia Are a Structural Substrate for Silent Synapses in Adult Neocortex,” published in Nature Nov. 30, 2022. The full article can be found at nature.com.
A version of this article first appeared on Medscape.com.
Diabetes drug tied to lower dementia risk
new research suggests.
Overall, in a large cohort study from South Korea, patients who took pioglitazone were 16% less likely to develop dementia over an average of 10 years than peers who did not take the drug.
However, the dementia risk reduction was 54% among those with ischemic heart disease and 43% among those with a history of stroke.
“Our study was to see the association between pioglitazone use and incidence of dementia, not how (with what mechanisms) this drug can suppress dementia pathology,” coinvestigator Eosu Kim, MD, PhD, Yonsei University, Seoul, South Korea, said in an interview.
However, “as we found this drug is more effective in diabetic patients who have blood circulation problems in the heart or brain than in those without such problems, we speculate that pioglitazone’s antidementia action may be related to improving blood vessel’s health,” Dr. Kim said.
This finding suggests that pioglitazone could be used as a personalized treatment approach for dementia prevention in this subgroup of patients with diabetes, the researchers noted.
The results were published online in Neurology.
Dose-response relationship
Risk for dementia is doubled in adults with type 2 diabetes mellitus (T2DM), the investigators wrote. Prior studies have suggested that pioglitazone may protect against dementia, as well as a first or recurrent stroke, in patients with T2DM.
This led Dr. Kim and colleagues to examine the effects of pioglitazone on dementia risk overall and in relation to stroke and ischemic heart disease.
Using the national Korean health database, the researchers identified 91,218 adults aged 50 and older with new-onset T2DM who did not have dementia. A total of 3,467 were treated with pioglitazone.
Pioglitazone exposure was defined as a total cumulative daily dose of 90 or more calculated from all dispensations during 4 years after T2DM diagnosis, with outcomes assessed after this period.
Over an average of 10 years, 8.3% of pioglitazone users developed dementia, compared with 10.0% of nonusers.
There was a statistically significant 16% lower risk for developing all-cause dementia among pioglitazone users than among nonusers (adjusted hazard ratio, 0.84; 95% confidence interval, 0.75-0.95).
A dose-response relationship was evident; pioglitazone users who received the highest cumulative daily dose were at lower risk for dementia (aHR, 0.72; 95% CI, 0.55-0.94).
Several limitations
The reduced risk for dementia was more pronounced among patients who used pioglitazone for 4 years in comparison with patients who did not use the drug (aHR, 0.63; 95% CI, 0.44-0.90).
The apparent protective effect of pioglitazone with regard to dementia was greater among those with a history of ischemic heart disease (aHR, 0.46; 95% CI, 0.24-0.90) or stroke (aHR, 0.57; 95% CI, 0.38-0.86) before diabetes diagnosis.
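For readers checking the arithmetic, the headline percentages in this article follow from the adjusted hazard ratios by simple subtraction: percent risk reduction = (1 − aHR) × 100. A minimal sketch in Python (the function name is ours, for illustration only):

```python
# Convert an adjusted hazard ratio (aHR) below 1 into the percent
# risk reduction quoted in the text (e.g., aHR 0.84 -> 16%).
def hr_to_pct_reduction(ahr: float) -> int:
    return round((1 - ahr) * 100)

# aHRs reported in the study: overall dementia risk, plus the
# ischemic heart disease and prior-stroke subgroups.
for label, ahr in [("overall", 0.84),
                   ("ischemic heart disease", 0.46),
                   ("prior stroke", 0.57)]:
    print(f"{label}: aHR {ahr} -> {hr_to_pct_reduction(ahr)}% lower risk")
```

The same arithmetic yields 28% for the highest-cumulative-dose aHR of 0.72; note this conversion describes relative, not absolute, risk.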
The incidence of stroke was also reduced with pioglitazone use (aHR, 0.81; 95% CI, 0.66-1.0).
“These results provide valuable information on who could potentially benefit from pioglitazone use for prevention of dementia,” Dr. Kim said in a news release.
However, “the risk and benefit balance of long-term use of this drug to prevent dementia should be prospectively assessed,” he said in an interview.
The researchers cautioned that the study was observational; hence, the reported associations cannot address causal relationships. Also, because of the use of claims data, drug compliance could not be guaranteed, and exposure may have been overestimated.
There is also the potential for selection bias, and no information on apolipoprotein E was available, they noted.
More data needed
In an accompanying editorial, Colleen J. Maxwell, PhD, University of Waterloo (Ont.), and colleagues wrote that the results “not only support previous studies showing the potential cognitive benefit of pioglitazone but also extend our understanding of this benefit through the mediating effect of reducing ischemic stroke.”
However, because of their associated risks, which include fractures, weight gain, heart failure, and bladder cancer, thiazolidinediones are not currently favored in diabetes management guidelines – and their use has significantly declined since the mid to late 2000s, the editorialists noted.
They agreed that it will be important to reassess the risk-benefit profile of pioglitazone in T2DM as additional findings emerge.
They also noted that sodium-glucose cotransporter-2 inhibitors, which have significant cardiovascular and renal benefits and minimal side effects, may also lower the risk for dementia.
“As both pioglitazone and SGLT-2 inhibitors are second-line options for physicians, the current decision would easily be in favor of SGLT-2 inhibitors given their safety profile,” Dr. Maxwell and colleagues wrote.
For now, pioglitazone “should not be used to prevent dementia in patients with T2DM,” they concluded.
The study was supported by grants from the National Research Foundation of Korea funded by the Korean government and the Ministry of Health and Welfare. The investigators and editorialists report no relevant financial relationships.
A version of this article originally appeared on Medscape.com.
FROM NEUROLOGY
Immunodeficiencies tied to psychiatric disorders in offspring
new research suggests.
Results from a cohort study of more than 4.2 million individuals showed that offspring of mothers with primary immunodeficiencies (PIDs) had a 17% increased risk for a psychiatric disorder and a 20% increased risk for suicidal behavior, compared with their peers with mothers who did not have PIDs.
The risk was more pronounced in offspring of mothers with both PIDs and autoimmune diseases. These risks remained after strictly controlling for different covariates, such as the parents’ psychiatric history, offspring PIDs, and offspring autoimmune diseases.
The investigators, led by Josef Isung, MD, PhD, Centre for Psychiatry Research, department of clinical neuroscience, Karolinska Institutet, Stockholm, noted that they could not “pinpoint a precise causal mechanism” underlying these findings.
Still, “the results add to the existing literature suggesting that the intrauterine immune environment may have implications for fetal neurodevelopment and that a compromised maternal immune system during pregnancy may be a risk factor for psychiatric disorders and suicidal behavior in their offspring in the long term,” they wrote.
The findings were published online in JAMA Psychiatry.
‘Natural experiment’
Maternal immune activation (MIA) is “an overarching term for aberrant and disrupted immune activity in the mother during gestation [and] has long been of interest in relation to adverse health outcomes in the offspring,” Dr. Isung noted.
“In relation to negative psychiatric outcomes, there is an abundance of preclinical evidence that has shown a negative impact on offspring secondary to MIA. And in humans, there are several observational studies supporting this link,” he said in an interview.
Dr. Isung added that PIDs are “rare conditions” known to be associated with repeated infections and high rates of autoimmune diseases, causing substantial disability.
“PIDs represent an interesting ‘natural experiment’ for researchers to understand more about the association between immune system dysfunctions and mental health,” he said.
Dr. Isung’s group previously showed that individuals with PIDs have increased odds of psychiatric disorders and suicidal behavior. The link was more pronounced in women with PIDs – and was even more pronounced in those with both PIDs and autoimmune diseases.
In the current study, “we wanted to see whether offspring of individuals were differentially at risk of psychiatric disorders and suicidal behavior, depending on being offspring of mothers or fathers with PIDs,” Dr. Isung said.
“Our hypothesis was that mothers with PIDs would have an increased risk of having offspring with neuropsychiatric outcomes, and that this risk could be due to MIA,” he added.
The researchers turned to Swedish nationwide health and administrative registers. They analyzed data on all individuals with diagnoses of PIDs identified between 1973 and 2013. Offspring born prior to 2003 were included, and parent-offspring pairs in which both parents had a history of PIDs were excluded.
The final study sample consisted of 4,294,169 offspring (51.4% boys). Of these participants, 7,270 (0.17%) had a parent with PIDs.
The researchers identified lifetime records of 10 psychiatric disorders: obsessive-compulsive disorder, ADHD, autism spectrum disorders, schizophrenia and other psychotic disorders, bipolar disorders, major depressive disorder and other mood disorders, anxiety and stress-related disorders, eating disorders, substance use disorders, and Tourette syndrome and chronic tic disorders.
The investigators included parental birth year, psychopathology, suicide attempts, suicide deaths, and autoimmune diseases as covariates, as well as the offspring's birth year and gender.
Elucidation needed
Results showed that, of the 4,676 offspring of mothers with PIDs, 17.1% had a psychiatric disorder versus 12.7% of offspring of mothers without PIDs. This translated “into a 17% increased risk for offspring of mothers with PIDs in the fully adjusted model,” the investigators reported.
The increased risk for offspring of mothers with PIDs extended to six of the 10 individual psychiatric disorders, with incidence rate ratios ranging from 1.15 to 1.71.
“In fully adjusted models, offspring of mothers with PIDs had an increased risk of any psychiatric disorder, while no such risks were observed in offspring of fathers with PIDs” (IRR, 1.17 vs. 1.03; P < .001), the researchers reported.
A higher risk for suicidal behavior was also observed among offspring of mothers with PIDs than among offspring of fathers with PIDs (IRR, 1.2 vs. 1.1; P = .01).
The greatest risk for any psychiatric disorder, as well as suicidal behavior, was found in offspring of mothers who had both PIDs and autoimmune diseases (IRRs, 1.24 and 1.44, respectively).
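The IRRs reported here convert to percent excess risk the same way hazard ratios convert to percent reductions: an IRR of 1.17 corresponds to a 17% increased risk. A minimal sketch; the function names and the example event counts below are hypothetical, for illustration only (the study's own IRRs were covariate-adjusted, not crude):

```python
# Crude incidence rate ratio (IRR): the event rate per unit of
# person-time in the exposed group divided by the rate in the
# unexposed group.
def incidence_rate_ratio(events_exp, pyears_exp, events_unexp, pyears_unexp):
    return (events_exp / pyears_exp) / (events_unexp / pyears_unexp)

# Percent increased risk implied by an IRR above 1.
def irr_to_pct_increase(irr: float) -> int:
    return round((irr - 1) * 100)

# IRRs reported in the study: any psychiatric disorder (1.17),
# suicidal behavior (1.2), PIDs plus autoimmune disease (1.24, 1.44).
print([irr_to_pct_increase(x) for x in (1.17, 1.2, 1.24, 1.44)])

# Hypothetical counts (NOT from the study) showing the crude calculation:
crude = incidence_rate_ratio(100, 10_000, 850, 100_000)
print(round(crude, 2))
```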
“The results could be seen as substantiating the hypothesis that immune disruption may be important in the pathophysiology of psychiatric disorders and suicidal behavior,” Dr. Isung said.
“Furthermore, the fact that only offspring of mothers and not offspring of fathers with PIDs had this association would align with our hypothesis that MIA is of importance,” he added.
However, he noted that “the specific mechanisms are most likely multifactorial and remain to be elucidated.”
Important piece of the puzzle?
In a comment, Michael Eriksen Benros, MD, PhD, professor of immunopsychiatry, department of immunology and microbiology, Faculty of Health and Medical Sciences, University of Copenhagen, said this was a “high-quality study” that used a “rich data source.”
Dr. Benros, who is also head of research (biological and precision psychiatry) at the Copenhagen Research Centre for Mental Health, Copenhagen University Hospital, was not involved with the current study.
He noted that prior studies, including some conducted by his own group, have shown that maternal infections overall did not seem to be “specifically linked to mental disorders in the offspring.”
However, “specific maternal infections or specific brain-reactive antibodies during the pregnancy period have been shown to be associated with neurodevelopmental outcomes among the children,” such as intellectual disability, he said.
Regarding direct clinical implications of the study, “it is important to note that the increased risks of psychiatric disorders and suicidality in the offspring of mothers with PIDs were small,” Dr. Benros said.
“However, it adds an important part to the scientific puzzle regarding the role of maternal immune activation during pregnancy and the risk of mental disorders,” he added.
The study was funded by the Söderström König Foundation and the Fredrik and Ingrid Thuring Foundation. Dr. Isung and Dr. Benros reported no relevant financial relationships.
A version of this article originally appeared on Medscape.com.
new research suggests.
Results from a cohort study of more than 4.2 million individuals showed that offspring of mothers with PIDs had a 17% increased risk for a psychiatric disorder and a 20% increased risk for suicidal behavior, compared with their peers with mothers who did not have PIDs.
The risk was more pronounced in offspring of mothers with both PIDs and autoimmune diseases. These risks remained after strictly controlling for different covariates, such as the parents’ psychiatric history, offspring PIDs, and offspring autoimmune diseases.
The investigators, led by Josef Isung, MD, PhD, Centre for Psychiatry Research, department of clinical neuroscience, Karolinska Institutet, Stockholm, noted that they could not “pinpoint a precise causal mechanism” underlying these findings.
Still, “the results add to the existing literature suggesting that the intrauterine immune environment may have implications for fetal neurodevelopment and that a compromised maternal immune system during pregnancy may be a risk factor for psychiatric disorders and suicidal behavior in their offspring in the long term,” they wrote.
The findings were published online in JAMA Psychiatry.
‘Natural experiment’
Maternal immune activation (MIA) is “an overarching term for aberrant and disrupted immune activity in the mother during gestation [and] has long been of interest in relation to adverse health outcomes in the offspring,” Dr. Isung noted.
“In relation to negative psychiatric outcomes, there is an abundance of preclinical evidence that has shown a negative impact on offspring secondary to MIA. And in humans, there are several observational studies supporting this link,” he said in an interview.
Dr. Isung added that PIDs are “rare conditions” known to be associated with repeated infections and high rates of autoimmune diseases, causing substantial disability.
“PIDs represent an interesting ‘natural experiment’ for researchers to understand more about the association between immune system dysfunctions and mental health,” he said.
Dr. Isung’s group previously showed that individuals with PIDs have increased odds of psychiatric disorders and suicidal behavior. The link was more pronounced in women with PIDs – and was even more pronounced in those with both PIDs and autoimmune diseases.
In the current study, “we wanted to see whether offspring of individuals were differentially at risk of psychiatric disorders and suicidal behavior, depending on being offspring of mothers or fathers with PIDs,” Dr. Isung said.
“Our hypothesis was that mothers with PIDs would have an increased risk of having offspring with neuropsychiatric outcomes, and that this risk could be due to MIA,” he added.
The researchers turned to Swedish nationwide health and administrative registers. They analyzed data on all individuals with diagnoses of PIDs identified between 1973 and 2013. Offspring born prior to 2003 were included, and parent-offspring pairs in which both parents had a history of PIDs were excluded.
The final study sample consisted of 4,294,169 offspring (51.4% boys). Of these participants, 7,270 (0.17%) had a parent with PIDs.
The researchers identified lifetime records of 10 psychiatric disorders: obsessive-compulsive disorder, ADHD, autism spectrum disorders, schizophrenia and other psychotic disorders, bipolar disorders, major depressive disorder and other mood disorders, anxiety and stress-related disorders, eating disorders, substance use disorders, and Tourette syndrome and chronic tic disorders.
The investigators included parental birth year, psychopathology, suicide attempts, suicide deaths, and autoimmune diseases as covariates, as well as offsprings’ birth year and gender.
Elucidation needed
Results showed that, of the 4,676 offspring of mothers with PID, 17.1% had a psychiatric disorder versus 12.7% of offspring of mothers without PIDs. This translated “into a 17% increased risk for offspring of mothers with PIDs in the fully adjusted model,” the investigators reported.
The risk was even higher for offspring of mothers who had not only PIDs but also one of six of the individual psychiatric disorders, with incident rate ratios ranging from 1.15 to 1.71.
“In fully adjusted models, offspring of mothers with PIDs had an increased risk of any psychiatric disorder, while no such risks were observed in offspring of fathers with PIDs” (IRR, 1.17 vs. 1.03; P < .001), the researchers reported.
A higher risk for suicidal behavior was also observed among offspring of mothers with PIDS, in contrast to those of fathers with PIDs (IRR, 1.2 vs. 1.1; P = .01).
The greatest risk for any psychiatric disorder, as well as suicidal behavior, was found in offspring of mothers who had both PIDs and autoimmune diseases (IRRs, 1.24 and 1.44, respectively).
“The results could be seen as substantiating the hypothesis that immune disruption may be important in the pathophysiology of psychiatric disorders and suicidal behavior,” Dr. Isung said.
“Furthermore, the fact that only offspring of mothers and not offspring of fathers with PIDs had this association would align with our hypothesis that MIA is of importance,” he added.
However, he noted that “the specific mechanisms are most likely multifactorial and remain to be elucidated.”
Important piece of the puzzle?
In a comment, Michael Eriksen Benros, MD, PhD, professor of immunopsychiatry, department of immunology and microbiology, health, and medical sciences, University of Copenhagen, said this was a “high-quality study” that used a “rich data source.”
Dr. Benros, who is also head of research (biological and precision psychiatry) at the Copenhagen Research Centre for Mental Health, Copenhagen University Hospital, was not involved with the current study.
He noted that prior studies, including some conducted by his own group, have shown that maternal infections overall did not seem to be “specifically linked to mental disorders in the offspring.”
However, “specific maternal infections or specific brain-reactive antibodies during the pregnancy period have been shown to be associated with neurodevelopmental outcomes among the children,” such as intellectual disability, he said.
Regarding direct clinical implications of the study, “it is important to note that the increased risk of psychiatric disorders and suicidality in the offspring of mothers with PID were small,” Dr. Benros said.
“However, it adds an important part to the scientific puzzle regarding the role of maternal immune activation during pregnancy and the risk of mental disorders,” he added.
The study was funded by the Söderström König Foundation and the Fredrik and Ingrid Thuring Foundation. Neither Dr. Isung nor Dr. Benros reported no relevant financial relationships.
A version of this article originally appeared on Medscape.com.
new research suggests.
Results from a cohort study of more than 4.2 million individuals showed that offspring of mothers with PIDs had a 17% increased risk for a psychiatric disorder and a 20% increased risk for suicidal behavior, compared with their peers with mothers who did not have PIDs.
The risk was more pronounced in offspring of mothers with both PIDs and autoimmune diseases. These risks remained after strictly controlling for different covariates, such as the parents’ psychiatric history, offspring PIDs, and offspring autoimmune diseases.
The investigators, led by Josef Isung, MD, PhD, Centre for Psychiatry Research, department of clinical neuroscience, Karolinska Institutet, Stockholm, noted that they could not “pinpoint a precise causal mechanism” underlying these findings.
Still, “the results add to the existing literature suggesting that the intrauterine immune environment may have implications for fetal neurodevelopment and that a compromised maternal immune system during pregnancy may be a risk factor for psychiatric disorders and suicidal behavior in their offspring in the long term,” they wrote.
The findings were published online in JAMA Psychiatry.
‘Natural experiment’
Maternal immune activation (MIA) is “an overarching term for aberrant and disrupted immune activity in the mother during gestation [and] has long been of interest in relation to adverse health outcomes in the offspring,” Dr. Isung noted.
“In relation to negative psychiatric outcomes, there is an abundance of preclinical evidence that has shown a negative impact on offspring secondary to MIA. And in humans, there are several observational studies supporting this link,” he said in an interview.
Dr. Isung added that PIDs are “rare conditions” known to be associated with repeated infections and high rates of autoimmune diseases, causing substantial disability.
“PIDs represent an interesting ‘natural experiment’ for researchers to understand more about the association between immune system dysfunctions and mental health,” he said.
Dr. Isung’s group previously showed that individuals with PIDs have increased odds of psychiatric disorders and suicidal behavior. The link was more pronounced in women with PIDs – and was even more pronounced in those with both PIDs and autoimmune diseases.
In the current study, “we wanted to see whether offspring of individuals were differentially at risk of psychiatric disorders and suicidal behavior, depending on being offspring of mothers or fathers with PIDs,” Dr. Isung said.
“Our hypothesis was that mothers with PIDs would have an increased risk of having offspring with neuropsychiatric outcomes, and that this risk could be due to MIA,” he added.
The researchers turned to Swedish nationwide health and administrative registers. They analyzed data on all individuals with diagnoses of PIDs identified between 1973 and 2013. Offspring born prior to 2003 were included, and parent-offspring pairs in which both parents had a history of PIDs were excluded.
The final study sample consisted of 4,294,169 offspring (51.4% boys). Of these participants, 7,270 (0.17%) had a parent with PIDs.
The researchers identified lifetime records of 10 psychiatric disorders: obsessive-compulsive disorder, ADHD, autism spectrum disorders, schizophrenia and other psychotic disorders, bipolar disorders, major depressive disorder and other mood disorders, anxiety and stress-related disorders, eating disorders, substance use disorders, and Tourette syndrome and chronic tic disorders.
The investigators included parental birth year, psychopathology, suicide attempts, suicide deaths, and autoimmune diseases as covariates, as well as offspring’s birth year and gender.
Elucidation needed
Results showed that, of the 4,676 offspring of mothers with PIDs, 17.1% had a psychiatric disorder versus 12.7% of offspring of mothers without PIDs. This translated “into a 17% increased risk for offspring of mothers with PIDs in the fully adjusted model,” the investigators reported.
The risk was also elevated for six of the individual psychiatric disorders, with incidence rate ratios (IRRs) ranging from 1.15 to 1.71.
“In fully adjusted models, offspring of mothers with PIDs had an increased risk of any psychiatric disorder, while no such risks were observed in offspring of fathers with PIDs” (IRR, 1.17 vs. 1.03; P < .001), the researchers reported.
A higher risk for suicidal behavior was also observed among offspring of mothers with PIDs than among offspring of fathers with PIDs (IRR, 1.2 vs. 1.1; P = .01).
The greatest risk for any psychiatric disorder, as well as suicidal behavior, was found in offspring of mothers who had both PIDs and autoimmune diseases (IRRs, 1.24 and 1.44, respectively).
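The incidence rate ratios reported above compare the rate of new psychiatric diagnoses between exposed and unexposed offspring over follow-up time. As a minimal illustration of how such a ratio is computed (the numbers below are made up for the example, not the study’s data):

```python
def incidence_rate_ratio(events_exposed: int, person_years_exposed: float,
                         events_unexposed: int, person_years_unexposed: float) -> float:
    """Ratio of two incidence rates: (events / person-time) in the exposed
    group divided by the same rate in the unexposed group."""
    rate_exposed = events_exposed / person_years_exposed
    rate_unexposed = events_unexposed / person_years_unexposed
    return rate_exposed / rate_unexposed

# Illustrative values only -- not the study's data.
irr = incidence_rate_ratio(120, 10_000, 100, 10_000)
print(f"IRR = {irr:.2f}")  # 120/10,000 py vs. 100/10,000 py -> 1.20
```

Registry studies such as this one estimate adjusted IRRs with regression models (accounting for covariates like parental birth year and psychopathology), but the underlying quantity is this rate ratio.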
“The results could be seen as substantiating the hypothesis that immune disruption may be important in the pathophysiology of psychiatric disorders and suicidal behavior,” Dr. Isung said.
“Furthermore, the fact that only offspring of mothers and not offspring of fathers with PIDs had this association would align with our hypothesis that MIA is of importance,” he added.
However, he noted that “the specific mechanisms are most likely multifactorial and remain to be elucidated.”
Important piece of the puzzle?
In a comment, Michael Eriksen Benros, MD, PhD, professor of immunopsychiatry, department of immunology and microbiology, faculty of health and medical sciences, University of Copenhagen, said this was a “high-quality study” that used a “rich data source.”
Dr. Benros, who is also head of research (biological and precision psychiatry) at the Copenhagen Research Centre for Mental Health, Copenhagen University Hospital, was not involved with the current study.
He noted that prior studies, including some conducted by his own group, have shown that maternal infections overall did not seem to be “specifically linked to mental disorders in the offspring.”
However, “specific maternal infections or specific brain-reactive antibodies during the pregnancy period have been shown to be associated with neurodevelopmental outcomes among the children,” such as intellectual disability, he said.
Regarding direct clinical implications of the study, “it is important to note that the increased risk of psychiatric disorders and suicidality in the offspring of mothers with PID were small,” Dr. Benros said.
“However, it adds an important part to the scientific puzzle regarding the role of maternal immune activation during pregnancy and the risk of mental disorders,” he added.
The study was funded by the Söderström König Foundation and the Fredrik and Ingrid Thuring Foundation. Dr. Isung and Dr. Benros have reported no relevant financial relationships.
A version of this article originally appeared on Medscape.com.
FROM JAMA PSYCHIATRY
Be aware of hepatic encephalopathy, dementia overlap in older patients with cirrhosis
Hepatic encephalopathy (HE) and dementia frequently overlap in older patients with cirrhosis, according to a new study involving U.S. veterans.
The overlap between dementia and HE was also independent of alcohol use, brain injury, age, and other metabolic risk factors.
“The aging of patients with cirrhosis leads us to encounter several individuals who may be prone to both of these diseases,” senior author Jasmohan Bajaj, MD, a professor of gastroenterology, hepatology, and nutrition at Virginia Commonwealth University Medical Center and GI section of the Central Virginia Veterans Healthcare System in Richmond, said in an interview.
“Given the epidemic of metabolic syndrome and alcohol, consider excluding cirrhosis in your patient [for] whom the presumptive diagnosis is dementia, since they could have concomitant HE,” he said.
“On the flip side, in those with HE who have predominant long-term memory issues and persistent cognitive changes, consider consulting a neuropsychiatrist or neurologist to ensure there is a resolution of the underlying disease process,” Dr. Bajaj added.
The study was published online in The American Journal of Gastroenterology.
Analyzing associations
HE is a common decompensating event in patients with cirrhosis. Because of the aging population of patients with cirrhosis, however, it’s important to differentiate HE from nonhepatic etiologies of cognitive impairment, such as dementia, the authors note.
Using data from the VA Corporate Data Warehouse, Dr. Bajaj and colleagues identified veterans with cirrhosis who received VA care between October 2019 and September 2021 and compared baseline characteristics between the cohorts based on the presence or absence of dementia. The research team then evaluated factors associated with having a diagnosis of dementia, adjusting for demographics, comorbid illnesses, cirrhosis etiology, and cirrhosis complications.
Investigators identified 71,522 veterans with diagnostic codes for cirrhosis who were engaged in VA care in 2019. They were mostly men (96.2%), with a median age of 66 years. The most common etiologies of cirrhosis were alcohol and hepatitis C, followed by nonalcoholic steatohepatitis (NASH). Most veterans had compensated cirrhosis, with a median MELD-Na score of 9. The MELD-Na score gauges the severity of chronic liver disease, using serum bilirubin, serum creatinine, the international normalized ratio (INR) for prothrombin time, and serum sodium to predict survival.
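For readers unfamiliar with the score, the widely used MELD-Na formula (per the OPTN definition) can be sketched as follows; this is a simplified illustration for understanding the inputs, not a clinical tool:

```python
import math

def meld_na(bilirubin_mg_dl: float, creatinine_mg_dl: float,
            inr: float, sodium_meq_l: float) -> int:
    """Sketch of the MELD-Na score (OPTN-style). Illustrative only --
    real allocation rules include additional adjustments (e.g., for dialysis)."""
    # Lab values are floored at 1.0; creatinine is capped at 4.0.
    bili = max(bilirubin_mg_dl, 1.0)
    cr = min(max(creatinine_mg_dl, 1.0), 4.0)
    inr = max(inr, 1.0)
    meld = 10 * (0.957 * math.log(cr) + 0.378 * math.log(bili)
                 + 1.120 * math.log(inr) + 0.643)
    meld = round(meld, 1)
    # Sodium is bounded to the 125-137 mEq/L range; the sodium
    # adjustment applies only when MELD exceeds 11.
    na = min(max(sodium_meq_l, 125.0), 137.0)
    if meld > 11:
        meld = meld + 1.32 * (137 - na) - 0.033 * meld * (137 - na)
    return round(meld)

print(meld_na(1.0, 1.0, 1.0, 137))  # all labs normal -> minimum score of 6
```

A median score of 9 in this cohort therefore reflects near-normal labs, consistent with predominantly compensated cirrhosis.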
Among those with cirrhosis, 5,647 (7.9%) also had dementia diagnosis codes. This rate is higher than the prevalence of dementia in the general population and equivalent to the rate of dementia in veterans without cirrhosis who are older than 65, the authors note.
In general, veterans with dementia tended to be older, to be White, to live in an urban area, and to have higher MELD-Na scores, and they were more frequently diagnosed with alcohol-related cirrhosis, alcohol and tobacco use disorder, diabetes, chronic kidney disease, chronic heart failure, brain trauma, and cerebrovascular disease.
In a multivariable analysis, the presence of any decompensating event was significantly associated with dementia. In subsequent analyses of individual decompensating events, however, the strongest association was with HE, while ascites or variceal bleeding did not add to the risk.
When HE was instead defined by filled prescriptions for lactulose or rifaximin, the frequency of HE decreased from 13.7% to 10.9%. With this stricter, prescription-based definition, the association between HE and dementia remained significant, as it did when HE was defined by diagnostic codes alone.
“We were surprised by the high proportion of patients with dementia who also had cirrhosis, and given the genuine difficulty that clinicians have with defining HE vs. dementia, we were not very surprised at that overlap,” Dr. Bajaj said.
“We were also surprised at the specificity of this overlap only with HE and not with other decompensating events, which was also independent of head injury, alcohol use, and PTSD,” he added.
Additional research needed
Future research should look at the characteristics of HE, including the number of episodes or breakthrough episodes, and should focus on objective biomarkers to differentiate dementia and HE, the study authors write.
“The distinction and study of potential overlapping features among HE and dementia is important because HE is often treatable with medications and reverses after liver transplant, while this does not occur with dementia,” they add.
Dr. Bajaj and colleagues call for a greater awareness of disease processes and complications in older patients with cirrhosis, particularly since diagnostic imprecision can lead to patient and family confusion, distrust, and ineffective treatment.
The study will help physicians better understand the important overlap between dementia and HE, said Eric Orman, MD, an associate professor of medicine at Indiana University, Indianapolis.
Dr. Orman, who wasn’t involved with this study, has researched recent trends in the characteristics and outcomes of patients with newly diagnosed cirrhosis and has found that the proportion of older adults has increased, as well as those with alcoholic cirrhosis and NASH, which has implications for future patient care.
“It is important to recognize that both dementia and HE can occur either separately or concurrently in individuals with cirrhosis,” Dr. Orman told this news organization. “When seeing patients with cognitive impairment, having a high index of suspicion for both conditions is critical to ensure appropriate diagnosis and treatment.”
The study’s findings “represent the tip of the iceberg,” Neal Parikh, MD, an assistant professor of neurology and neuroscience at Weill Cornell Medicine in New York, said in an interview. “There is a tremendous amount left to be discovered regarding the role of the liver in brain health.”
Dr. Parikh, who wasn’t associated with this study, has researched the impact of chronic liver conditions on cognitive impairment and dementia. He is working on a project that addresses HE in detail.
“There is growing recognition of a so-called ‘liver-brain axis,’ with several researchers, including my group, showing that a range of chronic liver conditions may detrimentally impact cognitive function and increase the risk of dementia,” he said. “Studying the specific contributions of cirrhosis is critical for understanding the role of hepatic encephalopathy in age-related cognitive decline.”
The study received no financial support. The authors reported no potential competing interests.
A version of this article first appeared on Medscape.com.
FROM THE AMERICAN JOURNAL OF GASTROENTEROLOGY
We don’t lose our keys (or other things) as much as we think
Can’t find your keys? Misplaced your glasses? No clue where you parked your car?
We all lose things from time to time. And we’ve all heard the standard-issue advice: Picture when you had the object last. Despite this common experience, a new study suggests our memory for where and when we last saw objects is far better than we give ourselves credit for.
“It is well known that we have massive recognition memory for objects,” says study coauthor Jeremy Wolfe, PhD, a professor of ophthalmology and radiology at Harvard Medical School, Boston. In other words, we’re good at recognizing objects we’ve seen before. “For example, after viewing 100 objects for 2-3 seconds each, observers can discriminate those 100 old images from 100 new ones with well over 80% accuracy.”
But remembering what your keys look like won’t necessarily help you find them. “We often want to know when and where we saw [an object],” Dr. Wolfe says. “So our goal was to measure these spatial and temporal memories.”
In a series of experiments, reported in Current Biology, Wolfe and colleagues asked people in the study to remember objects placed on a grid. They viewed 300 objects (pictures of things like a vase, a wedding dress, camo pants, a wet suit) and were asked to recall each one and where it had been located on the grid.
About a third of the people remembered 100 or more locations by choosing either the correct square on the grid or one directly next to it. Another third remembered between 50 and 100, and the rest remembered fewer than 50.
Results would likely be even better in the real world “because no one gives up and decides ‘I can’t remember where anything is. I will just guess in this silly experiment,’ ” Dr. Wolfe says.
Later, they were shown items one at a time and asked to click on a time line to indicate when they had seen them. Between 60% and 80% of the time, they identified when they had seen an object within 10% of the correct time. That’s a lot better than the 40% they would have achieved by guessing.
The findings build on previous research and expand our understanding of memory, Dr. Wolfe says. “We knew that people could remember where some things were located. However, no one had tried to quantify that memory,” he says.
But wait: If we’re so good at remembering the where and when, why do we struggle to locate lost objects so much? Chances are, we don’t. We just feel that way because we tend to focus on the fails and overlook the many wins.
“This [study] is showing us something about how we come to know where hundreds of things are in our world,” Dr. Wolfe says. “We tend to notice when this fails – ‘where are my keys?’ – but on a normal day, you are successfully tapping a massive memory on a regular basis.”
Next, the researchers plan to investigate whether spatial and temporal memories are correlated – if you’re good at one, are you good at the other? So far, “that correlation looks rather weak,” Dr. Wolfe says.
A version of this article first appeared on WebMD.com.
FROM CURRENT BIOLOGY
Remote electrical neuromodulation device helps reduce migraine days
A remote electrical neuromodulation (REN) device reduced the number of migraine days by an average of 4.0 days per month, according to recent research published in the journal Headache.
The prospective, randomized, double-blind, placebo-controlled, multicenter trial of REN with Nerivio (Theranica Bio-Electronics Ltd.; Bridgewater, N.J.) was reported by Stewart J. Tepper, MD, of the Geisel School of Medicine at Dartmouth in Hanover, N.H., and colleagues.
“The statistically significant results were maintained in separate subanalyses of the chronic and episodic subsamples, as well as in the separate subanalyses of participants who used and did not use migraine prophylaxis,” Dr. Tepper and colleagues wrote.
A nonpharmacological alternative
Researchers randomized 248 participants into active and placebo groups, with 95 participants in the active group and 84 participants in the placebo group meeting the criteria for a modified intention-to-treat (mITT) analysis. Most of the participants in the mITT dataset were women (85.9%), with an average age of 41.7 years and a baseline average of 12.2 migraine days and 15.6 headache days. Overall, 52.4% of participants in the mITT dataset had chronic migraine, 25.0% had migraine with aura, and 41.1% were taking preventative medication.
Dr. Tepper and colleagues followed participants for 4 weeks at baseline for observation followed by 8 weeks of participants using the REN device every other day for 45 minutes, or a placebo device that “produces electrical pulses of the same maximum intensity (34 mA) and overall energy, but with different pulse durations and much lower frequencies compared with the active device.” Participants completed a daily diary where they recorded their symptoms.
Researchers assessed the mean change in number of migraine days per month as a primary outcome, and evaluated participants who experienced episodic and chronic migraines separately in subgroup analyses. Secondary outcome measures included mean change in number of moderate or severe headache days, 50% reduction in mean number of headache days compared with baseline, Headache Impact Test short form (HIT-6) and Migraine Specific Quality of Life Questionnaire (MSQ) Role Function Domain total score mean change at 12 weeks compared with week 1, and reduction in mean number of days taking acute headache or migraine medication.
Participants receiving REN treatment had a significant reduction in mean migraine days per month compared with the placebo group (4.0 days vs. 1.3 days; 95% confidence interval, –3.9 days to –1.5 days; P < .001). In subgroup analyses, a significant reduction in migraine days was seen in participants receiving REN treatment with episodic migraine (3.2 days vs. 1.0 days; P = .003) and chronic migraine (4.7 days vs. 1.6 days; P = .001) compared with placebo.
Dr. Tepper and colleagues found a significant reduction in moderate and/or severe headache days among participants receiving REN treatment compared with placebo (3.8 days vs. 2.2 days; P = .005), a significant reduction in headache days overall compared with placebo (4.5 days vs. 1.8 days; P < .001), a significant percentage of patients who experienced 50% reduction in moderate and/or severe headache days compared with placebo (51.6% vs. 35.7%; P = .033), and a significant reduction in acute medication days compared with placebo (3.5 days vs. 1.4 days; P = .001). Dr. Tepper and colleagues found no serious device-related adverse events in either group.
The researchers noted that REN therapy is a “much-needed nonpharmacological alternative” to other preventive and acute treatments for migraine. “Given the previously well-established clinical efficacy and high safety profile in acute treatment of migraine, REN can cover the entire treatment spectrum of migraine, including both acute and preventive treatments,” they said.
‘A good place to start’
Commenting on the study, Alan M. Rapoport, MD, clinical professor of neurology at University of California, Los Angeles; past president of the International Headache Society; and editor-in-chief of Neurology Reviews, said the study was well designed, but acknowledged the 8-week follow-up time for participants as one potential area where he would have wanted to see more data.
As a medical device cleared for use by the Food and Drug Administration for acute treatment of migraine, the REN device also appears to be effective as a migraine preventative based on the results of the study, with “virtually no adverse events,” he noted.
“I think this is a great treatment. I think it’s a good place to start,” Dr. Rapoport said. Given the low adverse event rate, he said he would be willing to offer the device to patients as a first option for preventing migraine and either switch to another preventative option or add an additional medication in combination based on how the patient responds. However, at the moment, he noted that this device is not covered by insurance.
Now that a REN device has been shown to work in the acute setting and as a preventative, Dr. Rapoport said he is interested in seeing other devices that have been cleared by the FDA as migraine treatments evaluated in migraine prevention. “I think we need more patients tried on the devices so we get an idea of which ones work acutely, which ones work preventively,” he said.
The authors reported personal and institutional relationships in the form of advisory board positions, consultancies, grants, research principal investigator roles, royalties, speakers bureau positions, and stockholders for a variety of pharmaceutical companies, agencies, and other organizations. Several authors disclosed ties with Theranica, the manufacturer of the REN device used in the study. Dr. Rapoport is editor-in-chief of Neurology Reviews and a consultant for Theranica, but was not involved in studies associated with the REN device.
Correction, 2/10/23: An earlier version of this article misstated the reduction in number of migraine days.
FROM HEADACHE
Cognitive testing for older drivers: Is there a benefit?
Mandatory cognitive screening of older drivers at license renewal was followed by a decrease in motor vehicle collisions, according to results from a large population-based study using data from Japan.
But the same study, published in the Journal of the American Geriatrics Society, also reported a concurrent increase in pedestrian and cycling injuries, possibly because more older former drivers were getting around by alternative means. That finding echoed a 2012 study from Denmark, which also looked at the effects of an age-based cognitive screening policy for older drivers, and saw more fatal road injuries among older people who were not driving.
While some governments, including those of Denmark, Taiwan, and Japan, have implemented age-based cognitive screening for older drivers, there has been little evidence to date that such policies improve road safety. Guidelines issued in 2010 by the American Academy of Neurology discourage age-based screening, advising instead that people diagnosed with cognitive disorders be carefully evaluated for driving fitness and recommending one widely used scale, the Clinical Dementia Rating, as useful in identifying potentially unsafe drivers.
Japan’s national screening policy: Did it work?
The new study, led by Haruhiko Inada, MD, PhD, an epidemiologist at Johns Hopkins University in Baltimore, used national crash data from Japan, where since 2017 all drivers 75 and older not only must take cognitive tests measuring temporal orientation and memory at license renewal, but are also referred for medical evaluation if they fail them. People receiving a subsequent dementia diagnosis can have their licenses suspended or revoked.
Dr. Inada and his colleagues looked at national data from nearly 603,000 police-reported vehicle collisions and nearly 197,000 pedestrian or cyclist road injuries between March 2012 and December 2019, all involving people aged 70 and older. To assess the screening policy’s impact, the researchers calculated estimated monthly collision or injury incidence rates per 100,000 person-years. This way, they could “control for secular trends that were unaffected by the policy, such as the decreasing incidence of motor vehicle collisions year by year,” the researchers explained.
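For readers unfamiliar with person-time rates, the kind of calculation described above can be sketched as follows. This is an illustration only, with hypothetical numbers, not the study authors' code or data:

```python
def incidence_rate_per_100k_person_years(events, population, years):
    """Incidence rate per 100,000 person-years.

    events: number of collisions or injuries observed in the window
    population: number of people at risk during the window
    years: length of the observation window in years (one month = 1/12)
    """
    person_years = population * years
    return events / person_years * 100_000

# Hypothetical figures for illustration (not from the study):
# 500 collisions in one month among 6 million drivers aged 75+.
rate = incidence_rate_per_100k_person_years(500, 6_000_000, 1 / 12)
print(round(rate, 1))  # 100.0 collisions per 100,000 person-years
```

Expressing monthly counts this way puts months with different populations at risk on a common scale, which is what lets the researchers compare incidence before and after the policy change.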
After the screening was implemented, cumulative estimated collisions among drivers 75 or older decreased by 3,670 (95% confidence interval, 2,104-5,125), while reported pedestrian or cyclist injuries increased by an estimated 959 (95% CI, 24-1,834). Dr. Inada and colleagues found that crashes declined among men but not women, noting also that more older men than women are licensed to drive in Japan. Pedestrian and cyclist injuries were highest among men aged 80-84, and women aged 80 and older.
“Cognitively screening older drivers at license renewal and promoting voluntary surrender of licenses may prevent motor vehicle collisions,” Dr. Inada and his colleagues concluded. “However, they are associated with an increase in road injuries for older pedestrians and cyclists. Future studies should examine the effectiveness of mitigation measures, such as alternative, safe transportation, and accommodations for pedestrians and cyclists.”
No definitive answers
Two investigators who have studied cognitive screening related to road safety were contacted for commentary on the study findings.
Anu Siren, PhD, professor of gerontology at Tampere (Finland) University, who in 2012 reported higher injuries after implementation of older-driver cognitive screening in Denmark, commented that the new study, while benefiting from a much larger data set than earlier studies, still “fails to show that decrease in collisions is because ‘unfit’ drivers were removed from the road. But it does confirm previous findings about how strict screening policies make people shift from cars to unprotected modes of transportation,” which are riskier.
In studies measuring driving safety, the usual definition of risk is incidents per exposure, Dr. Siren noted. In Dr. Inada and colleagues’ study, “the incident measure, or numerator, is the number of collisions. The exposure measure or denominator is population. Because the study uses population and not driver licenses (or distance traveled) as an exposure measure, the observed decrease in collisions does not say much about how the collision risk develops after the implementation of screening.”
Older driver screening “is likely to cause some older persons to cease from driving and probably continue to travel as unprotected road users,” Dr. Siren continued. “Similar to what we found [in 2012], the injury rates for pedestrians and cyclists went up after the introduction of screening, which suggests that screening indirectly causes increasing number of injuries among older unprotected road users.”
Matthew Rizzo, MD, professor and chair of the department of neurological sciences at the University of Nebraska Medical Center and codirector of the Nebraska Neuroscience Alliance in Omaha, Neb., and the lead author of the 2010 AAN guidelines on cognitive impairment and driving risk, cautioned against ageism in designing policies meant to protect motorists.
“We find some erratic/weak effects of age here and there, but the big effects we consistently find are from cognitive and visual decline – which is somewhat correlated with age, but with huge variance,” Dr. Rizzo said. “It is hard to say what an optimal age threshold for risk would be, and if 75 is it.”
U.S. crash data from the last decade points to drivers 80 and older as significantly more accident-prone than those in their 70s, or even late 70s, Dr. Rizzo noted. Moreover, “willingness to get on the road, number of miles driven, type of road (urban, rural, highway, commercial, residential), type of vehicle driven, traffic, and environment (day, night, weather), et cetera, are all factors to consider in driving risk and restriction,” he said.
Dr. Rizzo added that the 2010 AAN guidelines might need to be revisited in light of newer vehicle safety systems and automation.
Dr. Inada and colleagues’ study was funded by Japanese government grants, and Dr. Inada and his coauthors reported no financial conflicts of interest. Dr. Siren and Dr. Rizzo reported no financial conflicts of interest.
, according to results from a large population-based study using data from Japan.
But the same study, published in the Journal of the American Geriatrics Society, also reported a concurrent increase in pedestrian and cycling injuries, possibly because more older former drivers were getting around by alternative means. That finding echoed a 2012 study from Denmark, which also looked at the effects of an age-based cognitive screening policy for older drivers, and saw more fatal road injuries among older people who were not driving.
While some governments, including those of Denmark, Taiwan, and Japan, have implemented age-based cognitive screening for older drivers, there has been little evidence to date that such policies improve road safety. Guidelines issued in 2010 by the American Academy of Neurology discourage age-based screening, advising instead that people diagnosed with cognitive disorders be carefully evaluated for driving fitness and recommending one widely used scale, the Clinical Dementia Rating, as useful in identifying potentially unsafe drivers.
Japan’s national screening policy: Did it work?
The new study, led by Haruhiko Inada, MD, PhD, an epidemiologist at Johns Hopkins University in Baltimore, used national crash data from Japan, where since 2017 all drivers 75 and older not only must take cognitive tests measuring temporal orientation and memory at license renewal, but are also referred for medical evaluation if they fail them. People receiving a subsequent dementia diagnosis can have their licenses suspended or revoked.
Dr. Inada and his colleagues looked at national data from nearly 603,000 police-reported vehicle collisions and nearly 197,000 pedestrian or cyclist road injuries between March 2012 and December 2019, all involving people aged 70 and older. To assess the screening policy’s impact, the researchers calculated estimated monthly collision or injury incidence rates per 100,000 person-years. This way, they could “control for secular trends that were unaffected by the policy, such as the decreasing incidence of motor vehicle collisions year by year,” the researchers explained.
After the screening was implemented, cumulative estimated collisions among drivers 75 or older decreased by 3,670 (95% confidence interval, 5,125-2,104), while reported pedestrian or cyclist injuries increased by an estimated 959 (95% CI, 24-1,834). Dr. Inada and colleagues found that crashes declined among men but not women, noting also that more older men than women are licensed to drive in Japan. Pedestrian and cyclist injuries were highest among men aged 80-84, and women aged 80 and older.
“Cognitively screening older drivers at license renewal and promoting voluntary surrender of licenses may prevent motor vehicle collisions,” Dr. Inada and his colleagues concluded. “However, they are associated with an increase in road injuries for older pedestrians and cyclists. Future studies should examine the effectiveness of mitigation measures, such as alternative, safe transportation, and accommodations for pedestrians and cyclists.”
No definitive answers
Two investigators who have studied cognitive screening related to road safety were contacted for commentary on the study findings.
Anu Siren, PhD, professor of gerontology at Tampere (Finland) University, who in 2012 reported higher injuries after implementation of older-driver cognitive screening in Denmark, commented that the new study, while benefiting from a much larger data set than earlier studies, still “fails to show that decrease in collisions is because ‘unfit’ drivers were removed from the road. But it does confirm previous findings about how strict screening policies make people shift from cars to unprotected modes of transportation,” which are riskier.
In studies measuring driving safety, the usual definition of risk is incidents per exposure, Dr. Siren noted. In Dr. Inada and colleagues’ study, “the incident measure, or numerator, is the number of collisions. The exposure measure or denominator is population. Because the study uses population and not driver licenses (or distance traveled) as an exposure measure, the observed decrease in collisions does not say much about how the collision risk develops after the implementation of screening.”
Older driver screening “is likely to cause some older persons to cease from driving and probably continue to travel as unprotected road users,” Dr. Siren continued. “Similar to what we found [in 2012], the injury rates for pedestrians and cyclists went up after the introduction of screening, which suggests that screening indirectly causes increasing number of injuries among older unprotected road users.”
Matthew Rizzo, MD, professor and chair of the department of neurological sciences at the University of Nebraska Medical Center and codirector of the Nebraska Neuroscience Alliance in Omaha, Neb., and the lead author of the 2010 AAN guidelines on cognitive impairment and driving risk, cautioned against ageism in designing policies meant to protect motorists.
“We find some erratic/weak effects of age here and there, but the big effects we consistently find are from cognitive and visual decline – which is somewhat correlated with age, but with huge variance,” Dr. Rizzo said. “It is hard to say what an optimal age threshold for risk would be, and if 75 is it.”
U.S. crash data from the last decade points to drivers 80 and older as significantly more accident-prone than those in their 70s, or even late 70s, Dr. Rizzo noted. Moreover, “willingness to get on the road, number of miles driven, type of road (urban, rural, highway, commercial, residential), type of vehicle driven, traffic, and environment (day, night, weather), et cetera, are all factors to consider in driving risk and restriction,” he said.
Dr. Rizzo added that the 2010 AAN guidelines might need to be revisited in light of newer vehicle safety systems and automation.
Dr. Inada and colleagues’ study was funded by Japanese government grants, and Dr. Inada and his coauthors reported no financial conflicts of interest. Dr. Siren and Dr. Rizzo reported no financial conflicts of interest.
, according to results from a large population-based study using data from Japan.
But the same study, published in the Journal of the American Geriatrics Society, also reported a concurrent increase in pedestrian and cycling injuries, possibly because more older former drivers were getting around by alternative means. That finding echoed a 2012 study from Denmark, which also looked at the effects of an age-based cognitive screening policy for older drivers, and saw more fatal road injuries among older people who were not driving.
While some governments, including those of Denmark, Taiwan, and Japan, have implemented age-based cognitive screening for older drivers, there has been little evidence to date that such policies improve road safety. Guidelines issued in 2010 by the American Academy of Neurology discourage age-based screening, advising instead that people diagnosed with cognitive disorders be carefully evaluated for driving fitness and recommending one widely used scale, the Clinical Dementia Rating, as useful in identifying potentially unsafe drivers.
Japan’s national screening policy: Did it work?
The new study, led by Haruhiko Inada, MD, PhD, an epidemiologist at Johns Hopkins University in Baltimore, used national crash data from Japan, where since 2017 all drivers 75 and older not only must take cognitive tests measuring temporal orientation and memory at license renewal, but are also referred for medical evaluation if they fail them. People receiving a subsequent dementia diagnosis can have their licenses suspended or revoked.
Dr. Inada and his colleagues looked at national data from nearly 603,000 police-reported vehicle collisions and nearly 197,000 pedestrian or cyclist road injuries between March 2012 and December 2019, all involving people aged 70 and older. To assess the screening policy’s impact, the researchers calculated estimated monthly collision or injury incidence rates per 100,000 person-years. This way, they could “control for secular trends that were unaffected by the policy, such as the decreasing incidence of motor vehicle collisions year by year,” the researchers explained.
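The incidence measure the researchers used can be sketched as follows: events divided by person-time at risk, scaled to 100,000 person-years. This is a minimal illustration of the standard calculation; the event and population figures below are hypothetical, not the study's data.

```python
def incidence_rate_per_100k(events: int, population: int, months: float) -> float:
    """Incidence rate per 100,000 person-years.

    person-years = population * (months / 12)
    rate = events / person-years * 100,000
    """
    person_years = population * (months / 12.0)
    return events / person_years * 100_000

# Hypothetical illustration (not the study's actual figures):
# 150 collisions in one month among 1.2 million drivers aged 75+
rate = incidence_rate_per_100k(events=150, population=1_200_000, months=1)
print(rate)
```

Computing the rate month by month is what lets a time-series analysis separate a policy effect from the background secular trend.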
After the screening was implemented, cumulative estimated collisions among drivers 75 or older decreased by 3,670 (95% confidence interval, 2,104-5,125), while reported pedestrian or cyclist injuries increased by an estimated 959 (95% CI, 24-1,834). Dr. Inada and colleagues found that crashes declined among men but not women, noting also that more older men than women are licensed to drive in Japan. Pedestrian and cyclist injuries were highest among men aged 80-84, and women aged 80 and older.
“Cognitively screening older drivers at license renewal and promoting voluntary surrender of licenses may prevent motor vehicle collisions,” Dr. Inada and his colleagues concluded. “However, they are associated with an increase in road injuries for older pedestrians and cyclists. Future studies should examine the effectiveness of mitigation measures, such as alternative, safe transportation, and accommodations for pedestrians and cyclists.”
No definitive answers
Two investigators who have studied cognitive screening related to road safety were contacted for commentary on the study findings.
Anu Siren, PhD, professor of gerontology at Tampere (Finland) University, who in 2012 reported higher injuries after implementation of older-driver cognitive screening in Denmark, commented that the new study, while benefiting from a much larger data set than earlier studies, still “fails to show that decrease in collisions is because ‘unfit’ drivers were removed from the road. But it does confirm previous findings about how strict screening policies make people shift from cars to unprotected modes of transportation,” which are riskier.
In studies measuring driving safety, the usual definition of risk is incidents per exposure, Dr. Siren noted. In Dr. Inada and colleagues’ study, “the incident measure, or numerator, is the number of collisions. The exposure measure or denominator is population. Because the study uses population and not driver licenses (or distance traveled) as an exposure measure, the observed decrease in collisions does not say much about how the collision risk develops after the implementation of screening.”
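Dr. Siren's denominator point can be made concrete with a toy calculation: if screening prompts some drivers to surrender their licenses, collisions per *population* can fall even when collisions per *licensed driver* do not change at all. The numbers below are hypothetical, chosen only to illustrate the arithmetic.

```python
# Hypothetical numbers illustrating the exposure-denominator problem:
# collisions fall after screening, but so does the number of licensed
# drivers, so per-driver risk is unchanged even though the
# per-population rate drops.
def rate(events: int, denominator: int) -> float:
    """Events per 100,000 units of the chosen denominator."""
    return events / denominator * 100_000

pop = 1_000_000                                     # older population (stable)
before = {"collisions": 500, "licenses": 400_000}
after  = {"collisions": 400, "licenses": 320_000}   # 20% surrendered licenses

# Per-population rate falls...
print(rate(before["collisions"], pop), rate(after["collisions"], pop))
# ...but the per-license rate is identical before and after.
print(rate(before["collisions"], before["licenses"]),
      rate(after["collisions"], after["licenses"]))
```

With population as the denominator, both lines of evidence above are consistent with the same data, which is why a per-population decline alone cannot show that "unfit" drivers were the ones removed from the road.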
Older driver screening “is likely to cause some older persons to cease from driving and probably continue to travel as unprotected road users,” Dr. Siren continued. “Similar to what we found [in 2012], the injury rates for pedestrians and cyclists went up after the introduction of screening, which suggests that screening indirectly causes increasing number of injuries among older unprotected road users.”
Matthew Rizzo, MD, professor and chair of the department of neurological sciences at the University of Nebraska Medical Center and codirector of the Nebraska Neuroscience Alliance in Omaha, Neb., and the lead author of the 2010 AAN guidelines on cognitive impairment and driving risk, cautioned against ageism in designing policies meant to protect motorists.
“We find some erratic/weak effects of age here and there, but the big effects we consistently find are from cognitive and visual decline – which is somewhat correlated with age, but with huge variance,” Dr. Rizzo said. “It is hard to say what an optimal age threshold for risk would be, and if 75 is it.”
U.S. crash data from the last decade points to drivers 80 and older as significantly more accident-prone than those in their 70s, or even late 70s, Dr. Rizzo noted. Moreover, “willingness to get on the road, number of miles driven, type of road (urban, rural, highway, commercial, residential), type of vehicle driven, traffic, and environment (day, night, weather), et cetera, are all factors to consider in driving risk and restriction,” he said.
Dr. Rizzo added that the 2010 AAN guidelines might need to be revisited in light of newer vehicle safety systems and automation.
Dr. Inada and colleagues’ study was funded by Japanese government grants, and Dr. Inada and his coauthors reported no financial conflicts of interest. Dr. Siren and Dr. Rizzo reported no financial conflicts of interest.
FROM THE JOURNAL OF THE AMERICAN GERIATRICS SOCIETY
Can a ‘smart’ skin patch detect early neurodegenerative diseases?
A new “smart patch” composed of microneedles that can detect proinflammatory markers via simulated skin interstitial fluid (ISF) may help diagnose neurodegenerative disorders such as Alzheimer’s disease and Parkinson’s disease very early on.
Originally developed to deliver medications and vaccines via the skin in a minimally invasive manner, the microneedle arrays were fitted with molecular sensors that, when placed on the skin, detect neuroinflammatory biomarkers such as interleukin-6 in as little as 6 minutes.
The literature suggests that these biomarkers of neurodegenerative disease are present years before patients become symptomatic, said study investigator Sanjiv Sharma, PhD.
“Neurodegenerative disorders such as Parkinson’s disease and Alzheimer’s disease are [characterized by] progressive loss in nerve cell and brain cells, which leads to memory problems and a loss of mental ability. That is why early diagnosis is key to preventing the loss of brain tissue in dementia, which can go undetected for years,” added Dr. Sharma, who is a lecturer in medical engineering at Swansea (Wales) University.
Dr. Sharma developed the patch with scientists at the Polytechnic of Porto (Portugal) School of Engineering. In 2022, they designed, and are currently testing, a microneedle patch that will deliver the COVID vaccine.
The investigators describe their research on the patch’s ability to detect IL-6 in an article published in ACS Omega.
At-home diagnosis?
“The skin is the largest organ in the body – it contains more skin interstitial fluid than the total blood volume,” Dr. Sharma noted. “This fluid is an ultrafiltrate of blood and holds biomarkers that complement other biofluids, such as sweat, saliva, and urine. It can be sampled in a minimally invasive manner and used either for point-of-care testing or real-time using microneedle devices.”
Dr. Sharma and associates tested the microneedle patch in artificial ISF that contained the inflammatory cytokine IL-6. They found that the patch accurately detected IL-6 concentrations as low as 1 pg/mL in the fabricated ISF solution.
“In general, the transdermal sensor presented here showed simplicity in designing, short measuring time, high accuracy, and low detection limit. This approach seems a successful tool for the screening of inflammatory biomarkers in point of care testing wherein the skin acts as a window to the body,” the investigators reported.
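A low detection limit like the 1 pg/mL figure is typically estimated from a linear calibration curve. The paper's exact method isn't described here, so the sketch below uses the common ICH convention (LOD = 3.3 × σ of the blank / calibration slope) with entirely hypothetical calibration data.

```python
# Estimating a limit of detection (LOD) from a linear calibration curve,
# using the common ICH convention: LOD = 3.3 * sigma / slope, where sigma
# is the standard deviation of the blank response. All numbers below are
# hypothetical; the study's actual method may differ.

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

conc = [0.0, 2.0, 4.0, 8.0, 16.0]        # IL-6 concentration, pg/mL (hypothetical)
signal = [0.05, 0.45, 0.85, 1.65, 3.25]  # sensor response (hypothetical)

slope, intercept = linear_fit(conc, signal)
sigma_blank = 0.06                        # SD of blank replicates (hypothetical)
lod = 3.3 * sigma_blank / slope
print(f"slope={slope:.3f}, LOD={lod:.2f} pg/mL")
```

Under these assumed values the estimate lands near 1 pg/mL, the order of magnitude the investigators report.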
Dr. Sharma noted that early detection of neurodegenerative diseases is crucial, as once symptoms appear, the disease may have already progressed significantly, and meaningful intervention is challenging.
The device has yet to be tested in humans, which is the next step, said Dr. Sharma.
“We will have to test the hypothesis through extensive preclinical and clinical studies to determine if bloodless, transdermal (skin) diagnostics can offer a cost-effective device that could allow testing in simpler settings such as a clinician’s practice or even home settings,” he noted.
Early days
Commenting on the research, David K. Simon, MD, PhD, professor of neurology at Harvard Medical School, Boston, said it is “a promising step regarding validation of a potentially beneficial method for rapidly and accurately measuring IL-6.”
However, he added, “many additional steps are needed to validate the method in actual human skin and to determine whether or not measuring these biomarkers in skin will be useful in studies of neurodegenerative diseases.”
He noted that one study limitation is that inflammatory cytokines such as IL-6 are highly nonspecific, and levels are elevated in various diseases associated with inflammation.
“It is highly unlikely that measuring IL-6 will be useful as a diagnostic tool. However, it does have potential as a biomarker for measuring the impact of treatments aimed at reducing inflammation. As the authors point out, it’s more likely that clinicians will require a panel of biomarkers rather than only measuring IL-6,” he said.
The study was funded by Fundação para a Ciência e Tecnologia. The investigators disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM ACS OMEGA