Gout linked to smaller brain volume, higher likelihood of neurodegenerative diseases


Patients with gout may have smaller brain volumes and higher brain iron markers than people without gout, and also be more likely to develop Parkinson’s disease, probable essential tremor, and dementia, researchers in the United Kingdom report.

“We were surprised about the regions of the brain affected by gout, several of which are important for motor function. The other intriguing finding was that the risk of dementia amongst gout patients was strongly time-dependent: highest in the first 3 years after their gout diagnosis,” lead study author Anya Topiwala, BMBCh, DPhil, said in an interview.

“Our combination of traditional and genetic approaches increases the confidence that gout is causing the brain findings,” said Dr. Topiwala, a clinical research fellow and consultant psychiatrist in the Nuffield Department of Population Health at the University of Oxford, England.

“We suggest that clinicians be vigilant for cognitive and motor problems after gout diagnosis, particularly in the early stages,” she added.


 

Links between gout and neurodegenerative diseases debated in earlier studies

Gout, the most common inflammatory arthritis, affects around 1%-4% of people, the authors wrote, with monosodium urate crystal deposits causing acute flares of pain and swelling in joints and periarticular tissues.

Whether and how gout may affect the brain has been debated in the literature. Gout and hyperuricemia have been linked with elevated stroke risk, and although observational studies have linked hyperuricemia with lower dementia risk, especially Alzheimer’s disease, Mendelian randomization studies of Alzheimer’s disease have yielded conflicting results.
 

A novel approach that analyzes brain structure and genetics

In a study published in Nature Communications, Dr. Topiwala and her colleagues combined observational and Mendelian randomization techniques to explore relationships between gout and neurodegenerative diseases. They analyzed data from over 303,000 volunteer participants, aged 40-69 years, who were recruited between 2006 and 2010 to contribute detailed genetic and health information to the U.K. Biobank, a large-scale biomedical database and research resource.
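
Mendelian randomization uses genetic variants associated with an exposure (here, gout or serum urate) as natural instruments: because alleles are allocated essentially at random at conception, links between those variants and brain outcomes are less vulnerable to confounding and reverse causation than ordinary observational associations. As a minimal illustration of the idea, not the authors’ actual pipeline, here is a sketch of the inverse-variance-weighted (IVW) estimator on made-up summary statistics:

```python
import numpy as np

# Hypothetical per-variant summary statistics (illustrative values only):
# beta_x: effect of each variant on serum urate (the exposure)
# beta_y: effect of the same variant on a brain outcome, e.g., gray matter volume
# se_y:   standard error of beta_y
beta_x = np.array([0.12, 0.08, 0.15, 0.05])
beta_y = np.array([-0.030, -0.018, -0.041, -0.009])
se_y = np.array([0.010, 0.008, 0.012, 0.006])

ratios = beta_y / beta_x          # Wald ratio: per-variant causal estimate
weights = (beta_x / se_y) ** 2    # inverse-variance weights for the ratios

ivw = np.sum(weights * ratios) / np.sum(weights)
ivw_se = 1.0 / np.sqrt(np.sum(weights))
print(f"IVW causal estimate: {ivw:.3f} (SE {ivw_se:.3f})")
```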

Patients with gout tended to be older and male. At baseline, all participants’ serum urate levels were measured, and 30.8% of patients with gout reported that they currently used urate-lowering therapy.
 

MRI shows brain changes in patients with gout

In what the authors described as the first investigation of neuroimaging markers in patients with gout, they compared gray matter volumes between the 1,165 participants with gout and the 32,202 controls without gout who had MRI data.

They found no marked sex differences in associations. Urate was inversely linked with global brain volume and with gray and white matter volumes, and gout appeared to age global gray matter by 2 years.

Patients with gout and higher urate showed significant differences in regional gray matter volumes, especially in the cerebellum, pons, and midbrain, as well as subcortical differences in the nucleus accumbens, putamen, and caudate. They also showed significant differences in white matter tract microstructure in the fornix.

Patients with gout were more likely to develop dementia (average hazard ratio [HR] over the study = 1.60), especially in the first 3 years after gout diagnosis (HR = 7.40). Compared with their risk for all-cause dementia, their risk was higher for vascular dementia (average HR = 2.41) but not for Alzheimer’s disease (average HR = 1.62).

In asymptomatic participants, though, urate and dementia were inversely linked (HR = 0.85), with no time dependence.

Gout was linked with a higher incidence of Parkinson’s disease (HR = 1.43) and probable essential tremor (HR = 6.75). In asymptomatic participants, urate was inversely linked with Parkinson’s disease (HR = 0.89) but not with probable essential tremor.
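
A hazard ratio compares the instantaneous event rate in one group against another over follow-up; time-dependent effects such as the early spike in dementia risk emerge when the model is examined over separate time windows. The sketch below, on simulated data rather than the Biobank cohort, shows how such a ratio is typically estimated with a Cox proportional hazards model (the lifelines library and all numbers here are illustrative assumptions):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 20_000
gout = rng.binomial(1, 0.05, n)  # ~5% with gout, an illustrative assumption
# Exponential event times in which gout raises the hazard roughly 1.6-fold
hazard = 0.01 * np.exp(np.log(1.6) * gout)
time = rng.exponential(1.0 / hazard)
event = (time < 10).astype(int)   # administrative censoring at 10 years
time = np.minimum(time, 10.0)

df = pd.DataFrame({"time": time, "event": event, "gout": gout})
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(cph.summary.loc["gout", "exp(coef)"])  # recovered hazard ratio, ~1.6
```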
 

 

 

Genetic analyses reinforce MRI results

Using Mendelian randomization estimates, the authors found that the genetic links generally reflected their observational findings. Both genetically predicted gout and serum urate were significantly linked with regional gray matter volumes, including the cerebellum, midbrain, pons, and brainstem.

They also found significant links with higher magnetic susceptibility in the putamen and caudate, markers of higher iron. But while genetically predicted gout was significantly linked with global gray matter volume, urate was not.

In males, but not in females, urate was positively linked with alcohol intake and lower socioeconomic status.

Dr. Topiwala acknowledged several limitations to the study, writing that “the results from the volunteer participants may not apply to other populations; the cross-sectional serum urate measurements may not reflect chronic exposure; and Parkinson’s disease and essential tremor may have been diagnostically confounded.”
 

A novel approach that suggests further related research

Asked to comment on the study, Puja Khanna, MD, MPH, a rheumatologist and clinical associate professor of medicine at the University of Michigan, Ann Arbor, called its novel use of neuroimaging interesting.

Dr. Khanna, who was not involved in the study, said she would like to know more about the role that horizontal pleiotropy – one genetic variant having independent effects on multiple traits – plays in this disease process, and about the impact of the antioxidative properties of urate in maintaining neuroprotection.

“[The] U.K. Biobank is an excellent database to look at questions of association,” John D. FitzGerald, MD, PhD, MPH, MBA, professor and clinical chief of rheumatology at the University of California, Los Angeles, said in an interview.


“This is a fairly rigorous study,” added Dr. FitzGerald, also not involved in the study. “While it has lots of strengths,” including its large sample size and Mendelian randomization, it also has “abundant weaknesses,” he added. “It is largely cross-sectional, with single urate measurement and single brain MRI.”

“Causation is the big question,” Dr. FitzGerald noted. “Does treating gout (or urate) help prevent dementia or neurodegenerative decline?”


 

Early diagnosis benefits patients

Dr. Khanna and Dr. FitzGerald joined the authors in advising doctors to monitor their gout patients for cognitive and motor symptoms of neurodegenerative disease.

“It is clearly important to pay close attention to the neurologic exam and history in gout, especially because it is a disease of the aging population,” Dr. Khanna advised. “Addressing dementia when gout is diagnosed can lead to prompt mitigation strategies that can hugely impact patients.”

Dr. Topiwala and her colleagues would like to investigate why the dementia risk was time-dependent. “Is this because of the acute inflammatory response in gout, or could it just be that patients with gout visit their doctors more frequently, so any cognitive problems are picked up sooner?” she asked.

The authors, and Dr. Khanna and Dr. FitzGerald, report no relevant financial relationships. The Wellcome Trust; the U.K. Medical Research Council; the European Commission Horizon 2020 research and innovation program; the British Heart Foundation; the U.S. National Institutes of Health; the Engineering and Physical Sciences Research Council; and the National Institute for Health and Care Research funded the study.


Game-changing Alzheimer’s research: The latest on biomarkers


The field of neurodegenerative dementias, particularly Alzheimer’s disease (AD), has been revolutionized by the development of imaging and cerebrospinal fluid biomarkers and is on the brink of a new development: emerging plasma biomarkers. Research now recognizes the relationship between the cognitive-behavioral syndromic diagnosis (that is, the illness) and the etiologic diagnosis (the disease) – and the need to consider each separately when developing a diagnostic formulation. The National Institute on Aging and Alzheimer’s Association Research Framework uses the amyloid, tau, and neurodegeneration (AT[N]) system to define AD biologically in living patients. Here is an overview of the framework, which requires biomarker evidence of amyloid plaques (amyloid positivity) and neurofibrillary tangles (tau positivity), with evidence of neurodegeneration (neurodegeneration positivity) to support the diagnosis.

The diagnostic approach for symptomatic patients

The differential diagnosis in symptomatic patients with mild cognitive impairment (MCI), mild behavioral impairment, or dementia is broad and includes multiple neurodegenerative diseases (for example, AD, frontotemporal lobar degeneration, dementia with Lewy bodies, argyrophilic grain disease, hippocampal sclerosis); vascular ischemic brain injury (for example, stroke); tumors; infectious, inflammatory, paraneoplastic, or demyelinating diseases; trauma; hydrocephalus; toxic/metabolic insults; and other rare diseases. The patient’s clinical syndrome narrows the differential diagnosis.

Once the clinician has a prioritized differential diagnosis of the brain disease or condition that is probably causing or contributing to the patient’s signs and symptoms, they can then select appropriate assessments and tests, typically starting with a laboratory panel and brain MRI. Strong evidence backed by practice recommendations also supports the use of fluorodeoxyglucose PET as a marker of functional brain abnormalities associated with dementia. Although molecular biomarkers are typically considered at the later stage of the clinical workup, the anticipated future availability of plasma biomarkers will probably change the timing of molecular biomarker assessment in patients with suspected cognitive impairment owing to AD.
 

Molecular PET biomarkers

Three PET tracers approved by the U.S. Food and Drug Administration for the detection of cerebral amyloid plaques have high sensitivity (89%-98%) and specificity (88%-100%), compared with autopsy, the gold standard diagnostic tool. However, these scans are costly and are not reimbursed by Medicare and Medicaid. Because all amyloid PET scans are covered by the Veterans Administration, this test is more readily accessible for patients receiving VA benefits.

The appropriate-use criteria developed by the Amyloid Imaging Task Force recommend amyloid PET for patients with persistent or progressive MCI or dementia. In such patients, a negative amyloid PET scan would strongly weigh against AD, supporting a differential diagnosis of other etiologies. Although a positive amyloid PET scan in patients with MCI or dementia indicates the presence of amyloid plaques, it does not necessarily confirm AD as the cause. Cerebral amyloid plaques may coexist with other pathologies and increase with age, even in cognitively normal individuals.
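
That caveat is essentially Bayes’ rule: at a fixed sensitivity and specificity, the predictive value of a positive scan falls as the background prevalence of amyloid positivity rises, as it does with age even in cognitively normal people. A small worked example (the accuracy figures are midpoints of the ranges quoted above; the pretest probabilities are hypothetical):

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Positive and negative predictive values via Bayes' rule."""
    tp = sensitivity * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    fn = (1 - sensitivity) * prevalence
    tn = specificity * (1 - prevalence)
    return tp / (tp + fp), tn / (tn + fn)

# Midpoints of the reported ranges: sensitivity 89%-98%, specificity 88%-100%
for prev in (0.2, 0.5, 0.8):  # hypothetical pretest probabilities
    ppv, npv = predictive_values(0.935, 0.94, prev)
    print(f"pretest probability {prev:.0%}: PPV {ppv:.2f}, NPV {npv:.2f}")
```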

The IDEAS study looked at the clinical utility of amyloid PET in a real-world dementia specialist setting. In the study, dementia subspecialists documented their presumed etiologic diagnosis (and level of confidence) before and after amyloid PET. Of the 11,409 patients who completed the study, the etiologic diagnosis changed from AD to non-AD in just over 25% of cases and from non-AD to AD in 10.5%. Clinical management changed in about 60% of patients with MCI and 63.5% of patients with dementia.

In May 2020, the FDA approved flortaucipir F-18, the first diagnostic tau radiotracer for use with PET to estimate the density and distribution of aggregated tau neurofibrillary tangles in adults with cognitive impairment undergoing evaluation for AD. Regulatory approval of flortaucipir F-18 was based on findings from two clinical trials of terminally ill patients who were followed to autopsy. The studies included patients with a spectrum of clinically diagnosed dementias and those with normal cognition. The primary outcome of the studies was accurate visual interpretation of the images in detecting advanced AD tau neurofibrillary tangle pathology (Braak stage V or VI tau pathology). Sensitivity of five trained readers ranged from 68% to 86%, and specificity ranged from 63% to 100%; interrater agreement was 0.87. Tau PET is not yet reimbursed and is therefore not yet readily available in the clinical setting. Moreover, appropriate-use criteria have not yet been published.
 

 

 

Molecular fluid biomarkers

Cerebrospinal fluid (CSF) analysis is currently the most readily available and reimbursed test to aid in diagnosing AD, with appropriate-use criteria for patients with suspected AD. CSF biomarkers for AD are useful in cognitively impaired patients when the etiologic diagnosis is equivocal, there is only an intermediate level of diagnostic confidence, or there is very high confidence in the etiologic diagnosis. Testing for CSF biomarkers is also recommended for patients at very early clinical stages (for example, early MCI) or with atypical clinical presentations.

A decreased concentration of amyloid-beta 42 in CSF is a marker of amyloid neuritic plaques in the brain. An increased concentration of total tau in CSF reflects injury to neurons, and an increased concentration of specific isoforms of hyperphosphorylated tau reflects neurofibrillary tangles. Presently, the ratios of t-tau to amyloid-beta 42, amyloid-beta 42 to amyloid-beta 40, and phosphorylated-tau 181 to amyloid-beta 42 are the best-performing markers of AD neuropathologic changes and are more accurate than assessing individual biomarkers. These CSF biomarkers of AD have been validated against autopsy, and ratio values of CSF amyloid-beta 42 have been further validated against amyloid PET, with overall sensitivity and specificity of approximately 90% and 84%, respectively.
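
As a sketch of how these ratios are computed from a CSF panel, the helper below names the three combinations cited above; the input values are illustrative, and real-world interpretation depends on assay-specific cutoffs that are deliberately omitted here:

```python
from dataclasses import dataclass

@dataclass
class CsfPanel:
    abeta42: float   # amyloid-beta 42, pg/mL
    abeta40: float   # amyloid-beta 40, pg/mL
    t_tau: float     # total tau, pg/mL
    p_tau181: float  # phosphorylated tau 181, pg/mL

def ad_ratios(p: CsfPanel) -> dict:
    """Compute the three best-performing ratios named in the text."""
    return {
        "t_tau/abeta42": p.t_tau / p.abeta42,
        "abeta42/abeta40": p.abeta42 / p.abeta40,
        "p_tau181/abeta42": p.p_tau181 / p.abeta42,
    }

# Illustrative values only; interpret against assay- and lab-specific cutoffs.
panel = CsfPanel(abeta42=550.0, abeta40=9000.0, t_tau=450.0, p_tau181=65.0)
print(ad_ratios(panel))
```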

Some of the most exciting recent advances in AD center around the measurement of these proteins and others in plasma. Appropriate-use criteria for plasma biomarkers in the evaluation of patients with cognitive impairment were published in 2022. In addition to their use in clinical trials, these criteria cautiously recommend using these biomarkers in specialized memory clinics in the diagnostic workup of patients with cognitive symptoms, along with confirmatory CSF markers or PET. Additional data are needed before plasma biomarkers of AD are used as standalone diagnostic markers or considered in the primary care setting.

We have made remarkable progress toward more precise molecular diagnosis of brain diseases underlying cognitive impairment and dementia. Ongoing efforts to evaluate the utility of these measures in clinical practice include the need to increase diversity of patients and providers. Ultimately, the tremendous progress in molecular biomarkers for the diseases causing dementia will help the field work toward our common goal of early and accurate diagnosis, better management, and hope for people living with these diseases.

Bradford C. Dickerson, MD, MMSc, is a professor, department of neurology, Harvard Medical School, and director, Frontotemporal Disorders Unit, department of neurology, at Massachusetts General Hospital, both in Boston.

A version of this article first appeared on Medscape.com.


Flavanol supplement improves memory in adults with poor diets


Taking a daily flavanol supplement improves hippocampal-dependent memory in older adults who have a relatively poor diet, results of a large new study suggest.

There’s increasing evidence that certain nutrients are important for the aging body and brain, study investigator Scott Small, MD, the Boris and Rose Katz Professor of Neurology, Columbia University Vagelos College of Physicians and Surgeons, New York, told this news organization.

“With this new study, I think we can begin to say flavanols might be the first one that really is a nutrient for the aging brain.”

These findings, said Dr. Small, represent “the beginning of a new era” that will eventually lead to formal recommendations related to ideal intake of flavanols to reduce cognitive aging.

The findings were published online in the Proceedings of the National Academy of Sciences.
 

Better cognitive aging

Cognitive aging refers to the decline in cognitive abilities that is not thought to be caused by neurodegenerative diseases such as Alzheimer’s disease and Parkinson’s disease. Cognitive aging primarily affects two areas of the brain: the hippocampus, which is related to memory function, and the prefrontal cortex, which is related to attention and executive function.

Previous research has linked flavanols, which are found in foods like apples, pears, berries, and cocoa beans, to improved cognitive aging. The evidence suggests that consuming these nutrients might specifically benefit the hippocampal-dependent memory component of cognitive aging.

The new study, known as COcoa Supplement and Multivitamin Outcomes Study-Web (COSMOS-Web), included 3,562 generally healthy men and women, mean age 71 years, who were mostly well-educated and non-Hispanic/non-Latinx White individuals.

Participants were randomly assigned to receive oral flavanol-containing cocoa extract (500 mg of cocoa flavanols, including 80 mg of epicatechin) or a placebo daily.

The primary endpoint was hippocampal-dependent memory at year 1 as assessed with the ModRey, a neuropsychological test designed to measure hippocampal function.

Results showed participants in both groups had a typical learning (practice) effect, with similar improvements (d = 0.025; P = .42).

Researchers used other tests to measure cognition: the Color/Directional Flanker Task, a measure of prefrontal cortex function, and the ModBent, a measure that’s sensitive to dentate gyrus function. The flavanol intervention did not affect ModBent results or performance on the Flanker test after 1 year.

However, it was a different story for those with a poor diet at baseline. Researchers stratified participants into tertiles on the basis of diet quality as measured by the Healthy Eating Index (HEI) scores. Those in the lowest tertile had poorer baseline hippocampal-dependent memory performance but not memory related to the prefrontal cortex.

The flavanol intervention improved performance on the ModRey test, compared with placebo in participants in the low HEI tertile (overall effect: d = 0.086; P = .011) but not among those with a medium or high HEI at baseline.

“We confirmed that the flavanol intervention only benefits people who are relatively deficient at baseline,” said Dr. Small.

The correlation with hippocampal-dependent memory was confirmed in a subset of 1,361 study participants who provided a urine sample. Researchers measured urinary 5-(3′,4′-dihydroxyphenyl)-gamma-valerolactone metabolite (gVLM) concentrations, a validated biomarker of flavanol consumption.

After stratifying these results into tertiles, researchers found performance on the ModRey was significantly improved with the dietary flavanol intervention (overall effect: d = 0.141; P = .006) in the lowest gVLM tertile.
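
The d values reported here are Cohen’s d, a standardized mean difference between the flavanol and placebo arms computed within each diet-quality stratum. A hedged sketch of that stratified comparison on simulated data (the column names, effect size, and score distributions are invented, not the COSMOS-Web data):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 3000
df = pd.DataFrame({
    "hei": rng.normal(60, 10, n),         # hypothetical Healthy Eating Index scores
    "flavanol": rng.binomial(1, 0.5, n),  # 1 = cocoa-extract arm, 0 = placebo
})
# Simulate memory scores with a small supplement benefit only at low diet quality
low_diet = (df["hei"] < df["hei"].quantile(1 / 3)).astype(float)
df["modrey"] = rng.normal(0, 1, n) + 0.09 * df["flavanol"] * low_diet

def cohens_d(a, b):
    """Standardized mean difference using the pooled standard deviation."""
    pooled = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                     / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / pooled

df["tertile"] = pd.qcut(df["hei"], 3, labels=["low", "mid", "high"])
for tertile, grp in df.groupby("tertile", observed=True):
    d = cohens_d(grp.loc[grp["flavanol"] == 1, "modrey"],
                 grp.loc[grp["flavanol"] == 0, "modrey"])
    print(f"{tertile} HEI tertile: d = {d:.3f}")
```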
 

 

 

Memory restored

When participants in the lowest tertile consumed the supplement, “their flavanol levels went back to normal, and when that happened, their memory was restored,” said Dr. Small.

It appears that there is a sort of ceiling effect to the flavanol benefits. “It seems what you need to do is normalize your flavanol levels; if you go above normal, there was no evidence that your memory keeps on getting better,” said Dr. Small.

The study included only older adults, so it’s unclear what the impact of flavanol supplementation is in younger adults. But cognitive aging “begins its slippery slide” in the 40s, said Dr. Small. “If this is truly a nutrient that is taken to prevent that slide from happening, it might be beneficial to start in our 40s.”

He recognized that the effect size is not large but said this is “very dependent” on baseline factors and most study participants had a rather healthy diet. “None of our participants were really highly deficient” in flavanols, he said.

“To see a stronger effect size, we need to do another study where we recruit people who are very low, truly deficient, in flavanols, and then see what happens.”

Showing that flavanols are linked to the hippocampal and not to the prefrontal component of cognitive aging “speaks to the mechanism,” said Dr. Small.

Though the exact mechanism linking flavanols with enhanced memory isn’t clear, there are some clues; for example, research suggests cognitive aging affects the dentate gyrus, a subregion of the hippocampus.

The flavanol supplements were well tolerated. “I can say with close to certainty that this is very safe,” said Dr. Small, adding the flavanols have now been used in numerous studies.

The findings suggest flavanol consumption might be part of future dietary guidelines. “I suspect that once there is sufficient evidence, flavanols will be part of the dietary recommendations for healthy aging,” said Dr. Small.
 

A word of caution

Heather M. Snyder, PhD, vice president of medical and scientific relations, Alzheimer’s Association, said that though science suggests a balanced diet is good for overall brain health, no single food, beverage, ingredient, vitamin, or supplement has yet been proven to prevent dementia, treat or cure Alzheimer’s, or benefit cognitive function or brain health.

Experts agree the best source of vitamins and other nutrients is from whole foods as part of a balanced diet. “We recognize that, for a variety of reasons, this may not always be possible,” said Dr. Snyder.

However, she noted, dietary supplements are not subject to the same rigorous review and regulation process as medications.

“The Alzheimer’s Association strongly encourages individuals to have conversations with their physicians about all medications and dietary supplements they are currently taking or interested in starting.” 

COSMOS is supported by an investigator-initiated grant from Mars Edge, a segment of Mars, a company engaged in flavanol research and flavanol-related commercial activities, which included infrastructure support and the donation of study pills and packaging. Dr. Small reports receiving an unrestricted research grant from Mars.

A version of this article first appeared on Medscape.com.


Potential new treatment for REM sleep behavior disorder


Dual orexin receptor antagonists (DORAs), a class of drugs approved to treat insomnia, may also be effective for rapid eye movement sleep behavior disorder (RBD), a study suggests.

About 3 million people in the United States have RBD, which is often a precursor to Parkinson’s disease. People with the disorder act out their dreams by talking, flailing their arms and legs, punching, kicking, and exhibiting other behaviors while asleep.

Researchers used an animal model for the study, which they say is the first to identify a new form of treatment for RBD.

“REM behavior disorder is difficult to treat, and the treatments are mostly limited to clonazepam and melatonin,” which may have side effects, senior investigator Andrew Varga, MD, PhD, associate professor of pulmonary, critical care, and sleep medicine at the Icahn School of Medicine at Mount Sinai, New York, told this news organization. “We’re using something completely different, which raises the possibility this might be something useful for REM behavior disorders.”

The findings, with Mount Sinai assistant professor Korey Kam, PhD, as lead author, were published online in the Journal of Neuroscience.
 

A new model for RBD?

RBD can signal risk for synucleinopathies, a group of neurological conditions such as Parkinson’s disease that involve the formation of clumps of alpha-synuclein protein in the brain.

Prior research on RBD was done in synucleinopathy mouse models. For this study, however, researchers used a tauopathy mouse model to investigate how the abnormal accumulation of tau protein might affect RBD.

Researchers collected data on biophysical properties while the mice were awake and during REM and non-REM sleep. They examined sleep duration, transitions from waking to sleep, and how these measures changed with age.

Nearly a third of the older animals showed behaviors similar to REM sleep behavior disorder in humans, including chewing and limb extension.

But after researchers administered a DORA medication twice during a 24-hour period, they found that the drug not only helped the animals fall asleep faster and stay asleep longer, it also reduced the dream-enactment behaviors that are a hallmark of RBD.
 

The ‘bigger highlight’

Finding RBD behaviors in a tauopathy animal model was surprising, Dr. Varga said, because RBD has been previously linked to synucleinopathies. There was no known correlation between RBD and abnormal accumulation of tau.

Another unexpected finding was the detection of RBD in some of the younger animals, who had not yet shown evidence of tau accumulation.

“It appears to be a biomarker or a signature of something that’s going on that predicts the impending tauopathy at a time where there is very little, or no, tau pathology going on in the brain,” Dr. Varga said.

If RBD is an early predictor of future tau accumulation, the model could guide future prevention and treatment. However, the more important finding is the potential new treatment for the condition.

“The bigger highlight here is less about what’s causing the RBD [than about] what you can do to make it better,” he said.

The next step in the work is to study whether the effect of DORAs on RBD seen in this tauopathy mouse model is seen in other animal models and whether the drugs are effective in humans with RBD, Dr. Varga said.

The study was funded by the Alzheimer’s Association and Merck Investigator Studies Program. Dr. Kam, Dr. Varga, and coauthors report no relevant financial relationships.

A version of this article first appeared on Medscape.com.


Positive top-line results for cannabinoid-based med for nerve pain


An experimental, proprietary cannabinoid-based drug in development for diabetic neuropathy outperformed pregabalin (Lyrica) in a clinical trial, achieving significant reduction in pain severity, new top-line results released by Zelira Therapeutics suggest.

“The implications of these results for patients are incredibly promising,” principal investigator Bryan Doner, DO, medical director of HealthyWays Integrated Wellness Solutions, Gibsonia, Pa., said in a news release.

“Through this rigorously designed study, we have demonstrated that ZLT-L-007 is a safe, effective, and well-tolerated alternative for patients who would typically seek a Lyrica-level of pain relief,” he added.

The observational, nonblinded trial tested the efficacy, safety, and tolerability of ZLT-L-007 against pregabalin in 60 adults with diabetic nerve pain.

The study had three groups with 20 patients each (pregabalin alone, pregabalin plus ZLT-L-007, and ZLT-L-007 alone).

Top-line results show the study met its primary endpoint: change in daily pain severity, measured as the percent change from baseline on the Numerical Rating Scale at 30, 60, and 90 days.

For the pregabalin-only group, there was a reduction in symptom severity at all follow-up points, ranging from 20% to 35% (median percent change from baseline), the company said.

For the ZLT-L-007-only group, there was about a 33% reduction in symptom severity at 30 days, and reductions of 71% and 78%, respectively, at 60 and 90 days, suggesting a larger improvement in symptom severity than with pregabalin alone, the company said.

For the pregabalin plus ZLT-L-007 group, there was a moderate 20% reduction in symptom severity at 30 days, but a larger reduction at 60 and 90 days (50% and 72%, respectively), which indicates substantially greater improvement in symptom severity than with pregabalin alone, the company said.
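To make the endpoint concrete, here is a minimal sketch of the arithmetic behind “median percent change from baseline” on a 0-10 pain scale. The scores below are invented for illustration; they are not trial data.

```python
# Minimal sketch of the primary-endpoint arithmetic: median percent
# change from baseline in daily NRS pain scores (negative = improvement).
import numpy as np

def median_pct_change(baseline: np.ndarray, followup: np.ndarray) -> float:
    """Median of per-patient percent change from baseline."""
    pct = (followup - baseline) / baseline * 100.0
    return float(np.median(pct))

baseline = np.array([7.0, 6.0, 8.0, 5.0])  # hypothetical NRS scores (0-10)
day90 = np.array([2.0, 1.5, 2.0, 1.0])
print(f"median change at day 90: {median_pct_change(baseline, day90):.0f}%")
# Prints -75%, i.e., a 75% reduction, the same kind of figure reported above.
```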

The study also met secondary endpoints, including significant decreases in daily pain severity as measured by the Visual Analog Scale and measurable changes in the short-form McGill Pain Questionnaire and Neuropathic Pain Symptom Inventory.

Dr. Doner noted that the top-line data showed “no serious adverse events, and participants’ blood pressure and other safety vitals remained unaffected throughout. This confirms that ZLT-L-007 is a well-tolerated product that delivers statistically significant pain relief, surpassing the levels achieved by Lyrica.”

The company plans to report additional insights from the full study, as they become available, during fiscal year 2023-2024.

A version of this article first appeared on Medscape.com.


Ancient plague, cyclical pandemics … history lesson?


 

Even the plague wanted to visit Stonehenge

We’re about to blow your mind: The history you learned in school was often inaccurate. Shocking, we know, so we’ll give you a minute to process this incredible news.

Better? Good. Now, let’s look back at high school European history. The Black Death, specifically. The common narrative is that the Mongols, while besieging a Crimean city belonging to the Genoese, catapulted dead bodies infected with some mystery disease that turned out to be the plague. The Genoese then brought the plague back to Italy, and from there, we all know the rest of the story.

The Black Death was certainly extremely important to the development of modern Europe as we know it, but the history books gloss over the much longer history of the plague. Yersinia pestis did not suddenly appear unbidden in a Mongol war camp in 1347. The Black Death wasn’t even the first horrific, continent-wide pandemic caused by the plague; the Plague of Justinian 800 years earlier crippled the Byzantine Empire during an expansionist phase and killed anywhere from 15 million to 100 million people.

Today, though, LOTME looks even deeper into history, nearly beyond even history itself, back into the depths of early Bronze Age northern Europe. Specifically, to two ancient burial sites in England, where researchers have identified three 4,000-year-old cases of Y. pestis, the earliest known cases of the disease in Britain.

Two of the individuals, identified through analysis of dental pulp, were young children buried at a mass grave in Somerset, while the third, a middle-aged woman, was found in a ring cairn in Cumbria. These sites are hundreds of miles apart, yet carbon dating suggests all three people lived and died at roughly the same time. The strain found is very similar to other samples of plague found across central and western Europe starting around 3000 BCE, suggesting a single, easily spread disease affecting a large area in a relatively short span of time. In other words, a pandemic. Even in these ancient times, the world was connected. Not even the island of Britain could escape.

Beyond that though, the research helps confirm the cyclical nature of the plague; over time, it loses its effectiveness and goes into hiding, only to mutate and come roaring back. This is a story with absolutely no relevance at all to the modern world. Nope, no plagues or pandemics going around right now, no viruses fading into the background in any way. What a ridiculous inference to make.
 

Uncovering the invisible with artificial intelligence

This week in “What Else Can AI Do?” new research shows that a computer program can reveal brain injury that couldn’t be seen before with typical MRI.

The hot new AI, birthed by researchers at New York University, could potentially be a game changer by linking repeated head impacts with tiny, structural changes in the brains of athletes who have not been diagnosed with a concussion. By using machine learning to train the AI, the researchers were, for the first time, able to distinguish the brain of athletes who played contact sports (football, soccer, lacrosse) from those participating in noncontact sports such as baseball, basketball, and cross-country.


How did they do it? The investigators “designed statistical techniques that gave their computer program the ability to ‘learn’ how to predict exposure to repeated head impacts using mathematical models,” they explained in a written statement. Adding in data from the MRI scans of 81 male athletes with no known concussion diagnosis and the ability to identify unusual brain features between athletes with and without head trauma allowed the AI to predict results with accuracy even Miss Cleo would envy.
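The statement doesn’t specify the exact model, but the general recipe, training a classifier on MRI-derived brain features and checking how well it separates contact-sport from noncontact-sport athletes, can be sketched in a few lines of Python. Everything below (the features, the labels, the logistic-regression choice) is an assumption for illustration, not the NYU team’s actual pipeline.

```python
# Illustrative sketch (not the NYU code): classify contact vs
# noncontact athletes from MRI-derived brain features, with
# cross-validated accuracy as the report card.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_athletes, n_features = 81, 12              # e.g., regional diffusion metrics
X = rng.normal(size=(n_athletes, n_features))
y = rng.integers(0, 2, n_athletes)           # 1 = contact sport, 0 = noncontact
X[y == 1] += 0.8                             # inject a detectable group difference

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)    # 5-fold cross-validated accuracy
print(f"mean CV accuracy: {scores.mean():.2f}")
```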

“This method may provide an important diagnostic tool not only for concussion, but also for detecting the damage that stems from subtler and more frequent head impacts,” said lead author Junbo Chen, an engineering doctoral candidate at NYU. That could make this new AI a valuable asset to science and medicine.

There are many things the human brain can do that AI can’t, and delegation could be one of them. Examining the data that represent the human brain in minute detail? Maybe we leave that to the machine.
 

 

 

Talk about your field promotions

If you’re a surgeon doing an amputation, the list of acceptable assistants is a long one, but one candidate is definitely not on it: the closest available janitor.

That may seem like an oddly obvious thing for us to say, but there’s at least one former Mainz (Germany) University Hospital physician who really needed to get this bit of advice before he attempted an unassisted toe amputation back in October of 2020. Yes, that does seem like kind of a long time ago for us to be reporting it now, but the details of the incident only just came to light a few days ago, thanks to German public broadcaster SWR.


Since it was just a toe, the surgeon thought he could perform the operation without any help. The toe, unfortunately, had other plans. The partially anesthetized patient got restless in the operating room, but with no actual trained nurse in the vicinity, the surgeon asked the closest available person – that would be the janitor – to lend a hand.

The surgical manager heard about these goings-on and got to the operating room too late to stop the procedure but soon enough to see the cleaning staffer “at the operating table with a bloody suction cup and a bloody compress in their hands,” SWR recently reported.

The incident was reported to the hospital’s medical director and the surgeon was fired, but since the patient experienced no complications, not much fuss was made about it at the time.

Well, guess what? It’s toe-tally our job to make a fuss about these kinds of things. Or could it be that our job, much like the surgeon’s employment and the patient’s digit, is here toe-day and gone toe-morrow?


Researchers discover brain abnormalities in babies who had SIDS


Researchers have discovered specific brain abnormalities in babies who died of sudden infant death syndrome.

For decades, researchers have been trying to understand why some otherwise healthy babies under 1 year old mysteriously die during their sleep. SIDS is the leading cause of infant death in the U.S., affecting 103 out of every 100,000 babies.

The new study found that babies who died of SIDS had abnormalities in certain brain receptors responsible for waking and restoring breathing. The scientists decided to look at the babies’ brains at the molecular level because previous research showed that the same kind of brain receptors in rodents are responsible for protective breathing functions during sleep.

The study was published in the Journal of Neuropathology & Experimental Neurology. The researchers compared brain stems from 70 babies, some of whom died of SIDS and some who died of other causes.

Despite discovering the differences in the babies’ brains, the lead author of the paper said more study is needed. 

Robin Haynes, PhD, who studies SIDS at Boston Children’s Hospital, said in a statement that “the relationship between the abnormalities and cause of death remains unknown.”

She said there is no way to identify babies with the brain abnormalities, and “thus, adherence to safe-sleep practices remains critical.”

The American Academy of Pediatrics recommends numerous steps for creating a safe sleeping environment for babies, including placing babies on their backs on a firm surface. Education campaigns targeting parents and caregivers in the 1990s are largely considered successful, but SIDS rates have remained steady since the practices became widely used.

A version of this article first appeared on WebMD.com.


Younger age of type 2 diabetes onset linked to dementia risk


People who develop type 2 diabetes before age 60 years are at threefold greater risk for dementia compared with those who don’t develop diabetes, new findings suggest.

Moreover, the new data from the prospective Atherosclerosis Risk in Communities (ARIC) cohort also suggest that the previously identified increased risk for dementia among people with prediabetes appears to be entirely explained by the subset who go on to develop type 2 diabetes.

“Our findings suggest that preventing prediabetes progression, especially in younger individuals, may be an important way to reduce the dementia burden,” wrote PhD student Jiaqi Hu of Johns Hopkins University, Baltimore, and colleagues. Their article was published online in Diabetologia.

The result builds on previous findings linking dysglycemia and cognitive decline, the study’s senior author, Elizabeth Selvin, PhD, of the Bloomberg School of Public Health at Johns Hopkins, said in an interview.

“Our prior work in the ARIC study suggests that improving glucose control could help prevent dementia in later life,” she said.  


Other studies have also linked higher A1c levels and diabetes in midlife to increased rates of cognitive decline. In addition, Dr. Selvin noted, “There is growing evidence that focusing on vascular health, especially focusing on diabetes and blood pressure, in midlife can stave off dementia in later life.”

This new study is the first to examine the role of incident diabetes in the relationship between prediabetes and dementia, as well as the effect of age at diabetes onset on subsequent dementia risk.
 

Prediabetes linked to dementia via diabetes development

Of the 11,656 ARIC participants without diabetes at baseline during 1990-1992 (age 46-70 years), 20.0% had prediabetes (defined as A1c 5.7%-6.4% or 39-46 mmol/mol). During a median follow-up of 15.9 years, 3,143 participants developed diabetes. The proportions of patients who developed diabetes were 44.6% among those with prediabetes at baseline versus 22.5% of those without.
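
As an aside for readers toggling between units: the mmol/mol figures quoted for the prediabetes range follow from the standard NGSP-to-IFCC master equation for A1c. A minimal sketch (the function name is ours, for illustration):

```python
# Convert A1c from NGSP percent to IFCC mmol/mol using the standard
# master equation: IFCC = (NGSP - 2.152) / 0.09148, i.e. roughly
# (NGSP - 2.152) * 10.93. The function name is illustrative.
def a1c_percent_to_mmol_per_mol(ngsp_percent: float) -> float:
    return (ngsp_percent - 2.152) * 10.93

for pct in (5.7, 6.4):
    print(f"A1c {pct}% -> {a1c_percent_to_mmol_per_mol(pct):.0f} mmol/mol")
# Prints 39 and 46 mmol/mol, matching the prediabetes range quoted above.
```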

Dementia developed in 2,247 participants over a median follow-up of 24.7 years. The cumulative incidence of dementia was 23.9% among those who developed diabetes versus 20.5% among those who did not.

After adjustment for demographics and for the Alzheimer’s disease–linked apolipoprotein E (APOE) gene, prediabetes was significantly associated with incident dementia (hazard ratio [HR], 1.19). However, significance disappeared after adjustment for incident diabetes (HR, 1.09), the researchers reported.  
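
That attenuation, a prediabetes hazard ratio collapsing toward 1 once incident diabetes enters the model, is the signature of mediation. Below is a minimal simulated sketch of that logic using synthetic data and the Python lifelines package (our choice of tool; the cohort sizes, hazards, and effect sizes are invented for illustration, not the ARIC estimates, and diabetes is treated as a baseline covariate, a simplification of the study's time-to-event handling):

```python
# Minimal simulation of the mediation pattern described above: prediabetes
# raises dementia risk only by making incident diabetes more likely, so
# adjusting for diabetes pulls the prediabetes hazard ratio toward 1.
# All numbers are illustrative, not the ARIC estimates.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 20_000
prediabetes = rng.binomial(1, 0.20, n)            # ~20% prevalence at baseline
p_diabetes = np.where(prediabetes == 1, 0.45, 0.22)
diabetes = rng.binomial(1, p_diabetes)            # progression to diabetes

# Exponential time to dementia; the hazard depends on diabetes only
hazard = 0.01 * np.where(diabetes == 1, 1.6, 1.0)
t_dementia = rng.exponential(1.0 / hazard)
censor = rng.uniform(15.0, 30.0, n)               # administrative censoring
df = pd.DataFrame({
    "prediabetes": prediabetes,
    "diabetes": diabetes,
    "T": np.minimum(t_dementia, censor),
    "E": (t_dementia <= censor).astype(int),
})

# Model 1: prediabetes alone absorbs the diabetes effect (HR > 1)
m1 = CoxPHFitter().fit(df[["prediabetes", "T", "E"]], "T", "E")
# Model 2: adding the mediator shrinks the prediabetes HR toward 1
m2 = CoxPHFitter().fit(df, "T", "E")
print("unadjusted HR:", float(np.exp(m1.params_["prediabetes"])))
print("diabetes-adjusted HR:", float(np.exp(m2.params_["prediabetes"])))
```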
 

Younger age at diabetes diagnosis raises dementia risk  

Age at diabetes diagnosis made a difference in dementia risk. With adjustments for lifestyle, demographic, and clinical factors, those diagnosed with diabetes before age 60 years had a nearly threefold increased risk for dementia compared with those who never developed diabetes (HR, 2.92; P < .001).

The dementia risk was also significantly increased, although to a lesser degree, among those aged 60-69 years at diabetes diagnosis (HR, 1.73; P < .001) and those aged 70-79 years at diagnosis (HR, 1.23; P < .001). The relationship was not significant for those aged 80 years and older (HR, 1.13).

“Prevention efforts in people with diabetes diagnosed younger than 65 years should be a high priority,” the authors urged.

Taken together, the data suggest that prolonged exposure to hyperglycemia plays a major role in dementia development.

“Putative mechanisms include acute and chronic hyperglycemia, glucose toxicity, insulin resistance, and microvascular dysfunction of the central nervous system. ... Glucose toxicity and microvascular dysfunction are associated with increased inflammatory and oxidative stress, leading to increased blood–brain permeability,” the researchers wrote.

Dr. Selvin said that her group is pursuing further work in this area using continuous glucose monitoring. “We plan to look at ... how glycemic control and different patterns of glucose in older adults may be linked to cognitive decline and other neurocognitive outcomes.”

The researchers reported no relevant financial relationships. Dr. Selvin has reported being on the advisory board for Diabetologia; she had no role in peer review of the manuscript.

A version of this article first appeared on Medscape.com.


People still want their medical intelligence in human form

Article Type
Changed
Thu, 05/25/2023 - 09:15

 

Doctors or AI? Lukewarm vote of confidence goes to …

Well, we’ve got some good news for the physicians out there, and we’ve got some bad news. Which do you want first? Okay, we’re mostly hearing good news, so here goes: Most people would choose a human doctor over artificial intelligence for the diagnosis and treatment of their medical conditions.


And the bad news? In the survey we’re talking about, “most” was 53%, so not exactly a huge victory for the carbon-based life forms. Yup, about 47% of the 2,472 respondents said they would prefer an AI-based clinic over a human specialist, and that number went up if individuals were told that their primary care physicians were on board with AI, “or otherwise nudged to consider AI as good,” the research team said in a written statement released by the University of Arizona, Tucson.

They went on to add that “this signaled the significance of the human physician in guiding a patient’s decision.” So patients will still need their doctors in the future to … um … this is a bit awkward … tell them how good the AI is?

And yes, we know that ChatGPT is already doing the same thing to journalists, but could it write a medical-humor column? Not a chance. Probably can’t even tell a joke.

How do ghosts get rid of wrinkles? Boo-tox. There, let’s see ChatGPT do that.
 

Explaining the joke makes it funnier, right?

Here at LOTME headquarters, we live by one simple rule, passed down directly from the Buddha himself: “Never let a good presurgical assessment of refractory epilepsy go to waste. Also, don’t believe everything you read on the Internet.”


This human-created joke has been brought to you by the leading theory of humor, which states that comedy stems from our brain reacting to an incongruous part of reality in a positive way. These positive emotions light up our neurons in a specific fashion, and boom, comedy is achieved.

Previous studies into the science of comedy have typically used functional MRI to analyze the brain while it was gripped in the throes of a comedic reaction. Unfortunately, fMRI is too slow to capture the brain’s fast electrical dynamics during these moments, so observing scientists have been, quite literally, missing out on some of the joke. And that’s where a new study from France comes in.

In the study, the researchers showed a group of patients with epilepsy who were hooked up to deep brain electrodes and a high-tech neuroimaging machine – part of the aforementioned presurgical assessment – a 3-minute excerpt from a Charlie Chaplin movie and analyzed their brain activity. Why Charlie Chaplin? Simple. Slapstick is perhaps the most accessible form of comedy across cultures. We can all appreciate a man getting hit in the head with a coconut. The world’s oldest bar joke or whatever this is? Not so much.

During the funniest scenes, all study participants showed increased high-frequency gamma waves (indicating high cognitive engagement) and a decrease in low-frequency waves (indicating reduced inattention and introspection). During unfunny scenes, such as transition moments, the opposite occurred. Importantly, this inverse relationship occurred in the temporal lobe but not in other regions, supporting previous research that indicated humor was mainly processed in the temporal lobe.
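
For the curious, “more gamma, less low-frequency” is typically quantified as band-limited spectral power. Here is a minimal sketch of such a comparison on synthetic signals; the sampling rate, band edges, and sinusoid-plus-noise epochs are our illustrative assumptions, not the study’s recording parameters:

```python
# Minimal sketch of a band-power contrast like the one described above:
# integrate the Welch spectral density over a low-frequency band and a
# gamma band for a "funny" and an "unfunny" epoch. Signals are synthetic.
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

fs = 1000                      # sampling rate in Hz (assumed)
t = np.arange(0, 3.0, 1 / fs)  # one 3-second epoch
rng = np.random.default_rng(1)

def band_power(signal, lo, hi):
    """Integrate the Welch power spectral density between lo and hi Hz."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return trapezoid(psd[mask], freqs[mask])

# "Funny" epoch: strong 80 Hz (gamma) component, weak 6 Hz (theta) component
funny = (0.5 * np.sin(2 * np.pi * 6 * t)
         + 2.0 * np.sin(2 * np.pi * 80 * t)
         + rng.normal(0, 1, t.size))
# "Unfunny" epoch: the reverse pattern
unfunny = (2.0 * np.sin(2 * np.pi * 6 * t)
           + 0.5 * np.sin(2 * np.pi * 80 * t)
           + rng.normal(0, 1, t.size))

for name, sig in (("funny", funny), ("unfunny", unfunny)):
    print(f"{name:8s} low 4-8 Hz: {band_power(sig, 4, 8):7.2f}"
          f"  gamma 60-140 Hz: {band_power(sig, 60, 140):7.2f}")
```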

The investigators suggested future research should focus on longer videos with more complex forms of comedy, such as jokes, irony, sarcasm, or reference humor. So, uh, a guy getting hit in the head with two coconuts? That’s high-brow stuff right there.
 

 

 

Hot take: Humans aren’t that special

We humans have always prided ourselves on being different from “the animals” in an exceptional way. News flash! We aren’t. We may be the apex predator, but new research shows that humans, as part of the animal kingdom, just aren’t special.


Not special? How can they say that? Are gorillas doing open-heart surgery? Do wolverines tell jokes? At a more basic level, though, the way we operate as mammals in societies is not unique or even new. Elephants are known to mourn their deceased and to have funeral-like practices, ants invented agriculture, and we’re certainly not the only species that has figured out how to use tools.

This new research just demonstrates another way we aren’t exceptional, and that’s in our mating practices and outcomes.

“Humans appear to resemble mammals that live in monogamous partnerships and to some extent, those classified as cooperative breeders, where breeding individuals have to rely on the help of others to raise their offspring,” Monique Borgerhoff Mulder, PhD, professor emerita of anthropology at the University of California, Davis, said in a written statement.

The research team, which consisted of over 100 investigators, looked at 90 human populations based on data from over 80,000 people globally and compared the human data with 49 different nonhuman mammal species. In polygynous societies in which men take several wives, they found, women have more access to resources like food, shelter, and parenting help. Monogamy, on the other hand, “can drive significant inequalities among women,” Dr. Borgerhoff Mulder said, by promoting large differences in the number of children couples produce.

Human day-to-day behavior and child-rearing habits – one parent taking a daughter to ballet class and fixing dinner so the other parent can get to exercise class before picking up the son from soccer practice – may have us thinking that we are part of an evolved society, but really we are not much different than other mammals that hunt, forage for food, and rear and teach their children, the researchers suggested.

So, yes, humans can travel to the moon, create a vaccine for smallpox, and hit other humans with coconuts, but when it comes to simply having offspring or raising them, we’re not all that special. Get over it.


Which interventions could lessen the burden of dementia?

Article Type
Changed
Tue, 05/30/2023 - 11:24

Using a microsimulation algorithm that accounts for the effect on mortality, a team of French researchers has shown that interventions targeting the three main vascular risk factors for dementia – hypertension, diabetes, and physical inactivity – could significantly reduce the burden of dementia by 2040. The findings were published in the European Journal of Epidemiology.

Among the three modifiable risk factors, prevention of hypertension would be the most effective, with by far the biggest impact on dementia burden.

Although these modeling results may appear overly optimistic, since they assume the total disappearance of the risk factors, the authors say they nonetheless show that interventions targeting these factors could meaningfully reduce the future burden of dementia.
 

Increasing prevalence

According to the World Alzheimer Report 2018, 50 million people around the world were living with dementia, a population roughly the size of South Korea or Spain. As populations age, that number is expected to rise to about 152 million by 2050, similar to the population of Russia or Bangladesh.

Among modifiable risk factors, many studies support a deleterious effect of hypertension, diabetes, and physical inactivity on the risk of dementia. However, these same risk factors also raise mortality, so reducing their prevalence should lengthen life expectancy and could thereby increase the number of people who live long enough to develop dementia.

The team, headed by Hélène Jacqmin-Gadda, PhD, research director at the University of Bordeaux (France), has developed a microsimulation model capable of predicting the burden of dementia while accounting for the impact on mortality. The team used this approach to assess the impact of interventions targeting these three main risk factors on the burden of dementia in France by 2040.
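
To make the mortality bookkeeping concrete, here is a minimal sketch of the competing-risk logic such a microsimulation rests on: each simulated person faces annual chances of dementia and of death, and removing a risk factor lowers both, so longer survival claws back part of the reduction in lifetime dementia cases. Every hazard and effect size below is an invented illustration, not an estimate from the study:

```python
# Minimal microsimulation sketch: yearly competing risks of dementia and
# death from age 65 onward, with a risk factor (a stand-in for hypertension)
# raising both hazards. All parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)

def lifetime_dementia_risk(n, rf_prevalence, horizon=40):
    """Share of simulated people who develop dementia before dying."""
    cases = 0
    for _ in range(n):
        has_rf = rng.random() < rf_prevalence
        for year in range(horizon):
            # Hazards grow with age; the risk factor multiplies both
            q_dementia = 0.002 * 1.12 ** year * (1.6 if has_rf else 1.0)
            q_death = 0.010 * 1.09 ** year * (1.3 if has_rf else 1.0)
            if rng.random() < q_dementia:
                cases += 1
                break
            if rng.random() < q_death:
                break
    return cases / n

baseline = lifetime_dementia_risk(50_000, rf_prevalence=0.6)
intervention = lifetime_dementia_risk(50_000, rf_prevalence=0.0)
print(f"lifetime dementia risk: {baseline:.1%} -> {intervention:.1%}")
# The drop is smaller than the 1.6x hazard reduction alone would suggest,
# because people who avoid the risk factor also live longer at risk.
```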
 

Removing risk factors

The researchers estimated the incidence of dementia for men and women using data from the 2020 PAQUID cohort, combined with projections from the French National Institute of Statistics and Economic Studies to account for mortality with and without dementia.

Without intervention, the prevalence rate of dementia in 2040 would be 9.6% among men and 14% among women older than 65 years.

These figures would decrease to 6.4% (−33%) and 10.4% (−26%), respectively, under the intervention scenario whereby the three modifiable vascular risk factors (hypertension, diabetes, and physical inactivity) would be removed simultaneously beginning in 2020. The prevalence rates are significantly reduced for men and women from age 75 years. In this scenario, life expectancy without dementia would increase by 3.4 years in men and 2.6 years in women, the result of men being more exposed to these three risk factors.
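
The percentage reductions in parentheses are relative changes from the no-intervention prevalence estimates; a quick check:

```python
# Relative reduction = (baseline - intervention) / baseline
for label, before, after in (("men", 9.6, 6.4), ("women", 14.0, 10.4)):
    print(f"{label}: {(before - after) / before:.0%} lower")
# men: 33% lower, women: 26% lower, matching the figures quoted above
```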

Other scenarios estimated dementia prevalence with the disappearance of just one of these risk factors. For example, eliminating hypertension alone from 2020 onward could decrease dementia prevalence by 2040 by 21% in men and 16% in women, the smaller effect in women reflecting the lower prevalence of hypertension among them. This reduction would be accompanied by a decrease in the lifetime probability of dementia and a gain in life expectancy without dementia of 2 years in men and 1.4 years in women.

Among the three factors, hypertension has the largest impact on dementia burden in the French population, since this is, by far, the most prevalent (69% in men and 49% in women), while intervention targeting only diabetes or physical inactivity would lead to a reduction in dementia prevalence of only 4%-7%.

The authors reported no conflicts of interest.

This article was translated from Univadis France. A version appeared on Medscape.com.
