Self-Rated Health Predicts Hospitalization and Death

Adults who self-rated their health as poor in middle age were at least three times more likely to die or be hospitalized when older than those who self-rated their health as excellent, based on data from nearly 15,000 individuals.

Previous research has shown that self-rated health is an independent predictor of hospitalization or death, but the effects of individual subject-specific risks on these outcomes have not been examined, wrote Scott Z. Mu, MD, of the Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland, and colleagues.

In a study published in the Journal of General Internal Medicine, the researchers reviewed data from 14,937 members of the Atherosclerosis Risk in Communities (ARIC) cohort, a community-based prospective study of middle-aged men and women enrolled from 1987 to 1989. The primary analysis examined the association between baseline self-rated health and subsequent recurrent hospitalizations and death over a median follow-up of 27.7 years.

At baseline, 34% of the participants rated their health as excellent, 47% good, 16% fair, and 3% poor. After the median follow-up, 39%, 51%, 67%, and 83% of individuals who rated their health as excellent, good, fair, and poor, respectively, had died.

The researchers used a recurrent events survival model that adjusted for clinical and demographic factors and also allowed for dependency between the rates of hospitalization and hazards of death.

After controlling for demographics and medical history, a lower self-rating of health was associated with higher rates of hospitalization and death. Compared with individuals who reported excellent health at baseline, hospitalization rates were 1.22, 2.01, and 3.13 times as high for those who reported good, fair, or poor health, respectively, and the hazards of death were 1.30, 2.15, and 3.40 times as high.
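This kind of analysis can be approximated with standard survival-modeling tools. The sketch below is not the authors' code; it uses the Python lifelines package on simulated data to fit an Andersen-Gill-style model in counting-process format for recurrent hospitalizations and a separate Cox model for death. The published model additionally allows dependence between the hospitalization rate and the hazard of death (a joint, shared-frailty structure), which this simplified sketch does not capture, and all data and effect sizes below are invented.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, CoxTimeVaryingFitter

rng = np.random.default_rng(0)
n = 2000
poor = rng.integers(0, 2, n)                # 1 = fair/poor self-rated health (toy covariate)
age = rng.normal(54, 6, n)

# Terminal event (death): exponential times with a higher hazard for poor self-rated health.
death_hazard = 0.02 * np.exp(1.2 * poor + 0.03 * (age - 54))
t_death = rng.exponential(1.0 / death_hazard)
censor_time = rng.uniform(20, 30, n)
time = np.minimum(t_death, censor_time)
died = (t_death <= censor_time).astype(int)

deaths = pd.DataFrame({"time": time, "died": died, "poor": poor, "age": age})
cph = CoxPHFitter()
cph.fit(deaths, duration_col="time", event_col="died")
print(cph.hazard_ratios_)                   # should recover roughly exp(1.2) ≈ 3.3 for `poor`

# Recurrent hospitalizations: a Poisson process whose rate depends on covariates,
# laid out in (start, stop) counting-process rows for an Andersen-Gill model.
hosp_rate = 0.08 * np.exp(1.1 * poor + 0.02 * (age - 54))
rows = []
for i in range(n):
    prev = 0.0
    while True:
        t = prev + rng.exponential(1.0 / hosp_rate[i])
        if t >= time[i]:
            rows.append((i, prev, time[i], 0, poor[i], age[i]))   # censored final interval
            break
        rows.append((i, prev, t, 1, poor[i], age[i]))             # interval ending in a hospitalization
        prev = t
recurrent = pd.DataFrame(rows, columns=["id", "start", "stop", "hosp", "poor", "age"])

ag = CoxTimeVaryingFitter()
ag.fit(recurrent, id_col="id", event_col="hosp", start_col="start", stop_col="stop")
print(np.exp(ag.params_))                   # rate ratios; roughly exp(1.1) ≈ 3.0 for `poor`
```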

Overall, individuals who reported poor health at baseline were significantly more likely than those who reported excellent health to be older (57.0 years vs 53.0 years), obese (44% vs 18%), and current smokers (39% vs 21%). Those who reported poor health at baseline also were significantly more likely than those who reported excellent health to have a history of cancer (9.5% vs 4.4%), emphysema/COPD (18% vs 2.3%), coronary heart disease (21% vs 1.6%), myocardial infarction (19% vs 1.3%), heart failure (25% vs 1.2%), hypertension (67% vs 19%), or diabetes (39% vs 4.6%).

Potential explanations for the independent association between poor self-rated health and poor outcomes include the ability of self-rated health to capture health information not accounted for by traditional risk factors, the researchers wrote in their discussion. “Another explanation is that self-rated health reflects subconscious bodily sensations that provide a direct sense of health unavailable to external observation,” they said. Alternatively, self-rated health may reinforce beneficial behaviors in those with higher self-rated health and harmful behaviors in those with lower self-rated health, they said.

The findings were limited by several factors, including the measurement of self-rated health and the validity of hospitalization as a proxy for morbidity, the researchers noted. Other limitations included reliance on modeling rather than repeated measures of self-rated health and a lack of data on interventions to directly or indirectly improve self-rated health.

However, the study shows the potential value of self-rated health in routine clinical care to predict future hospitalizations, they said. “Clinicians can use this simple and convenient measure for individual patients to provide more accurate and personalized risk assessments,” they said.

Looking ahead, the current study findings also support the need for more research into the routine assessment not only of self-rated health but also targeted interventions to improve self-rated health and its determinants, the researchers concluded. The ARIC study has been supported by the National Heart, Lung, and Blood Institute, National Institutes of Health. Dr. Mu disclosed support from the National Heart, Lung, and Blood Institute.

Too Much Coffee Linked to Accelerated Cognitive Decline

PHILADELPHIA – Drinking more than three cups of coffee a day is linked to more rapid cognitive decline over time, results from a large study suggest.

Investigators examined the impact of different amounts of coffee and tea on fluid intelligence — a measure of cognitive functions including abstract reasoning, pattern recognition, and logical thinking.

“It’s the old adage that too much of anything isn’t good. It’s all about balance, so moderate coffee consumption is okay but too much is probably not recommended,” said study investigator Kelsey R. Sewell, PhD, Advent Health Research Institute, Orlando, Florida. 

The findings of the study were presented at the 2024 Alzheimer’s Association International Conference (AAIC).
 

One of the World’s Most Widely Consumed Beverages

Coffee is one of the most widely consumed beverages around the world. The beans contain a range of bioactive compounds, including caffeine, chlorogenic acid, and small amounts of vitamins and minerals.

Consistent evidence from observational and epidemiologic studies indicates that coffee and tea intake is associated with a reduced risk for stroke, heart failure, cancer, diabetes, and Parkinson’s disease.

Several studies also suggest that coffee may reduce the risk for Alzheimer’s disease, said Dr. Sewell. However, there are limited longitudinal data on associations between coffee and tea intake and cognitive decline, particularly in distinct cognitive domains.

Dr. Sewell’s group previously published a study of cognitively unimpaired older adults that found greater coffee consumption was associated with slower cognitive decline and slower accumulation of brain beta-amyloid.

Their current study extends some of the prior findings and investigates the relationship between both coffee and tea intake and cognitive decline over time in a larger sample of older adults.

This new study included 8451 mostly female (60%) and White (97%) cognitively unimpaired adults older than 60 (mean age, 67.8 years) in the UK Biobank, a large-scale research resource containing in-depth, deidentified genetic and health information from half a million UK participants. Study subjects had a mean body mass index (BMI) of 26, and about 26% were apolipoprotein epsilon 4 (APOE e4) gene carriers.

Researchers divided coffee and tea consumption into three categories: high, moderate, and none.

For daily coffee consumption, 18% reported drinking four or more cups (high consumption), 58% reported drinking one to three cups (moderate consumption), and 25% reported that they never drink coffee. For daily tea consumption, 47% reported drinking four or more cups (high consumption), 38% reported drinking one to three cups (moderate consumption), and 15% reported that they never drink tea.

The study assessed cognitive function at baseline and at least two follow-up visits.

Researchers used linear mixed models to assess the relationships between coffee and tea intake and cognitive outcomes. The models adjusted for age, sex, Townsend deprivation index (reflecting socioeconomic status), ethnicity, APOE e4 status, and BMI.
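As a rough illustration of that modeling approach, the sketch below (not the study's code) fits a linear mixed model with statsmodels on simulated long-format data, using random intercepts and slopes per participant and coffee category interacted with time. Only age is included as a covariate here, whereas the study also adjusted for sex, Townsend deprivation index, ethnicity, APOE e4 status, and BMI; all variable names and numbers are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n, visits = 400, 3
ids = np.repeat(np.arange(n), visits)
years = np.tile([0.0, 4.0, 8.0], n)                 # time since baseline, per visit

coffee = rng.choice(["none", "moderate", "high"], n, p=[0.25, 0.57, 0.18])
age = rng.normal(68, 5, n)
decline = {"none": -0.20, "moderate": -0.18, "high": -0.30}   # toy per-year slopes
subj_intercept = rng.normal(0, 1.0, n)
subj_slope = rng.normal(0, 0.05, n)

fluid_intelligence = (20.0
                      + subj_intercept[ids]
                      + (np.vectorize(decline.get)(coffee)[ids] + subj_slope[ids]) * years
                      + rng.normal(0, 0.5, n * visits))

df = pd.DataFrame({"id": ids, "fluid_intelligence": fluid_intelligence, "years": years,
                   "coffee_cat": coffee[ids], "age": age[ids]})

# Random intercept and slope per participant; high consumption is the reference category,
# so the coffee-by-time interaction terms play the role of the betas reported below.
model = smf.mixedlm(
    "fluid_intelligence ~ C(coffee_cat, Treatment('high')) * years + age",
    data=df, groups=df["id"], re_formula="~years",
)
print(model.fit().summary())
```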
 

Steeper Decline 

Compared with high coffee consumption (four or more cups daily), people who never consumed coffee (beta, 0.06; standard error [SE], 0.02; P = .005) and those with moderate consumption (beta, 0.07; SE, 0.02; P < .001) had slower decline in fluid intelligence over an average of 8.83 years of follow-up.

“We can see that those with high coffee consumption showed the steepest decline in fluid intelligence across the follow up, compared to those with moderate coffee consumption and those never consuming coffee,” said Dr. Sewell, referring to illustrative graphs.

At the same time, “our data suggest that across this time period, moderate coffee consumption can serve as some kind of protective factor against cognitive decline,” she added.

For tea, there was a somewhat different pattern. People who never drank tea had a greater decline in fluid intelligence, compared with those who had moderate consumption (beta, 0.06; SE, 0.02; P = .009) or high consumption (beta, 0.06; SE, 0.02; P = .003).

Because this is an observational study, “we still need randomized controlled trials to better understand the neuroprotective mechanism of coffee and tea compounds,” said Dr. Sewell.

Responding later to a query from a meeting delegate about how moderate coffee drinking could be protective, Dr. Sewell said there are probably “different levels of mechanisms,” including at the molecular level (possibly involving amyloid toxicity) and the behavioral level (possibly involving sleep patterns).

Dr. Sewell said that she hopes this line of investigation will lead to new avenues of research in preventive strategies for Alzheimer’s disease. 

“We hope that coffee and tea intake could contribute to the development of a safe and inexpensive strategy for delaying the onset and reducing the incidence for Alzheimer’s disease.”

A limitation of the study is possible recall bias, because coffee and tea consumption was self-reported. However, this may not be much of an issue because coffee and tea consumption “is usually quite a habitual behavior,” said Dr. Sewell.

The study also had no data on midlife coffee or tea consumption and did not compare the effect of different preparation methods or types of coffee and tea — for example, green tea versus black tea. 

When asked if the study controlled for smoking, Dr. Sewell said it didn’t but added that it would be interesting to explore its impact on cognition.

Dr. Sewell reported no relevant conflicts of interest.

A version of this article first appeared on Medscape.com.

Statins: So Misunderstood

Recently, a patient of mine was hospitalized with chest pain. She was diagnosed with an acute coronary syndrome and started on a statin in addition to a beta-blocker, aspirin, and clopidogrel. After discharge, she had symptoms of dizziness and recurrent chest pain and her first thought was to stop the statin because she believed that her symptoms were statin-related side effects. I will cover a few areas where I think that there are some misunderstandings about statins.

Statins Are Not Bad For the Liver

When lovastatin first became available for prescription in the 1980s, frequent monitoring of transaminases was recommended. Patients and healthcare professionals became accustomed to frequent liver tests to monitor for statin toxicity, and to this day, some healthcare professionals still obtain liver function tests for this purpose.

But is there a reason to do this? Pfeffer and colleagues reported safety results from more than 112,000 person-years of follow-up in long-term pravastatin trials, including the West of Scotland Coronary Prevention Study, and found that the percentage of patients with a clinically important transaminase elevation (ALT > 3 times the upper limit of normal) was the same for patients taking pravastatin (1.4%) and for patients taking placebo (1.4%).1 A panel of liver experts concurred that statin-associated transaminase elevations were not indicative of liver damage or dysfunction.2 Furthermore, they noted that chronic liver disease and compensated cirrhosis were not contraindications to statin use.

In a small study, use of low-dose atorvastatin in patients with nonalcoholic steatohepatitis improved transaminase values in 75% of patients, and liver steatosis and nonalcoholic fatty liver disease activity scores were significantly improved on biopsy in most patients.3 The US Food and Drug Administration (FDA) removed the recommendation for routine periodic monitoring of liver function in patients taking statins in 2012.4

Statins Do Not Cause Muscle Pain in Most Patients

Although patient concerns about muscle pain are common, most muscle pain in patients on statins is not due to the statin. In a meta-analysis of 19 large statin trials, 27.1% of participants treated with a statin reported at least one episode of muscle pain or weakness over a median of 4.3 years, compared with 26.6% of participants treated with placebo.5 Muscle pain from any cause is common, and patients on statins may stop therapy because of such symptoms.

Cohen and colleagues performed a survey of past and current statin users, asking about muscle symptoms.6 Muscle-related side effects were reported by 60% of former statin users and 25% of current users.

Herrett and colleagues performed an extensive series of n-of-1 trials involving 200 patients who had stopped or were considering stopping statins because of muscle symptoms.7 Participants received either 2-month blocks of atorvastatin 20 mg or 2-month blocks of placebo, six times. They rated their muscle symptoms on a visual analogue scale at the end of each block. There was no difference in muscle symptom scores between the statin and placebo periods.

Wood and colleagues took it a step further with an n-of-1 trial that included statin, placebo, and no treatment.8 Each participant received four bottles of atorvastatin 20 mg, four bottles of placebo, and four empty bottles. Each month they used treatment from the bottles in a random sequence and reported daily symptom scores. The mean symptom intensity score was 8.0 during no-tablet months, 15.4 during placebo months (P < .001, compared with no-tablet months), and 16.3 during statin months (P < .001, compared with no-tablet months; P = .39, compared with placebo).
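The logic of that three-arm, within-person comparison can be illustrated in a few lines of code. The sketch below is not the trial's analysis: it simulates per-participant mean symptom scores under the three conditions, with a "pill effect" but no additional statin effect to echo the pattern above, and applies paired tests to the within-person contrasts. All numbers are made up.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 60                                      # illustrative number of participants

person = rng.normal(0, 3, n)                # stable between-person differences in symptom reporting
no_tablet = np.clip(8 + person + rng.normal(0, 4, n), 0, 100)
placebo   = np.clip(15 + person + rng.normal(0, 5, n), 0, 100)
statin    = np.clip(16 + person + rng.normal(0, 5, n), 0, 100)

print("mean symptom scores (no tablet, placebo, statin):",
      round(no_tablet.mean(), 1), round(placebo.mean(), 1), round(statin.mean(), 1))
print("placebo vs no tablet:", stats.ttest_rel(placebo, no_tablet))   # the "pill effect"
print("statin vs placebo:  ", stats.ttest_rel(statin, placebo))       # little added statin effect
```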

Statins Are Likely Helpful In the Very Elderly

Should we be using statins for primary prevention in our very old patients? For many years the answer was generally “no” on the basis of a lack of evidence. Patients in their 80s often were not included in clinical trials. The much-used American Heart Association risk calculator stops at age 79. Given the prevalence of coronary artery disease in patients as they reach their 80s, wouldn’t primary prevention really be secondary prevention? In a recent study, Xu and colleagues compared outcomes for patients treated with statins for primary prevention with outcomes for a group who were not.9 Among patients aged 75-84, statin treatment was associated with an absolute risk reduction for major cardiovascular events of 1.2% over 5 years; for those 85 and older, the risk reduction was 4.4%. Importantly, there were no significantly increased risks for myopathy or liver dysfunction in either age group.
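Those absolute risk reductions translate into rough numbers needed to treat; the back-of-the-envelope calculation below uses only the figures quoted above and is not taken from the paper.

```python
# NNT ≈ 1 / absolute risk reduction (ARR), using the 5-year ARRs quoted above.
for label, arr in [("age 75-84", 0.012), ("age 85 and older", 0.044)]:
    print(f"{label}: NNT over 5 years ≈ {1 / arr:.0f}")
# age 75-84: about 83 patients treated for 5 years to prevent one major cardiovascular event
# age 85 and older: about 23 patients
```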

Dr. Paauw is professor of medicine in the division of general internal medicine at the University of Washington, Seattle, and he serves as third-year medical student clerkship director at the University of Washington. He is a member of the editorial advisory board of Internal Medicine News. Dr. Paauw has no conflicts to disclose. Contact him at [email protected].

References

1. Pfeffer MA et al. Circulation. 2002;105(20):2341-6.

2. Cohen DE et al. Am J Cardiol. 2006;97(8A):77C-81C.

3. Hyogo H et al. Metabolism. 2008;57(12):1711-8.

4. FDA Drug Safety Communication: Important safety label changes to cholesterol-lowering statin drugs. 2012 Feb 28.

5. Cholesterol Treatment Trialists’ Collaboration. Lancet. 2022;400(10355):832-45.

6. Cohen JD et al. J Clin Lipidol. 2012;6(3):208-15.

7. Herrett E et al. BMJ. 2021 Feb 24;372:n1355.

8. Wood FA et al. N Engl J Med. 2020;383(22):2182-4.

9. Xu W et al. Ann Intern Med. 2024;177(6):701-10.

Alzheimer’s Blood Test in Primary Care Could Slash Diagnostic, Treatment Wait Times

As disease-modifying treatments for Alzheimer’s disease (AD) become available, equipping primary care physicians with a highly accurate blood test could significantly reduce diagnostic wait times. Currently, the patient diagnostic journey is often prolonged owing to the limited number of AD specialists, causing concern among healthcare providers and patients alike. Now, a new study suggests that use of high-performing blood tests in primary care could identify potential patients with AD much earlier, possibly reducing wait times for specialist care and receipt of treatment.

“We need to triage in primary care and send preferentially the ones that actually could be eligible for treatment, and not those who are just worried because their grandmother reported that she has Alzheimer’s,” lead researcher Soeren Mattke, MD, DSc, told this news organization.

“By combining a brief cognitive test with an accurate blood test of Alzheimer’s pathology in primary care, we can reduce unnecessary referrals, and shorten appointment wait times,” said Dr. Mattke, director of the Brain Health Observatory at the University of Southern California in Los Angeles.

The findings were presented at the Alzheimer’s Association International Conference (AAIC) 2024.
 

Projected Wait Times of 100 Months by 2033

The investigators used a Markov model to estimate wait times for patients eligible for AD treatment, taking into account constrained capacity for specialist visits.

The model included the projected US population of people aged 55 years or older from 2023 to 2032. It assumed that individuals would undergo a brief cognitive assessment in primary care and, if results suggested early-stage cognitive impairment, be referred to an AD specialist under three scenarios: no blood test, a blood test to rule out AD pathology, and a blood test to confirm AD pathology.

According to the model, without an accurate blood test for AD pathology, projected wait times to see a specialist are about 12 months in 2024 and will increase to more than 100 months in 2033, largely owing to a lack of specialist appointments.

In contrast, with the availability of an accurate blood test to rule out AD, average wait times would be just 3 months in 2024 and increase to only about 13 months in 2033, because far fewer patients would need to see a specialist.

Availability of a blood test to rule in AD pathology in primary care would have a limited effect on wait times, the model suggests, because the model assumes, based on expert input, that 50% of these patients would still undergo confirmatory testing with a specialist.
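The intuition behind those projections can be captured with a very simple backlog calculation. The sketch below is not the authors' Markov model; the capacity and referral numbers are invented, and only the qualitative mechanism mirrors the study: when referrals exceed specialist capacity, the queue, and hence the wait, grows each year.

```python
# Toy projection of specialist wait times under a capacity constraint.
def project_waits(annual_referrals, annual_capacity, years=10):
    backlog, waits = 0.0, []
    for _ in range(years):
        backlog = max(0.0, backlog + annual_referrals - annual_capacity)
        waits.append(12 * backlog / annual_capacity)   # months of queued work
    return waits

capacity = 1.0e6                                       # hypothetical specialist visits per year
no_test  = project_waits(annual_referrals=1.4e6, annual_capacity=capacity)
rule_out = project_waits(annual_referrals=0.9e6, annual_capacity=capacity)

print("no blood test, wait in months by year:", [round(w, 1) for w in no_test])
print("rule-out test, wait in months by year:", [round(w, 1) for w in rule_out])
# With referrals above capacity, the wait grows roughly linearly year after year;
# with triage that keeps referrals at or below capacity, the wait stays bounded.
```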
 

Prioritizing Resources 

“Millions of people have mild memory complaints, and if they all start coming to neurologists, it could completely flood the system and create long wait times for everybody,” Dr. Mattke told this news organization.

The problem, he said, is that brief cognitive tests performed in primary care are not particularly specific for mild cognitive impairment.

“They work pretty well for manifest advanced dementia but for mild cognitive impairment, which is a very subtle, symptomatic disease, they are only about 75% accurate. One quarter are false-positives. That’s a lot of people,” Dr. Mattke said.

He also noted that although earlier blood tests were about 75% accurate, they are now about 90% accurate, “so we are getting to a level where we can pretty much say with confidence that this is likely Alzheimer’s,” Dr. Mattke said.
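A back-of-the-envelope calculation (not from the study) shows why adding a more accurate blood test after the brief cognitive test shrinks the pool referred to specialists. The prevalence figure and the assumption that the two tests err independently are illustrative; the ~75% figure for the cognitive test and ~90% for the current blood test come from the quotes above and are treated here as both sensitivity and specificity.

```python
prevalence = 0.20                  # assumed share of complainers with true early AD pathology (illustrative)
sens_cog, spec_cog = 0.75, 0.75    # brief cognitive test, per the ~75% accuracy quoted above
sens_bld, spec_bld = 0.90, 0.90    # blood test, per the ~90% accuracy quoted above

# Referral pool after the cognitive test alone.
pos_cog = prevalence * sens_cog + (1 - prevalence) * (1 - spec_cog)
ppv_cog = prevalence * sens_cog / pos_cog
print(f"cognitive test alone: {pos_cog:.0%} of patients referred, PPV {ppv_cog:.0%}")

# Referral pool if a positive blood test is also required (tests assumed independent).
pos_both = (prevalence * sens_cog * sens_bld
            + (1 - prevalence) * (1 - spec_cog) * (1 - spec_bld))
ppv_both = prevalence * sens_cog * sens_bld / pos_both
print(f"cognitive test plus blood test: {pos_both:.0%} referred, PPV {ppv_both:.0%}")
```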

Commenting on this research for this news organization, Heather Snyder, PhD, vice president of medical and scientific relations at the Alzheimer’s Association, said it is clear that blood tests, “once confirmed, could have a significant impact on the wait times” for dementia assessment. 

“After an initial blood test, we might be able to rule out or rule in individuals who should go to a specialist for further follow-up and testing. This allows us to really ensure that we’re prioritizing resources accordingly,” said Dr. Snyder, who was not involved in the study. 

This project was supported by a research contract from C2N Diagnostics LLC to USC. Dr. Mattke serves on the board of directors of Senscio Systems Inc. and the scientific advisory board of ALZPath and Boston Millennia Partners and has received consulting fees from Biogen, C2N, Eisai, Eli Lilly, Novartis, and Roche/Genentech. Dr. Snyder has no relevant disclosures.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

As disease-modifying treatments for Alzheimer’s disease (AD) become available, equipping primary care physicians with a highly accurate blood test could significantly reduce diagnostic wait times. Currently, the patient diagnostic journey is often prolonged owing to the limited number of AD specialists, causing concern among healthcare providers and patients alike. Now, a new study suggests that use of high-performing blood tests in primary care could identify potential patients with AD much earlier, possibly reducing wait times for specialist care and receipt of treatment.

“We need to triage in primary care and send preferentially the ones that actually could be eligible for treatment, and not those who are just worried because their grandmother reported that she has Alzheimer’s,” lead researcher Soeren Mattke, MD, DSc, told this news organization.

“By combining a brief cognitive test with an accurate blood test of Alzheimer’s pathology in primary care, we can reduce unnecessary referrals, and shorten appointment wait times,” said Dr. Mattke, director of the Brain Health Observatory at the University of Southern California in Los Angeles.

The findings were presented at the Alzheimer’s Association International Conference (AAIC) 2024.
 

Projected Wait Times 100 Months by 2033

The investigators used a Markov model to estimate wait times for patients eligible for AD treatment, taking into account constrained capacity for specialist visits.

The model included the projected US population of people aged 55 years or older from 2023 to 2032. It assumed that individuals would undergo a brief cognitive assessment in primary care and, if suggestive of early-stage cognitive impairment, be referred to a AD specialist under three scenarios: no blood test, blood test to rule out AD pathology, and blood test to confirm AD pathology.

According to the model, without an accurate blood test for AD pathology, projected wait times to see a specialist are about 12 months in 2024 and will increase to more than 100 months in 2033, largely owing to a lack of specialist appointments.

In contrast, with the availability of an accurate blood test to rule out AD, average wait times would be just 3 months in 2024 and increase to only about 13 months in 2033, because far fewer patients would need to see a specialist.

Availability of a blood test to rule in AD pathology in primary care would have a limited effect on wait times because 50% of patients would still undergo confirmatory testing based on expert assumptions, the model suggests.
 

Prioritizing Resources 

“Millions of people have mild memory complaints, and if they all start coming to neurologists, it could completely flood the system and create long wait times for everybody,” Dr. Mattke told this news organization.

The problem, he said, is that brief cognitive tests performed in primary care are not particularly specific for mild cognitive impairment.

“They work pretty well for manifest advanced dementia but for mild cognitive impairment, which is a very subtle, symptomatic disease, they are only about 75% accurate. One quarter are false-positives. That’s a lot of people,” Dr. Mattke said.

He also noted that although earlier blood tests were about 75% accurate, they are now about 90% accurate, “so we are getting to a level where we can pretty much say with confidence that this is likely Alzheimer’s,” Dr. Mattke said.

Commenting on this research for this news organization, Heather Snyder, PhD, vice president of medical and scientific relations at the Alzheimer’s Association, said it is clear that blood tests, “once confirmed, could have a significant impact on the wait times” for dementia assessment. 

“After an initial blood test, we might be able to rule out or rule in individuals who should go to a specialist for further follow-up and testing. This allows us to really ensure that we’re prioritizing resources accordingly,” said Dr. Snyder, who was not involved in the study. 

This project was supported by a research contract from C2N Diagnostics LLC to USC. Dr. Mattke serves on the board of directors of Senscio Systems Inc. and the scientific advisory board of ALZPath and Boston Millennia Partners and has received consulting fees from Biogen, C2N, Eisai, Eli Lilly, Novartis, and Roche/Genentech. Dr. Snyder has no relevant disclosures.

A version of this article first appeared on Medscape.com.

New Models Predict Time From Mild Cognitive Impairment to Dementia

Article Type
Changed
Tue, 07/30/2024 - 10:23

Using a large, real-world population, researchers have developed models that predict cognitive decline in amyloid-positive patients with either mild cognitive impairment (MCI) or mild dementia.

The models may help clinicians better answer common questions from their patients about their rate of cognitive decline, noted the investigators, led by Pieter J. van der Veere, MD, Alzheimer Center and Department of Neurology, Amsterdam Neuroscience, VU University Medical Center, Amsterdam, the Netherlands.

The findings were published online in Neurology.
 

Easy-to-Use Prototype

On average, it takes 4 years for MCI to progress to dementia. While new disease-modifying drugs targeting amyloid may slow progression, whether this effect is clinically meaningful is debatable, the investigators noted.

Previously published models predicting cognitive decline are either limited to patients with MCI or have not been developed for easy clinical use, they added.

For the single-center study, researchers selected 961 amyloid-positive patients, mean age 65 years, who had at least two longitudinal Mini-Mental State Examinations (MMSEs). Of these, 310 had MCI, and 651 had mild dementia; 48% were women, and over 90% were White.

Researchers used linear mixed modeling to predict MMSE over time. They included age, sex, baseline MMSE, apolipoprotein E epsilon 4 status, cerebrospinal fluid (CSF) beta-amyloid (Aβ) 1-42 and plasma phosphorylated-tau markers, and MRI total brain and hippocampal volume measures in the various models, including the final biomarker prediction models.
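
To make the modeling approach concrete, here is a minimal sketch of a linear mixed model for repeated MMSE scores, written in Python with statsmodels against simulated data. The variable names, the simulated data set, and the reduced covariate list are assumptions for illustration; the sketch does not reproduce the authors' specification, which also included CSF, plasma, and MRI measures.

```python
# Minimal sketch of a linear mixed model for repeated MMSE scores.
# Column names, the formula, and the simulated data are illustrative assumptions,
# not the authors' model specification.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_patients, n_visits = 200, 4

# Simulate a toy longitudinal data set: one row per MMSE measurement.
patients = np.repeat(np.arange(n_patients), n_visits)
years = np.tile(np.arange(n_visits, dtype=float), n_patients)
baseline = rng.normal(26, 2, n_patients)[patients]
apoe4 = rng.integers(0, 2, n_patients)[patients]
slope = -1.0 - 0.5 * apoe4 + rng.normal(0, 0.3, n_patients)[patients]  # per-patient decline
mmse = baseline + slope * years + rng.normal(0, 1, n_patients * n_visits)

df = pd.DataFrame({"patient_id": patients, "years": years, "mmse": mmse,
                   "baseline_mmse": baseline, "apoe4": apoe4})

# Fixed effects: time, baseline covariates, and their interactions with time.
# Random effects: a patient-specific intercept and slope over time.
model = smf.mixedlm("mmse ~ years * (baseline_mmse + apoe4)", data=df,
                    groups=df["patient_id"], re_formula="~years")
result = model.fit()
print(result.summary())
print(result.fe_params)  # population-level (fixed-effect) coefficients
```

The patient-level random intercept and slope are what allow such a model to translate baseline characteristics into an individualized predicted MMSE trajectory.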

At follow-up, the investigators found that the yearly decline in MMSE scores increased over time in both patients with MCI and those with mild dementia. In the MCI group, the average MMSE declined from 26.4 (95% confidence interval [CI], 26.2-26.7) at baseline to 21.0 (95% CI, 20.2-21.7) after 5 years.

In mild dementia, the average MMSE declined from 22.4 (95% CI, 22.0-22.7) to 7.8 (95% CI, 6.8-8.9) at 5 years.

The predicted mean time to reach an MMSE of 20 (indicating mild dementia) for a hypothetical patient with MCI, a baseline MMSE of 28, and a CSF Aβ 1-42 level of 925 pg/mL was 6 years (95% CI, 5.4-6.7 years).

However, with a hypothetical drug treatment that reduces the rate of decline by 30%, the same patient would not reach that threshold for 8.6 years.

For a hypothetical patient with mild dementia, a baseline MMSE of 20, and a CSF Aβ 1-42 level of 625 pg/mL, the predicted mean time to reach an MMSE of 15 was 2.3 years (95% CI, 2.1-2.5), or 3.3 years if decline were slowed by 30% with drug treatment.
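
The relation between the treated and untreated estimates follows from simple arithmetic on the average rate of MMSE decline; the reconstruction below is illustrative and assumes a roughly constant decline rate, which is a simplification of the paper's model.

```latex
\[
  t = \frac{\mathrm{MMSE}_{\text{baseline}} - \mathrm{MMSE}_{\text{threshold}}}{r},
  \qquad
  t_{\text{treated}} = \frac{\mathrm{MMSE}_{\text{baseline}} - \mathrm{MMSE}_{\text{threshold}}}{0.7\,r}
  = \frac{t}{0.7}.
\]
\[
  \text{MCI example: } \frac{6\ \text{years}}{0.7} \approx 8.6\ \text{years};
  \qquad
  \text{mild dementia example: } \frac{2.3\ \text{years}}{0.7} \approx 3.3\ \text{years}.
\]
```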

External validation of the prediction models using data from the Alzheimer’s Disease Neuroimaging Initiative, a longitudinal cohort that includes cognitively unimpaired individuals as well as people with MCI or dementia, showed comparable performance between the model-building approaches.

Researchers have incorporated the models in an easy-to-use calculator as a prototype tool that physicians can use to discuss prognosis, the uncertainty surrounding the predictions, and the impact of intervention strategies with patients.

Future prediction models may be able to predict patient-reported outcomes such as quality of life and daily functioning, the researchers noted.

“Until then, there is an important role for clinicians in translating the observed and predicted cognitive functions,” they wrote.

Compared with other studies that predicted MMSE decline using different statistical techniques, the new models showed similar or better predictive performance while requiring similar or less information, the investigators noted.

The study used the MMSE as its measure of cognition, but MMSE scores can vary within individuals even in cognitively normal people, and those with cognitive decline may score lower when tested later in the day. Another limitation was that the models were built for use in memory clinics, so generalizability to the general population may be limited.

The study was supported by Eisai, ZonMW, and Health~Holland Top Sector Life Sciences & Health. See paper for financial disclosures.

A version of this article first appeared on Medscape.com.

Two Diets Linked to Improved Cognition, Slowed Brain Aging

Article Type
Changed
Wed, 07/31/2024 - 13:18

 

An intermittent fasting (IF) diet and a standard healthy living (HL) diet built around healthy foods both led to weight loss, reduced insulin resistance (IR), and slowed brain aging in older overweight adults with IR, new research showed. However, neither diet had an effect on Alzheimer’s disease (AD) biomarkers.

Although investigators found both diets were beneficial, some outcomes were more robust with the IF diet.

“The study provides a blueprint for assessing brain effects of dietary interventions and motivates further research on intermittent fasting and continuous diets for brain health optimization,” wrote the investigators, led by Dimitrios Kapogiannis, MD, chief, human neuroscience section, National Institute on Aging, and adjunct associate professor of neurology, the Johns Hopkins University School of Medicine.

The findings were published online in Cell Metabolism.
 

Cognitive Outcomes

The prevalence of IR (reduced cellular sensitivity to insulin, a hallmark of type 2 diabetes) increases with age and obesity, adding to the risk for accelerated brain aging as well as AD and related dementias (ADRD) in older adults with overweight.

Studies have reported that healthy diets promote overall health, but it is unclear whether, and to what extent, they improve brain health beyond general health enhancement.

Researchers used multiple brain and cognitive measures to assess dietary effects on brain health, including peripherally harvested neuron-derived extracellular vesicles (NDEVs) to probe neuronal insulin signaling; MRI to investigate the pace of brain aging; magnetic resonance spectroscopy (MRS) to measure brain glucose, metabolites, and neurotransmitters; and NDEVs and cerebrospinal fluid to derive biomarkers for AD/ADRD.

The study included 40 cognitively intact overweight participants with IR, mean age 63.2 years, 60% women, and 62.5% White. Their mean body weight was 97.1 kg and mean body mass index (BMI) was 34.4.

Participants were randomly assigned to 8 weeks of an IF diet or an HL diet that emphasized fruits, vegetables, whole grains, lean proteins, and low-fat dairy and limited added sugars, saturated fats, and sodium.

The IF diet involved following the HL diet for 5 days per week and restricting calories to a quarter of the recommended daily intake for 2 consecutive days.

Both diets reduced neuronal IR and had comparable effects in improving insulin signaling biomarkers in NDEVs, reducing brain glucose on MRS, and improving blood biomarkers of carbohydrate and lipid metabolism.

Using MRI, researchers also assessed brain age, an indication of whether the brain appears older or younger than an individual’s chronological age. There was a decrease of 2.63 years with the IF diet (P = .05) and 2.42 years with the HL diet (P < .001) in the anterior cingulate and ventromedial prefrontal cortex.

Both diets improved executive function and memory, with those following the IF diet benefiting more in strategic planning, switching between two cognitively demanding tasks, cued recall, and other areas.
 

Hypothesis-Generating Research

AD biomarkers, including amyloid beta 42 (Aβ42), Aβ40, and plasma phosphorylated-tau181, did not change with either diet, a finding the investigators speculated may be due to the short duration of the study. Light-chain neurofilaments increased across groups, with no differences between the diets.

In other findings, BMI decreased by 1.41 with the IF diet and by 0.80 with the HL diet, and a similar pattern was observed for weight. Waist circumference decreased in both groups with no significant differences between diets.

An exploratory analysis showed executive function improved with the IF diet but not with the HL diet in women, whereas it improved with both diets in men. BMI and apolipoprotein E and SLC16A7 genotypes also modulated diet effects.

Both diets were well tolerated. The most frequent adverse events were gastrointestinal and occurred only with the IF diet.

The authors noted that the findings are preliminary and hypothesis generating. Study limitations included the short duration and statistical power sufficient to detect only moderate to large changes and between-diet differences. The researchers also did not collect data on dietary intake, so lapses in adherence cannot be excluded; however, the large decreases in BMI, weight, and waist circumference with both diets indicated high adherence.

The study was supported by the National Institutes of Health’s National Institute on Aging. The authors reported no competing interests.
 

A version of this article first appeared on Medscape.com.

Study Links Newer Shingles Vaccine to Delayed Dementia Diagnosis

Article Type
Changed
Fri, 07/26/2024 - 12:24

 

Receipt of a newer recombinant version of a shingles vaccine is associated with a significant delay in dementia diagnosis in older adults, a new study suggests.

The study builds on previous observations of a reduction in dementia risk with the older live shingles vaccine and reports a delay in dementia diagnosis of 164 days with the newer recombinant version, compared with the live vaccine. 

“Given the prevalence of dementia, a delay of 164 days in diagnosis would not be a trivial effect at the public health level. It’s a big enough effect that if there is a causality it feels meaningful,” said senior author Paul Harrison, DM, FRCPsych, professor of psychiatry at the University of Oxford, Oxford, England. 

But Dr. Harrison stressed that the study had not proven that the shingles vaccine reduced dementia risk. 

“The design of the study allows us to do away with many of the confounding effects we usually see in observational studies, but this is still an observational study, and as such it cannot prove a definite causal effect,” he said. 

The study was published online on July 25 in Nature Medicine.
 

‘Natural Experiment’

Given the risk for deleterious consequences of shingles, vaccination is now recommended for older adults in many countries. The previously used live shingles vaccine (Zostavax) is being replaced in most countries with the new recombinant shingles vaccine (Shingrix), which is more effective at preventing shingles infection. 

The current study made use of a “natural experiment” in the United States, which switched over from use of the live vaccine to the recombinant vaccine in October 2017. 

Researchers used electronic health records to compare the incidence of a dementia diagnosis in individuals who received the live shingles vaccine prior to October 2017 with that in individuals who received the recombinant version after the United States made the switch. 

They also used propensity score matching to further control for confounding factors, comparing 103,837 individuals who received a first dose of the live shingles vaccine between October 2014 and September 2017 with the same number of matched people who received the recombinant vaccine between November 2017 and October 2020. 
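
For readers unfamiliar with the technique, the snippet below sketches generic propensity score matching: a logistic regression estimates each person's probability of receiving the recombinant vaccine from baseline covariates, and each recombinant recipient is then paired with the live-vaccine recipient who has the closest propensity score. It is a simplified, hypothetical illustration (simulated data, made-up covariates, matching with replacement, no caliper), not the study's actual pipeline.

```python
# Generic 1:1 nearest-neighbor propensity score matching (illustrative sketch;
# simulated data and covariates, not the study's actual cohort or code).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "age": rng.normal(73, 6, n),
    "n_comorbidities": rng.poisson(2, n),
    "prior_visits": rng.poisson(5, n),
})
# Simulated treatment assignment: older, sicker people slightly more likely to
# have received the recombinant vaccine (creates confounding to remove).
logit = -1.5 + 0.02 * (df["age"] - 73) + 0.1 * df["n_comorbidities"]
df["recombinant"] = rng.random(n) < 1 / (1 + np.exp(-logit))

covariates = ["age", "n_comorbidities", "prior_visits"]

# 1. Estimate the propensity score: P(recombinant vaccine | covariates).
ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["recombinant"])
df["propensity"] = ps_model.predict_proba(df[covariates])[:, 1]

# 2. Match each recombinant recipient to the nearest live-vaccine recipient.
treated = df[df["recombinant"]]
control = df[~df["recombinant"]]
nn = NearestNeighbors(n_neighbors=1).fit(control[["propensity"]])
_, idx = nn.kneighbors(treated[["propensity"]])
matched_control = control.iloc[idx.ravel()]

# 3. Covariates should now be balanced; outcomes (e.g., time to a dementia
#    diagnosis) would then be compared between the two matched groups.
print(treated[covariates].mean().round(2))
print(matched_control[covariates].mean().round(2))
```

Matching on this scale is what allows the live- and recombinant-vaccine groups to be compared as if assignment were unrelated to the measured confounders, although, as the authors stress, unmeasured confounding remains possible.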

Results showed that within the 6 years after vaccination, the recombinant vaccine was associated with a delay in the diagnosis of dementia, compared with the live vaccine. Specifically, receiving the recombinant vaccine was associated with a 17% increase in diagnosis-free time, translating to 164 additional days lived without a diagnosis of dementia in those subsequently affected. 

As an additional control, the researchers also found significantly lower risks for dementia in individuals receiving the new recombinant shingles vaccine vs two other vaccines commonly used in older people: influenza and tetanus/diphtheria/pertussis vaccines, with increases in diagnosis-free time of 14%-27%. 

Reduced Risk or Delayed Diagnosis?

Speaking at a Science Media Centre press conference on the study, lead author Maxime Taquet, PhD, FRCPsych, clinical lecturer in psychiatry at the University of Oxford, noted that the total number of dementia cases was similar in the two shingles vaccine groups by the end of the 6-year follow-up period, but there was a difference in the time at which patients received a diagnosis of dementia.

“The study suggests that rather than actually reducing dementia risk, the recombinant vaccine delays the onset of dementia compared to the live vaccine in patients who go on to develop the condition,” he explained. 

But when comparing the recombinant vaccine with the influenza and tetanus/diphtheria/pertussis vaccines there was a clear reduction in dementia risk itself, Dr. Taquet reported. 

“It might well be that the live vaccine has a potential effect on the risk of dementia itself and therefore the recombinant vaccine only shows a delay in dementia compared to the live vaccine, but both of them might decrease the overall risk of dementia,” he suggested. 

But the researchers cautioned that this study could not prove causality. 

“While the two groups were very carefully matched in terms of factors that might influence the development of dementia, we still have to be cautious before assuming that the vaccine is indeed causally reducing the risk of onset of dementia,” Dr. Harrison warned. 

The researchers say the results would need to be confirmed in a randomized trial, which may have to be conducted in a slightly younger age group, as currently shingles vaccine is recommended for all older individuals in the United Kingdom. 

Vaccine recommendations vary from country to country, Dr. Harrison added. In the United States, the Centers for Disease Control and Prevention recommends the recombinant shingles vaccine for all adults aged 50 years or older. 

In the meantime, it would be interesting to see whether further observational studies in other countries find similar results as this US study, Dr. Harrison said.  
 

Mechanism Uncertain

Speculating on a possible mechanism behind the findings, Dr. Harrison suggested two plausible explanations.

“First, it is thought that the herpes virus could be one of many factors that could promote dementia, so a vaccine that stops reactivation of this virus might therefore be delaying that process,” he noted. 

The other possibility is that adjuvants included in the recombinant vaccine to stimulate the immune system might have played a role. 

“We don’t have any data on the mechanism, and thus study did not address that, so further studies are needed to look into this,” Dr. Harrison said. 
 

Stronger Effect in Women

Another intriguing finding was that the association between the recombinant vaccine and delayed dementia diagnosis seemed to be stronger in women than in men. 

In the original study of the live shingles vaccine, a protective effect against dementia was shown only in women. 

In the current study, the delay in dementia diagnosis was seen in both sexes but was stronger in women, with a 22% increase in diagnosis-free time in women versus a 13% increase in men for the recombinant versus the live vaccine. 

As expected, the recombinant vaccine was associated with a lower risk for shingles disease vs the live vaccine (2.5% versus 3.5%), but women did not have a better response than men did in this respect. 

“The better protection against shingles with the recombinant vaccine was similar in men and women, an observation that might be one reason to question the possible mechanism behind the dementia effect being better suppression of the herpes zoster virus by the recombinant vaccine,” Dr. Harrison commented. 

Though these findings are not likely to lead to any immediate changes in policy regarding the shingles vaccine, Dr. Harrison said it would be interesting to see whether uptake of the vaccine increased after this study. 

He estimated that, currently in the United Kingdom, about 60% of older adults choose to have the shingles vaccine. A 2020 study in the United States found that only about one-third of US adults over 60 had received the vaccine. 

“It will be interesting to see if that figure increases after these data are publicized, but I am not recommending that people have the vaccine specifically to lower their risk of dementia because of the caveats about the study that we have discussed,” he commented. 
 

Outside Experts Positive 

Outside experts, providing comment to the Science Media Centre, welcomed the new research. 

“The study is very well-conducted and adds to previous data indicating that vaccination against shingles is associated with lower dementia risk. More research is needed in future to determine why this vaccine is associated with lower dementia risk,” said Tara Spires-Jones, FMedSci, president of the British Neuroscience Association. 

The high number of patients in the study and the adjustments for potential confounders are also strong points, noted Andrew Doig, PhD, professor of biochemistry, University of Manchester, Manchester, England.

“This is a significant result, comparable in effectiveness to the recent antibody drugs for Alzheimer’s disease,” Dr. Doig said. “Administering the recombinant shingles vaccine could well be a simple and cheap way to lower the risk of Alzheimer’s disease.”

Dr. Doig noted that a link between herpes zoster infection and the onset of dementia has been suspected for some time, and a trial of the antiviral drug valacyclovir against Alzheimer’s disease is currently underway.

In regard to the shingles vaccine, he said a placebo-controlled trial would be needed to prove causality. 

“We also need to see how many years the effect might last and whether we should vaccinate people at a younger age. We know that the path to Alzheimer’s can start decades before any symptoms are apparent, so the vaccine might be even more effective if given to people in their 40s or 50s,” he said.

Dr. Harrison and Dr. Taquet reported no disclosures. Dr. Doig is a founder, director, and consultant for PharmaKure, which works on Alzheimer’s drugs and diagnostics. Other commentators declared no disclosures.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

 

Receipt of a newer recombinant version of a shingles vaccine is associated with a significant delay in dementia diagnosis in older adults, a new study suggests.

The study builds on previous observations of a reduction in dementia risk with the older live shingles vaccine and reports a delay in dementia diagnosis of 164 days with the newer recombinant version, compared with the live vaccine. 

“Given the prevalence of dementia, a delay of 164 days in diagnosis would not be a trivial effect at the public health level. It’s a big enough effect that if there is a causality it feels meaningful,” said senior author Paul Harrison, DM, FRCPsych, professor of psychiatry at the University of Oxford, Oxford, England. 

But Dr. Harrison stressed that the study had not proven that the shingles vaccine reduced dementia risk. 

“The design of the study allows us to do away with many of the confounding effects we usually see in observational studies, but this is still an observational study, and as such it cannot prove a definite causal effect,” he said. 

The study was published online on July 25 in Nature Medicine.
 

‘Natural Experiment’

Given the risk for deleterious consequences of shingles, vaccination is now recommended for older adults in many countries. The previously used live shingles vaccine (Zostavax) is being replaced in most countries with the new recombinant shingles vaccine (Shingrix), which is more effective at preventing shingles infection. 

The current study made use of a “natural experiment” in the United States, which switched over from use of the live vaccine to the recombinant vaccine in October 2017. 

Researchers used electronic heath records to compare the incidence of a dementia diagnosis in individuals who received the live shingles vaccine prior to October 2017 with those who received the recombinant version after the United States made the switch. 

They also used propensity score matching to further control for confounding factors, comparing 103,837 individuals who received a first dose of the live shingles vaccine between October 2014 and September 2017 with the same number of matched people who received the recombinant vaccine between November 2017 and October 2020. 

Results showed that within the 6 years after vaccination, the recombinant vaccine was associated with a delay in the diagnosis of dementia, compared with the live vaccine. Specifically, receiving the recombinant vaccine was associated with a 17% increase in diagnosis-free time, translating to 164 additional days lived without a diagnosis of dementia in those subsequently affected. 

As an additional control, the researchers also found significantly lower risks for dementia in individuals receiving the new recombinant shingles vaccine vs two other vaccines commonly used in older people: influenza and tetanus/diphtheria/pertussis vaccines, with increases in diagnosis-free time of 14%-27%. 

Reduced Risk or Delayed Diagnosis?

Speaking at a Science Media Centre press conference on the study, lead author Maxime Taquet, PhD, FRCPsych, clinical lecturer in psychiatry at the University of Oxford, noted that the total number of dementia cases were similar in the two shingles vaccine groups by the end of the 6-year follow-up period but there was a difference in the time at which they received a diagnosis of dementia.

“The study suggests that rather than actually reducing dementia risk, the recombinant vaccine delays the onset of dementia compared to the live vaccine in patients who go on to develop the condition,” he explained. 

But when comparing the recombinant vaccine with the influenza and tetanus/diphtheria/pertussis vaccines there was a clear reduction in dementia risk itself, Dr. Taquet reported. 

“It might well be that the live vaccine has a potential effect on the risk of dementia itself and therefore the recombinant vaccine only shows a delay in dementia compared to the live vaccine, but both of them might decrease the overall risk of dementia,” he suggested. 

But the researchers cautioned that this study could not prove causality. 

“While the two groups were very carefully matched in terms of factors that might influence the development of dementia, we still have to be cautious before assuming that the vaccine is indeed causally reducing the risk of onset of dementia,” Dr. Harrison warned. 

The researchers say the results would need to be confirmed in a randomized trial, which may have to be conducted in a slightly younger age group, as currently shingles vaccine is recommended for all older individuals in the United Kingdom. 

Vaccine recommendations vary from country to country, Dr. Harrison added. In the United States, the Centers for Disease Control and Prevention recommends the recombinant shingles vaccine for all adults aged 50 years or older. 

In the meantime, it would be interesting to see whether further observational studies in other countries find similar results as this US study, Dr. Harrison said.  
 

Mechanism Uncertain

Speculating on a possible mechanism behind the findings, Dr. Harrison suggested two plausible explanations.

“First, it is thought that the herpes virus could be one of many factors that could promote dementia, so a vaccine that stops reactivation of this virus might therefore be delaying that process,” he noted. 

The other possibility is that adjuvants included in the recombinant vaccine to stimulate the immune system might have played a role. 

“We don’t have any data on the mechanism, and thus study did not address that, so further studies are needed to look into this,” Dr. Harrison said. 
 

Stronger Effect in Women

Another intriguing finding is that the association with the recombinant vaccine and delayed dementia diagnosis seemed to be stronger in women vs men. 

In the original study of the live shingles vaccine, a protective effect against dementia was shown only in women. 

In the current study, the delay in dementia diagnosis was seen in both sexes but was stronger in women, showing a 22% increased time without dementia in women versus a 13% increased time in men with the recombinant versus the live vaccine. 

As expected, the recombinant vaccine was associated with a lower risk for shingles disease vs the live vaccine (2.5% versus 3.5%), but women did not have a better response than men did in this respect. 

“The better protection against shingles with the recombinant vaccine was similar in men and women, an observation that might be one reason to question the possible mechanism behind the dementia effect being better suppression of the herpes zoster virus by the recombinant vaccine,” Dr. Harrison commented. 

Though these findings are not likely to lead to any immediate changes in policy regarding the shingles vaccine, Dr. Harrison said it would be interesting to see whether uptake of the vaccine increased after this study. 

He estimated that, currently in the United Kingdom, about 60% of older adults choose to have the shingles vaccine. A 2020 study in the United States found that only about one-third of US adults over 60 had received the vaccine. 

“It will be interesting to see if that figure increases after these data are publicized, but I am not recommending that people have the vaccine specifically to lower their risk of dementia because of the caveats about the study that we have discussed,” he commented. 
 

Outside Experts Positive 

Outside experts, providing comment to the Science Media Centre, welcomed the new research. 

“ The study is very well-conducted and adds to previous data indicating that vaccination against shingles is associated with lower dementia risk. More research is needed in future to determine why this vaccine is associated with lower dementia risk,” said Tara Spires-Jones, FMedSci, president of the British Neuroscience Association. 

The high number of patients in the study and the adjustments for potential confounders are also strong points, noted Andrew Doig, PhD, professor of biochemistry, University of Manchester, Manchester, England.

“This is a significant result, comparable in effectiveness to the recent antibody drugs for Alzheimer’s disease,” Dr. Doig said. “Administering the recombinant shingles vaccine could well be a simple and cheap way to lower the risk of Alzheimer’s disease.”

Dr. Doig noted that a link between herpes zoster infection and the onset of dementia has been suspected for some time, and a trial of the antiviral drug valacyclovir against Alzheimer’s disease is currently underway.

In regard to the shingles vaccine, he said a placebo-controlled trial would be needed to prove causality. 

“We also need to see how many years the effect might last and whether we should vaccinate people at a younger age. We know that the path to Alzheimer’s can start decades before any symptoms are apparent, so the vaccine might be even more effective if given to people in their 40s or 50s,” he said.

Dr. Harrison and Dr. Taquet reported no disclosures. Dr. Doig is a founder, director, and consultant for PharmaKure, which works on Alzheimer’s drugs and diagnostics. Other commentators declared no disclosures.

A version of this article first appeared on Medscape.com.

 

Receipt of a newer recombinant version of a shingles vaccine is associated with a significant delay in dementia diagnosis in older adults, a new study suggests.

The study builds on previous observations of a reduction in dementia risk with the older live shingles vaccine and reports a delay in dementia diagnosis of 164 days with the newer recombinant version, compared with the live vaccine. 

“Given the prevalence of dementia, a delay of 164 days in diagnosis would not be a trivial effect at the public health level. It’s a big enough effect that if there is a causality it feels meaningful,” said senior author Paul Harrison, DM, FRCPsych, professor of psychiatry at the University of Oxford, Oxford, England. 

But Dr. Harrison stressed that the study had not proven that the shingles vaccine reduced dementia risk. 

“The design of the study allows us to do away with many of the confounding effects we usually see in observational studies, but this is still an observational study, and as such it cannot prove a definite causal effect,” he said. 

The study was published online on July 25 in Nature Medicine.
 

‘Natural Experiment’

Given the risk for deleterious consequences of shingles, vaccination is now recommended for older adults in many countries. The previously used live shingles vaccine (Zostavax) is being replaced in most countries with the new recombinant shingles vaccine (Shingrix), which is more effective at preventing shingles infection. 

The current study made use of a “natural experiment” in the United States, which switched over from use of the live vaccine to the recombinant vaccine in October 2017. 

Researchers used electronic heath records to compare the incidence of a dementia diagnosis in individuals who received the live shingles vaccine prior to October 2017 with those who received the recombinant version after the United States made the switch. 

They also used propensity score matching to further control for confounding factors, comparing 103,837 individuals who received a first dose of the live shingles vaccine between October 2014 and September 2017 with the same number of matched people who received the recombinant vaccine between November 2017 and October 2020. 

Results showed that within the 6 years after vaccination, the recombinant vaccine was associated with a delay in the diagnosis of dementia, compared with the live vaccine. Specifically, receiving the recombinant vaccine was associated with a 17% increase in diagnosis-free time, translating to 164 additional days lived without a diagnosis of dementia in those subsequently affected. 

As an additional control, the researchers also found significantly lower risks for dementia in individuals receiving the new recombinant shingles vaccine vs two other vaccines commonly used in older people: influenza and tetanus/diphtheria/pertussis vaccines, with increases in diagnosis-free time of 14%-27%. 

Reduced Risk or Delayed Diagnosis?

Speaking at a Science Media Centre press conference on the study, lead author Maxime Taquet, PhD, FRCPsych, clinical lecturer in psychiatry at the University of Oxford, noted that the total number of dementia cases were similar in the two shingles vaccine groups by the end of the 6-year follow-up period but there was a difference in the time at which they received a diagnosis of dementia.

“The study suggests that rather than actually reducing dementia risk, the recombinant vaccine delays the onset of dementia compared to the live vaccine in patients who go on to develop the condition,” he explained. 

But when comparing the recombinant vaccine with the influenza and tetanus/diphtheria/pertussis vaccines there was a clear reduction in dementia risk itself, Dr. Taquet reported. 

“It might well be that the live vaccine has a potential effect on the risk of dementia itself and therefore the recombinant vaccine only shows a delay in dementia compared to the live vaccine, but both of them might decrease the overall risk of dementia,” he suggested. 

But the researchers cautioned that this study could not prove causality. 

“While the two groups were very carefully matched in terms of factors that might influence the development of dementia, we still have to be cautious before assuming that the vaccine is indeed causally reducing the risk of onset of dementia,” Dr. Harrison warned. 

The researchers say the results would need to be confirmed in a randomized trial, which may have to be conducted in a slightly younger age group, as currently shingles vaccine is recommended for all older individuals in the United Kingdom. 

Vaccine recommendations vary from country to country, Dr. Harrison added. In the United States, the Centers for Disease Control and Prevention recommends the recombinant shingles vaccine for all adults aged 50 years or older. 

In the meantime, it would be interesting to see whether further observational studies in other countries find similar results as this US study, Dr. Harrison said.  
 

Mechanism Uncertain

Speculating on a possible mechanism behind the findings, Dr. Harrison suggested two plausible explanations.

“First, it is thought that the herpes virus could be one of many factors that could promote dementia, so a vaccine that stops reactivation of this virus might therefore be delaying that process,” he noted. 

The other possibility is that adjuvants included in the recombinant vaccine to stimulate the immune system might have played a role. 

“We don’t have any data on the mechanism, and this study did not address that, so further studies are needed to look into this,” Dr. Harrison said. 
 

Stronger Effect in Women

Another intriguing finding is that the association between the recombinant vaccine and delayed dementia diagnosis appeared to be stronger in women than in men. 

In the original study of the live shingles vaccine, a protective effect against dementia was shown only in women. 

In the current study, the delay in dementia diagnosis was seen in both sexes but was stronger in women, with a 22% increase in dementia-free time in women versus a 13% increase in men for the recombinant versus the live vaccine. 

As expected, the recombinant vaccine was associated with a lower risk for shingles disease vs the live vaccine (2.5% versus 3.5%), but women did not have a better response than men did in this respect. 

“The better protection against shingles with the recombinant vaccine was similar in men and women, an observation that might be one reason to question the possible mechanism behind the dementia effect being better suppression of the herpes zoster virus by the recombinant vaccine,” Dr. Harrison commented. 

Though these findings are not likely to lead to any immediate changes in policy regarding the shingles vaccine, Dr. Harrison said it would be interesting to see whether uptake of the vaccine increased after this study. 

He estimated that, currently in the United Kingdom, about 60% of older adults choose to have the shingles vaccine. A 2020 study in the United States found that only about one-third of US adults over 60 had received the vaccine. 

“It will be interesting to see if that figure increases after these data are publicized, but I am not recommending that people have the vaccine specifically to lower their risk of dementia because of the caveats about the study that we have discussed,” he commented. 
 

Outside Experts Positive 

Outside experts, providing comment to the Science Media Centre, welcomed the new research. 

“The study is very well-conducted and adds to previous data indicating that vaccination against shingles is associated with lower dementia risk. More research is needed in future to determine why this vaccine is associated with lower dementia risk,” said Tara Spires-Jones, FMedSci, president of the British Neuroscience Association. 

The high number of patients in the study and the adjustments for potential confounders are also strong points, noted Andrew Doig, PhD, professor of biochemistry, University of Manchester, Manchester, England.

“This is a significant result, comparable in effectiveness to the recent antibody drugs for Alzheimer’s disease,” Dr. Doig said. “Administering the recombinant shingles vaccine could well be a simple and cheap way to lower the risk of Alzheimer’s disease.”

Dr. Doig noted that a link between herpes zoster infection and the onset of dementia has been suspected for some time, and a trial of the antiviral drug valacyclovir against Alzheimer’s disease is currently underway.

In regard to the shingles vaccine, he said a placebo-controlled trial would be needed to prove causality. 

“We also need to see how many years the effect might last and whether we should vaccinate people at a younger age. We know that the path to Alzheimer’s can start decades before any symptoms are apparent, so the vaccine might be even more effective if given to people in their 40s or 50s,” he said.

Dr. Harrison and Dr. Taquet reported no disclosures. Dr. Doig is a founder, director, and consultant for PharmaKure, which works on Alzheimer’s drugs and diagnostics. Other commentators declared no disclosures.

A version of this article first appeared on Medscape.com.

FROM NATURE MEDICINE

Heat Waves: A Silent Threat to Older Adults’ Kidneys

Article Type
Changed
Tue, 08/06/2024 - 02:25

 

TOPLINE:

Older adults show an increase in creatinine and cystatin C levels after exposure to extreme heat in a dry setting despite staying hydrated; however, changes in these kidney function biomarkers are much more modest in a humid setting and in young adults.

METHODOLOGY:

  • Older adults are vulnerable to heat-related morbidity and mortality, with kidney complications accounting for many excess hospital admissions during heat waves.
  • Researchers investigated plasma-based markers of kidney function following extreme heat exposure for 3 hours in 20 young (21-39 years) and 18 older (65-76 years) adults recruited from the Dallas-Fort Worth area.
  • All participants underwent heat exposure in a chamber at 47 °C (116 °F) and 15% relative humidity (dry setting) and 41 °C (105 °F) and 40% relative humidity (humid setting) on separate days. They performed light physical activity mimicking their daily tasks and drank 3 mL/kg body mass of water every hour while exposed to heat.
  • Blood samples were collected at baseline, immediately before the end of heat exposure (end-heating), and 2 hours after heat exposure.
  • Plasma creatinine was the primary outcome, with a change ≥ 0.3 mg/dL considered as clinically meaningful. Cystatin C was the secondary outcome.

TAKEAWAY:

  • The plasma creatinine level showed a modest increase from baseline to end-heating (difference, 0.10 mg/dL; P = .004) and at 2 hours post exposure (difference, 0.17 mg/dL; P < .001) in older adults facing heat exposure in the dry setting.
  • The mean cystatin C levels also increased from baseline to end-heating by 0.29 mg/L (P = .01) and at 2 hours post heat exposure by 0.28 mg/L (P = .004) in older adults in the dry setting.
  • The mean creatinine levels increased by only 0.06 mg/dL (P = .01) from baseline to 2 hours post exposure in older adults facing heat exposure in the humid setting.
  • Young adults didn’t show any significant change in plasma cystatin C levels during or after heat exposure; however, there was a modest increase in plasma creatinine at 2 hours post exposure (difference, 0.06 mg/dL; P = .004).

IN PRACTICE:

“These findings provide limited evidence that the heightened thermal strain in older adults during extreme heat may contribute to reduced kidney function,” the authors wrote. 
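One way to read that cautious framing is against the study’s own prespecified cut point: the reported creatinine increases, although statistically significant in older adults, remained below the 0.3 mg/dL change defined as clinically meaningful. The short sketch below simply lines the reported values up against that threshold (the unit for the young-adult figure is assumed to be mg/dL, consistent with the other creatinine results).

```python
# Comparing the reported creatinine changes (mg/dL) with the study's
# prespecified clinically meaningful threshold of >= 0.3 mg/dL.
THRESHOLD = 0.3  # mg/dL, from the study's methodology

observed_changes = {
    "older adults, dry, end-heating": 0.10,
    "older adults, dry, 2 h post":    0.17,
    "older adults, humid, 2 h post":  0.06,
    "young adults, 2 h post":         0.06,  # unit assumed to be mg/dL
}

for label, delta in observed_changes.items():
    flag = "meets" if delta >= THRESHOLD else "below"
    print(f"{label}: +{delta:.2f} mg/dL -> {flag} the clinical threshold")
```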

SOURCE:

The study was led by Zachary J. McKenna, PhD, from the Department of Internal Medicine, University of Texas Southwestern Medical Center, Dallas, Texas, and was published online in JAMA.

LIMITATIONS:

The use of plasma-based markers of kidney function, a short laboratory-based exposure, and a small number of generally healthy participants were the main limitations that could affect the generalizability of this study’s findings to broader populations and real-world settings. 

DISCLOSURES:

The National Institutes of Health and American Heart Association funded this study. Two authors declared receiving grants and nonfinancial support from several sources. 

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.

How the New Vitamin D Guidelines Will, and Won’t, Change My Practice

Article Type
Changed
Thu, 07/25/2024 - 15:17

Hi, everyone. I’m Dr. Kenny Lin. I am a family physician and associate director of the Lancaster General Hospital Family Medicine Residency, and I blog at Common Sense Family Doctor.

A few months ago, my health system added a clinical decision support function to our electronic health record to reduce inappropriate ordering of vitamin D levels. Clinicians are now required to select from a list of approved indications or diagnoses (including a history of vitamin D deficiency) before ordering the test.

Although I don’t know yet whether this process has had the desired effect, I felt that it was long overdue. Several years ago, I wrote an editorial questioning the dramatic increase in vitamin D testing, given the uncertainty about what level is adequate for good health and the clinical trials showing that supplementation in people with lower levels has no benefit for a variety of medical conditions. A more recent review of prospective studies of vitamin D supplements concluded that most correlations between vitamin D levels and outcomes in common and high-mortality conditions are unlikely to be causal.

A new Endocrine Society guideline recommends against routine measurement of vitamin D levels in healthy individuals. The guideline reinforces my current practice of not screening for vitamin D deficiency except in special situations, such as an individual with dark skin who works the night shift and rarely goes outdoors during daytime hours. But I haven’t been offering empirical vitamin D supplements to the four at-risk groups identified by the Endocrine Society: children, adults older than 75 years, pregnant patients, and adults with prediabetes. The evidence behind these recommendations merits a closer look.

In exclusively or primarily breastfed infants, I follow the American Academy of Pediatrics recommendation to prescribe a daily supplement containing 400 IU of vitamin D. The Endocrine Society, however, found evidence from several studies conducted in other countries that continuing supplementation throughout childhood reduces the risk for rickets and possibly the incidence of respiratory infections, with few adverse effects.

Many older women, and some older men, choose to take a calcium and vitamin D supplement for bone health, even though there is scant evidence that doing so prevents fractures in community-dwelling adults without osteoporosis. The Endocrine Society’s meta-analysis, however, found that 1000 adults aged 75 years or older who took an average of 900 IU of vitamin D daily for 2 years could expect to experience six fewer deaths than an identical group not taking supplements.

A typical prenatal vitamin contains 400 IU of vitamin D. Placebo-controlled trials reviewed by the Endocrine Society that gave an average of 2500 IU daily found statistically nonsignificant reductions in preeclampsia, intrauterine death, preterm birth, small-for-gestational-age birth, and neonatal death.

Finally, the Endocrine Society’s recommendation for adults with prediabetes was based on 11 trials (three conducted in the United States) that tested a daily average of 3500 IU and found a slightly lower risk for progression to diabetes (24 fewer diagnoses of type 2 diabetes per 1000 persons) in the group who took supplements.
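For context, these absolute differences can be converted into rough numbers needed to treat. The sketch below does that arithmetic directly from the per-1000 figures quoted above; it is an illustration of the calculation, not an analysis reported by the Endocrine Society, and it ignores the differing follow-up periods of the underlying trials.

```python
# Numbers needed to treat (NNT) implied by the absolute differences above.
# NNT = 1 / absolute risk reduction, here derived from per-1000 figures.
def nnt(events_prevented_per_1000: float) -> float:
    return 1000 / events_prevented_per_1000

print(f"Adults >= 75 years, mortality:      NNT ~ {nnt(6):.0f}")   # 6 fewer deaths per 1000
print(f"Prediabetes, progression to T2D:    NNT ~ {nnt(24):.0f}")  # 24 fewer diagnoses per 1000
```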

Of the four groups highlighted by the guideline, the strongest case for vitamin D supplements is in older adults — it’s hard to argue with lower mortality, even if the difference is small. Therefore, I will start suggesting that my patients over age 75 take a daily vitamin D supplement containing at least 800 IU if they aren’t already doing so.

On the other hand, I don’t plan to change my approach to pregnant patients (whose benefits in studies could have been due to chance), children after age 1 year (studies of children in other countries with different nutritional status may not apply to the United States), or adults with prediabetes (where we already have proven lifestyle interventions with much greater effects). In these cases, either I am unconvinced that the data support benefits for my patients, or I feel that the benefits of vitamin D supplements are small enough to be outweighed by potential harms, such as increased kidney stones.

Kenneth W. Lin, Associate Director, Family Medicine Residency Program, Lancaster General Hospital, Lancaster, Pennsylvania, has disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.

New Criteria Distinguish Memory Disorder Often Misdiagnosed as Alzheimer’s

Article Type
Changed
Thu, 07/25/2024 - 15:04

Proposed clinical criteria for a memory loss disorder that is often misdiagnosed as Alzheimer’s disease (AD) have been published.

The new criteria for limbic-predominant amnestic neurodegenerative syndrome (LANS) provide a framework for neurologists and other experts to classify the condition and offer a more precise diagnosis and potential treatments.

“In our clinical work, we see patients whose memory symptoms appear to mimic Alzheimer’s disease, but when you look at their brain imaging or biomarkers, it’s clear they don’t have Alzheimer’s. Until now, there has not been a specific medical diagnosis to point to, but now we can offer them some answers,” senior investigator David T. Jones, MD, said in a release.

The proposed criteria and the research behind them were published online in Brain Communications and will be presented at the Alzheimer’s Association International Conference in Philadelphia.
 

Already in Use

Predominant limbic degeneration has been linked to various underlying etiologies, older age, predominant impairment of episodic memory, and slow clinical progression, the investigators noted. However, they added, the neurologic syndrome associated with predominant limbic degeneration is undefined.

Developing clinical criteria and validating them “is critical to distinguish such a syndrome from those originating from neocortical degeneration, which may differ in underlying etiology, disease course, and therapeutic needs,” the investigators wrote.

The newly proposed clinical criteria apply to LANS, which is “highly associated with limbic-predominant age-related TDP-43 encephalopathy but also other pathologic entities.”

The criteria incorporate core, standard, and advanced features (older age at evaluation, mild clinical syndrome, disproportionate hippocampal atrophy, impaired semantic memory, limbic hypometabolism, absence of neocortical degeneration, and low likelihood of neocortical tau), yielding highest, high, moderate, and low degrees of diagnostic certainty.

“A detailed history of the clinical symptoms, which may be supported by neuropsychological testing, with the observation of disproportionate hippocampal atrophy and limbic degeneration on MRI/FDG yields a high confidence in a diagnosis of LANS, where the most likely symptom-driving proteinopathy is TDP-43 and not Alzheimer’s associated proteins,” the first author, Nick Corriveau-Lecavalier, PhD, assistant professor of neurology and psychology at Mayo Clinic, Rochester, Minnesota, told this news organization.

To validate the criteria, the investigators screened autopsied patients from Mayo Clinic and Alzheimer’s Disease Neuroimaging Initiative cohorts and applied the criteria to those with a predominant amnestic syndrome and those who had AD neuropathologic change, limbic-predominant age-related TDP-43 encephalopathy, or both pathologies at autopsy.

“The criteria effectively categorized these cases, with Alzheimer’s disease having the lowest likelihoods, limbic-predominant age-related TDP-43 encephalopathy patients having the highest likelihoods, and patients with both pathologies having intermediate likelihoods,” the investigators reported.

“Patients with high likelihoods had a milder and slower clinical course and more severe temporo-limbic degeneration compared to those with low likelihoods,” they added.

Dr. Corriveau-Lecavalier said the team is currently analyzing longitudinal cognitive and imaging trajectories in LANS over several years. “This will help us better understand how LANS and Alzheimer’s differ in their sequence of symptoms over time.”

It is important to understand that memory symptoms in old age are not “unequivocally” driven by Alzheimer’s and that LANS progresses more slowly and has a better prognosis than AD, he noted.

In addition, in vivo markers of TDP-43 are “on the horizon and can hopefully make their way to human research settings soon. This will help better understand the underlying molecular etiologies causing LANS and associated symptoms,” he said.

Dr. Corriveau-Lecavalier said the LANS criteria are ready for clinical use by experts in neurologic care. These criteria can be used to inform not only diagnosis but also prognosis, where this syndrome is associated with slow and mild progression and a memory-dominant profile.

He added that “the new criteria are also routinely used in our practice to make decisions about anti-amyloid treatment eligibility.”

Commenting on the research for this news organization, Rebecca M. Edelmayer, PhD, Alzheimer’s Association senior director of scientific engagement, said the research “exemplifies the great need to develop objective criteria for diagnosis and staging of Alzheimer’s and all other types of dementia and to create an integrated biological and clinical staging scheme that can be used effectively by physicians.”

“Advances in biomarkers will help to differentiate all types of dementia when incorporated into the diagnostic workup, but until those tools are available, a more succinct clinical criteria for diagnosis can be used to support a more personalized medicine approach to treatment, care, and enrollment into clinical studies,” said Dr. Edelmayer, who wasn’t involved in the research.

The research was funded in part by the National Institutes of Health and by the Robert Wood Johnson Foundation, the Elsie & Marvin Dekelboum Family Foundation, the Liston Family Foundation, the Edson Family, the Gerald A. and Henrietta Rauenhorst Foundation, and the Foundation Dr Corinne Schuler. Dr. Corriveau-Lecavalier and Dr. Edelmayer had no relevant conflicts of interest.

A version of this article first appeared on Medscape.com.
