
Lean muscle mass protective against Alzheimer’s?


Lean muscle mass may offer protection against the development of Alzheimer’s disease (AD), new research suggests.

Investigators analyzed data on more than 450,000 participants in the UK Biobank as well as two independent samples of more than 320,000 individuals with and without AD, and more than 260,000 individuals participating in a separate genes and intelligence study.

They estimated lean muscle and fat tissue in the arms and legs and found, in adjusted analyses, over 500 genetic variants associated with lean mass.

On average, higher genetically proxied lean mass was associated with a “modest but statistically robust” reduction in AD risk and with superior performance on cognitive tasks.

“Using human genetic data, we found evidence for a protective effect of lean mass on risk of Alzheimer’s disease,” study investigator Iyas Daghlas, MD, a resident in the department of neurology at the University of California, San Francisco, said in an interview.

Although “clinical intervention studies are needed to confirm this effect, this study supports current recommendations to maintain a healthy lifestyle to prevent dementia,” he said.

The study was published online in BMJ Medicine.
 

Naturally randomized research

Several measures of body composition have been investigated for their potential association with AD. Lean mass – a “proxy for muscle mass, defined as the difference between total mass and fat mass” – has been shown to be reduced in patients with AD compared with controls, the researchers noted.

“Previous research studies have tested the relationship of body mass index with Alzheimer’s disease and did not find evidence for a causal effect,” Dr. Daghlas said. “We wondered whether BMI was an insufficiently granular measure and hypothesized that disaggregating body mass into lean mass and fat mass could reveal novel associations with disease.”

Most studies have used case-control designs, which might be biased by “residual confounding or reverse causality.” Naturally randomized data “may be used as an alternative to conventional observational studies to investigate causal relations between risk factors and diseases,” the researchers wrote.

In particular, the Mendelian randomization (MR) paradigm exploits the random allocation of germline genetic variants at conception, using them as proxies for a specific risk factor.

MR “is a technique that permits researchers to investigate cause-and-effect relationships using human genetic data,” Dr. Daghlas explained. “In effect, we’re studying the results of a naturally randomized experiment whereby some individuals are genetically allocated to carry more lean mass.” 

The current study used MR to investigate the effect of genetically proxied lean mass on the risk of AD and the “related phenotype” of cognitive performance.
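
To make the MR logic concrete, here is a minimal sketch of the two estimators that typically underpin such analyses: the per-variant Wald ratio and the inverse-variance-weighted (IVW) pooled estimate. The variant effects below are invented for illustration and are not the study’s data; the paper’s actual pipeline may differ.

    import numpy as np

    # Hypothetical per-variant summary statistics (illustrative only):
    # beta_x = variant's effect on the exposure (lean mass, in SD units)
    # beta_y = variant's effect on the outcome (AD, in log odds)
    # se_y   = standard error of beta_y
    beta_x = np.array([0.04, 0.06, 0.03])
    beta_y = np.array([-0.006, -0.008, -0.003])
    se_y = np.array([0.002, 0.003, 0.002])

    # Wald ratio: the outcome effect implied by each variant,
    # scaled by that variant's effect on the exposure
    wald = beta_y / beta_x

    # IVW: pool the per-variant ratios, weighting each by the
    # precision of its outcome association
    weights = (beta_x / se_y) ** 2
    ivw_log_or = np.sum(wald * weights) / np.sum(weights)

    print(f"IVW log odds per SD of exposure: {ivw_log_or:.3f}")
    print(f"Odds ratio per SD: {np.exp(ivw_log_or):.2f}")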
 

Genetic proxy

As genetic proxies for lean mass, the researchers chose single nucleotide polymorphisms (genetic variants) that were associated, in a genome-wide association study (GWAS), with appendicular lean mass.

Appendicular lean mass “more accurately reflects the effects of lean mass than whole body lean mass, which includes smooth and cardiac muscle,” the authors explained.

This GWAS used phenotypic and genetic data from 450,243 participants in the UK Biobank cohort (mean age 57 years). All participants were of European ancestry.

The researchers adjusted for age, sex, and genetic ancestry. They measured appendicular lean mass using bioimpedance, a technique that estimates body composition from how readily a weak electric current passes through different tissues.

In addition to the UK Biobank participants, the researchers drew on an independent sample of 21,982 people with AD; a control group of 41,944 people without AD; a replication sample of 7,329 people with and 252,879 people without AD to validate the findings; and 269,867 people taking part in a genome-wide study of cognitive performance.

The researchers identified 584 variants that met criteria for use as genetic proxies for lean mass. None were located within the APOE gene region. In the aggregate, these variants explained 10.3% of the variance in appendicular lean mass.
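
A standard way to gauge instrument strength from these figures is the approximate F statistic, F = R²(n − k − 1) / (k(1 − R²)); plugging in R² = 0.103, k = 584 variants, and n = 450,243 gives F ≈ 88, comfortably above the conventional threshold of 10 for avoiding weak-instrument bias. (This back-of-the-envelope value is derived here from the reported numbers, not taken from the paper.)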

Each standard deviation increase in genetically proxied lean mass was associated with a 12% reduction in AD risk (odds ratio [OR], 0.88; 95% confidence interval [CI], 0.82-0.95; P < .001). This finding was replicated in the independent consortium (OR, 0.91; 95% CI, 0.83-0.99; P = .02).
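
As a check on the arithmetic, the 12% figure follows directly from the odds ratio: a relative reduction of (1 − 0.88) × 100% = 12%, with the CI bounds 0.82 and 0.95 corresponding to reductions of roughly 18% and 5%.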

The findings remained “consistent” in sensitivity analyses.

A modifiable risk factor?

Higher appendicular lean mass was associated with higher levels of cognitive performance, with each SD increase in lean mass associated with a 0.09-SD increase in cognitive performance (95% CI, 0.06-0.11; P = .001).

“Adjusting for potential mediation through performance did not reduce the association between appendicular lean mass and risk of AD,” the authors wrote.

They obtained similar results using genetically proxied trunk and whole-body lean mass, after adjusting for fat mass.

The authors noted several limitations. The bioimpedance measures “only predict, but do not directly measure, lean mass.” Moreover, the approach didn’t examine whether a “critical window of risk factor timing” exists, during which lean mass might play a role in influencing AD risk and after which “interventions would no longer be effective.” Nor could the study determine whether increasing lean mass could reverse AD pathology in patients with preclinical disease or mild cognitive impairment.

Nevertheless, the findings suggest “that lean mass might be a possible modifiable protective factor for Alzheimer’s disease,” the authors wrote. “The mechanisms underlying this finding, as well as the clinical and public health implications, warrant further investigation.”
 

Novel strategies

In a comment, Iva Miljkovic, MD, PhD, associate professor, department of epidemiology, University of Pittsburgh, said the investigators used “very rigorous methodology.”

The finding that lean mass is associated with better cognitive function is “important, as cognitive impairment can become stable rather than progress to a pathological state; and, in some cases, can even be reversed.”

In those cases, “identifying the underlying cause – e.g., low lean mass – can significantly improve cognitive function,” said Dr. Miljkovic, senior author of a study showing muscle fat as a risk factor for cognitive decline.

More research will enable us to “expand our understanding of the mechanisms involved and determine whether interventions aimed at preventing muscle loss and/or decreasing muscle fat may have a beneficial effect on cognitive function,” she said. “This might lead to novel strategies to prevent AD.”

Dr. Daghlas is supported by the British Heart Foundation Centre of Research Excellence at Imperial College, London, and is employed part-time by Novo Nordisk. Dr. Miljkovic reports no relevant financial relationships.

A version of this article first appeared on Medscape.com.


New data on traumatic brain injury show it’s chronic, evolving


New longitudinal data from the TRACK TBI investigators show that recovery from traumatic brain injury (TBI) is a dynamic process that continues to evolve well beyond the initial 12 months after injury.

The data show that patients with TBI may continue to improve or decline during a period of up to 7 years after injury, making it more of a chronic condition, the investigators report.

“Our results dispute the notion that TBI is a discrete, isolated medical event with a finite, static functional outcome following a relatively short period of upward recovery (typically up to 1 year),” Benjamin Brett, PhD, assistant professor, departments of neurosurgery and neurology, Medical College of Wisconsin, Milwaukee, told this news organization.

“Rather, individuals continue to exhibit improvement and decline across a range of domains, including psychiatric, cognitive, and functional outcomes, even 2-7 years after their injury,” Dr. Brett said.

“Ultimately, our findings support conceptualizing TBI as a chronic condition for many patients, which requires routine follow-up, medical monitoring, responsive care, and support, adapting to their evolving needs many years following injury,” he said.

Results of the TRACK TBI LONG (Transforming Research and Clinical Knowledge in TBI Longitudinal study) were published online in Neurology.
 

Chronic and evolving

The results are based on 1,264 adults (mean age at injury, 41 years) from the initial TRACK TBI study: 917 with mild TBI (mTBI) and 193 with moderate/severe TBI (msTBI), matched to 154 control patients who had experienced orthopedic trauma without evidence of head injury (OTC).

The participants were followed annually for up to 7 years after injury using the Glasgow Outcome Scale–Extended (GOSE), Brief Symptom Inventory–18 (BSI), and the Brief Test of Adult Cognition by Telephone (BTACT), as well as a self-reported perception of function. The researchers calculated rates of change (classified as stable, improved, or declined) for individual outcomes at each long-term follow-up.
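
To illustrate the shape of such a classification, here is a minimal Python sketch; the threshold is an arbitrary stand-in, not the study’s own change criteria, which are defined in the paper.

    # Minimal sketch of labeling change between two assessments.
    # The 1-point threshold is an arbitrary stand-in, not the
    # study's actual measure-specific change criteria.
    def classify_change(baseline: float, follow_up: float,
                        threshold: float = 1.0) -> str:
        delta = follow_up - baseline
        if delta >= threshold:
            return "improved"
        if delta <= -threshold:
            return "declined"
        return "stable"

    # Example with GOSE scores, which range from 1 (dead) to
    # 8 (upper good recovery); for symptom scales where higher
    # is worse (e.g., the BSI), the labels would flip.
    print(classify_change(baseline=6, follow_up=7))  # improved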

In general, “stable” was the most frequent change outcome for the individual measures from postinjury baseline assessment to 7 years post injury.

However, a substantial proportion of patients with TBI (regardless of severity) experienced changes in psychiatric status, cognition, and functional outcomes over the years.

When the GOSE, BSI, and BTACT were considered collectively, rates of decline were 21% for mTBI, 26% for msTBI, and 15% for OTC.

The highest rates of decline were in functional outcomes (GOSE scores). On average, over the course of 2-7 years post injury, 29% of patients with mTBI and 23% of those with msTBI experienced a decline in their ability to function in daily activities.

A pattern of improvement on the GOSE was noted in 36% of patients with msTBI and 22% of patients with mTBI.

Notably, said Dr. Brett, patients who experienced greater difficulties near the time of injury showed improvement for a period of 2-7 years post injury. Patient factors, such as older age at the time of the injury, were associated with greater risk of long-term decline.

“Our findings highlight the need to embrace conceptualization of TBI as a chronic condition in order to establish systems of care that provide continued follow-up with treatment and supports that adapt to evolving patient needs, regardless of the directions of change,” Dr. Brett told this news organization.

Important and novel work

In a linked editorial, Robynne Braun, MD, PhD, with the department of neurology, University of Maryland, Baltimore, notes that there have been “few prospective studies examining postinjury outcomes on this longer timescale, especially in mild TBI, making this an important and novel body of work.”

The study “effectively demonstrates that changes in function across multiple domains continue to occur well beyond the conventionally tracked 6- to 12-month period of injury recovery,” Dr. Braun writes.

The observation that over the 7-year follow-up, a substantial proportion of patients with mTBI and msTBI exhibited a pattern of decline on the GOSE suggests that they “may have needed more ongoing medical monitoring, rehabilitation, or supportive services to prevent worsening,” Dr. Braun adds.

At the same time, the improvement pattern on the GOSE suggests “opportunities for recovery that further rehabilitative or medical services might have enhanced.”

The study was funded by the National Institute of Neurological Disorders and Stroke, the National Institute on Aging, the National Football League Scientific Advisory Board, and the U.S. Department of Defense. Dr. Brett and Dr. Braun have disclosed no relevant financial relationships.

A version of this article originally appeared on Medscape.com.


Can a repurposed Parkinson’s drug slow ALS progression?


Ropinirole, a drug used for Parkinson’s disease (PD), shows promise in slowing the progression of amyotrophic lateral sclerosis (ALS), early research suggests. However, at least one expert believes the study has “significant flaws.”
 

Investigators randomly assigned 20 individuals with sporadic ALS to receive either ropinirole or placebo for 24 weeks. During the double-blind period, there was no difference between the groups in terms of decline in functional status.

However, during a further open-label extension period, the ropinirole group showed significant suppression of functional decline and an average of an additional 7 months of progression-free survival.

The researchers were able to predict clinical responsiveness to ropinirole in vitro by analyzing motor neurons derived from participants’ stem cells.

“We found that ropinirole is safe and tolerable for ALS patients and shows therapeutic promise at helping them sustain daily activity and muscle strength,” first author Satoru Morimoto, MD, of the department of physiology, Keio University School of Medicine, Tokyo, said in a news release.

The study was published online in Cell Stem Cell.
 

Feasibility study

“ALS is totally incurable and it’s a very difficult disease to treat,” senior author Hideyuki Okano, MD, PhD, professor, department of physiology, Keio University, said in the news release.

Preclinical animal models have “limited translational potential” for identifying drug candidates, but induced pluripotent stem cell (iPSC)–derived motor neurons (MNs) from ALS patients can “overcome these limitations for drug screening,” the authors write.

“We previously identified ropinirole [a dopamine D2 receptor agonist] as a potential anti-ALS drug in vitro by iPSC drug discovery,” Dr. Okano said.

The current trial was a randomized, placebo-controlled phase 1/2a feasibility trial that evaluated the safety, tolerability, and efficacy of ropinirole in patients with ALS, using several parameters:

  • The revised ALS functional rating scale (ALSFRS-R) score.
  • Composite functional endpoints.
  • Event-free survival.
  • Time to ≤ 50% forced vital capacity (FVC).

The trial consisted of a 12-week run-in period, a 24-week double-blind period, an open-label extension period that lasted from 4 to 24 weeks, and a 4-week follow-up period after administration.

Thirteen patients were assigned to receive ropinirole (23.1% women; mean age, 65.2 ± 12.6 years; 7.7% with clinically definite and 76.9% with clinically probable ALS); seven were assigned to receive placebo (57.1% women; mean age, 66.3 ± 7.5 years; 14.3% with clinically definite and 85.7% with clinically probable ALS).

In the treatment group, 30.8% of patients had bulbar-onset disease vs. 57.1% in the placebo group. At baseline, the mean FVC was 94.4% ± 14.9 and 81.5% ± 23.2 in the ropinirole and placebo groups, respectively. The mean body mass index (BMI) was 22.91 ± 3.82 and 19.69 ± 2.63, respectively.

Of the participants, 12 in the ropinirole group and six in the placebo group completed the full 24-week treatment protocol; 12 in the ropinirole group and five in the placebo group completed the open-label extension (participants who had received placebo were switched to the active drug).

However, only seven participants in the ropinirole group and one participant in the placebo group completed the full 1-year trial.

‘Striking correlation’

“During the double-blind period, muscle strength and daily activity were maintained, but a decline in the ALSFRS-R … was not different from that in the placebo group,” the researchers write.

In the open-label extension period, the ropinirole group showed “significant suppression of ALSFRS-R decline,” with an ALSFRS-R score decline of only 7.75 points (95% confidence interval [CI], 4.63-10.66) for the treatment group vs. 17.51 points (95% CI, 12.56-22.46) for the placebo group.

The researchers used the combined assessment of function and survival (CAFS) score, which adjusts the ALSFRS-R score against mortality, to see whether functional benefits translated into improved survival.

The score “favored ropinirole” in the open-label extension period and the entire treatment period but not in the double-blind period.

Disease progression events occurred in 7 of 7 (100%) participants in the placebo group and 7 of 13 (54%) in the ropinirole group, “suggesting a twofold decrease in disease progression” in the treatment group.
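
The “twofold” characterization follows from the event rates themselves: 100% ÷ 54% ≈ 1.9, meaning progression events were roughly half as frequent in the ropinirole group over the same period.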

The ropinirole group experienced an additional 27.9 weeks of disease progression–free survival, compared with the placebo group.

“No participant discontinued treatment because of adverse experiences in either treatment group,” the authors report.

The analysis of iPSC-derived motor neurons from participants showed dopamine D2 receptor expression, as well as the potential involvement of the cholesterol pathway SREBP2 in the therapeutic effects of ropinirole. Lipid peroxide was also identified as a good “surrogate clinical marker to assess disease progression and drug efficacy.”

“We found a very striking correlation between a patient’s clinical response and the response of their motor neurons in vitro,” said Dr. Morimoto. “Patients whose motor neurons responded robustly to ropinirole in vitro had a much slower clinical disease progression with ropinirole treatment, while suboptimal responders showed much more rapid disease progression, despite taking ropinirole.”

Limitations include “small sample sizes and high attrition rates in the open-label extension period,” so “further validation” is required, the authors state.


 

Significant flaws

Commenting for this article, Carmel Armon, MD, MHS, professor of neurology, Loma Linda (Calif.) University, said the study “falls short of being a credible 1/2a clinical trial.”

Although the “intentions were good and the design not unusual,” the two groups were not “balanced on risk factors for faster progressing disease.” Rather, the placebo group was “tilted towards faster progressing disease” because there were more clinically definite and probable ALS patients in the placebo group than the treatment group, and there were more patients with bulbar onset.

Participants in the placebo group also had shorter median disease duration, lower BMI, and lower FVC, noted Dr. Armon, who was not involved with the study.

And only one of the seven placebo-group patients completed the full trial, compared with 7 of 13 patients in the intervention group.

“With these limitations, I would be disinclined to rely on the findings to justify a larger clinical trial,” Dr. Armon concluded.

The trial was sponsored by K Pharma. The study drug, active drugs, and placebo were supplied free of charge by GlaxoSmithKline K.K. Dr. Okano received grants from JSPS and AMED and grants and personal fees from K Pharma during the conduct of the study and personal fees from Sanbio, outside the submitted work. Dr. Okano has a patent on a therapeutic agent for ALS and composition for treatment licensed to K Pharma. The other authors’ disclosures and additional information are available in the original article. Dr. Armon reports no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

Ropinirole, a drug used for Parkinson’s disease (PD), shows promise in slowing the progression of amyotrophic lateral sclerosis (ALS), early research suggests. However, at least one expert believes the study has “significant flaws.”
 

Investigators randomly assigned 20 individuals with sporadic ALS to receive either ropinirole or placebo for 24 weeks. During the double-blind period, there was no difference between the groups in terms of decline in functional status.

However, during a further open-label extension period, the ropinirole group showed significant suppression of functional decline and an average of an additional 7 months of progression-free survival.

The researchers were able to predict clinical responsiveness to ropinirole in vitro by analyzing motor neurons derived from participants’ stem cells.

“We found that ropinirole is safe and tolerable for ALS patients and shows therapeutic promise at helping them sustain daily activity and muscle strength,” first author Satoru Morimoto, MD, of the department of physiology, Keio University School of Medicine, Tokyo, said in a news release.

The study was published online in Cell Stem Cell.
 

Feasibility study

“ALS is totally incurable and it’s a very difficult disease to treat,” senior author Hideyuki Okano, MD, PhD, professor, department of physiology, Keio University, said in the news release.

Preclinical animal models have “limited translational potential” for identifying drug candidates, but induced pluripotent stem cell (iPSC)–derived motor neurons (MNs) from ALS patients can “overcome these limitations for drug screening,” the authors write.

“We previously identified ropinirole [a dopamine D2 receptor agonist] as a potential anti-ALS drug in vitro by iPSC drug discovery,” Dr. Okano said.

The current trial was a randomized, placebo-controlled phase 1/2a feasibility trial that evaluated the safety, tolerability, and efficacy of ropinirole in patients with ALS, using several parameters:

  • The revised ALS functional rating scale (ALSFRS-R) score.
  • Composite functional endpoints.
  • Event-free survival.
  • Time to ≤ 50% forced vital capacity (FVC).

The trial consisted of a 12-week run-in period, a 24-week double-blind period, an open-label extension period that lasted from 4 to 24 weeks, and a 4-week follow-up period after administration.

Thirteen patients were assigned to receive ropinirole (23.1% women; mean age, 65.2 ± 12.6 years; 7.7% with clinically definite and 76.9% with clinically probable ALS); seven were assigned to receive placebo (57.1% women; mean age, 66.3 ± 7.5 years; 14.3% with clinically definite and 85.7% with clinically probable ALS).

Of the treatment group, 30.8% had a bulbar onset lesion vs. 57.1% in the placebo group. At baseline, the mean FVC was 94.4% ± 14.9 and 81.5% ± 23.2 in the ropinirole and placebo groups, respectively. The mean body mass index (BMI) was 22.91 ± 3.82 and 19.69 ± 2.63, respectively.

Of the participants,12 in the ropinirole and six in the control group completed the full 24-week treatment protocol; 12 in the ropinirole and five in the placebo group completed the open-label extension (participants who had received placebo were switched to the active drug).

However only seven participants in the ropinirole group and one participant in the placebo group completed the full 1-year trial.
 

 

 

‘Striking correlation’

“During the double-blind period, muscle strength and daily activity were maintained, but a decline in the ALSFRS-R … was not different from that in the placebo group,” the researchers write.

In the open-label extension period, the ropinirole group showed “significant suppression of ALSFRS-R decline,” with an ALSFRS-R score change of only 7.75 (95% confidence interval, 10.66-4.63) for the treatment group vs. 17.51 (95% CI, 22.46-12.56) for the placebo group.

The researchers used the assessment of function and survival (CAFS) score, which adjusts the ALSFRS-R score against mortality, to see whether functional benefits translated into improved survival.

The score “favored ropinirole” in the open-extension period and the entire treatment period but not in the double-blind period.

 

Disease progression events occurred in 7 of 7 (100%) participants in the placebo group and 7 of 13 (54%) in the ropinirole group, “suggesting a twofold decrease in disease progression” in the treatment group.

The ropinirole group experienced an additional 27.9 weeks of disease progression–free survival, compared with the placebo group.

“No participant discontinued treatment because of adverse experiences in either treatment group,” the authors report.

The analysis of iPSC-derived motor neurons from participants showed dopamine D2 receptor expression, as well as the potential involvement of the cholesterol pathway SREBP2 in the therapeutic effects of ropinirole. Lipid peroxide was also identified as a good “surrogate clinical marker to assess disease progression and drug efficacy.”

“We found a very striking correlation between a patient’s clinical response and the response of their motor neurons in vitro,” said Dr. Morimoto. “Patients whose motor neurons responded robustly to ropinirole in vitro had a much slower clinical disease progression with ropinirole treatment, while suboptimal responders showed much more rapid disease progression, despite taking ropinirole.”

Limitations include “small sample sizes and high attrition rates in the open-label extension period,” so “further validation” is required, the authors state.


 

Significant flaws

Commenting for this article, Carmel Armon, MD, MHS, professor of neurology, Loma Linda (Calif.) University, said the study “falls short of being a credible 1/2a clinical trial.”

Although the “intentions were good and the design not unusual,” the two groups were not “balanced on risk factors for faster progressing disease.” Rather, the placebo group was “tilted towards faster progressing disease” because there were more clinically definite and probable ALS patients in the placebo group than the treatment group, and there were more patients with bulbar onset.

Participants in the placebo group also had shorter median disease duration, lower BMI, and lower FVC, noted Dr. Armon, who was not involved with the study.

And only 1 in 7 control patients completed the open-label extension, compared with 7 of 13 patients in the intervention group.

“With these limitations, I would be disinclined to rely on the findings to justify a larger clinical trial,” Dr. Armon concluded.

The trial was sponsored by K Pharma. The study drug, active drugs, and placebo were supplied free of charge by GlaxoSmithKline K.K. Dr. Okano received grants from JSPS and AMED and grants and personal fees from K Pharma during the conduct of the study and personal fees from Sanbio, outside the submitted work. Dr. Okano has a patent on a therapeutic agent for ALS and composition for treatment licensed to K Pharma. The other authors’ disclosures and additional information are available in the original article. Dr. Armon reports no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Ropinirole, a drug used for Parkinson’s disease (PD), shows promise in slowing the progression of amyotrophic lateral sclerosis (ALS), early research suggests. However, at least one expert believes the study has “significant flaws.”
 

Investigators randomly assigned 20 individuals with sporadic ALS to receive either ropinirole or placebo for 24 weeks. During the double-blind period, there was no difference between the groups in terms of decline in functional status.

However, during a further open-label extension period, the ropinirole group showed significant suppression of functional decline and an average of an additional 7 months of progression-free survival.

The researchers were able to predict clinical responsiveness to ropinirole in vitro by analyzing motor neurons derived from participants’ stem cells.

“We found that ropinirole is safe and tolerable for ALS patients and shows therapeutic promise at helping them sustain daily activity and muscle strength,” first author Satoru Morimoto, MD, of the department of physiology, Keio University School of Medicine, Tokyo, said in a news release.

The study was published online in Cell Stem Cell.
 

Feasibility study

“ALS is totally incurable and it’s a very difficult disease to treat,” senior author Hideyuki Okano, MD, PhD, professor, department of physiology, Keio University, said in the news release.

Preclinical animal models have “limited translational potential” for identifying drug candidates, but induced pluripotent stem cell (iPSC)–derived motor neurons (MNs) from ALS patients can “overcome these limitations for drug screening,” the authors write.

“We previously identified ropinirole [a dopamine D2 receptor agonist] as a potential anti-ALS drug in vitro by iPSC drug discovery,” Dr. Okano said.

The current trial was a randomized, placebo-controlled phase 1/2a feasibility trial that evaluated the safety, tolerability, and efficacy of ropinirole in patients with ALS, using several parameters:

  • The revised ALS functional rating scale (ALSFRS-R) score.
  • Composite functional endpoints.
  • Event-free survival.
  • Time to ≤ 50% forced vital capacity (FVC).

The trial consisted of a 12-week run-in period, a 24-week double-blind period, an open-label extension period that lasted from 4 to 24 weeks, and a 4-week follow-up period after administration.

Thirteen patients were assigned to receive ropinirole (23.1% women; mean age, 65.2 ± 12.6 years; 7.7% with clinically definite and 76.9% with clinically probable ALS); seven were assigned to receive placebo (57.1% women; mean age, 66.3 ± 7.5 years; 14.3% with clinically definite and 85.7% with clinically probable ALS).

Of the treatment group, 30.8% had a bulbar onset lesion vs. 57.1% in the placebo group. At baseline, the mean FVC was 94.4% ± 14.9 and 81.5% ± 23.2 in the ropinirole and placebo groups, respectively. The mean body mass index (BMI) was 22.91 ± 3.82 and 19.69 ± 2.63, respectively.

Of the participants,12 in the ropinirole and six in the control group completed the full 24-week treatment protocol; 12 in the ropinirole and five in the placebo group completed the open-label extension (participants who had received placebo were switched to the active drug).

However only seven participants in the ropinirole group and one participant in the placebo group completed the full 1-year trial.
 

 

 

‘Striking correlation’

“During the double-blind period, muscle strength and daily activity were maintained, but a decline in the ALSFRS-R … was not different from that in the placebo group,” the researchers write.

In the open-label extension period, the ropinirole group showed “significant suppression of ALSFRS-R decline,” with an ALSFRS-R score change of only 7.75 (95% confidence interval, 10.66-4.63) for the treatment group vs. 17.51 (95% CI, 22.46-12.56) for the placebo group.

The researchers used the assessment of function and survival (CAFS) score, which adjusts the ALSFRS-R score against mortality, to see whether functional benefits translated into improved survival.

The score “favored ropinirole” in the open-extension period and the entire treatment period but not in the double-blind period.

 

Disease progression events occurred in 7 of 7 (100%) participants in the placebo group and 7 of 13 (54%) in the ropinirole group, “suggesting a twofold decrease in disease progression” in the treatment group.

The ropinirole group experienced an additional 27.9 weeks of disease progression–free survival, compared with the placebo group.

“No participant discontinued treatment because of adverse experiences in either treatment group,” the authors report.

The analysis of iPSC-derived motor neurons from participants showed dopamine D2 receptor expression, as well as the potential involvement of the cholesterol pathway SREBP2 in the therapeutic effects of ropinirole. Lipid peroxide was also identified as a good “surrogate clinical marker to assess disease progression and drug efficacy.”

“We found a very striking correlation between a patient’s clinical response and the response of their motor neurons in vitro,” said Dr. Morimoto. “Patients whose motor neurons responded robustly to ropinirole in vitro had a much slower clinical disease progression with ropinirole treatment, while suboptimal responders showed much more rapid disease progression, despite taking ropinirole.”

Limitations include “small sample sizes and high attrition rates in the open-label extension period,” so “further validation” is required, the authors state.


 

Significant flaws

Commenting for this article, Carmel Armon, MD, MHS, professor of neurology, Loma Linda (Calif.) University, said the study “falls short of being a credible 1/2a clinical trial.”

Although the “intentions were good and the design not unusual,” the two groups were not “balanced on risk factors for faster progressing disease.” Rather, the placebo group was “tilted towards faster progressing disease” because there were more clinically definite and probable ALS patients in the placebo group than the treatment group, and there were more patients with bulbar onset.

Participants in the placebo group also had shorter median disease duration, lower BMI, and lower FVC, noted Dr. Armon, who was not involved with the study.

And only 1 of 7 control patients completed the full 1-year trial, compared with 7 of 13 patients in the intervention group.

“With these limitations, I would be disinclined to rely on the findings to justify a larger clinical trial,” Dr. Armon concluded.

The trial was sponsored by K Pharma. The study drug, active drugs, and placebo were supplied free of charge by GlaxoSmithKline K.K. Dr. Okano received grants from JSPS and AMED and grants and personal fees from K Pharma during the conduct of the study and personal fees from Sanbio, outside the submitted work. Dr. Okano has a patent on a therapeutic agent for ALS and composition for treatment licensed to K Pharma. The other authors’ disclosures and additional information are available in the original article. Dr. Armon reports no relevant financial relationships.

A version of this article first appeared on Medscape.com.


FROM CELL STEM CELL


Regular napping linked to greater brain volume

Article Type
Changed
Wed, 06/28/2023 - 09:00

Daily napping may help preserve brain health, new research suggests.

Investigators at University College London, and the University of the Republic of Uruguay, Montevideo, found individuals genetically predisposed to regular napping had larger total brain volume, a surrogate of better cognitive health.

“Our results suggest that napping may improve brain health,” first author Valentina Paz, MSc, a PhD candidate at the University of the Republic of Uruguay, said in an interview. “Specifically, our work revealed a 15.8 cubic cm increase in total brain volume with more frequent daytime napping,” she said.

The findings were published online in Sleep Health.
 

Higher brain volume

Previous studies examining the potential link between napping and cognition in older adults have yielded conflicting results.

To clarify this association, Ms. Paz and colleagues used Mendelian randomization to study DNA samples, cognitive outcomes, and functional magnetic resonance imaging data in participants from the ongoing UK Biobank Study.  

Starting with data from 378,932 study participants (mean age, 57 years), the investigators compared measures of brain health and cognition in those genetically predisposed to nap regularly with those who lacked these genetic variants.

More specifically, the investigators examined 97 genetic variants previously linked to the likelihood of regular napping and compared fMRI and cognitive outcomes between those genetically predisposed to take regular naps and those who weren’t.
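
Analytically, each such variant contributes a Wald ratio — its estimated effect on the outcome divided by its effect on napping — and the per-variant ratios are pooled by inverse-variance weighting. Below is a minimal sketch of that standard two-sample Mendelian randomization calculation; the summary statistics are made-up placeholders, not values from this study.

    import numpy as np

    def ivw_mr(beta_exposure, beta_outcome, se_outcome):
        """Inverse-variance-weighted MR estimate from per-variant
        summary statistics (first-order standard errors)."""
        ratios = beta_outcome / beta_exposure       # per-variant Wald ratios
        se = se_outcome / np.abs(beta_exposure)
        w = 1.0 / se**2
        estimate = np.sum(w * ratios) / np.sum(w)
        se_estimate = np.sqrt(1.0 / np.sum(w))
        return estimate, se_estimate

    # Hypothetical summary statistics for three napping variants:
    bx = np.array([0.05, 0.08, 0.03])   # variant -> napping liability
    by = np.array([0.9, 1.4, 0.5])      # variant -> brain volume (cm^3)
    se = np.array([0.4, 0.5, 0.3])
    print(ivw_mr(bx, by, se))           # pooled effect and its standard error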

Study outcomes included total brain volume, hippocampal volume, reaction time, and visual memory.

The final study sample included 35,080 participants with neuroimaging, cognitive assessment, and genotype data.

The researchers estimated that the average difference in brain volume between individuals genetically programmed to be habitual nappers and those who were not was equivalent to 15.8 cubic cm, or 2.6-6.5 years of aging.
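
The years-of-aging framing is simple arithmetic: the 15.8 cubic cm difference divided by an assumed rate of age-related volume loss. The snippet below back-derives the annual atrophy rates implied by the reported range; these rates are inferred here for illustration and are not values reported by the study.

    volume_diff_cm3 = 15.8                # reported group difference
    for years in (2.6, 6.5):              # reported aging-equivalent range
        rate = volume_diff_cm3 / years
        print(f"{years} years of aging implies ~{rate:.1f} cm^3 lost per year")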

However, there was no difference between the two study groups in the other three outcomes: hippocampal volume, reaction time, and visual memory.

Since investigators did not have information on the length of time participants napped, Ms. Paz suggested that “taking a short nap in the early afternoon may help cognition in those needing it.”

However, she added, the study’s findings need to be replicated before any firm conclusions can be made.

“More work is needed to examine the associations between napping and cognition, and the replication of these findings using other datasets and methods,” she said.

The investigators note that the study’s findings augment the knowledge of the “impact of habitual daytime napping on brain health, which is essential to understanding cognitive impairment in the aging population. The lack of evidence for an association between napping and hippocampal volume and cognitive outcomes (for example, alertness) may be affected by habitual daytime napping and should be studied in the future.”
 

Strengths, limitations

Tara Spires-Jones, PhD, president of the British Neuroscience Association and group leader at the UK Dementia Research Institute, said, “the study shows a small but significant increase in brain volume in people who have a genetic signature associated with taking daytime naps.”

Dr. Spires-Jones, who was not involved in the research, noted that while the study is well-conducted, it has limitations. Because Mendelian randomization uses a genetic signature, she noted, outcomes depend on the accuracy of the signature. 

“The napping habits of UK Biobank participants were self-reported, which might not be entirely accurate, and the ‘napping’ signature overlapped substantially with the signature for cognitive outcomes in the study, which makes the causal link weaker,” she said.

“Even with those limitations, this study is interesting because it adds to the data indicating that sleep is important for brain health,” said Dr. Spires-Jones.

The study was supported by Diabetes UK, the British Heart Foundation, and the Diabetes Research and Wellness Foundation. In Uruguay, it was supported by Programa de Desarrollo de las Ciencias Básicas, Agencia Nacional de Investigación e Innovación, Comisión Sectorial de Investigación Científica, and Comisión Académica de Posgrado. In the United States it was supported by the National Heart, Lung, and Blood Institute. There were no disclosures reported.

A version of this article first appeared on Medscape.com.

FROM SLEEP HEALTH


Altered gut bacteria a biomarker of preclinical Alzheimer’s?

Article Type
Changed
Tue, 06/20/2023 - 10:13

The composition of gut bacteria in people with preclinical Alzheimer’s disease (AD) differs from that of healthy people, a new study shows.

The findings open up the possibility of analyzing the gut microbiome to identify individuals at a higher risk for dementia and perhaps designing microbiome-altering preventive treatments to help stave off cognitive decline, researchers noted.

Study investigator Gautam Dantas, PhD, cautioned that it’s not known whether the gut is influencing the brain, or the brain is influencing the gut, “but this association is valuable to know in either case.

“It could be that the changes in the gut microbiome are just a readout of pathological changes in the brain. The other alternative is that the gut microbiome is contributing to AD, in which case, altering the gut microbiome with probiotics or fecal transfers might help change the course of the disease,” Dr. Dantas, Washington University, St. Louis, said in a news release.

The study was published online in Science Translational Medicine.
 

Stool test?

Multiple lines of evidence suggest a role for gut microbes in the evolution of AD pathogenesis. However, less is known about gut microbiome changes in the preclinical (presymptomatic) phase of AD.

To investigate, Dr. Dantas and colleagues studied 164 cognitively normal adults, 49 of whom had biomarker evidence of preclinical AD.

After accounting for clinical covariates and diet, the researchers found that those with preclinical AD had distinct gut microbial taxonomic profiles compared with healthy controls.

The observed microbiome features correlated with amyloid and tau but not neurodegeneration biomarkers, “suggesting that the gut microbial community changes early in the disease process,” the researchers suggested.

They identified specific taxa associated with preclinical AD, and including these microbiome features improved the accuracy, sensitivity, and specificity of machine learning classifiers for predicting preclinical AD status.
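
As a sketch of that analysis pattern — checking whether microbiome features add predictive value beyond clinical covariates — the code below compares cross-validated classifier performance with and without taxa features. The feature matrices and labels are randomly generated placeholders, not the study's data, so the printed AUCs are meaningful only as a template.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 164                                  # cohort size in the study
    X_clinical = rng.normal(size=(n, 6))     # age, sex, BMI, ... (placeholders)
    X_taxa = rng.normal(size=(n, 30))        # taxa abundances (placeholders)
    y = rng.integers(0, 2, size=n)           # 1 = preclinical AD (placeholder)

    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    auc_base = cross_val_score(clf, X_clinical, y, cv=5, scoring="roc_auc").mean()
    auc_full = cross_val_score(
        clf, np.hstack([X_clinical, X_taxa]), y, cv=5, scoring="roc_auc").mean()
    print(f"AUC clinical only: {auc_base:.2f}; clinical + taxa: {auc_full:.2f}")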

The findings suggest “markers in the stool might complement early screening measures for preclinical AD,” the researchers noted.

“The nice thing about using the gut microbiome as a screening tool is its simplicity and ease,” Beau Ances, MD, PhD, professor of neurology, at Washington University, St. Louis, said in the release.

“One day, individuals may be able to provide a stool sample and find out if they are at increased risk for developing AD. It would be much easier and less invasive and more accessible for a large proportion of the population, especially underrepresented groups, compared to brain scans or spinal taps,” Dr. Ances added.

The researchers have launched a 5-year follow-up study designed to help determine whether the differences in the gut microbiome are a cause or a result of the brain changes seen in early AD.
 

Caveats, cautionary notes

In a comment, Claire Sexton, DPhil, Alzheimer’s Association senior director of scientific programs and outreach, cautioned that the study design means that it’s “not possible to prove one thing causes another. What it can show is that two or more aspects are in some way related, thus setting the stage for further research.”

Dr. Sexton noted that though the authors accounted for a number of variables in their models, including age, sex, race, education, body mass index, hypertension, and diabetes, and observed no differences in intake of any major nutrient group, “it’s still not possible to rule out that additional factors beyond the variations in gut microbiome contributed to the changes in brain markers of Alzheimer’s.”

Dr. Sexton also noted that the study population is not representative of all people living with AD, with the vast majority of those with preclinical AD in the study being White.

“If these findings are replicated and confirmed in study groups that are representative of our communities, it is possible that gut microbiome signatures could be a further addition to the suite of diagnostic tools employed in certain settings,” Dr. Sexton said.

This research was supported by the Infectious Diseases Society of America Foundation, the National Institute on Aging, the Brennan Fund, and the Paula and Rodger Riney Foundation. Dr. Dantas, Dr. Ances, and Dr. Sexton have no relevant disclosures.

A version of this article first appeared on Medscape.com.

FROM SCIENCE TRANSLATIONAL MEDICINE


Novel cannabis oil curbs tics in severe Tourette’s

Article Type
Changed
Mon, 06/19/2023 - 12:49

An oral oil containing tetrahydrocannabinol (THC) and cannabidiol (CBD) led to a significant and meaningful reduction in motor and vocal tics in patients with severe Tourette syndrome (TS), results of a double-blind, placebo-controlled, crossover study show.

“In a methodologically robust manner (and independent of any drug company sponsorship), we provide evidence for the effectiveness of repeated dosing with THC:CBD vs. placebo in tic suppression, as well as reduction of comorbid anxiety and obsessive-compulsive disorder in severe TS,” neuropsychiatrist and lead investigator Philip Mosley, PhD, said in an interview.

The results offer support to people with TS who “want to approach their doctor to try medicinal cannabis when other drugs have not worked or are intolerable,” said Dr. Mosley, of the Wesley Research Institute and QIMR Berghofer Medical Research Institute, Herston, Australia.

The study was published online in NEJM Evidence.
 

A viable treatment option

Twenty-two adults (mean age, 31 years) with severe TS received THC:CBD oil titrated upward over 6 weeks to a daily dose of 20 mg of THC and 20 mg of CBD, followed by a 6-week course of placebo (or vice versa). Six participants had not previously used cannabis.

The primary outcome was the total tic score on the Yale Global Tic Severity Scale (YGTSS; range, 0-50, with higher scores indicating greater tic severity).

The mean baseline YGTSS total tic score was 35.7. At 6 weeks, the reduction in total tic score was 8.9 with THC:CBD vs. 2.5 with placebo.

A linear mixed-effects model (intention-to-treat) showed a significant interaction of treatment and visit number (P = .008), indicating a greater decrease (improvement) in tic score over time with THC:CBD, the study team reported.
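
For readers unfamiliar with this model class, the sketch below fits a comparable specification with statsmodels: a random intercept per participant and fixed effects for treatment, visit, and their interaction, whose coefficient carries the reported test. The generated data are purely illustrative, and the simple random-intercept structure is an assumption, not the trial's exact model.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n_subj, n_visits = 22, 4
    df = pd.DataFrame({
        "subject":   np.repeat(np.arange(n_subj), n_visits),
        "visit":     np.tile(np.arange(n_visits), n_subj),
        "treatment": np.repeat(rng.integers(0, 2, n_subj), n_visits),
    })
    # Illustrative effect: tic scores fall faster under active treatment.
    df["tic_score"] = (35 - 0.5 * df["visit"]
                       - 1.5 * df["treatment"] * df["visit"]
                       + rng.normal(0, 2, len(df)))

    model = smf.mixedlm("tic_score ~ treatment * visit", df,
                        groups=df["subject"])
    print(model.fit().summary())  # the treatment:visit row tests the interaction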

On average, the magnitude of the tic reduction was “moderate” and comparable to the effect observed with existing treatments such as antipsychotic agents, the investigators noted.

THC:CBD also led to a reduction in other symptoms associated with TS, particularly symptoms of OCD and anxiety.

The symptomatic response to THC:CBD correlated with serum metabolites of the cannabinoids, further supporting a biological relationship, the researchers noted.

There were no serious adverse events. Adverse effects with THC:CBD were generally mild. The most common adverse effect was cognitive difficulties, including slowed mentation, memory lapses, and poor concentration.

“Like many studies of psychoactive compounds, blinding among participants was a problem,” the researchers noted. Despite best efforts to conceal treatment allocation and match placebo to the active agent in terms of color and smell, most participants were able to correctly guess their treatment order.

Based on the findings in this small trial, larger and longer trials of THC:CBD in TS are warranted, they concluded.

“We need a plurality of treatment options in Tourette syndrome. For some, antipsychotics are effective tic-suppressing agents but for many these benefits are complicated by side effects such as weight gain & sedation,” Dr. Mosley tweeted. “Cannabinoids are a biologically plausible therapeutic agent. The body’s own ‘endocannabinoid’ receptors are concentrated in the basal ganglia – the neuroanatomical nexus of TS.”

The study was funded by the Wesley Medical Research Institute, Brisbane, and the Lambert Initiative for Cannabinoid Therapeutics, a philanthropically funded research organization at the University of Sydney, Australia. Dr. Mosley reports no relevant financial relationships.
 

A version of this article first appeared on Medscape.com.

FROM NEJM EVIDENCE


‘Impressive’ results for intranasal ketamine in chronic, refractory migraine

Article Type
Changed
Mon, 06/19/2023 - 12:45

Intranasal (IN) ketamine may be a feasible treatment alternative for people with chronic, refractory migraine who don’t respond to other medications, new research shows.

Half of the study participants who used IN ketamine for chronic, treatment-refractory migraine in a new retrospective cohort study reported it as “very effective” and over one-third said it boosted their quality of life.

“In our study, we showed that with even a few uses per day, intranasal ketamine can still improve patients’ quality of life,” lead investigator Hsiangkuo Yuan, MD, PhD, said in an interview. Dr. Yuan is associate professor of neurology at Thomas Jefferson University, Philadelphia, and director of clinical research at the Jefferson Headache Center.

He added that “multiple medications failed these patients, and the majority of patients were having daily headaches. So, if anything works, even partially and shortly, it may still give patients some relief to get through the day.”

The findings were published online in Regional Anesthesia & Pain Medicine.  
 

Daily migraine, failed medications

IN ketamine has been studied in patients with cluster headache and with migraine more broadly, but not for the treatment of chronic, treatment-refractory migraine, the investigators note.

Ketamine is not yet approved by the Food and Drug Administration to treat migraine.

To further explore ketamine’s effect in those with chronic, treatment-refractory migraine, the investigators retrospectively analyzed electronic health records of patients at the Jefferson Headache Center who had received IN ketamine for the treatment of migraine between January 2019 and February 2020.

Of 242 patients who had received IN ketamine, Dr. Yuan’s team followed up with 169 who agreed to be part of the study.

The majority (67%) had daily migraine, and 85% had tried more than three classes of preventive medications for migraine. They currently used a median of two medications, the most common of which was a CGRP monoclonal antibody.

On average, patients used six sprays per day for a median of 10 days per month. Median time to pain relief was 52 minutes after dosing.

Almost three-quarters of patients reported at least one side effect from the ketamine, most commonly fatigue (22%), double/blurred vision (21%), and confusion/dissociation (21%). These effects were mostly temporary, the researchers report.

The most common reasons for initiating IN ketamine included an incomplete response to prior acute medications (59%), incomplete response to prior preventive medications (31%), and prior benefit from IV ketamine (23%).

Study investigators noted that ketamine has the potential to become addictive and indicated that “clinicians should only consider the use of a potentially addictive medication such as ketamine for significantly disabled patients with migraine.”

About half of the participants who used IN ketamine found it “very effective,” and 40% found it “somewhat effective.” Within the same group, 36% and 43% found the overall impact of IN ketamine on their quality of life was much better and somewhat better, respectively.

Among those still using ketamine during study follow-up, 82% reported that ketamine was very effective.

Compared with other acute headache medications, IN ketamine was considered much better (43%) or somewhat better (30%).

Nearly 75% of participants reported using fewer pain relievers when using IN ketamine.

Dr. Yuan said that future research might focus on finding predictors for IN ketamine response or determining the optimal effective and safe dose for the drug in those with chronic, treatment-refractory migraine.  

“We still need a prospective, randomized controlled trial to assess the efficacy and tolerability of intranasal ketamine,” he added.

‘Impressive result’

Commenting on the findings for this article, Richard Lipton, MD, professor of neurology, psychiatry and behavioral sciences and director of the Montefiore Headache Center at Albert Einstein College of Medicine, New York, said that “in this refractory population with multiple treatment failures, this is a very impressive, open-label result.”

“This real-world data suggests that ketamine is an effective option for people with medically intractable chronic migraine,” said Dr. Lipton, who was not part of the study. “In these very difficult to treat patients, 65% of those who started on ketamine persisted. Of those who remained on ketamine, 82% found it very effective.”

“This study makes me more confident that intranasal ketamine is a helpful treatment option, and I plan to use it more often in the future,” he added.

Like Dr. Yuan, Dr. Lipton highlighted the need for “well-designed placebo-controlled trials” and “rigorous comparative effectiveness studies.”

The study was funded by Miles for Migraine. Dr. Yuan has received institutional support for serving as an investigator from Teva and AbbVie, and royalties from Cambridge University Press and MedLink. Dr. Lipton has received compensation for consultation from Alder/Lumbeck, Axsome, Supernus, Theranica, Upsher-Smith, and Satsuma. He has participated in speaker bureaus for Eli Lilly and Amgen/Novartis and has received institutional support for serving as principal investigator from Teva, GammaCore, and Allergan/AbbVie. He has received payments for authorship or royalties from Demos Medical, Cambridge University Press, and MedLink.

A version of this article originally appeared on Medscape.com.

Publications
Topics
Sections

Intranasal (IN) ketamine may be a feasible treatment alternative for people with chronic, refractory migraine who don’t respond to other medications, new research shows.

Half of the study participants who used IN ketamine for chronic, treatment-refractory migraine in a new retrospective cohort study reported it as “very effective” and over one-third said it boosted their quality of life.

“In our study, we showed that with even a few uses per day, intranasal ketamine can still improve patients’ quality of life,” lead investigator Hsiangkuo Yuan, MD, PhD, said in an interview. Dr. Yuan is associate professor of neurology at Thomas Jefferson University, Philadelphia, and director of clinical research at the Jefferson Headache Center.

He added that “multiple medications failed these patients, and the majority of patients were having daily headaches. So, if anything works, even partially and shortly, it may still give patients some relief to get through the day.”

The findings were published online in Regional Anesthesia & Pain Medicine.  
 

Daily migraine, failed medications

Use of IN ketamine has not been studied for the treatment of chronic, treatment-refractory migraine – although it has been studied in patients with cluster headache and migraine, the investigators note.

Ketamine is not yet approved by the Food and Drug Administration to treat migraine.

To further explore ketamine’s effect in those with chronic, treatment-refractory migraine, the investigators retrospectively analyzed electronic health records of patients at the Jefferson Headache Center who had received IN ketamine for the treatment of migraine between January 2019 and February 2020.

Of 242 patients who had received IN ketamine, Dr. Yuan’s team followed up with 169 who agreed to be part of the study.

The majority (67%) had daily migraine, and 85% had tried more than three classes of preventive medications for migraine. They currently used a median of two medications, the most common of which was a CGRP monoclonal antibody.

On average, patients used six sprays per day for a median 10 days per month. Median pain relief onset was 52 minutes after dosage.

Almost three-quarters of patients reported at least one side effect from the ketamine, most commonly fatigue (22%), double/blurred vision (21%), and confusion/dissociation (21%). These effects were mostly temporary, the researchers report.

The most common reasons for initiating IN ketamine included an incomplete response to prior acute medications (59%), incomplete response to prior preventive medications (31%), and prior benefit from IV ketamine (23%).

Study investigators noted that ketamine has the potential to become addictive and indicated that “clinicians should only consider the use of a potentially addictive medication such as ketamine for significantly disabled patients with migraine.”

About half of the participants who used IN ketamine found it “very effective,” and 40% found it “somewhat effective.” Within the same group, 36% and 43% found the overall impact of IN ketamine on their quality of life was much better and somewhat better, respectively.

Among those still using ketamine during study follow-up, 82% reported that ketamine was very effective.

Compared with other acute headache medications, IN ketamine was considered much better (43%) or somewhat better (30%).

Nearly 75% of participants reported using fewer pain relievers when using IN ketamine.

Dr. Yuan said that future research might focus on finding predictors for IN ketamine response or determining the optimal effective and safe dose for the drug in those with chronic, treatment-refractory migraine.  

“We still need a prospective, randomized controlled trial to assess the efficacy and tolerability of intranasal ketamine,” he added.
 

 

 

‘Impressive result’

Commenting on the findings for this article, Richard Lipton, MD, professor of neurology, psychiatry and behavioral sciences and director of the Montefiore Headache Center at Albert Einstein College of Medicine, New York, said that “in this refractory population with multiple treatment failures, this is a very impressive, open-label result.”

“This real-world data suggests that ketamine is an effective option for people with medically intractable chronic migraine,” said Dr. Lipton, who was not part of the study. “In these very difficult to treat patients, 65% of those who started on ketamine persisted. Of those who remained on ketamine, 82% found it very effective.”

“This study makes me more confident that intranasal ketamine is a helpful treatment option, and I plan to use it more often in the future,” he added.

Like Dr. Yuan, Dr. Lipton highlighted the need for “well-designed placebo-controlled trials” and “rigorous comparative effectiveness studies.”

The study was funded by Miles for Migraine. Dr. Yuan has received institutional support for serving as an investigator from Teva and AbbVie, and royalties from Cambridge University Press and MedLink. Dr. Lipton has received compensation for consultation from Alder/Lumbeck, Axsome, Supernus, Theranica, Upsher-Smith, and Satsuma. He has participated in speaker bureaus for Eli Lilly and Amgen/Novartis and has received institutional support for serving as principal investigator from Teva, GammaCore, and Allergan/AbbVie. He has received payments for authorship or royalties from Demos Medical, Cambridge University Press, and MedLink.

A version of this article originally appeared on Medscape.com.

Intranasal (IN) ketamine may be a feasible treatment alternative for people with chronic, refractory migraine who don’t respond to other medications, new research shows.

Half of the study participants who used IN ketamine for chronic, treatment-refractory migraine in a new retrospective cohort study reported it as “very effective” and over one-third said it boosted their quality of life.

“In our study, we showed that with even a few uses per day, intranasal ketamine can still improve patients’ quality of life,” lead investigator Hsiangkuo Yuan, MD, PhD, said in an interview. Dr. Yuan is associate professor of neurology at Thomas Jefferson University, Philadelphia, and director of clinical research at the Jefferson Headache Center.

He added that “multiple medications failed these patients, and the majority of patients were having daily headaches. So, if anything works, even partially and shortly, it may still give patients some relief to get through the day.”

The findings were published online in Regional Anesthesia & Pain Medicine.  
 

Daily migraine, failed medications

Use of IN ketamine has not been studied for the treatment of chronic, treatment-refractory migraine – although it has been studied in patients with cluster headache and migraine, the investigators note.

Ketamine is not yet approved by the Food and Drug Administration to treat migraine.

To further explore ketamine’s effect in those with chronic, treatment-refractory migraine, the investigators retrospectively analyzed electronic health records of patients at the Jefferson Headache Center who had received IN ketamine for the treatment of migraine between January 2019 and February 2020.

Of 242 patients who had received IN ketamine, Dr. Yuan’s team followed up with 169 who agreed to be part of the study.

The majority (67%) had daily migraine, and 85% had tried more than three classes of preventive medications for migraine. They currently used a median of two medications, the most common of which was a CGRP monoclonal antibody.

On average, patients used six sprays per day for a median 10 days per month. Median pain relief onset was 52 minutes after dosage.

Almost three-quarters of patients reported at least one side effect from the ketamine, most commonly fatigue (22%), double/blurred vision (21%), and confusion/dissociation (21%). These effects were mostly temporary, the researchers report.

The most common reasons for initiating IN ketamine included an incomplete response to prior acute medications (59%), incomplete response to prior preventive medications (31%), and prior benefit from IV ketamine (23%).

Study investigators noted that ketamine has the potential to become addictive and indicated that “clinicians should only consider the use of a potentially addictive medication such as ketamine for significantly disabled patients with migraine.”

About half of the participants who used IN ketamine found it “very effective,” and 40% found it “somewhat effective.” Within the same group, 36% and 43% found the overall impact of IN ketamine on their quality of life was much better and somewhat better, respectively.

Among those still using ketamine during study follow-up, 82% reported that ketamine was very effective.

Compared with other acute headache medications, IN ketamine was considered much better (43%) or somewhat better (30%).

Nearly 75% of participants reported using fewer pain relievers when using IN ketamine.

Dr. Yuan said that future research might focus on finding predictors for IN ketamine response or determining the optimal effective and safe dose for the drug in those with chronic, treatment-refractory migraine.  

“We still need a prospective, randomized controlled trial to assess the efficacy and tolerability of intranasal ketamine,” he added.
 

 

 

‘Impressive result’

Commenting on the findings for this article, Richard Lipton, MD, professor of neurology, psychiatry and behavioral sciences and director of the Montefiore Headache Center at Albert Einstein College of Medicine, New York, said that “in this refractory population with multiple treatment failures, this is a very impressive, open-label result.”

“This real-world data suggests that ketamine is an effective option for people with medically intractable chronic migraine,” said Dr. Lipton, who was not part of the study. “In these very difficult to treat patients, 65% of those who started on ketamine persisted. Of those who remained on ketamine, 82% found it very effective.”

“This study makes me more confident that intranasal ketamine is a helpful treatment option, and I plan to use it more often in the future,” he added.

Like Dr. Yuan, Dr. Lipton highlighted the need for “well-designed placebo-controlled trials” and “rigorous comparative effectiveness studies.”

The study was funded by Miles for Migraine. Dr. Yuan has received institutional support for serving as an investigator from Teva and AbbVie, and royalties from Cambridge University Press and MedLink. Dr. Lipton has received compensation for consultation from Alder/Lumbeck, Axsome, Supernus, Theranica, Upsher-Smith, and Satsuma. He has participated in speaker bureaus for Eli Lilly and Amgen/Novartis and has received institutional support for serving as principal investigator from Teva, GammaCore, and Allergan/AbbVie. He has received payments for authorship or royalties from Demos Medical, Cambridge University Press, and MedLink.

A version of this article originally appeared on Medscape.com.

FROM REGIONAL ANESTHESIA & PAIN MEDICINE

Early hysterectomy linked to higher CVD, stroke risk

Article Type
Changed
Wed, 06/14/2023 - 08:14

 

TOPLINE:

Among Korean women younger than 50 years, hysterectomy is associated with an increased risk of cardiovascular disease (CVD), especially stroke, a new cohort study shows.

METHODOLOGY:

  • Risk of CVD rapidly increases after menopause, possibly owing to loss of protective effects of female sex hormones and hemorheologic changes.
  • Results of previous studies of the association between hysterectomy and CVD were mixed.
  • Using national health insurance data, this cohort study included 55,539 South Korean women (median age, 45 years) who underwent a hysterectomy and a propensity-matched group of women who did not.
  • The primary outcome was CVD, including myocardial infarction (MI), coronary artery revascularization, and stroke.

TAKEAWAY:

  • During follow-up of just under 8 years, the hysterectomy group had an increased risk of CVD compared with the non-hysterectomy group (hazard ratio [HR], 1.25; 95% confidence interval [CI], 1.09-1.44; P = .002).
  • The incidence of MI and coronary revascularization was comparable between groups, but the risk of stroke was significantly higher among those who had had a hysterectomy (HR, 1.31; 95% CI, 1.12-1.53; P < .001).
  • This increase in risk was similar after excluding patients who also underwent adnexal surgery.

IN PRACTICE:

Early hysterectomy was linked to higher CVD risk, especially stroke; however, because the incidence of CVD was not high, a change in clinical practice may not be needed, the authors said.

STUDY DETAILS:

The study was conducted by Jin-Sung Yuk, MD, PhD, Department of Obstetrics and Gynecology, Sanggye Paik Hospital, Inje University College of Medicine, Seoul, Republic of Korea, and colleagues. It was published online June 12 in JAMA Network Open.

LIMITATIONS:

The study was retrospective and observational and used administrative databases that may be prone to inaccurate coding. The findings may not be generalizable outside Korea.

DISCLOSURES:

The study was supported by a National Research Foundation of Korea grant funded by the Korean government. The authors report no conflicts of interest.

A version of this article first appeared on Medscape.com.


Low-carb breakfast key to lower glucose variability in T2D?

Article Type
Changed
Tue, 06/13/2023 - 09:01

 

A low-carbohydrate breakfast was better than a low-fat control breakfast at reducing glycemic variability throughout the day in people with type 2 diabetes, new research shows.

These findings from a 3-month randomized study in 121 patients in Canada and Australia were published online recently in the American Journal of Clinical Nutrition.

The researchers aimed to determine whether a low-carbohydrate, high-fat breakfast (focused around eggs), compared with a standard, low-fat control breakfast (designed to have no/minimal eggs), would improve blood glucose control in individuals with type 2 diabetes.

“We’ve determined that if the first meal of the day is low-carb and higher in protein and fat we can limit hyperglycemic swings,” lead author Barbara Oliveira, PhD, School of Health and Exercise Sciences, University of British Columbia, Kelowna, said in a press release from the university.

“Having fewer carbs for breakfast not only aligns better with how people with [type 2 diabetes] handle glucose throughout the day,” she noted, “but it also has incredible potential for people with [type 2 diabetes] who struggle with their glucose levels in the morning.”

“By making a small adjustment to the carb content of a single meal rather than the entire diet,” Dr. Oliveira added, “we have the potential to increase adherence significantly while still obtaining significant benefits.”

The researchers conclude that “this trial provides evidence that advice to consume a low-carbohydrate breakfast could be a simple, feasible, and effective approach to manage postprandial hyperglycemia and lower glycemic variability in people living with type 2 diabetes.”
 

Could breakfast tweak improve glucose control?

People with type 2 diabetes have higher levels of insulin resistance and greater glucose intolerance in the morning, the researchers write.

And consuming a low-fat, high-carbohydrate meal, in line with most dietary guidelines, appears to produce the largest hyperglycemic spike and lead to greater glycemic variability.

They speculated that eating a low-carb breakfast, compared with a low-fat breakfast, might be an easy way to mitigate this.

They recruited participants from online ads in three provinces in Canada and four states in Australia, and they conducted the study from a site in British Columbia and one in Wollongong, Australia.

The participants were aged 20-79 years and diagnosed with type 2 diabetes. They also had a current hemoglobin A1c < 8.5% and no allergies to eggs, and they were able to follow remote, online guidance.

After screening, the participants had a phone or video conference call with a member of the research team who explained the study.

The researchers randomly assigned 75 participants in Canada and 46 participants in Australia 1:1 to the low-carbohydrate intervention or the control intervention.

The participants had a mean age of 64 years, and 53% were women. They had a mean weight of 93 kg (204 lb), a mean body mass index of 32 kg/m², and a mean A1c of 7.0%.

Registered dietitians in Canada and Australia each designed 8-10 recipes/menus for low-carb breakfasts and an equal number of recipes/menus for control (low-fat) breakfasts that were specific for those countries.

Each recipe contained about 450 kcal; the recipes are available in Supplemental Appendices 1A and 1B, published with the article.

Each low-carbohydrate breakfast contained about 25 g of protein, 8 g of carbohydrate, and 37 g of fat. For example, one breakfast was a three-egg omelet with spinach.

Each control (low-fat) breakfast contained about 20 g of protein, 56 g of carbohydrate, and 15 g of fat. For example, one breakfast was a small blueberry muffin and a small plain Greek yogurt.
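As a rough plausibility check, those macronutrient amounts are consistent with the roughly 450-kcal target when converted with the standard Atwater factors (4 kcal/g for protein and carbohydrate, 9 kcal/g for fat). The snippet below is a minimal illustration of that arithmetic, not code from the study:

```python
# Rough energy check of the two breakfast designs using standard
# Atwater factors: 4 kcal/g for protein and carbohydrate, 9 kcal/g for fat.

ATWATER = {"protein": 4, "carbohydrate": 4, "fat": 9}  # kcal per gram

def kcal(protein_g: float, carbohydrate_g: float, fat_g: float) -> float:
    """Estimate energy content (kcal) from macronutrient grams."""
    return (protein_g * ATWATER["protein"]
            + carbohydrate_g * ATWATER["carbohydrate"]
            + fat_g * ATWATER["fat"])

# Low-carbohydrate breakfast: ~25 g protein, 8 g carbohydrate, 37 g fat
print(kcal(25, 8, 37))   # 465 kcal, close to the ~450-kcal target

# Control (low-fat) breakfast: ~20 g protein, 56 g carbohydrate, 15 g fat
print(kcal(20, 56, 15))  # 439 kcal, also close to the target
```

Both designs land within a few percent of 450 kcal, so the two breakfasts differ mainly in composition rather than in energy content.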

The participants were advised to select one of these breakfasts every day and follow it exactly (they were also required to upload a photograph of their breakfast every morning). They were not given any guidance or calorie restriction for the other meals of the day.

The participants also filled in 3-day food records and answered a questionnaire about exercise, hunger, and satiety at the beginning, middle, and end of the intervention.

They provided self-reported height, weight, and waist circumference, and they were given requisitions for blood tests for A1c to be done at a local laboratory, at the beginning and end of the intervention.

The participants also wore a continuous glucose monitor (CGM) during the first and last 14 days of the intervention.

Intervention improved CGM measures

There was no significant difference between the two groups in the primary outcome, change in A1c, at the end of 12 weeks. The mean A1c decreased by 0.3% in the intervention group vs. 0.1% in the control group (P = .06).

Similarly, in secondary outcomes, weight and BMI each decreased about 1% and waist circumference decreased by about 2.5 cm in each group at 12 weeks (no significant difference). There were also no significant differences in hunger, satiety, or physical activity between the two groups.

However, the 24-hour CGM data showed that mean and maximum glucose, glycemic variability, and time above range were all significantly lower in participants in the low-carbohydrate breakfast intervention group vs. those in the control group (all P < .05).

Time in range was significantly higher among participants in the intervention group (P < .05).
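For readers less familiar with these CGM-derived endpoints, the sketch below shows how such summary metrics are typically computed from raw glucose readings. It is a generic illustration assuming the conventional 70-180 mg/dL target range and the coefficient of variation as the variability measure; the study's exact definitions may differ.

```python
import statistics

# Minimal sketch of common CGM summary metrics from glucose readings (mg/dL).
# The 70-180 mg/dL target range is the conventional consensus default,
# not a value taken from this study.

TARGET_LOW, TARGET_HIGH = 70, 180  # mg/dL

def cgm_summary(readings: list[float]) -> dict[str, float]:
    """Compute mean/max glucose, glycemic variability (CV%), and
    percentage of time in, above, and below the target range."""
    n = len(readings)
    mean = statistics.mean(readings)
    cv = 100 * statistics.stdev(readings) / mean  # coefficient of variation
    above = sum(r > TARGET_HIGH for r in readings) / n
    below = sum(r < TARGET_LOW for r in readings) / n
    return {
        "mean_glucose": mean,
        "max_glucose": max(readings),
        "cv_percent": cv,
        "time_above_range": 100 * above,
        "time_below_range": 100 * below,
        "time_in_range": 100 * (1 - above - below),
    }

# Example: one day of readings sampled every few hours
print(cgm_summary([95, 130, 210, 160, 115, 185, 140, 100]))
```

In this framework, the low-carbohydrate breakfast group's results correspond to a lower mean, maximum, CV, and time above range, with a correspondingly higher time in range.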

In addition, the 2-hour postprandial CGM data showed that mean glucose and maximum glucose after breakfast were lower in participants in the low-carbohydrate breakfast group than in the control group.

This work was supported by investigator-initiated operating grants to senior author Jonathan P. Little, PhD, School of Health and Exercise Sciences, University of British Columbia, from the Egg Nutrition Center, United States, and Egg Farmers of Canada. The authors declare that they have no relevant financial relationships.

A version of this article originally appeared on Medscape.com.


FROM THE AMERICAN JOURNAL OF CLINICAL NUTRITION


Cognitive decline risk in adult childhood cancer survivors

Article Type
Changed
Sun, 06/11/2023 - 11:30

 

Adult survivors of childhood cancer have an elevated risk of developing cognitive impairment years after their cancer diagnosis and treatment, new research shows.

Among more than 2,300 adult survivors of childhood cancer and their siblings, who served as controls, new-onset memory impairment emerged more often in survivors decades later.

The increased risk was associated with the cancer treatment received, as well as with modifiable health behaviors and chronic health conditions.

Even 35 years after being diagnosed, cancer survivors who never received chemotherapies or radiation therapies known to damage the brain reported far greater memory impairment than did their siblings, first author Nicholas Phillips, MD, told this news organization.

What the findings suggest is that “we need to educate oncologists and primary care providers on the risks our survivors face long after completion of therapy,” said Dr. Phillips, of the epidemiology and cancer control department at St. Jude Children’s Research Hospital, Memphis, Tenn.

The study was published online in JAMA Network Open.

Cancer survivors face an elevated risk for severe neurocognitive effects that can emerge 5-10 years following their diagnosis and treatment. However, it’s unclear whether new-onset neurocognitive problems can still develop a decade or more following diagnosis.

Over a long-term follow-up, Dr. Phillips and colleagues explored this question in 2,375 adult survivors of childhood cancer from the Childhood Cancer Survivor Study and 232 of their siblings.

Among the cancer cohort, 1,316 patients were survivors of acute lymphoblastic leukemia (ALL), 488 were survivors of central nervous system (CNS) tumors, and 571 had survived Hodgkin lymphoma.

The researchers determined the prevalence of new-onset neurocognitive impairment between baseline (23 years after diagnosis) and follow-up (35 years after diagnosis). New-onset neurocognitive impairment – present at follow-up but not at baseline – was defined as having a score in the worst 10% of the sibling cohort.
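In code terms, that classification can be sketched as follows. This is a minimal illustration assuming standardized scores on which lower values are worse; the instruments and scoring used in the Childhood Cancer Survivor Study may differ.

```python
import statistics

# Minimal sketch of the new-onset impairment definition described above,
# assuming neurocognitive scores where LOWER is worse. The actual
# instruments and scoring in the Childhood Cancer Survivor Study may differ.

def impairment_threshold(sibling_scores: list[float]) -> float:
    """Score at the 10th percentile of the sibling cohort; anything
    below it counts as 'impaired' (worst 10%)."""
    return statistics.quantiles(sibling_scores, n=10)[0]

def has_new_onset_impairment(baseline: float, follow_up: float,
                             threshold: float) -> bool:
    """Impaired at follow-up but NOT at baseline."""
    return follow_up < threshold and baseline >= threshold

# Hypothetical example
siblings = [45, 50, 52, 55, 58, 60, 61, 63, 65, 70]
cutoff = impairment_threshold(siblings)
print(has_new_onset_impairment(baseline=56, follow_up=44, threshold=cutoff))
# True: this survivor scored in the sibling cohort's worst 10% at
# follow-up (35 years after diagnosis) but not at baseline (23 years).
```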

A higher proportion of survivors had new-onset memory impairment at follow-up compared with siblings. Specifically, about 8% of siblings had new-onset memory trouble, compared with 14% of ALL survivors treated with chemotherapy only, 26% of ALL survivors treated with cranial radiation, 35% of CNS tumor survivors, and 17% of Hodgkin lymphoma survivors.

New-onset memory impairment was associated with cranial radiation among CNS tumor survivors (relative risk [RR], 1.97) and alkylator chemotherapy at or above 8,000 mg/m² among survivors of ALL who were treated without cranial radiation (RR, 2.80). The authors also found that smoking, low educational attainment, and low physical activity were associated with an elevated risk for new-onset memory impairment.

Dr. Phillips noted that current guidelines emphasize the importance of short-term monitoring of a survivor’s neurocognitive status on the basis of that person’s chemotherapy and radiation exposures.

However, “our study suggests that all survivors, regardless of their therapy, should be screened regularly for new-onset neurocognitive problems. And this screening should be done regularly for decades after diagnosis,” he said in an interview.

Dr. Phillips also noted the importance of communicating lifestyle modifications, such as not smoking and maintaining an active lifestyle.

“We need to start early and use the power of repetition when communicating with our survivors and their families,” Dr. Phillips said. “When our families and survivors hear the word ‘exercise,’ they think of gym memberships, lifting weights, and running on treadmills. But what we really want our survivors to do is stay active.”

In practice, this means engaging in about 2.5 hours a week of activities such as ballet, basketball, volleyball, bicycling, or swimming.

“And if our kids want to quit after 3 months, let them know that this is okay. They just need to replace that activity with another activity,” said Dr. Phillips. “We want them to find a fun hobby that they will enjoy that will keep them active.”

The study was supported by the National Cancer Institute. Dr. Phillips has disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.


FROM JAMA NETWORK OPEN
