Specific personality traits may influence dementia risk

TOPLINE:

People who are extroverted and conscientious and have a positive outlook may be at lower dementia risk, whereas those who score highly for neuroticism and have a negative outlook may be at increased risk, new research suggests. 

METHODOLOGY: 

  • Researchers examined links between the “big five” personality traits (conscientiousness, extraversion, openness to experience, neuroticism, and agreeableness) and subjective well-being (positive and negative affect and life satisfaction) on the one hand, and clinical symptoms of dementia (cognitive test performance) and neuropathology at autopsy on the other.
  • Data for the meta-analysis came from eight longitudinal studies with 44,531 adults (aged 49-81 years at baseline; 26%-61% women) followed for up to 21 years, during which 1703 incident cases of dementia occurred. 
  • Bayesian multilevel models tested whether personality traits and subjective well-being differentially predicted neuropsychological and neuropathologic characteristics of dementia. 
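
To make the modeling approach concrete, here is a minimal sketch of a Bayesian multilevel (random-intercept) logistic model of the kind described above, written in Python with PyMC. The toy data, variable names, priors, and single-trait predictor are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (assumed setup, not the study's code): a Bayesian multilevel logistic
# model with study-level random intercepts, relating one standardized trait score to a
# binary dementia outcome pooled across cohorts.
import numpy as np
import pymc as pm
import arviz as az

rng = np.random.default_rng(0)
n_studies, n = 8, 4000
study = rng.integers(0, n_studies, size=n)          # cohort membership for each participant
neuroticism = rng.normal(size=n)                     # standardized trait score (toy data)
true_logit = -2.0 + 0.3 * neuroticism + rng.normal(0, 0.2, n_studies)[study]
dementia = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

with pm.Model():
    mu_a = pm.Normal("mu_a", 0.0, 2.0)               # pooled intercept across studies
    sigma_a = pm.HalfNormal("sigma_a", 1.0)          # between-study heterogeneity
    a_study = pm.Normal("a_study", mu_a, sigma_a, shape=n_studies)
    beta = pm.Normal("beta", 0.0, 1.0)               # trait effect on log-odds of dementia
    p = pm.math.sigmoid(a_study[study] + beta * neuroticism)
    pm.Bernoulli("obs", p=p, observed=dementia)
    idata = pm.sample(1000, tune=1000, target_accept=0.9)

print(az.summary(idata, var_names=["beta", "mu_a", "sigma_a"]))  # posterior for the trait effect
```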

TAKEAWAY:

  • High neuroticism, negative affect, and low conscientiousness were risk factors for dementia, whereas conscientiousness, extraversion, and positive affect were protective.
  • Estimates were directionally consistent across samples in all analyses, which is noteworthy given between-study differences in sociodemographic and design characteristics.
  • No consistent associations were found between psychological factors and neuropathology. 
  • However, individuals higher in conscientiousness who did not receive a clinical diagnosis tended to have a lower Braak stage at autopsy, suggesting the possibility that conscientiousness is related to cognitive resilience. 

IN PRACTICE:

“These results replicate and extend evidence that personality traits may assist in early identification and dementia-care planning strategies, as well as risk stratification for dementia diagnosis. Moreover, our findings provide further support for recommendations to incorporate psychological trait measures into clinical screening or diagnosis criteria,” the authors write.

SOURCE:

The study, with first author Emorie Beck, PhD, Department of Psychology, University of California, Davis, was published online on November 29, 2023, in Alzheimer’s & Dementia.

LIMITATIONS:

Access to autopsy data was limited. The findings may not generalize across racial groups. The analysis did not examine dynamic associations among changes in personality, cognition, and neuropathology over time.

DISCLOSURES:

The study was supported by grants from the National Institute on Aging. The authors have declared no conflicts of interest.

A version of this article first appeared on Medscape.com.

Younger heart disease onset tied to higher dementia risk

TOPLINE:

Adults diagnosed with coronary heart disease (CHD) are at an increased risk for dementia, including all-cause dementia, Alzheimer’s disease (AD), and vascular dementia (VD), with the risk highest — at 36% — if onset is before age 45, results of a large observational study show.

METHODOLOGY:

  • The study included 432,667 of the more than 500,000 participants in the UK Biobank (mean age, 56.9 years); 50,685 (11.7%) had CHD, and 50,445 of them had data on age at CHD onset.
  • Researchers divided participants into three groups according to age at CHD onset (younger than 45 years, 45-59 years, and 60 years or older) and carried out a propensity score matching analysis (a simplified sketch follows this list).
  • Outcomes included all-cause dementia, AD, and VD.
  • Covariates included age, sex, race, educational level, body mass index, low-density lipoprotein cholesterol, smoking status, alcohol intake, exercise, depressed mood, hypertension, diabetes, statin use, and apolipoprotein E4 status.
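
As a rough illustration of the matching-plus-survival workflow referenced above, the following Python sketch pairs each CHD case with its nearest non-CHD control on a logistic-regression propensity score and then fits a Cox model on the matched sample. The toy data, column names, and 1:1 nearest-neighbor matching are assumptions for illustration, not details from the study.

```python
# Illustrative propensity-score matching followed by a Cox model (1:1 nearest neighbor).
# Column names and the toy data frame are assumptions for the sketch, not study details.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "age": rng.normal(57, 8, n),
    "ldl": rng.normal(3.5, 0.9, n),
    "smoker": rng.integers(0, 2, n),
})
df["chd"] = rng.binomial(1, 1 / (1 + np.exp(-(-4 + 0.05 * df.age + 0.3 * df.smoker))))
df["time"] = rng.exponential(12, n)                       # follow-up time in years (toy)
df["dementia"] = rng.binomial(1, 0.02 + 0.01 * df.chd)    # event indicator (toy)

# 1) Propensity of having CHD given covariates
covs = ["age", "ldl", "smoker"]
ps_model = LogisticRegression(max_iter=1000).fit(df[covs], df["chd"])
df["ps"] = ps_model.predict_proba(df[covs])[:, 1]

# 2) Match each CHD case to its nearest non-CHD control on the propensity score
cases, controls = df[df.chd == 1], df[df.chd == 0]
nn = NearestNeighbors(n_neighbors=1).fit(controls[["ps"]])
_, idx = nn.kneighbors(cases[["ps"]])
matched = pd.concat([cases, controls.iloc[idx.ravel()]]).reset_index(drop=True)

# 3) Cox model on the matched sample: hazard ratio for dementia associated with CHD
cph = CoxPHFitter()
cph.fit(matched[["time", "dementia", "chd"] + covs], duration_col="time", event_col="dementia")
cph.print_summary()   # hazard ratios with 95% CIs, analogous to those in the TAKEAWAY
```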

TAKEAWAY:

  • During a median follow-up of 12.8 years, researchers identified 5876 cases of all-cause dementia, 2540 cases of AD, and 1220 cases of VD.
  • Fully adjusted models showed that participants with CHD had significantly higher risks of developing all-cause dementia (hazard ratio [HR], 1.36; 95% CI, 1.28-1.45; P < .001), AD (HR, 1.13; 95% CI, 1.02-1.24; P = .019), and VD (HR, 1.78; 95% CI, 1.56-2.02; P < .001) than those without CHD. The higher risk for VD suggests CHD has a more profound influence on the neuropathologic changes involved in this dementia type, the authors said.
  • Those with CHD diagnosed at a younger age had higher risks of developing dementia (HR per 10-year decrease in age at onset: 1.25 [95% CI, 1.20-1.30] for all-cause dementia, 1.29 [95% CI, 1.20-1.38] for AD, and 1.22 [95% CI, 1.13-1.31] for VD; P < .001 for all).
  • Propensity score matching analysis showed patients with CHD had significantly higher risks for dementia compared with matched controls, with the highest risk seen in patients diagnosed before age 45 (HR, 2.40; 95% CI, 1.79-3.20; P < .001), followed by those diagnosed between 45 and 59 years (HR, 1.46; 95% CI, 1.32-1.62; P < .001) and at or above 60 years (HR, 1.11; 95% CI, 1.03-1.19; P = .005), with similar results for AD and VD.

IN PRACTICE:

The findings suggest “additional attention should be paid to the cognitive status of patients with CHD, especially the ones diagnosed with CHD at a young age,” the authors conclude, noting that “timely intervention, such as cognitive training, could be implemented once signs of cognitive deteriorations are detected.”

SOURCE:

The study was conducted by Jie Liang, BS, School of Nursing, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, and colleagues. It was published online on November 29, 2023, in the Journal of the American Heart Association.

LIMITATIONS:

Because this was an observational study, it cannot establish a causal relationship. Although the authors adjusted for many potential confounders, unknown risk factors that also contribute to CHD cannot be ruled out. Because the study excluded 69,744 participants, selection bias is possible. The study included a mostly White population.

DISCLOSURES:

The study was supported by the National Natural Science Foundation of China, the Non-Profit Central Research Institute Fund of the Chinese Academy of Medical Sciences, and the China Medical Board. The authors have no relevant conflicts of interest.

A version of this article first appeared on Medscape.com.

Which migraine medications are most effective?

TOPLINE:

For relief of acute migraine, triptans, ergots, and antiemetics are two to five times more effective than ibuprofen, and acetaminophen is the least effective medication, new results from a large, real-world analysis of self-reported patient data show.

METHODOLOGY: 

  • Researchers analyzed nearly 11 million migraine attack records extracted from Migraine Buddy, an e-diary smartphone app, over a 6-year period. 
  • They evaluated self-reported treatment effectiveness for 25 acute migraine medications among seven classes: acetaminophen, NSAIDs, triptans, combination analgesics, ergots, antiemetics, and opioids. 
  • A two-level nested multivariate logistic regression model adjusted for within-subject dependency and for concomitant medications taken within each analyzed migraine attack (a simplified stand-in is sketched after this list).
  • The final analysis included nearly 5 million medication-outcome pairs from 3.1 million migraine attacks in 278,000 medication users. 
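
The study's two-level nested model is not reproduced here; as a rough stand-in for the idea of estimating class-level odds ratios against ibuprofen while accounting for repeated attacks within users, the sketch below fits a GEE logistic model clustered by user. All data, column names, and the GEE specification are illustrative assumptions rather than the paper's method.

```python
# Simplified stand-in for the repeated-measures logistic analysis described above:
# a GEE logistic model clustered by user, with ibuprofen as the reference class.
# (The study itself used a two-level nested multivariate model; this is only a sketch.)
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
classes = ["ibuprofen", "triptan", "ergot", "antiemetic", "acetaminophen"]
n = 20000
df = pd.DataFrame({
    "user_id": rng.integers(0, 3000, n),                  # repeated attacks per user
    "med_class": rng.choice(classes, n),
})
base = {"ibuprofen": 0.0, "triptan": 1.5, "ergot": 1.1, "antiemetic": 1.0, "acetaminophen": -0.2}
logit = -0.5 + df["med_class"].map(base)
df["helpful"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # self-reported relief (yes/no)

model = smf.gee(
    "helpful ~ C(med_class, Treatment(reference='ibuprofen'))",
    groups="user_id",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
res = model.fit()
exp_coefs = np.exp(res.params)    # exponentiated coefficients; class terms are ORs vs ibuprofen
print(exp_coefs.round(2))
```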

TAKEAWAY:

  • Using ibuprofen as the reference, triptans, ergots, and antiemetics were the three medication classes with the highest effectiveness (mean odds ratios [ORs], 4.80, 3.02, and 2.67, respectively). 
  • The next most effective medication classes were opioids (OR, 2.49), NSAIDs other than ibuprofen (OR, 1.94), the combination analgesic acetaminophen/acetylsalicylic acid/caffeine (OR, 1.69), and others (OR, 1.49).
  • Acetaminophen (OR, 0.83) was the least effective.
  • The most effective individual medications were eletriptan (Relpax) (OR, 6.1); zolmitriptan (Zomig) (OR, 5.7); and sumatriptan (Imitrex) (OR, 5.2).

IN PRACTICE:

“Our findings that triptans, ergots, and antiemetics are the most effective classes of medications align with the guideline recommendations and offer generalizable insights to complement clinical practice,” the authors wrote. 

SOURCE:

The study, with first author Chia-Chun Chiang, MD, Department of Neurology, Mayo Clinic, Rochester, Minnesota, was published online on November 29, 2023, in Neurology.

LIMITATIONS:

The findings are based on subjective user-reported ratings of effectiveness, and information on side effects, dosages, and formulations was not available. The newer migraine medication classes, gepants and ditans, were not included because of the relatively lower number of treated attacks. The regression model did not include age, gender, pain intensity, or other migraine-associated symptoms, which could potentially affect treatment effectiveness. 

DISCLOSURES: 

Funding for the study was provided by the Kanagawa University of Human Service research fund. A full list of author disclosures can be found with the original article.

A version of this article first appeared on Medscape.com.

Excessive TV-watching tied to elevated risk for dementia, Parkinson’s disease, and depression

TOPLINE:

Excessive television-watching is tied to an increased risk for dementia, Parkinson’s disease (PD), and depression, whereas a limited amount of daily computer use that is not work-related is linked to a lower risk for dementia.

METHODOLOGY:

  • Investigators analyzed data on 473,184 people aged 39-72 years from the UK Biobank who were enrolled from 2006 to 2010 and followed until a diagnosis of dementia, PD, depression, death, or study end (2018 for Wales residents; 2021 for residents of England and Scotland).
  • Participants reported on the number of hours they spent outside of work exercising, watching television, and using the computer.
  • MRI was conducted to determine participants’ brain volume.

TAKEAWAY: 

  • During the study, 6096 people developed dementia, 3000 developed PD, 23,600 developed depression, 1200 developed dementia and depression, and 486 developed PD and depression.
  • Compared with those who watched TV for under 1 hour per day, those who reported watching 4 or more hours per day had a 28% higher risk for dementia (adjusted hazard ratio [aHR], 1.28; 95% CI, 1.17-1.39), a 35% higher risk for depression (aHR, 1.35; 95% CI, 1.29-1.40), and a 16% greater risk for PD (aHR, 1.16; 95% CI, 1.03-1.29).
  • However, moderate computer use outside of work seemed somewhat protective. Participants who used the computer for 30-60 minutes per day had lower risks for dementia (aHR, 0.68; 95% CI, 0.64-0.72), PD (aHR, 0.86; 95% CI, 0.79-0.93), and depression (aHR, 0.85; 95% CI, 0.83-0.88) compared with those who reported the lowest levels of computer usage.
  • Replacing 30 minutes per day of computer time with an equal amount of structured exercise was associated with decreased risk for dementia (aHR, 0.74; 95% CI, 0.85-0.95) and PD (aHR, 0.84; 95% CI, 0.78-0.90).
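
Substitution estimates like the one in the last bullet are often obtained with an isotemporal substitution model; whether the authors used exactly that specification is not stated here, so the following Python sketch is only illustrative. Dropping computer time from a Cox model while keeping total discretionary time fixed lets the exercise coefficient approximate the effect of swapping computer time for an equal amount of exercise. All variable names and the toy data are assumptions.

```python
# Hedged sketch of an isotemporal-substitution style Cox model (assumed specification):
# with computer time omitted and total discretionary time held in the model, the
# exercise coefficient approximates replacing computer time with an equal amount of exercise.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 10000
df = pd.DataFrame({
    "tv_30min":       rng.poisson(4, n),    # daily TV time, in 30-minute units (toy)
    "computer_30min": rng.poisson(2, n),
    "exercise_30min": rng.poisson(1, n),
    "age": rng.normal(56, 8, n),
})
df["total_30min"] = df[["tv_30min", "computer_30min", "exercise_30min"]].sum(axis=1)
hazard = 0.02 * np.exp(0.06 * df.tv_30min - 0.10 * df.exercise_30min)
df["time"] = rng.exponential(1 / hazard)                 # toy event times
df["dementia"] = (df["time"] < 13).astype(int)           # events occurring within follow-up
df["time"] = df["time"].clip(upper=13)                   # censor at ~13 years of follow-up

# Substitution model: drop computer time, keep total time in the model.
cols = ["time", "dementia", "tv_30min", "exercise_30min", "total_30min", "age"]
cph = CoxPHFitter().fit(df[cols], duration_col="time", event_col="dementia")
cph.print_summary()   # exp(coef) for exercise_30min ~ HR per 30 min substituted for computer use
```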

IN PRACTICE:

The association between extended periods of TV use and higher risk for PD and dementia could be explained by a lack of activity, the authors note. They add that sedentary behavior is “associated with biomarkers of low-grade inflammation and changes in inflammation markers that could initiate and/or worsen neuroinflammation and contribute to neurodegeneration.”

SOURCE:

Hanzhang Wu, PhD, of Tianjin University of Traditional Medicine in Tianjin, China, led the study, which was published online in the International Journal of Behavioral Nutrition and Physical Activity.

LIMITATIONS: 

Screen behaviors were assessed using self-report measures, which are subject to recall bias. Also, there may have been confounding variables for which the investigators did not account. 

DISCLOSURES:

The study was funded by the National Natural Science Foundation of China, the Tianjin Major Public Health Science and Technology Project, the National Health Commission of China, the Food Science and Technology Foundation of Chinese Institute of Food Science and Technology, the China Cohort Consortium, and the Chinese Nutrition Society Nutrition Research Foundation–DSM Research Fund, China. There were no disclosures reported.

Eve Bender has no relevant financial relationships.

A version of this article appeared on Medscape.com.

Meet the newest acronym in primary care: CKM

Primary care clinicians play a central role in maintaining the cardiovascular-kidney-metabolic (CKM) health of patients, according to a new advisory from the American Heart Association.

The advisory, published recently in Circulation, introduces the concept of CKM health and reevaluates the relationships between obesity, diabetes, kidney disease, and cardiovascular disease (CVD).

“This approach not only raises awareness, it also empowers PCPs to diagnose and treat these conditions more holistically,” Salim Hayek, MD, associate professor of cardiovascular disease and internal medicine, and medical director of the Frankel Cardiovascular Center Clinics at the University of Michigan in Ann Arbor, said in an interview.

 

New CKM Staging, Testing, and Care Strategies

The advisory introduces a new staging system that ranges from stage 0 (patients with no CKM risk factors) through stage 4 (patients with clinical CVD in CKM syndrome). Each stage calls for specific management strategies and may include screening, starting at age 30 years, for diabetes, hypertension, and heart failure.
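
The staging idea can be pictured with a small sketch. Only stage 0 and stage 4 are characterized in this article, so the intermediate logic and the risk-factor flag below are placeholders, not the advisory's actual criteria.

```python
# Toy illustration of the stage-0-to-4 CKM framing described above. Stage definitions for
# 1-3 are placeholders; the screening suggestion mirrors the age-30 screening noted in the text.
from dataclasses import dataclass

@dataclass
class Patient:
    age: int
    has_ckm_risk_factors: bool   # placeholder flag, e.g., obesity, hypertension, dysglycemia
    has_clinical_cvd: bool

def ckm_stage(p: Patient) -> int:
    """Very coarse staging: 0 = no risk factors, 4 = clinical CVD, else an intermediate stage."""
    if p.has_clinical_cvd:
        return 4
    if not p.has_ckm_risk_factors:
        return 0
    return 1  # placeholder: the advisory distinguishes stages 1-3 by specific criteria

def suggested_action(p: Patient) -> str:
    stage = ckm_stage(p)
    if stage == 0:
        return "Encourage maintenance of ideal cardiovascular health; monitor for progression."
    if p.age >= 30:
        return "Consider screening for diabetes, hypertension, and heart failure."
    return "Address risk factors and rescreen as appropriate."

print(suggested_action(Patient(age=45, has_ckm_risk_factors=True, has_clinical_cvd=False)))
```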

“Stage 0 CKM is usually found in young people, and CKM risk factors and scores typically increase as people age,” said Sean M. Drake, MD, a primary care physician at Henry Ford Health in Sterling Heights, Michigan. 

Dr. Drake advised PCPs to encourage patients who are at stage 0 to maintain ideal cardiovascular health and to monitor those at risk of progressing through the stages.

While PCPs already perform many of the tests the advisory recommends, the conditions overlap, and an abnormality in one system should prompt further testing for the others. Additional tests, such as the urine albumin-creatinine ratio, and more frequent assessment of glomerular filtration rate and lipid profiles are advised, according to Dr. Drake.

“There also appears to be a role for additional cardiac testing, including echocardiograms and coronary CT scans, and for liver fibrosis screening,” Dr. Drake said. “Medications such as SGLT2 inhibitors, GLP-1 receptor agonists, and ACE inhibitors, beyond current routine use, are emphasized.” 

To better characterize body composition and help diagnose metabolic syndrome, the advisory also recommends measuring waist circumference, which is not routine practice, noted Joshua J. Joseph, MD, MPH, an associate professor of endocrinology, diabetes, and metabolism at The Ohio State University Wexner Medical Center in Columbus, and a co-author of the advisory. 

Recognizing the interconnected nature of cardiac, kidney, and metabolic diseases encourages a shift in mindset for clinicians, according to Neha Pagidipati, MD, MPH, a cardiologist at Duke Health in Durham, North Carolina.

“We have often been trained to focus on the specific problem in front of us,” Dr. Pagidipati said. “We need to be hyper-aware that many patients we see are at risk for multiple CKM entities. We need to be proactive about screening for and treating these when appropriate.”

The advisory emphasizes the need for CKM coordinators to support teams of clinicians from primary care, cardiology, endocrinology, nephrology, nursing, and pharmacy, as well as social workers, care navigators, or community health workers, Dr. Joseph said. 

“The advisory repositions the PCP at the forefront of CKM care coordination, marking a departure from the traditional model where subspecialists primarily manage complications,” Dr. Hayek added.
 

Changes to Payment

The new recommendations are consistent with current management guidelines for obesity, hypertriglyceridemia, hypertension, type 2 diabetes, and chronic kidney disease. 

“The advisory provides integrated algorithms for cardiovascular prevention and management, with specific therapeutic guidance tied to CKM stages, bringing together the current evidence for best practices from the various guidelines and filling gaps in a unified approach,” Dr. Joseph said. 

In addition, the advisory draws attention to the care of younger patients, who may be at increased risk for cardiovascular disease due to lifestyle factors, according to Nishant Shah, MD, assistant professor of medicine at Duke. 

“It considers barriers to care that prevent people from optimizing their cardiovascular health,” Dr. Shah said. 

Although the advisory does not specify proposed payment changes to support the new care model, the move towards value-based care may require billing practices that accommodate integrated care as well as more frequent and more specialized testing, Dr. Hayek said. 

“The advisory is an empowering tool for PCPs, underscoring their critical role in healthcare,” Dr. Hayek said. “It encourages PCPs to advocate for integrated care within their practices and to consider workflow adjustments that enhance the identification and initiation of preventive care for at-risk patients.”

Funding information was not provided. 

Dr. Joseph reports no relevant financial involvements; several advisory co-authors report financial involvements with pharmaceutical companies. Dr. Pagidipati reports relevant financial involvement with pharmaceutical companies. Dr. Hayek, Dr. Drake, and Dr. Shah report no relevant financial involvements. Dr. Joseph is an author of the advisory. Dr. Pagidipati, Dr. Hayek, Dr. Drake, and Dr. Shah were not involved in the writing of the advisory.

A version of this article appeared on Medscape.com.

Early age at first period raises type 2 diabetes risk

TOPLINE: 

Having a first menstrual period (menarche) at age 10 years or younger was linked with a greater risk for type 2 diabetes and, among women with diabetes, a greater risk for stroke, a retrospective study of US women younger than 65 years found.

METHODOLOGY:

  • Researchers analyzed data from 17,377 women who were aged 20-65 years when they participated in a National Health and Nutrition Examination Survey (NHANES) from 1999 to 2018 and reported their age at first menstruation, which was classified as ≤ 10, 11, 12, 13, 14, or ≥ 15 years of age.
  • In total, 10.2% of the women (1773) had type 2 diabetes; of these, 11.5% (205) had cardiovascular disease (CVD), defined as coronary heart disease (CHD), myocardial infarction, or stroke.
  • Compared with women who had their first menstrual period at age 13 (the mean age in this population), those who had their period at age ≤ 10 had a significantly greater risk of having type 2 diabetes, after adjustment for age, race/ethnicity, education, parity, menopause status, family history of diabetes, smoking status, physical activity, alcohol consumption, and body mass index (odds ratio, 1.32; 95% CI, 1.03-1.69; P trend = .03).
  • Among the women with diabetes, compared with those who had their first menstrual period at age 13, those who had it at age ≤ 10 had a significantly greater risk of having stroke (OR, 2.66; 95% CI, 1.07-6.64; P trend = .02), but not CVD or CHD, after adjustment for these multiple variables.
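
To show how adjusted odds ratios of this kind are typically computed, here is a minimal Python sketch of a logistic regression with age at menarche as a categorical predictor and age 13 as the reference. The toy data and the reduced covariate set are assumptions; a real NHANES analysis would also apply survey weights and the full list of adjustment variables above.

```python
# Illustrative adjusted logistic regression: odds of type 2 diabetes by age at menarche,
# with age 13 as the reference category. Toy data and a reduced covariate set; not the
# study's actual NHANES analysis (which also uses survey weights).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 17000
menarche_groups = ["<=10", "11", "12", "13", "14", ">=15"]
df = pd.DataFrame({
    "menarche": rng.choice(menarche_groups, n, p=[0.05, 0.15, 0.25, 0.30, 0.15, 0.10]),
    "age": rng.uniform(20, 65, n),
    "bmi": rng.normal(28, 6, n),
})
logit = -4 + 0.03 * df.age + 0.08 * (df.bmi - 28) + 0.3 * (df.menarche == "<=10")
df["t2d"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

res = smf.logit("t2d ~ C(menarche, Treatment(reference='13')) + age + bmi", data=df).fit()
odds_ratios = np.exp(res.params)                       # adjusted ORs vs age 13 at menarche
ci = np.exp(res.conf_int())                            # 95% CIs on the OR scale
print(pd.concat([odds_ratios.rename("OR"), ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```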

TAKEAWAY:

  • In a racially and ethnically diverse national sample of US women younger than 65, “extremely early” age at first menstrual period was associated with significantly increased risk for type 2 diabetes; among the women with type 2 diabetes, it was associated with significantly increased risk for stroke but not CVD or CHD, after adjustment for multiple variables.
  • Early age at menarche may be an early indicator of the cardiometabolic disease trajectory in women.

IN PRACTICE:

“Women with early-life exposures such as early age at menarche need to be further examined for diabetes and prevention research and strategies for progression of diabetes complications,” the study authors write. 

SOURCE:

The authors, mainly from Tulane University School of Public Health and Tropical Medicine, New Orleans, Louisiana, and also from Harvard Medical School, Boston, Massachusetts, published their findings in BMJ Nutrition, Prevention & Health.

LIMITATIONS:

  • The women who participated in NHANES may not be representative of all women in the United States (selection bias).
  • The study only included women who reported the age when they had their first menstrual period (selection bias).
  • This was a cross-sectional, observational study, so it cannot show causality.
  • The women may have reported the wrong age at which they had their first period (recall bias and social desirability bias).
  • The women may have inaccurately reported CVD and type 2 diabetes (recall bias and social desirability bias).

DISCLOSURES:

The researchers were supported by grants from the National Heart, Lung, and Blood Institute and from the National Institute of General Medical Sciences of the National Institutes of Health.

A version of this article first appeared on Medscape.com.


Experimental Therapy Restores Cognitive Function in Chronic TBI

Article Type
Changed
Wed, 12/06/2023 - 18:31

An experimental therapy that uses deep brain stimulation (DBS) to deliver precise electrical pulses to an area deep inside the brain restored executive function in patients with moderate to severe traumatic brain injury (msTBI) and chronic sequelae.

Participants in this first-in-humans trial had experienced brain injuries 3-18 years before the study that left them with persistent neuropsychological impairment and a range of functional disabilities.

This is the first time a DBS device has been implanted in the central thalamus in humans, an area of the brain measuring only a few millimeters wide that helps regulate consciousness.

Placing the electrodes required a novel surgical technique developed by the investigators that included virtual models of each participant’s brain, microelectrode recording, and neuroimaging to identify neuronal circuits affected by the TBI.

After 3 months of 12-hour daily DBS treatments, participants’ performance on cognitive tests improved by an average of 32% from baseline. Participants were able to read books, watch TV shows, play video games, and complete schoolwork, and they felt significantly less fatigued during the day.

Although the small trial included only five patients, the work is already being hailed by other experts as significant.

“We were looking for partial restoration of executive attention and expected [the treatment] would have an effect, but I wouldn’t have anticipated the effect size we saw,” co-lead investigator Nicholas Schiff, MD, professor of neuroscience at Weill Cornell Medical College, New York City, said in an interview.

The findings were published online Dec. 4 in Nature Medicine.

“No Trivial Feat”

An estimated 5.3 million children and adults are living with a permanent TBI-related disability in the US today. There is currently no effective therapy for impaired attention, executive function, working memory, or information-processing speed caused by the initial injury.

Previous research suggests that a loss of activity in key brain circuits in the thalamus may be associated with a loss of cognitive function.

The investigators recruited six adults (four men and two women) between the ages of 22 and 60 years with a history of msTBI and chronic neuropsychological impairment and functional disability. One participant was later withdrawn from the trial for protocol noncompliance.

Participants completed a range of questionnaires and tests to establish baseline cognitive, psychological, and quality-of-life status.

To restore lost executive functioning in the brain, investigators had to target not only the central lateral nucleus, but also the neuronal network connected to the region that reaches other parts of the brain.

“To do both of those things we had to develop a whole toolset in order to model both the target and trajectory, which had to be right to make it work properly,” co-lead investigator Jaimie Henderson, MD, professor of neurosurgery at Stanford University School of Medicine, Stanford, California, said in an interview. “That gave us a pretty narrow window in which to work and getting an electrode accurately to this target is not a trivial feat.”

“A Moving Target”

Each participant’s brain physiology was slightly different, meaning the path that worked for one individual might not work for another. The surgery was further complicated by shifting in the brain that occurred as individual electrodes were placed.

“It was a literal moving target,” Dr. Henderson said.

In the beginning, investigators used microelectrode recording to “listen” to individual neurons to see which ones weren’t firing correctly.

When that method failed to offer the precise information needed for electrode placement, the investigators switched to neuroimaging, which allowed them to complete the surgery more quickly and accurately.

Participants remained in the hospital 1-2 days after surgery. They returned for postoperative imaging 30 days after surgery and were randomly assigned to different schedules for a 14-day titration period to optimize stimulation settings.

The primary outcome was a 10% improvement on part B of the trail-making test, a neuropsychological test that measures executive functioning.

After 90 days of 12-hour daily DBS treatments, participants’ scores increased 15%–52% (average 32%) from baseline. Participants also reported an average 33% decline in fatigue, one of the most common side effects of msTBI, and an average 80% improvement in attention.
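
For context, the percentage figures above are changes from each participant's own baseline. The snippet below is a purely illustrative calculation with invented numbers, and it assumes scores are scaled so that higher values mean better performance, which is how the results are summarized here; raw trail-making completion times run in the opposite direction.

def percent_improvement(baseline: float, follow_up: float) -> float:
    """Percent change from baseline; positive values mean improvement here."""
    return 100.0 * (follow_up - baseline) / baseline

# Invented numbers: a participant improving from 50 to 66 points
change = percent_improvement(50.0, 66.0)
print(f"{change:.0f}% improvement; met the 10% primary endpoint: {change >= 10}")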

The main safety risk during the 3- to 4-hour procedure is bleeding, which didn’t affect any of the participants in this study. One participant developed a surgical site infection, but all other side effects were mild.

After the 90-day treatment period, the study plan called for patients to be randomly assigned to a blinded withdrawal of treatment, with the DBS turned off for 21 days. Two of the patients declined to be randomized. DBS was turned off in one participant while the other two continued as normal.

After 3 weeks, the patient whose DBS was turned off showed a 34% decline on cognitive tests. The device was reactivated after the study and that participant has since reported improvements.

The DBS devices continue to function in all participants. Although their performance is not being measured as part of the study, anecdotal reports indicate sustained improvement in executive functioning.

“The brain injury causes this global down-regulation of brain function and what we think that this is doing is turning that back up again,” Dr. Henderson said. “At a very simplistic level, what we’re trying to do is turn the lights back up after the dimmer switch is switched down from the injury.”

New Hope

TBI patients are usually treated aggressively during the first year, when significant improvements are most likely, but there are few therapeutic options beyond that time, said neurologist Javier Cardenas, MD, who commented on the findings for this article.

“Many providers throw their hands up after a year in terms of intervention and then we’re always looking at potential declines over time,” said Dr. Cardenas, director of the Concussion and Brain Injury Center at the Rockefeller Neuroscience Institute, West Virginia University, Morgantown. “Most people plateau and don’t decline but we’re always worried about a secondary decline in traumatic brain injury.”

Surgery is usually employed only immediately after the brain injury. The notion of surgery as a therapeutic option years after the initial assault on the brain is novel, said Jimmy Yang, MD, assistant professor of neurologic surgery at Ohio State University College of Medicine, Columbus, who commented on the findings for this article.

“While deep brain stimulation surgery in clinical practice is specifically tailored to each patient we treat, this study goes a step further by integrating research tools that have not yet made it to the clinical realm,” Dr. Yang said. “As a result, while these methods are not commonly used in clinical care, the overall strategy highlights how research advances are linked to clinical advances.”

Investigators are working to secure funding for a larger phase 2 trial.

“With millions of people affected by traumatic brain injury but without effective therapies, this study brings hope that options are on the horizon to help these patients,” Dr. Yang said.

The study was supported by funding from the National Institutes of Health BRAIN Initiative and a grant from the Translational Science Center at Weill Cornell Medical College. Surgical implants were provided by Medtronic. Dr. Henderson and Dr. Schiff are listed as inventors on several patent applications for the experimental DBS therapy described in the study. Dr. Cardenas and Dr. Yang report no relevant financial relationships.


A version of this article first appeared on Medscape.com.


Secondhand smoke exposure linked to migraine, severe headache

Article Type
Changed
Fri, 12/01/2023 - 16:46

 

TOPLINE:

Heavy secondhand smoke (SHS) exposure is associated with severe headache or migraine in adults who have never smoked, with effects of exposure varying depending on body mass index (BMI) and level of physical activity, new research shows.

METHODOLOGY:

Investigators analyzed data on 4,560 participants (median age, 43 years; 60% female; 71.5% White) from the 1999-2004 National Health and Nutrition Examination Survey.

Participants were aged 20 years or older and had never smoked.

Migraine headache status was determined by asking whether participants experienced severe headaches or migraines during the previous 3 months.

SHS exposure was categorized as unexposed (serum cotinine levels <0.05 ng/mL and no smoker in the home), low (0.05 ng/mL ≤ serum cotinine level <1 ng/mL), or heavy (1 ng/mL ≤ serum cotinine level ≤ 10 ng/mL).
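
Expressed in code, the exposure categories in the preceding paragraph amount to a simple threshold rule. The function below is a minimal illustrative sketch (the function and variable names are invented here; readings outside the stated ranges, such as cotinine above 10 ng/mL, which generally indicates active smoking, are left unclassified):

def classify_shs_exposure(cotinine_ng_ml: float, smoker_in_home: bool) -> str:
    """Categorize secondhand smoke exposure using the study's cutoffs."""
    if cotinine_ng_ml < 0.05 and not smoker_in_home:
        return "unexposed"
    if 0.05 <= cotinine_ng_ml < 1.0:
        return "low"
    if 1.0 <= cotinine_ng_ml <= 10.0:
        return "heavy"
    return "unclassified"  # e.g., cotinine > 10 ng/mL, or low cotinine with a home smoker

print(classify_shs_exposure(0.03, smoker_in_home=False))  # unexposed
print(classify_shs_exposure(2.40, smoker_in_home=False))  # heavy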

TAKEAWAY:

In all, 919 (20%) participants had severe headaches or migraines.

After adjustment for demographic and lifestyle factors (including medication use), heavy SHS exposure was positively associated with severe headache or migraine (adjusted odds ratio [aOR], 2.02; 95% CI, 1.19-3.43).

No significant association was found between low SHS exposure and severe headaches or migraine (aOR, 1.15; 95% CI, 0.91-1.47).

Significant associations between SHS exposure and severe headache or migraine were observed in participants who were sedentary (P = .016) and in those with a BMI < 25 (P = .001).

IN PRACTICE:

Noting a linear dose-response relationship between cotinine and severe headaches or migraine, the investigators write, “These findings underscore the need for stronger regulation of tobacco exposure, particularly in homes and public places.”

SOURCE:

Junpeng Wu, MMc, and Haitang Wang, MD, of Southern Medical University in Guangzhou, China, and their colleagues conducted the study. It was published online in Headache.

LIMITATIONS:

The study could not establish causal relationships between SHS and migraine or severe headache. In addition, the half-life of serum cotinine is 15-40 hours and thus this measure can reflect only recent SHS exposure.

DISCLOSURES:

The study was not funded. The investigators reported no disclosures.
 

A version of this article appeared on Medscape.com.


What is the dark side of GLP-1 receptor agonists?

Article Type
Changed
Mon, 12/04/2023 - 07:44

The approval of the GLP-1 receptor agonist semaglutide for weight regulation in January 2023 ushered in a new era of obesity therapy. In recent months, however, drug regulatory authorities have also documented rare, occasionally severe side effects associated with the use of these agents in diabetes therapy that doctors may not necessarily have been aware of.

“When millions of people are treated with medications like semaglutide, even relatively rare side effects occur in a large number of individuals,” Susan Yanovski, MD, codirector of the Office of Obesity Research at the National Institute of Diabetes and Digestive and Kidney Diseases in Bethesda, Maryland, said in a JAMA news report.

Despite the low incidence of these adverse events and the likelihood that the benefits outweigh these risks in individuals with severe obesity, doctors and patients should be aware of these serious side effects, she added.

GLP-1 receptor agonists like semaglutide or liraglutide mimic certain intestinal hormones. Almost all their characteristic side effects involve the gastrointestinal tract: nausea, vomiting, constipation, and diarrhea. However, these are not the rare, severe side effects that are gaining increasing attention.
 

Severe Gastric Problems

A recent analysis published in JAMA shows that GLP-1 receptor agonists are associated with a ninefold higher risk of pancreatitis compared with bupropion, an older weight-loss medication. Patients receiving GLP-1 receptor agonists also developed intestinal obstruction about four times as often and gastroparesis more than three times as often. The absolute risks for these complications, however, were less than 1% per year of use.

There were no indications of an increased risk for gallbladder diseases. Acute pancreatitis and acute gallbladder diseases are known complications of GLP-1 receptor agonists.

These results “reinforce that these are effective medications, and all medications have side effects,” said Dr. Yanovski. She emphasized that despite a significant increase in relative risk, however, the absolute risk remains very low.
 

Anesthetic Complications

In the spring of 2023, reports surfaced in scientific journals of patients taking GLP-1 receptor agonists who vomited or aspirated food during anesthesia. Notably, some of these patients brought up unusually large amounts of stomach contents even though, as instructed before the operation, they had not eaten anything.

Experts believe that the slowed gastric emptying intentionally caused by GLP-1 receptor agonists could be responsible for these problems.

The American Society of Anesthesiologists now recommends that patients do not take GLP-1 receptor agonists on the day of surgery and discontinue weekly administered agents like Wegovy 7 days before the procedure.

Increased Suicidality Risk?

In July, case reports of depression and suicidal ideation led the European Medicines Agency to investigate about 150 cases of potential self-harm and suicidal thoughts in patients who had received liraglutide or semaglutide. The review now also includes other GLP-1 receptor agonists. Results of the review process are expected in December.

Dr. Yanovski noted that it is unclear whether these incidents are caused by the drugs, but suicidal thoughts and suicidal behavior have also been observed with other medications for obesity treatment (eg, rimonabant). “It is certainly a good idea to use these medications cautiously in patients with a history of suicidality and monitor the patients accordingly,” she said.

Long-Term Safety

GLP-1 receptor agonists likely need to be used long term, potentially for life, for the effects on body weight to persist. Whether there are side effects and complications that only become apparent over time is currently unknown — especially when these medications are used for weight reduction.

Studies in rodents have suggested an increased risk of medullary thyroid carcinomas. Whether a similar signal exists in humans may become apparent only after many years. In patients with a personal or family history of medullary thyroid carcinoma, dulaglutide, liraglutide, semaglutide, and tirzepatide (a dual GLP-1/GIP receptor agonist) are contraindicated.

With dual agonists like tirzepatide or even triple agonists like retatrutide (GLP-1/GIP/glucagon), patients can lose significantly more weight than with the monoagonist semaglutide. Gastrointestinal events were also frequent in studies of dual agonists.
 

Awaiting Guideline Updates

Guidelines for using these new medications are still scarce. “There are clinical guidelines for obesity therapy, but they were all written before the GLP-1 receptor agonists came on the market,” said Dr. Yanovski. “Medical societies are currently working intensively to develop new guidelines to help doctors use these medications safely and effectively in clinical practice.”
 

This article was translated from the Medscape German edition. A version of this article appeared on Medscape.com.


Insufficient sleep impairs women’s insulin sensitivity

Article Type
Changed
Wed, 11/29/2023 - 09:54

Women, particularly those who are postmenopausal, who sleep less than the recommended 7 hours per night may have impaired insulin sensitivity regardless of their degree of adiposity, a randomized crossover trial reveals.

The research was published recently in Diabetes Care.

Nearly 40 women were randomly assigned to either restricted sleep or adequate sleep for 6 weeks and then crossed over to the other sleep condition. During sleep restriction, women slept an average of 6.2 hours per night, versus 7-9 hours per night during the adequate-sleep condition.

Both fasting insulin levels and insulin resistance were significantly increased during sleep restriction, with the effect on insulin resistance particularly notable in postmenopausal women. This was independent of adiposity and changes in adiposity.
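
Insulin resistance in studies of this kind is typically quantified with an index such as HOMA-IR, calculated from fasting insulin and fasting glucose. The article does not name the exact measure used, so the snippet below shows only the standard HOMA-IR formula as an illustration, not the study's method.

def homa_ir(fasting_insulin_uU_per_ml: float, fasting_glucose_mg_per_dl: float) -> float:
    """Homeostatic Model Assessment of Insulin Resistance (standard formula)."""
    return fasting_insulin_uU_per_ml * fasting_glucose_mg_per_dl / 405.0

# Example with made-up values: higher fasting insulin at the same glucose raises the index
print(round(homa_ir(10, 95), 2))  # 2.35
print(round(homa_ir(13, 95), 2))  # 3.05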

“What we’re seeing is that more insulin is needed to normalize glucose levels in the women under conditions of sleep restriction,” said senior author Marie-Pierre St-Onge, PhD, director of the Center of Excellence for Sleep and Circadian Research at Columbia University Vagelos College of Physicians and Surgeons, New York, in a release.

“Even then, the insulin may not have been doing enough to counteract rising blood glucose levels of postmenopausal women,” she stated.
 

Prolonged lack of sleep may accelerate diabetes progression

Dr. St-Onge added, “If that’s sustained over time, it is possible that prolonged insufficient sleep among individuals with prediabetes could accelerate the progression to type 2 diabetes.”

Dr. St-Onge said in an interview that it was crucial to show the impact of sleep restriction in a randomized study, because “observational studies don’t provide information on causality.”

The study did not rely on people “living in our clinical research facility,” but instead enrolled individuals who were “living their lives,” and the reduction in sleep achieved was “similar to what is seen in the general population with sleep,” she said.

Dr. St-Onge therefore believes the findings indicate that sleep has been overlooked as a contributory factor in insulin sensitivity.

Robert Gabbay, MD, PhD, chief scientific and medical officer at the American Diabetes Association, said in an interview that this is an “important study [that] builds on what we have seen on the importance of sleep for metabolic outcomes and diabetes.”

He continued, “There have been several studies showing the association of sleep and diabetes, but that does not necessarily mean cause and effect.”

On the other hand, Dr. Gabbay said, “randomizing people can help see sleep influences on key metabolic measures of diabetes, [which] helps to build a stronger case that sleep disturbances can cause worsening metabolic health.”

He emphasized that both the quantity and quality of sleep are “critical for optimal diabetes health” and highlighted that the ADA’s Standards of Care “recommends screening for sleep issues and counseling to improve sleep.”

“This study provides new insight into the health effects of even small sleep deficits in women across all stages of adulthood and racial and ethnic backgrounds,” commented Corinne Silva, PhD, program director in the Division of Diabetes, Endocrinology, and Metabolic Diseases at the National Institute of Diabetes and Digestive and Kidney Diseases, which co-funded the study.

The authors note that more than one-third of adults sleep less than the recommended 7 hours per night, which is “concerning given robust associations of short sleep with cardiometabolic diseases.”

Moreover, “women report poorer sleep than men,” explained Marishka Brown, PhD, director of the National Center on Sleep Disorders Research at the National Heart, Lung, and Blood Institute, which also co-funded the study.

“So understanding how sleep disturbances impact their health across the lifespan is critical, especially for postmenopausal women,” she said, particularly because previous studies have not reflected real-world sleep patterns or have focused on men.

The researchers conducted a trial to evaluate the causal impact of prolonged, mild sleep restriction on cardiometabolic risk factors in women as part of the American Heart Association Go Red for Women Strategically Focused Research Network.

They recruited metabolically healthy women aged 20-75 years who were at increased risk for cardiometabolic disease because they had overweight or class I obesity or had at least one parent with type 2 diabetes, hyperlipidemia, or cardiovascular disease.
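
The summary does not spell out the BMI cutoffs behind “overweight” and “class I obesity,” so the short sketch below assumes the standard WHO/CDC adult bands (25-29.9 kg/m² and 30-34.9 kg/m², respectively); the function names and example values are illustrative rather than taken from the trial protocol.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index in kg/m^2."""
    return weight_kg / (height_m ** 2)

def bmi_category(bmi_value: float) -> str:
    """Standard WHO/CDC adult BMI bands (assumed, not quoted from the trial)."""
    if bmi_value < 18.5:
        return "underweight"
    if bmi_value < 25:
        return "normal weight"
    if bmi_value < 30:
        return "overweight"
    if bmi_value < 35:
        return "class I obesity"
    if bmi_value < 40:
        return "class II obesity"
    return "class III obesity"

# Hypothetical participant: 70 kg, 1.65 m -> BMI 25.7 -> "overweight"
print(bmi_category(bmi(weight_kg=70, height_m=1.65)))
```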

They were also required to have a habitual total sleep time on actigraphy of 7-9 hours per night and low risk for sleep apnea. Exclusion criteria included excessive caffeine intake, a significantly advanced or delayed sleep phase, shift work, and travel across time zones.

The participants were randomly assigned to either adequate sleep, defined as 7-9 hours per night, or sleep restriction, defined as a reduction in sleep duration of 1.5 hours per night, for 6 weeks. They were then crossed over to the other sleep condition.

Assessments, including MRI and oral glucose tolerance tests, were performed at baseline and at the end of each study phase.

The researchers report on 38 women who took part in the trial, of whom 11 were postmenopausal. The mean age was 37.6 years; 31.6% self-identified as Black and 26.3% as Hispanic. The mean body mass index (BMI) was 25.5.

Postmenopausal women had a higher mean age than other women, at 56.1 years versus 30.1 years, and a higher baseline fasting blood glucose, at 5.26 mmol/L (94.68 mg/dL) versus 4.70 mmol/L (84.6 mg/dL).
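
For readers switching between the two glucose units quoted above, the conversion is a fixed factor of roughly 18 (mg/dL per mmol/L). The snippet below is a minimal sketch, not part of the study, that reproduces the rounded figures in the preceding paragraph.

```python
MGDL_PER_MMOLL = 18.0  # the article's figures use a factor of 18 (glucose molar mass ~180 g/mol)

def glucose_mmol_to_mgdl(glucose_mmol_l: float) -> float:
    """Convert a plasma glucose concentration from mmol/L to mg/dL."""
    return glucose_mmol_l * MGDL_PER_MMOLL

print(glucose_mmol_to_mgdl(5.26))  # 94.68 mg/dL, as quoted for postmenopausal women
print(glucose_mmol_to_mgdl(4.70))  # 84.6 mg/dL, as quoted for the remaining participants
```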

The team reported that compliance with the sleep protocol was “excellent,” with total sleep time during the sleep restriction phase reduced by 1.34 hours per night compared with the adequate sleep phase (P < .0001).

Sleep restriction was also associated with significant increases in fasting plasma insulin versus adequate sleep, at a beta value of 0.68 pmol/L (P = .016), and significantly increased Homeostatic Model Assessment for Insulin Resistance (HOMA-IR) values (beta = 0.30; P = .016).
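
HOMA-IR is a derived index rather than a direct measurement; the commonly used formula multiplies fasting insulin (in µU/mL) by fasting glucose (in mmol/L) and divides by 22.5. The sketch below uses hypothetical values for illustration; the study's own computation is not described in this summary.

```python
def homa_ir(fasting_insulin_uU_ml: float, fasting_glucose_mmol_l: float) -> float:
    """Homeostatic Model Assessment for Insulin Resistance (standard formula).

    fasting_insulin_uU_ml: fasting plasma insulin in microunits per mL
    fasting_glucose_mmol_l: fasting plasma glucose in mmol/L
    """
    return (fasting_insulin_uU_ml * fasting_glucose_mmol_l) / 22.5

# Hypothetical values, not from the trial: insulin 10 µU/mL and glucose 5.0 mmol/L
print(round(homa_ir(10.0, 5.0), 2))  # 2.22
```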

The impact on HOMA-IR values was significantly more pronounced in postmenopausal than in premenopausal women, at beta values of 0.45 versus 0.27 (P for interaction = .042).

Sleep restriction had no significant effect on fasting plasma glucose levels, and the association between sleep duration and cardiometabolic parameters was not modified by the proportion of either total or visceral adipose tissue, or by changes in adiposity.

This clinical trial was supported by the American Heart Association, a National Institutes of Health Clinical and Translational Science Award to Columbia University, and the New York Nutrition Obesity Research Center. Individual authors received support from the National Heart, Lung, and Blood Institute and the National Institute of Diabetes and Digestive and Kidney Diseases. No relevant financial relationships were declared.

A version of this article appeared on Medscape.com.

