Novel Method Able to Predict If, and When, Dementia Will Develop
Novel, noninvasive testing can predict dementia onset with more than 80% accuracy up to 9 years before clinical diagnosis.
The results suggest resting-state functional MRI (rs-fMRI) could be used to identify a neural network signature of dementia risk early in the pathological course of the disease, an important advance as disease-modifying drugs such as those targeting amyloid beta are now becoming available.
“The brain has been changing for a long time before people get symptoms of dementia, and if we’re very precise about how we do it, we can actually, in principle, detect those changes, which could be really exciting,” study investigator Charles R. Marshall, PhD, professor of clinical neurology, Centre for Preventive Neurology, Wolfson Institute of Population Health, Queen Mary University of London, London, England, told this news organization.
“This could become a platform for screening people for risk status in the future, and it could one day make all the difference in terms of being able to prevent dementia,” he added.
The findings were published online in Nature Mental Health.
Resting-state fMRI measures fluctuations in blood oxygen level–dependent (BOLD) signals across the brain, which reflect functional connectivity.
Brain regions commonly implicated in altered functional connectivity in Alzheimer’s disease (AD) are within the default-mode network (DMN). This is the group of regions “connecting with each other and communicating with each other when someone is just lying in an MRI scanner doing nothing, which is how it came to be called the default-mode network,” explained Dr. Marshall.
The DMN encompasses the medial prefrontal cortex, posterior cingulate cortex or precuneus, and bilateral inferior parietal cortices, as well as supplementary brain regions including the medial temporal lobes and temporal poles.
This network is believed to be selectively vulnerable to AD neuropathology. “Something about that network starts to be disrupted in the very earliest stages of Alzheimer’s disease,” said Dr. Marshall.
While this has been known for some time, “what we’ve not been able to do before is build a precise enough model of how the network is connected to be able to tell whether individual participants were going to get dementia or not,” he added.
The investigators used data from the UK Biobank, a large-scale biomedical database and research resource containing genetic and health information from about half a million UK volunteer participants.
The analysis included 103 individuals with dementia (22 with prevalent dementia and 81 later diagnosed with dementia over a median of 3.7 years) and 1030 matched participants without dementia. All participants underwent MRI between 2006 and 2010.
The total sample had a mean age of 70.4 years at the time of MRI data acquisition. For each participant, researchers extracted relevant data from 10 predefined regions of interest in the brain, which together defined their DMN. This included two midline regions and four regions in each hemisphere.
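To make the data setup concrete, the following minimal sketch shows how a connectivity feature vector could be derived from 10 regional signals. It is a sketch only, assuming plain correlation-based functional connectivity and an arbitrary scan length; the study itself modeled directional effective connectivity, described below.

```python
# Illustrative only: shapes and the use of simple correlation are
# assumptions, not the authors' pipeline.
import numpy as np

N_ROIS = 10  # two midline regions plus four per hemisphere, per the article

def connectivity_features(roi_timeseries):
    """Flatten the upper triangle of an ROI-by-ROI correlation matrix.

    roi_timeseries: (n_timepoints, N_ROIS) array of the BOLD signal
    averaged within each region of interest.
    """
    corr = np.corrcoef(roi_timeseries.T)  # (N_ROIS, N_ROIS) correlations
    iu = np.triu_indices(N_ROIS, k=1)     # unique region pairs only
    return corr[iu]                       # 45 features per participant

rng = np.random.default_rng(0)  # stand-in data; a real run would load fMRI
features = connectivity_features(rng.standard_normal((490, N_ROIS)))
print(features.shape)  # (45,)
```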
Greater Predictive Power
Researchers built a model using an approach related to how brain regions communicate with each other. “The model sort of incorporates what we know about how the changes that you see on a functional MRI scan relate to changes in the firing of brain cells, in a very precise way,” said Dr. Marshall.
The researchers then used a machine learning approach to develop a model for effective connectivity, which describes the causal influence of one brain region over another. “We trained a machine learning tool to recognize what a dementia-like pattern of connectivity looks like,” said Dr. Marshall.
Investigators controlled for potential confounders, including age, sex, handedness, in-scanner head motion, and geographical location of data acquisition.
The model distinguished the brain connectivity patterns of people who would go on to develop dementia from those of people who would not with 82% accuracy, up to 9 years before an official diagnosis was made.
When the researchers trained a model to use brain connections to predict time to diagnosis, the predicted and actual times to diagnosis agreed to within about 2 years.
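As a rough illustration of these two prediction tasks, the sketch below trains a classifier and a time-to-diagnosis regressor on synthetic features. Every name and number in it is a placeholder; the article does not describe the authors' actual machine learning tool in reproducible detail.

```python
# Synthetic stand-in for the study's two tasks; not the authors' code.
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.standard_normal((1133, 45))      # 103 future cases + 1030 controls
y = np.r_[np.ones(103), np.zeros(1030)]  # 1 = later dementia diagnosis

# Task 1: classify who goes on to develop dementia.
clf = LogisticRegression(max_iter=1000, class_weight="balanced")
scores = cross_val_score(clf, X, y, cv=5, scoring="balanced_accuracy")
print(scores.mean())  # ~0.5 on random noise; the paper reports 82% on real data

# Task 2: for incident cases, predict time to diagnosis in years.
t = rng.uniform(0, 9, size=103)          # placeholder follow-up times
time_model = Ridge().fit(X[:103], t)
```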
This effective connectivity approach has much more predictive power than memory test scores or brain structural measures, said Dr. Marshall. “We looked at brain volumes and they performed very poorly, only just better than tossing a coin, and the same with cognitive test scores, which were only just better than chance.”
As for markers of amyloid beta and tau in the brain, these are “very useful diagnostically” but only when someone has symptoms, said Dr. Marshall. He noted people live for years with these proteins without developing dementia symptoms.
“We wouldn’t necessarily want to expose somebody who has a brain full of amyloid but was not going to get symptoms for the next 20 years to a treatment, but if we knew that person was highly likely to develop symptoms of dementia in the next 5 years, then we probably would,” he said.
Dr. Marshall believes the predictive power of all these diagnostic tools could be boosted if they were used together.
Potential for Early Detection, Treatment
Researchers examined a number of modifiable dementia risk factors, including hearing loss, depression, hypertension, and physical inactivity. They found self-reported social isolation was the only variable that showed a significant association with effective connectivity, meaning those who were socially isolated were more likely to have a “dementia-like” pattern of DMN effective connectivity. Because connectivity was measured years before diagnosis, the finding suggests social isolation is a cause, rather than a consequence, of dementia.
The study also revealed associations between DMN effective connectivity and AD polygenic risk score, derived from meta-analysis of multiple external genome-wide association study sources.
A predictive tool that uses rs-fMRI could also help select participants at a high risk for dementia to investigate potential treatments. “There’s good reason to think that if we could go in earlier with, for example, anti-amyloid treatments, they’re more likely to be effective,” said Dr. Marshall.
The new test might eventually have value as a population screening tool, something akin to colon cancer screening, he added. “We don’t send everyone for a colonoscopy; you do a kind of pre-screening test at home, and if that’s positive, then you get called in for a colonoscopy.”
The researchers looked at all-cause dementia and not just AD because dementia subtype diagnoses in the UK Biobank “are not at all reliable,” said Dr. Marshall.
Study limitations included the fact that UK Biobank participants are healthier and less socioeconomically deprived than the general population and are predominantly White. Another study limitation was that labeling of cases and controls depended on clinician coding rather than on standardized diagnostic criteria.
Kudos, Caveats
In a release from the Science Media Center, a nonprofit organization promoting voices and views of the scientific community, Sebastian Walsh, National Institute for Health and Care Research doctoral fellow in Public Health Medicine, University of Cambridge, Cambridge, England, said the results are “potentially exciting,” and he praised the way the team conducted the study.
However, he noted some caveats, including the small sample size, with only about 100 people with dementia, and the relatively short time between the brain scan and diagnosis (a median of 3.7 years).
Dr. Walsh emphasized the importance of replicating the findings “in bigger samples with a much longer delay between scan and onset of cognitive symptoms.”
He also noted the average age of study participants was 70 years, whereas the average age at which individuals in the United Kingdom develop dementia is in the mid-to-late 80s, “so we need to see these results repeated for more diverse and older samples.”
He also noted that MRI scans are expensive, and the approach used in the study needs “a high-quality scan which requires people to keep their head still.”
Also commenting, Andrew Doig, PhD, professor, Division of Neuroscience, the University of Manchester, Manchester, England, said the MRI connectivity method used in the study might form part of a broader diagnostic approach.
“Dementia is a complex condition, and it is unlikely that we will ever find one simple test that can accurately diagnose it,” Dr. Doig noted. “Within a few years, however, there is good reason to believe that we will be routinely testing for dementia in middle-aged people, using a combination of methods, such as a blood test, followed by imaging.”
“The MRI connectivity method described here could form part of this diagnostic platform. We will then have an excellent understanding of which people are likely to benefit most from the new generation of dementia drugs,” he said.
Dr. Marshall and Dr. Walsh reported no relevant disclosures. Dr. Doig reported that he is a founder, shareholder, and consultant for PharmaKure Ltd, which is developing new diagnostics for neurodegenerative diseases using blood biomarkers.
A version of this article first appeared on Medscape.com.
Chronotherapy: Why Timing Drugs to Our Body Clocks May Work
Do drugs work better if taken by the clock?
A new analysis published in eClinicalMedicine, a Lancet journal, suggests: Yes, they do — if you consider the patient’s individual body clock. The study is the first to find that timing blood pressure drugs to a person’s “chronotype” — that is, whether they are a night owl or an early bird — may reduce the risk for a heart attack.
The findings represent a significant advance in the field of circadian medicine or “chronotherapy” — timing drug administration to circadian rhythms. A growing stack of research suggests this approach could reduce side effects and improve the effectiveness of a wide range of therapies, including vaccines, cancer treatments, and drugs for depression, glaucoma, pain, seizures, and other conditions. Still, despite decades of research, time of day is rarely considered when writing prescriptions.
“We are really just at the beginning of an exciting new way of looking at patient care,” said Kenneth A. Dyar, PhD, whose lab at Helmholtz Zentrum München’s Institute for Diabetes and Cancer focuses on metabolic physiology. Dr. Dyar is co-lead author of the new blood pressure analysis.
“Chronotherapy is a rapidly growing field,” he said, “and I suspect we are soon going to see more and more studies focused on ‘personalized chronotherapy,’ not only in hypertension but also potentially in other clinical areas.”
The ‘Missing Piece’ in Chronotherapy Research
Blood pressure drugs have long been chronotherapy’s battleground. After all, blood pressure follows a circadian rhythm, peaking in the morning and dropping at night.
That healthy overnight dip can disappear in people with diabetes, kidney disease, and obstructive sleep apnea. Some physicians have suggested a bedtime dose to restore that dip. But studies have had mixed results, so “take at bedtime” has become a less common recommendation in recent years.
But the debate continued. After a large 2019 Spanish study reported benefits from bedtime dosing so large that the results drew scrutiny, an even larger randomized controlled trial — the 2022 TIME study from the University of Dundee in Dundee, Scotland — aimed to settle the question.
Researchers assigned over 21,000 people to take their antihypertensive drugs in the morning or at night for several years and found no difference in cardiovascular outcomes.
“We did this study thinking nocturnal blood pressure tablets might be better,” said Thomas MacDonald, MD, professor emeritus of clinical pharmacology and pharmacoepidemiology at the University of Dundee and principal investigator for the TIME study and the recent chronotype analysis. “But there was no difference for heart attacks, strokes, or vascular death.”
So, the researchers then looked at participants’ chronotypes, sorting outcomes based on whether the participants were late-to-bed, late-to-rise “night owls” or early-to-bed, early-to-rise “morning larks.”
Their analysis of these 5358 TIME participants found that risk for hospitalization for a heart attack was at least 34% lower for “owls” who took their drugs at bedtime. By contrast, owls’ heart attack risk was at least 62% higher with morning doses. For “larks,” the opposite was true: according to supplemental data, morning doses were associated with an 11% lower heart attack risk and night doses with an 11% higher risk.
The personalized approach could explain why some previous chronotherapy studies have failed to show a benefit. Those studies did not individualize drug timing as this one did. But personalization could be key to circadian medicine’s success.
“Our ‘internal personal time’ appears to be an important variable to consider when dosing antihypertensives,” said co-lead author Filippo Pigazzani, MD, PhD, clinical senior lecturer and honorary consultant cardiologist at the University of Dundee School of Medicine. “Chronotherapy research has been going on for decades. We knew there was something important with time of day. But researchers haven’t considered the internal time of individual people. I think that is the missing piece.”
The analysis has several important limitations, the researchers said. A total of 95% of participants were White. And it was an observational study, not a true randomized comparison. “We started it late in the original TIME study,” Dr. MacDonald said. “You could argue we were reporting on those who survived long enough to get into the analysis.” More research is needed, they concluded.
Looking Beyond Blood Pressure
What about the rest of the body? “Almost all the cells of our body contain ‘circadian clocks’ that are synchronized by daily environmental cues, including light-dark, activity-rest, and feeding-fasting cycles,” said Dr. Dyar.
An estimated 50% of prescription drugs act on targets in the body that follow circadian patterns. So, experts suspect that syncing a drug with a person’s body clock might increase the effectiveness of many drugs.
A handful of US Food and Drug Administration–approved drugs already have time-of-day recommendations on the label for effectiveness or to limit side effects, including bedtime or evening for the insomnia drug Ambien, the HIV antiviral Atripla, and cholesterol-lowering Zocor. Others are intended to be taken with or after your last meal of the day, such as the long-acting insulin Levemir and the cardiovascular drug Xarelto. A morning recommendation comes with the proton pump inhibitor Nexium and the attention-deficit/hyperactivity disorder drug Ritalin.
Interest is expanding. About one third of the papers published about chronotherapy in the past 25 years have come out in the past 5 years. The May 2024 meeting of the Society for Research on Biological Rhythms featured a day-long session aimed at bringing clinicians up to speed. An organization called the International Association of Circadian Health Clinics is trying to bring circadian medicine findings to clinicians and their patients and to support research.
Moreover, while recent research suggests minding the clock could have benefits for a wide range of treatments, ignoring it could cause problems.
In a Massachusetts Institute of Technology study published in April in Science Advances, researchers looked at engineered livers made from human donor cells and found more than 300 genes that operate on a circadian schedule, many with roles in drug metabolism. They also found that circadian patterns affected the toxicity of acetaminophen and atorvastatin. Identifying the time of day to take these drugs could maximize effectiveness and minimize adverse effects, the researchers said.
Timing and the Immune System
Circadian rhythms are also seen in immune processes. In a 2023 study in The Journal of Clinical Investigation of vaccine data from 1.5 million people in Israel, researchers found that children and older adults who got their second dose of the Pfizer mRNA COVID vaccine earlier in the day were about 36% less likely to be hospitalized with SARS-CoV-2 infection than those who got an evening shot.
“The sweet spot in our data was somewhere around late morning to late afternoon,” said lead researcher Jeffrey Haspel, MD, PhD, associate professor of medicine in the division of pulmonary and critical care medicine at Washington University School of Medicine in St. Louis.
In a multicenter, 2024 analysis of 13 studies of immunotherapy for advanced cancers in 1663 people, researchers found treatment earlier in the day was associated with longer survival time and longer survival without cancer progression.
“Patients with selected metastatic cancers seemed to largely benefit from early [time of day] infusions, which is consistent with circadian mechanisms in immune-cell functions and trafficking,” the researchers noted. But “prospective randomized trials are needed to establish recommendations for optimal circadian timing.”
Other research suggests or is investigating possible chronotherapy benefits for depression, glaucoma, respiratory diseases, stroke treatment, epilepsy, and sedatives used in surgery. So why aren’t healthcare providers adding time of day to more prescriptions? “What’s missing is more reliable data,” Dr. Dyar said.
Should You Use Chronotherapy Now?
Experts emphasize that more research is needed before doctors use chronotherapy and before medical organizations include it in treatment recommendations. But for some patients, circadian dosing may be worth a try:
Night owls whose blood pressure isn’t well controlled. Dr. Dyar and Dr. Pigazzani said night-time blood pressure drugs may be helpful for people with a “late chronotype.” Of course, patients shouldn’t change their medication schedule on their own, they said. And doctors may want to consider other concerns, like more overnight bathroom visits with evening diuretics.
In their study, the researchers determined participants’ chronotype with a few questions from the Munich Chronotype Questionnaire about what time they fell asleep and woke up on workdays and days off and whether they considered themselves “morning types” or “evening types.” (The questions can be found in supplementary data for the study.)
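For orientation, the standard MCTQ chronotype measure is mid-sleep on free days corrected for sleep debt (MSFsc). Whether the TIME analysis computed exactly this quantity is an assumption; the sketch below shows the usual arithmetic.

```python
def msf_sc(onset_w, offset_w, onset_f, offset_f, workdays=5, freedays=2):
    """Corrected mid-sleep on free days (MSFsc), in hours after midnight.

    Times use a running clock, so onsets after midnight are written as,
    e.g., 25.0 for 1:00 am. *_w = workdays, *_f = free days.
    (Assumed formula: standard MCTQ, not necessarily the TIME study's.)
    """
    sd_w = offset_w - onset_w                 # workday sleep duration
    sd_f = offset_f - onset_f                 # free-day sleep duration
    msf = onset_f + sd_f / 2                  # mid-sleep on free days
    sd_week = (workdays * sd_w + freedays * sd_f) / (workdays + freedays)
    # People often oversleep on free days to repay sleep debt; correct for it.
    return msf if sd_f <= sd_w else msf - (sd_f - sd_week) / 2

# Example: asleep 23:30-06:30 on workdays and 01:00-10:00 on days off.
print(msf_sc(23.5, 30.5, 25.0, 34.0) % 24)  # ~4.8, a late mid-sleep ("owl")
```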
If a physician thinks matching the timing of a dose with chronotype would help, they can consider it, Dr. Pigazzani said. “However, I must add that this was an observational study, so I would advise healthcare practitioners to wait for our data to be confirmed in new RCTs of personalized chronotherapy of hypertension.”
Children and older adults getting vaccines. Timing COVID shots and possibly other vaccines from late morning to mid-afternoon could have a small benefit for individuals and a bigger public-health benefit, Dr. Haspel said. But the most important thing is getting vaccinated. “If you can only get one in the evening, it’s still worthwhile. Timing may add oomph at a public-health level for more vulnerable groups.”
A version of this article appeared on Medscape.com.
Do drugs work better if taken by the clock?
A new analysis published in The Lancet journal’s eClinicalMedicine suggests: Yes, they do — if you consider the patient’s individual body clock. The study is the first to find that timing blood pressure drugs to a person’s personal “chronotype” — that is, whether they are a night owl or an early bird — may reduce the risk for a heart attack.
The findings represent a significant advance in the field of circadian medicine or “chronotherapy” — timing drug administration to circadian rhythms. A growing stack of research suggests this approach could reduce side effects and improve the effectiveness of a wide range of therapies, including vaccines, cancer treatments, and drugs for depression, glaucoma, pain, seizures, and other conditions. Still, despite decades of research, time of day is rarely considered in writing prescriptions.
“We are really just at the beginning of an exciting new way of looking at patient care,” said Kenneth A. Dyar, PhD, whose lab at Helmholtz Zentrum München’s Institute for Diabetes and Cancer focuses on metabolic physiology. Dr. Dyar is co-lead author of the new blood pressure analysis.
“Chronotherapy is a rapidly growing field,” he said, “and I suspect we are soon going to see more and more studies focused on ‘personalized chronotherapy,’ not only in hypertension but also potentially in other clinical areas.”
The ‘Missing Piece’ in Chronotherapy Research
Blood pressure drugs have long been chronotherapy’s battleground. After all, blood pressure follows a circadian rhythm, peaking in the morning and dropping at night.
That healthy overnight dip can disappear in people with diabetes, kidney disease, and obstructive sleep apnea. Some physicians have suggested a bed-time dose to restore that dip. But studies have had mixed results, so “take at bedtime” has become a less common recommendation in recent years.
But the debate continued. After a large 2019 Spanish study found that bedtime doses had benefits so big that the results drew questions, an even larger, 2022 randomized, controlled trial from the University of Dundee in Dundee, Scotland — called the TIME study — aimed to settle the question.
Researchers assigned over 21,000 people to take morning or night hypertension drugs for several years and found no difference in cardiovascular outcomes.
“We did this study thinking nocturnal blood pressure tablets might be better,” said Thomas MacDonald, MD, professor emeritus of clinical pharmacology and pharmacoepidemiology at the University of Dundee and principal investigator for the TIME study and the recent chronotype analysis. “But there was no difference for heart attacks, strokes, or vascular death.”
So, the researchers then looked at participants’ chronotypes, sorting outcomes based on whether the participants were late-to-bed, late-to-rise “night owls” or early-to-bed, early-to-rise “morning larks.”
Their analysis of these 5358 TIME participants found the following results: Risk for hospitalization for a heart attack was at least 34% lower for “owls” who took their drugs at bedtime. By contrast, owls’ heart attack risk was at least 62% higher with morning doses. For “larks,” the opposite was true. Morning doses were associated with an 11% lower heart attack risk and night doses with an 11% higher risk, according to supplemental data.
The personalized approach could explain why some previous chronotherapy studies have failed to show a benefit. Those studies did not individualize drug timing as this one did. But personalization could be key to circadian medicine’s success.
“Our ‘internal personal time’ appears to be an important variable to consider when dosing antihypertensives,” said co-lead author Filippo Pigazzani, MD, PhD, clinical senior lecturer and honorary consultant cardiologist at the University of Dundee School of Medicine. “Chronotherapy research has been going on for decades. We knew there was something important with time of day. But researchers haven’t considered the internal time of individual people. I think that is the missing piece.”
The analysis has several important limitations, the researchers said. A total of 95% of participants were White. And it was an observational study, not a true randomized comparison. “We started it late in the original TIME study,” Dr. MacDonald said. “You could argue we were reporting on those who survived long enough to get into the analysis.” More research is needed, they concluded.
Looking Beyond Blood Pressure
What about the rest of the body? “Almost all the cells of our body contain ‘circadian clocks’ that are synchronized by daily environmental cues, including light-dark, activity-rest, and feeding-fasting cycles,” said Dr. Dyar.
An estimated 50% of prescription drugs hit targets in the body that have circadian patterns. So, experts suspect that syncing a drug with a person’s body clock might increase effectiveness of many drugs.
A handful of US Food and Drug Administration–approved drugs already have time-of-day recommendations on the label for effectiveness or to limit side effects, including bedtime or evening for the insomnia drug Ambien, the HIV antiviral Atripla, and cholesterol-lowering Zocor. Others are intended to be taken with or after your last meal of the day, such as the long-acting insulin Levemir and the cardiovascular drug Xarelto. A morning recommendation comes with the proton pump inhibitor Nexium and the attention-deficit/hyperactivity disorder drug Ritalin.
Interest is expanding. About one third of the papers published about chronotherapy in the past 25 years have come out in the past 5 years. The May 2024 meeting of the Society for Research on Biological Rhythms featured a day-long session aimed at bringing clinicians up to speed. An organization called the International Association of Circadian Health Clinics is trying to bring circadian medicine findings to clinicians and their patients and to support research.
Moreover, while recent research suggests minding the clock could have benefits for a wide range of treatments, ignoring it could cause problems.
In a Massachusetts Institute of Technology study published in April in Science Advances, researchers looked at engineered livers made from human donor cells and found more than 300 genes that operate on a circadian schedule, many with roles in drug metabolism. They also found that circadian patterns affected the toxicity of acetaminophen and atorvastatin. Identifying the time of day to take these drugs could maximize effectiveness and minimize adverse effects, the researchers said.
Timing and the Immune System
Circadian rhythms are also seen in immune processes. In a 2023 study in The Journal of Clinical Investigation of vaccine data from 1.5 million people in Israel, researchers found that children and older adults who got their second dose of the Pfizer mRNA COVID vaccine earlier in the day were about 36% less likely to be hospitalized with SARS-CoV-2 infection than those who got an evening shot.
“The sweet spot in our data was somewhere around late morning to late afternoon,” said lead researcher Jeffrey Haspel, MD, PhD, associate professor of medicine in the division of pulmonary and critical care medicine at Washington University School of Medicine in St. Louis.
In a multicenter, 2024 analysis of 13 studies of immunotherapy for advanced cancers in 1663 people, researchers found treatment earlier in the day was associated with longer survival time and longer survival without cancer progression.
“Patients with selected metastatic cancers seemed to largely benefit from early [time of day] infusions, which is consistent with circadian mechanisms in immune-cell functions and trafficking,” the researchers noted. But “retrospective randomized trials are needed to establish recommendations for optimal circadian timing.”
Other research suggests or is investigating possible chronotherapy benefits for depression, glaucoma, respiratory diseases, stroke treatment, epilepsy, and sedatives used in surgery. So why aren’t healthcare providers adding time of day to more prescriptions? “What’s missing is more reliable data,” Dr. Dyar said.
Should You Use Chronotherapy Now?
Experts emphasize that more research is needed before doctors use chronotherapy and before medical organizations include it in treatment recommendations. But for some patients, circadian dosing may be worth a try:
Night owls whose blood pressure isn’t well controlled. Dr. Dyar and Dr. Pigazzani said night-time blood pressure drugs may be helpful for people with a “late chronotype.” Of course, patients shouldn’t change their medication schedule on their own, they said. And doctors may want to consider other concerns, like more overnight bathroom visits with evening diuretics.
In their study, the researchers determined participants’ chronotype with a few questions from the Munich Chronotype Questionnaire about what time they fell asleep and woke up on workdays and days off and whether they considered themselves “morning types” or “evening types.” (The questions can be found in supplementary data for the study.)
If a physician thinks matching the timing of a dose with chronotype would help, they can consider it, Dr. Pigazzani said. “However, I must add that this was an observational study, so I would advise healthcare practitioners to wait for our data to be confirmed in new RCTs of personalized chronotherapy of hypertension.”
Children and older adults getting vaccines. Timing COVID shots and possibly other vaccines from late morning to mid-afternoon could have a small benefit for individuals and a bigger public-health benefit, Dr. Haspel said. But the most important thing is getting vaccinated. “If you can only get one in the evening, it’s still worthwhile. Timing may add oomph at a public-health level for more vulnerable groups.”
A version of this article appeared on Medscape.com.
Do drugs work better if taken by the clock?
A new analysis published in The Lancet journal’s eClinicalMedicine suggests: Yes, they do — if you consider the patient’s individual body clock. The study is the first to find that timing blood pressure drugs to a person’s personal “chronotype” — that is, whether they are a night owl or an early bird — may reduce the risk for a heart attack.
The findings represent a significant advance in the field of circadian medicine or “chronotherapy” — timing drug administration to circadian rhythms. A growing stack of research suggests this approach could reduce side effects and improve the effectiveness of a wide range of therapies, including vaccines, cancer treatments, and drugs for depression, glaucoma, pain, seizures, and other conditions. Still, despite decades of research, time of day is rarely considered in writing prescriptions.
“We are really just at the beginning of an exciting new way of looking at patient care,” said Kenneth A. Dyar, PhD, whose lab at Helmholtz Zentrum München’s Institute for Diabetes and Cancer focuses on metabolic physiology. Dr. Dyar is co-lead author of the new blood pressure analysis.
“Chronotherapy is a rapidly growing field,” he said, “and I suspect we are soon going to see more and more studies focused on ‘personalized chronotherapy,’ not only in hypertension but also potentially in other clinical areas.”
The ‘Missing Piece’ in Chronotherapy Research
Blood pressure drugs have long been chronotherapy’s battleground. After all, blood pressure follows a circadian rhythm, peaking in the morning and dropping at night.
That healthy overnight dip can disappear in people with diabetes, kidney disease, and obstructive sleep apnea. Some physicians have suggested a bed-time dose to restore that dip. But studies have had mixed results, so “take at bedtime” has become a less common recommendation in recent years.
But the debate continued. After a large 2019 Spanish study found that bedtime doses had benefits so big that the results drew questions, an even larger, 2022 randomized, controlled trial from the University of Dundee in Dundee, Scotland — called the TIME study — aimed to settle the question.
Researchers assigned over 21,000 people to take morning or night hypertension drugs for several years and found no difference in cardiovascular outcomes.
“We did this study thinking nocturnal blood pressure tablets might be better,” said Thomas MacDonald, MD, professor emeritus of clinical pharmacology and pharmacoepidemiology at the University of Dundee and principal investigator for the TIME study and the recent chronotype analysis. “But there was no difference for heart attacks, strokes, or vascular death.”
So, the researchers then looked at participants’ chronotypes, sorting outcomes based on whether the participants were late-to-bed, late-to-rise “night owls” or early-to-bed, early-to-rise “morning larks.”
Their analysis of these 5358 TIME participants found the following results: Risk for hospitalization for a heart attack was at least 34% lower for “owls” who took their drugs at bedtime. By contrast, owls’ heart attack risk was at least 62% higher with morning doses. For “larks,” the opposite was true. Morning doses were associated with an 11% lower heart attack risk and night doses with an 11% higher risk, according to supplemental data.
The personalized approach could explain why some previous chronotherapy studies have failed to show a benefit. Those studies did not individualize drug timing as this one did. But personalization could be key to circadian medicine’s success.
“Our ‘internal personal time’ appears to be an important variable to consider when dosing antihypertensives,” said co-lead author Filippo Pigazzani, MD, PhD, clinical senior lecturer and honorary consultant cardiologist at the University of Dundee School of Medicine. “Chronotherapy research has been going on for decades. We knew there was something important with time of day. But researchers haven’t considered the internal time of individual people. I think that is the missing piece.”
The analysis has several important limitations, the researchers said. A total of 95% of participants were White. And it was an observational study, not a true randomized comparison. “We started it late in the original TIME study,” Dr. MacDonald said. “You could argue we were reporting on those who survived long enough to get into the analysis.” More research is needed, they concluded.
Looking Beyond Blood Pressure
What about the rest of the body? “Almost all the cells of our body contain ‘circadian clocks’ that are synchronized by daily environmental cues, including light-dark, activity-rest, and feeding-fasting cycles,” said Dr. Dyar.
An estimated 50% of prescription drugs hit targets in the body that have circadian patterns. So, experts suspect that syncing a drug with a person’s body clock might increase effectiveness of many drugs.
A handful of US Food and Drug Administration–approved drugs already have time-of-day recommendations on the label for effectiveness or to limit side effects, including bedtime or evening for the insomnia drug Ambien, the HIV antiviral Atripla, and cholesterol-lowering Zocor. Others are intended to be taken with or after your last meal of the day, such as the long-acting insulin Levemir and the cardiovascular drug Xarelto. A morning recommendation comes with the proton pump inhibitor Nexium and the attention-deficit/hyperactivity disorder drug Ritalin.
Interest is expanding. About one third of the papers published about chronotherapy in the past 25 years have come out in the past 5 years. The May 2024 meeting of the Society for Research on Biological Rhythms featured a day-long session aimed at bringing clinicians up to speed. An organization called the International Association of Circadian Health Clinics is trying to bring circadian medicine findings to clinicians and their patients and to support research.
Moreover, while recent research suggests minding the clock could have benefits for a wide range of treatments, ignoring it could cause problems.
In a Massachusetts Institute of Technology study published in April in Science Advances, researchers looked at engineered livers made from human donor cells and found more than 300 genes that operate on a circadian schedule, many with roles in drug metabolism. They also found that circadian patterns affected the toxicity of acetaminophen and atorvastatin. Identifying the time of day to take these drugs could maximize effectiveness and minimize adverse effects, the researchers said.
Timing and the Immune System
Circadian rhythms are also seen in immune processes. In a 2023 study in The Journal of Clinical Investigation of vaccine data from 1.5 million people in Israel, researchers found that children and older adults who got their second dose of the Pfizer mRNA COVID vaccine earlier in the day were about 36% less likely to be hospitalized with SARS-CoV-2 infection than those who got an evening shot.
“The sweet spot in our data was somewhere around late morning to late afternoon,” said lead researcher Jeffrey Haspel, MD, PhD, associate professor of medicine in the division of pulmonary and critical care medicine at Washington University School of Medicine in St. Louis.
In a multicenter, 2024 analysis of 13 studies of immunotherapy for advanced cancers in 1663 people, researchers found treatment earlier in the day was associated with longer survival time and longer survival without cancer progression.
“Patients with selected metastatic cancers seemed to largely benefit from early [time of day] infusions, which is consistent with circadian mechanisms in immune-cell functions and trafficking,” the researchers noted. But “retrospective randomized trials are needed to establish recommendations for optimal circadian timing.”
Other research suggests or is investigating possible chronotherapy benefits for depression, glaucoma, respiratory diseases, stroke treatment, epilepsy, and sedatives used in surgery. So why aren’t healthcare providers adding time of day to more prescriptions? “What’s missing is more reliable data,” Dr. Dyar said.
Should You Use Chronotherapy Now?
Experts emphasize that more research is needed before doctors use chronotherapy and before medical organizations include it in treatment recommendations. But for some patients, circadian dosing may be worth a try:
Night owls whose blood pressure isn’t well controlled. Dr. Dyar and Dr. Pigazzani said night-time blood pressure drugs may be helpful for people with a “late chronotype.” Of course, patients shouldn’t change their medication schedule on their own, they said. And doctors may want to consider other concerns, like more overnight bathroom visits with evening diuretics.
In their study, the researchers determined participants’ chronotype with a few questions from the Munich Chronotype Questionnaire about what time they fell asleep and woke up on workdays and days off and whether they considered themselves “morning types” or “evening types.” (The questions can be found in supplementary data for the study.)
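For readers unfamiliar with the Munich Chronotype Questionnaire, its standard chronotype summary is the sleep-corrected mid-sleep point on free days (MSFsc). A minimal sketch of that calculation follows; the sleep times are invented, and the study’s exact scoring may differ.

```python
def midsleep(onset_h, duration_h):
    """Mid-sleep clock time (hours, 0-24) given sleep onset and duration."""
    return (onset_h + duration_h / 2.0) % 24

# Hypothetical answers: sleeps 23:30-07:30 on workdays, 01:00-10:00 on free days
sd_work, sd_free = 8.0, 9.0            # sleep durations (h)
msf = midsleep(1.0, sd_free)           # mid-sleep on free days
sd_week = (5 * sd_work + 2 * sd_free) / 7

# MSFsc: subtract half of the free-day oversleep ("catch-up" sleep)
msf_sc = msf - (sd_free - sd_week) / 2 if sd_free > sd_work else msf
print(f"MSFsc = {msf_sc:.2f} h")   # later values suggest a later chronotype
```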
If a physician thinks matching the timing of a dose with chronotype would help, they can consider it, Dr. Pigazzani said. “However, I must add that this was an observational study, so I would advise healthcare practitioners to wait for our data to be confirmed in new RCTs of personalized chronotherapy of hypertension.”
Children and older adults getting vaccines. Timing COVID shots and possibly other vaccines from late morning to mid-afternoon could have a small benefit for individuals and a bigger public-health benefit, Dr. Haspel said. But the most important thing is getting vaccinated. “If you can only get one in the evening, it’s still worthwhile. Timing may add oomph at a public-health level for more vulnerable groups.”
A version of this article appeared on Medscape.com.
Antidepressants and Dementia Risk: New Data
TOPLINE:
Taking antidepressants in midlife was not associated with an increased risk of subsequent Alzheimer’s disease (AD) or AD-related dementias (ADRD), data from a large prospective study of US veterans show.
METHODOLOGY:
- Investigators analyzed data from 35,200 US veterans aged ≥ 55 years diagnosed with major depressive disorder from January 1, 2000, to June 1, 2022, and followed them for ≤ 20 years to track subsequent AD/ADRD diagnoses.
- Health information was pulled from electronic health records of the Veterans Health Administration (VHA) Corporate Data Warehouse, and veterans had to have been in the VHA system for ≥ 1 year before diagnosis.
- Participants were considered to be exposed to an antidepressant when a prescription lasted ≥ 3 months.
TAKEAWAY:
- A total of 32,500 individuals were diagnosed with MDD; the mean age was 65 years, and 91% were men. Of these, 17,000 patients received antidepressants for a median duration of 4 years. Median follow-up time was 3.2 years.
- There was no significant association between antidepressant exposure and the risk for AD/ADRD (events = 1056; hazard ratio, 0.93; 95% CI, 0.80-1.08) vs no exposure. (A toy sketch of how such hazard ratios are estimated appears after this list.)
- In a subgroup analysis, investigators found no significant link between different classes of antidepressants and dementia risk. These included selective serotonin reuptake inhibitors, norepinephrine and dopamine reuptake inhibitors, and serotonin-norepinephrine reuptake inhibitors.
- Investigators emphasized the need for further research, particularly in populations with a larger representation of female patients.
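As a companion to the hazard ratio in the takeaway above (a 95% CI spanning 1 indicates no detectable difference in risk), here is a toy sketch of how such estimates are typically produced with a Cox proportional hazards model. It uses the lifelines library and entirely synthetic data; it is not the investigators’ code or dataset.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 2000
exposed = rng.integers(0, 2, n)        # 1 = antidepressant exposure (synthetic)
time = rng.exponential(scale=10.0, size=n)   # follow-up years, no true effect
event = rng.integers(0, 2, n)          # 1 = AD/ADRD diagnosis observed

df = pd.DataFrame({"years": time, "dementia": event, "exposed": exposed})
cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="dementia")
cph.print_summary()   # exp(coef) column is the hazard ratio with its 95% CI
```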
IN PRACTICE:
“A possibility for the conflicting results in retrospective studies is that the heightened risk identified in participants on antidepressants may be attributed to depression itself, rather than the result of a potential pharmacological action. So, this and other clinical confounding factors need to be taken into account,” the investigators noted.
SOURCE:
The study was led by Jaime Ramos-Cejudo, PhD, VA Boston Healthcare System, Boston. It was published online May 8 in Alzheimer’s & Dementia.
LIMITATIONS:
The cohort’s relatively young age limited the number of dementia cases captured. Data from supplemental insurance, including Medicare, were not included, potentially limiting outcome capture.
DISCLOSURES:
The study was supported by the National Institutes of Health and the National Alzheimer’s Coordinating Center. The authors declared no conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication.
A version of this article appeared on Medscape.com.
PTSD Rates Soar Among College Students
TOPLINE:
Posttraumatic stress disorder (PTSD) rates among college students more than doubled between 2017 and 2022, new data showed. Rates of acute stress disorder (ASD) also increased during that time.
METHODOLOGY:
- Researchers conducted five waves of a cross-sectional study from 2017 to 2022, involving 392,377 participants across 332 colleges and universities.
- The study utilized the Healthy Minds Study data, ensuring representativeness by applying sample weights based on institutional demographics.
- Outcome variables were diagnoses of PTSD and ASD confirmed by healthcare practitioners, with statistical analysis assessing the change in the odds of estimated prevalence from 2017 to 2022.
TAKEAWAY:
- The prevalence of PTSD among US college students increased from 3.4% in 2017-2018 to 7.5% in 2021-2022 (see the arithmetic sketch after this list).
- ASD diagnoses also rose from 0.2% in 2017-2018 to 0.7% in 2021-2022, with both increases remaining statistically significant after adjusting for demographic differences.
- Investigators noted that these findings underscore the need for targeted, trauma-informed intervention strategies in college settings.
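As a back-of-the-envelope check on the “more than doubled” claim, the unadjusted odds ratio implied by those prevalence figures can be computed directly. This is only a sketch; the published estimates were additionally adjusted for demographic differences.

```python
def odds(p):
    """Convert a prevalence (proportion) to odds."""
    return p / (1 - p)

p_2017, p_2022 = 0.034, 0.075    # PTSD prevalence, 2017-2018 vs 2021-2022
or_unadjusted = odds(p_2022) / odds(p_2017)
print(f"Unadjusted odds ratio ~ {or_unadjusted:.2f}")   # roughly 2.3
```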
IN PRACTICE:
“These trends highlight the escalating mental health challenges among college students, which is consistent with recent research reporting a surge in psychiatric diagnoses,” the authors wrote. “Factors contributing to this rise may include pandemic-related stressors (eg, loss of loved ones) and the effect of traumatic events (eg, campus shootings and racial trauma),” they added.
SOURCE:
The study was led by Yusen Zhai, PhD, University of Alabama at Birmingham. It was published online on May 30, 2024, in JAMA Network Open.
LIMITATIONS:
The study’s reliance on self-reported data and single questions for diagnosed PTSD and ASD may have limited the accuracy of the findings. The retrospective design and the absence of longitudinal follow-up may have restricted the ability to infer causality from the observed trends.
DISCLOSURES:
No disclosures were reported. No funding information was available.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication.
A version of this article appeared on Medscape.com.
Early Memory Problems Linked to Increased Tau
Reports from older adults and their partners of early memory issues are associated with higher levels of tau neurofibrillary tangles in the brain, new research suggests.
The findings show that in addition to beta-amyloid, tau is implicated in cognitive decline even in the absence of overt clinical symptoms.
“Understanding the earliest signs of Alzheimer’s disease is even more important now that new disease-modifying drugs are becoming available,” study author Rebecca E. Amariglio, PhD, clinical neuropsychologist at Brigham and Women’s Hospital and the Massachusetts General Hospital and assistant professor in neurology at Harvard Medical School, Boston, said in a news release. “Our study found early suspicions of memory problems by both participants and the people who knew them well were linked to higher levels of tau tangles in the brain.”
The study was published online in Neurology.
Subjective Cognitive Decline
Beta-amyloid plaque accumulations and tau neurofibrillary tangles both underlie the clinical continuum of Alzheimer’s disease (AD). Previous studies have investigated beta-amyloid burden and self- and partner-reported cognitive decline, but fewer have examined regional tau.
Subjective cognitive decline may be an early sign of AD, but self-awareness declines as individuals become increasingly symptomatic. So, a report from a partner about the participant’s level of cognitive functioning is often required in studies of mild cognitive impairment and dementia. The relevance of this model during the preclinical stage is less clear.
For the multicohort, cross-sectional study, investigators studied 675 cognitively unimpaired older adults (mean age, 72 years; 59% female), including persons with nonelevated beta-amyloid levels and those with elevated beta-amyloid levels, as determined by PET.
Participants brought a spouse, adult child, or other study partner with them to answer questions about the participant’s cognitive abilities and ability to complete daily tasks. About 65% of participants lived with their partners, and both participant and partner completed the Cognitive Function Index (CFI) to assess the participant’s cognitive decline, with higher scores indicating greater decline.
Covariates included age, sex, education, and cohort as well as objective cognitive performance.
The Value of Partner Reporting
Investigators found that higher tau levels were associated with greater self- and partner-reported cognitive decline (P < .001 for both).
Significant associations between self- and partner-reported CFI measures were driven by elevated beta-amyloid levels, with continuous beta-amyloid levels showing an independent effect on CFI in addition to tau.
“Our findings suggest that asking older people who have elevated Alzheimer’s disease biomarkers about subjective cognitive decline may be valuable for early detection,” Dr. Amariglio said.
Limitations include the fact that most participants were White and highly educated. Future studies should include participants from more diverse racial and ethnic groups and people with diverse levels of education, researchers noted.
“Although this study was cross-sectional, findings suggest that among older CU [cognitively unimpaired] individuals who [are] at risk for AD dementia, capturing self-report and study partner report of cognitive function may be valuable for understanding the relationship between early pathophysiologic progression and the emergence of functional impairment,” the authors concluded.
The study was funded in part by the National Institute on Aging, Eli Lilly, and the Alzheimer’s Association, among others. Dr. Amariglio receives research funding from the National Institute on Aging. Complete study funding and other authors’ disclosures are listed in the original paper.
A version of this article first appeared on Medscape.com.
Early-Life Exposure to Pollution Linked to Psychosis, Anxiety, Depression
Early-life exposure to air and noise pollution is associated with a higher risk for psychosis, depression, and anxiety in adolescence and early adulthood, results from a longitudinal birth cohort study showed.
While air pollution was associated primarily with psychotic experiences and depression, noise pollution was more likely to be associated with anxiety in adolescence and early adulthood.
“Early-life exposure could be detrimental to mental health given the extensive brain development and epigenetic processes that occur in utero and during infancy,” the researchers, led by Joanne Newbury, PhD, of Bristol Medical School, University of Bristol, England, wrote, adding that “the results of this cohort study provide novel evidence that early-life exposure to particulate matter is prospectively associated with the development of psychotic experiences and depression in youth.”
The findings were published online on May 28 in JAMA Network Open.
Large, Longitudinal Study
To learn more about how air and noise pollution may affect the brain from an early age, the investigators used data from the Avon Longitudinal Study of Parents and Children, an ongoing longitudinal birth cohort capturing data on new births in Southwest England from 1991 to 1992.
Investigators captured levels of air pollutants, which included nitrogen dioxide and fine particulate matter with a diameter smaller than 2.5 µm (PM2.5), in the areas where expectant mothers lived and where their children lived until age 12.
They also collected decibel levels of noise pollution in neighborhoods where expectant mothers and their children lived.
Participants were assessed for psychotic experiences, depression, and anxiety when they were 13, 18, and 24 years old.
Among the 9065 participants who had mental health data, 20% reported psychotic experiences, 11% reported depression, and 10% reported anxiety. About 60% of the participants had a family history of mental illness.
When they were age 13, 13.6% of participants reported psychotic experiences; 9.2% reported them at age 18, and 12.6% at age 24.
Fewer participants reported feeling depressed and anxious at 13 years (5.6% for depression and 3.6% for anxiety) and 18 years (7.9% for depression and 5.7% for anxiety).
After adjusting for individual and family-level variables, including family psychiatric history, maternal social class, and neighborhood deprivation, elevated PM2.5 levels during pregnancy (P = .002) and childhood (P = .04) were associated with a significantly increased risk for psychotic experiences later in life. Pregnancy PM2.5 exposure was also associated with depression (P = .01).
Participants exposed to higher noise pollution in childhood and adolescence had an increased risk for anxiety (P = .03) as teenagers.
Vulnerability of the Developing Brain
The investigators noted that more information is needed to understand the underlying mechanisms behind these associations but noted that early-life exposure could be detrimental to mental health given “extensive brain development and epigenetic processes that occur in utero.”
They also noted that air pollution could lead to restricted fetal growth and premature birth, both of which are risk factors for psychopathology.
Martin Clift, PhD, of Swansea University in Swansea, Wales, who was not involved in the study, said that the paper highlights the need for more consideration of health consequences related to these exposures.
“As noted by the authors, this is an area that has received a lot of recent attention, yet there remains a large void of knowledge,” Dr. Clift said in a UK Science Media Centre release. “It highlights that some of the most dominant air pollutants can impact different mental health diagnoses, but that time-of-life is particularly important as to how each individual air pollutant may impact this diagnosis.”
Study limitations included limited generalizability: the families in the study were more affluent and less diverse than the UK population overall.
The study was funded by the UK Medical Research Council, Wellcome Trust, and University of Bristol. Disclosures were noted in the original article.
A version of this article appeared on Medscape.com.
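Covariate-adjusted associations of this kind are typically estimated with a regression model that includes the confounders alongside the exposure. A minimal sketch using statsmodels’ formula interface follows; the file and column names are invented placeholders, not the ALSPAC variables or the authors’ actual models.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical cohort data; all column names below are illustrative only
df = pd.read_csv("cohort.csv")  # psychosis (0/1), pm25, noise_db, confounders

model = smf.logit(
    "psychosis ~ pm25 + noise_db + family_psych_history"
    " + maternal_social_class + neighborhood_deprivation",
    data=df,
).fit()
print(model.summary())   # the coefficient on pm25 is the adjusted association
```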
Losing Weight, Decreasing Alcohol, and Improving Sex Life?
Richard* was a master-of-the-universe type. He went to Wharton, ran a large hedge fund, and lived in Greenwich, Connecticut. His three children attended Ivy League schools. He played golf on the weekends and ate three healthy meals per day. There was just one issue: He had gained 90 pounds since the 1990s from consuming six to seven alcoholic beverages per day. He already had one DUI under his belt, and his marriage was on shaky ground. He had tried to address his alcohol use disorder on multiple occasions: He went to a yearlong class on alcoholism, saw a psychologist for cognitive-behavioral therapy, and joined Alcoholics Anonymous, all to no avail.
When I met him in December 2023, he had hit rock bottom and was willing to try anything.
At our first visit, I prescribed him weekly tirzepatide (Zepbound) off label, along with a small dose of naltrexone.
Richard shared some feedback after his first 2 weeks:
The naltrexone works great and is strong ... small dose for me effective ... I haven’t wanted to drink and when I do I can’t finish a glass over 2 hours … went from 25 drinks a week to about 4 … don’t notice other side effects … sleeping better too.
And after 6 weeks:
Some more feedback … on week 6-7 and all going well ... drinking very little alcohol and still on half tab of naltrexone ... that works well and have no side effects ... the Zepbound works well too. I do get hungry a few days after the shot but still don’t crave sugar or bad snacks … weight down 21 pounds since started … 292 to 271.
And finally, after 8 weeks:
Looking at my last text to you I see the progress … been incredible ... now down 35 pounds and at 257 … continue to feel excellent with plenty of energy … want to exercise more ... and no temptation to eat or drink unhealthy stuff ... I’m very happy this has surpassed my expectations on how fast it’s worked and I don’t feel any side effects. Marriage has never been better … all thanks to you.
Tirzepatide mimics two hormones, glucagon-like peptide-1 (GLP-1) and glucose-dependent insulinotropic polypeptide (GIP), that are naturally produced by our bodies after meals. Scientists recently learned that the GLP-1 system contributes to the feedback loop of addictive behaviors. Increasing GLP-1 signaling with medications like tirzepatide appears to minimize addictive behaviors by limiting their ability to upregulate the brain’s production of dopamine.
Dopamine is a neurotransmitter produced in the brain’s reward center, which regulates how people experience pleasure and control impulses. Dopamine reinforces the pleasure experienced by certain behaviors like drinking, smoking, and eating sweets. These new medications reduce the amount of dopamine released after these activities and thereby lower the motivation to repeat these behaviors.
Contrary to some reports in the news, the vast majority of my male patients using these medications for alcohol use disorder experience concurrent increases in testosterone, for two reasons: (1) testosterone increases as body mass index decreases, and (2) chronic alcohol use can damage the cells in the testicles that produce testosterone and can also decrease the brain’s ability to stimulate the testicles to produce testosterone.
At his most recent checkup last month, Richard’s testosterone had risen from borderline to robust levels, his libido and sleep had improved, and he reported never having felt so healthy or confident. Fingers crossed that the US Food and Drug Administration won’t wait too long before approving this class of medications for more than just diabetes, heart disease, and obesity.
*Patient’s name has been changed.
Dr. Messer is clinical assistant professor, Icahn School of Medicine at Mount Sinai, New York, and associate professor, Zucker School of Medicine at Hofstra University, Hempstead, New York. She has disclosed no relevant financial relationships.
A version of this article appeared on Medscape.com.
Irisin Shows Potential as Alzheimer’s Disease Biomarker
Levels of irisin in cerebrospinal fluid (CSF) are significantly lower in patients with Alzheimer’s disease and correlate positively with amyloid beta levels, according to investigators.
Irisin, a hormone released by muscles during physical exercise, also negatively correlated with Clinical Dementia Rating Scale Sum of Boxes (CDR-SOB) scores in female patients, pointing to a sex-specific disease phenomenon, reported co-lead authors Manuela Dicarlo, PhD, and Patrizia Pignataro, MSc, of the University of Bari “A. Moro,” Bari, Italy, and colleagues.
Regular physical exercise can slow cognitive decline in individuals at risk for or with Alzheimer’s disease, and irisin appears to play a key role in this process, the investigators wrote in Annals of Neurology. Previous studies have shown that increased irisin levels in the brain are associated with improved cognitive function and reduced amyloid beta levels, suggesting the hormone’s potential as a biomarker and therapeutic target for Alzheimer’s disease.
“Based on the protective effect of irisin in Alzheimer’s disease shown in animal and cell models, the goal of the present study was to investigate the levels of irisin in the biological fluids of a large cohort of patients biologically characterized according to the amyloid/tau/neurodegeneration (ATN) scheme of the National Institute on Aging–Alzheimer’s Association (NIA-AA),” Dr. Dicarlo and colleagues wrote. “We aimed to understand whether there may be variations of irisin levels across the disease stages, identified through the ATN system.”
Lower Levels of Irisin Seen in Patients With Alzheimer’s Disease
The study included 82 patients with Alzheimer’s disease, 44 individuals with mild cognitive impairment (MCI), and 20 with subjective memory complaints (SMC). Participants underwent comprehensive assessments, including neurological and neuropsychological exams, nutritional evaluations, MRI scans, and routine lab tests. Cognitive impairment severity was measured using the CDR-SOB and other metrics.
Blood and CSF samples were collected from all patients, the latter via lumbar puncture. These samples were analyzed for irisin levels and known Alzheimer’s disease biomarkers, including Abeta42, total tau (t-tau), and hyperphosphorylated tau (p-tau).
Mean CSF irisin levels were significantly lower among patients with Alzheimer’s disease than those with SMC (0.80 vs 1.23 pg/mL; P < .0001), and among those with MCI vs SMC (0.95 vs 1.23 pg/mL; P = .046). Among patients with Alzheimer’s disease, irisin levels were significantly lower among women than men (0.70 vs 0.96 pg/mL; P = .031).
Further analyses revealed positive correlations between CSF irisin level and Abeta42 in both males (r = 0.262; P < .05) and females (r = 0.379; P < .001). Conversely, in female patients, a significant negative correlation was found between CSF irisin level and CDR-SOB score (r = −0.234; P < .05).
Although a negative trend was observed between CSF irisin and t-tau in the overall patient population (r = −0.144; P = .082), and more notably in female patients (r = −0.189; P = .084), these results were not statistically significant.
Plasma irisin levels were not significantly correlated with any of the other biomarkers.
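Correlations of the kind reported above are commonly computed as Pearson coefficients, which yield both r and a P value. A minimal sketch with synthetic numbers follows; the arrays are stand-ins, not the study’s data.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
# Synthetic stand-ins for CSF irisin (pg/mL) and Abeta42 across 80 participants
irisin = rng.normal(loc=0.9, scale=0.2, size=80)
abeta42 = 500 + 300 * irisin + rng.normal(scale=60, size=80)  # built-in link

r, p = pearsonr(irisin, abeta42)
print(f"r = {r:.3f}, P = {p:.3g}")   # a positive r mirrors the reported pattern
```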
Clinical Implications
This study “verifies that irisin levels do have a relationship to the Alzheimer’s disease process,” said Dylan Wint, MD, director of Cleveland Clinic Lou Ruvo Center for Brain Health, Las Vegas.
In a written comment, Dr. Wint speculated that measuring irisin levels could theoretically help individualize physical exercise routines designed to combat cognitive decline.
“For example, maybe someone who is exercising but has a low irisin level would need to change the type of exercise they’re doing in order to optimally protect their brain health,” he said. “Or maybe they won’t get the same benefits for brain health as someone whose irisin shoots up every time they walk a flight of stairs.”
It’s “near-impossible to tell,” however, if irisin will be employed in clinical trials or real-world practice, he added.
“I don’t see this being a highly useful serum biomarker for Alzheimer’s disease itself because other serum biomarkers are so far ahead and have more face validity,” Dr. Wint said.
The route of collection could also cause challenges.
“In the United States, CSF-based biomarkers can be a difficult sell, especially for serial testing,” Dr. Wint said. “But we have usable serum biomarkers for Alzheimer’s disease only because we have had CSF biomarkers against which to evaluate them. They may develop a way to evaluate this in the serum.”
Dr. Dicarlo and colleagues suggested that more work is needed to determine the ultimate value of irisin measurement. “The true ability of irisin to represent a biomarker of disease progression and severity remains to be further investigated,” they concluded. “However, our findings might offer interesting perspectives toward the potential role of irisin in the modulation of AD pathology and can guide the exploration of medication targeting the irisin system.”
The study was supported by Regione Puglia and CNR for Tecnopolo per la Medicina di Precisione, CIREMIC, the University of Bari, and Next Generation EU. The investigators and Dr. Wint disclosed no conflicts of interest.
FROM ANNALS OF NEUROLOGY
Antidepressant Withdrawal Symptoms Much Lower Than Previously Thought
The incidence of antidepressant discontinuation symptoms appears to be much lower than was previously thought, results from a new meta-analysis of studies assessing this issue showed.
After accounting for placebo effects, results showed that about 15% of patients who discontinue antidepressant therapy had true discontinuation symptoms, with severe symptoms occurring in about 2% of patients.
“Considering all available data, we conservatively estimate that one out of every six to seven patients has truly pharmacologically-caused antidepressant discontinuation symptoms. This might still be an over-estimate, as it is difficult to factor in residual or re-emerging symptoms of depression or anxiety,” the researchers concluded.
The study was published online in The Lancet.
More Reliable Data
“We are not saying all antidepressant discontinuation symptoms are a placebo effect. It is a real phenomenon. And we are not saying that there is no problem discontinuing antidepressants. But these findings suggest that true antidepressant discontinuation symptoms are lower than previous studies have suggested,” study investigator Christopher Baethge, MD, University of Cologne, Germany, said at a Science Media Centre press briefing.
“Our data should de-emotionalize the debate on this issue. Yes, antidepressant discontinuation symptoms are a problem, but they should not cause undue alarm to patients or doctors,” Dr. Baethge added.
Lead investigator Jonathan Henssler, MD, Charité – Universitätsmedizin Berlin, Germany, noted that “previous studies on this issue have included surveys, which have selection bias in that people with symptoms of antidepressant discontinuation are more likely to participate. This study includes a broader range of research and excluded surveys, so we believe these are more reliable results.”
A Controversial Issue
The investigators note that antidepressant discontinuation symptoms can be highly variable and nonspecific, with the most frequently reported symptoms being dizziness, headache, nausea, insomnia, and irritability. These symptoms typically occur within a few days and are usually transient but can last up to several weeks or months.
Explaining the mechanism behind the phenomenon, Dr. Baethge noted that selective serotonin reuptake inhibitor antidepressants increase the available serotonin in the brain, but the body responds by reducing the number of serotonin receptors. If the amount of available serotonin is reduced after stopping the medication, then this can lead to discontinuation symptoms.
However, the incidence and severity of these symptoms remain controversial, the researchers noted. They point out that some estimates suggest that antidepressant discontinuation symptoms occurred in the majority of patients (56%), with almost half of cases classed as severe.
Previous attempts at assessment have been questioned on methodologic grounds, especially because of the inclusion of online surveys or other studies prone to selection and dissatisfaction bias.
“Medical professionals continue to hold polarized positions on the incidence and severity of antidepressant discontinuation symptoms, and the debate continues in public media,” they wrote.
This is the first publication of a larger project on antidepressant discontinuation symptoms.
For the study, the researchers conducted a meta-analysis of 44 controlled trials and 35 observational studies assessing the incidence of antidepressant discontinuation symptoms, together including 21,002 patients. Of these, 16,532 patients discontinued antidepressant treatment, and 4470 discontinued placebo.
The incidence of at least one antidepressant discontinuation symptom was 31% among patients stopping antidepressant therapy and 17% after discontinuation of placebo, giving a true rate of pharmacologically driven antidepressant discontinuation symptoms of 14%-15%.
The study also showed that severe discontinuation symptoms occurred in 2.8% of those stopping antidepressants and in 0.6% of those stopping placebo, giving a true rate of severe antidepressant discontinuation symptoms of around 2%.
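In plain arithmetic, the true rates amount to subtracting the placebo-arm rate from the antidepressant-arm rate (a simplification; the study's pooled estimates come from meta-analysis rather than from this hand subtraction):

\[
31\% - 17\% \approx 14\%, \qquad 2.8\% - 0.6\% \approx 2\%,
\]

and \( 1/0.145 \approx 6.9 \), which is where the “one out of every six to seven patients” figure comes from.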
There was no association with treatment duration or with pharmaceutical company funding, and different statistical analyses produced similar results, suggesting the findings are robust, Dr. Baethge reported.
Risks by Medication
Desvenlafaxine, venlafaxine, imipramine, and escitalopram were associated with a higher frequency of discontinuation symptoms, and imipramine, paroxetine, and either desvenlafaxine or venlafaxine were associated with greater symptom severity.
Fluoxetine, sertraline, and citalopram had lower rates of discontinuation symptoms. No data were available for bupropion, mirtazapine, and amitriptyline.
As for the clinical implications of the findings, Dr. Henssler said that he does consider discontinuation symptoms when selecting a medication. “I would choose a drug with lower rate of these symptoms unless there was a specific reason to choose one with a higher rate,” he said.
Dr. Henssler added that these data raise awareness of the placebo effect.
“Considering the placebo results, approximately half of antidepressant discontinuation symptoms could be attributable to expectation or non-specific symptoms,” the researchers noted.
“This is not to say all antidepressant discontinuation symptoms are caused by patient expectations; in practice, all patients discontinuing antidepressants need to be counseled and monitored, and patients who report antidepressant discontinuation symptoms must be helped, in particular those who develop severe antidepressant discontinuation symptoms,” they concluded.
Experts Weigh In
Commenting on the study at a press briefing, Oliver Howes, MD, chair of the psychopharmacology committee at the Royal College of Psychiatrists, United Kingdom, said that he welcomed “the insight that this robust study provides.”
“If someone chooses to stop taking their antidepressants, their doctor should help them to do so slowly and in a controlled manner that limits the impact of any potential withdrawal symptoms,” Dr. Howes said.
He added that the Royal College of Psychiatrists has produced a resource for patients and carers on stopping antidepressants that offers information on tapering medication at a pace that suits individual patient needs.
Also commenting, Tony Kendrick, MD, professor of primary care, University of Southampton, United Kingdom, pointed out some limitations of the new meta-analysis — in particular, that the method of assessment of discontinuation symptoms in the included studies was very variable, with specific measurement scales of discontinuation symptoms used in only six of the studies.
“In most cases the assessment seemed to depend at least partly on the judgment of the authors of the included studies rather than being based on a systematic collection of data,” Dr. Kendrick added.
In an accompanying editorial, Glyn Lewis, PhD, and Gemma Lewis, PhD, University College London, United Kingdom, wrote that though the meta-analysis has its limitations, including the fact that many of the studies were small, often used antidepressants that are not commonly prescribed now, and studied people who had not taken the antidepressants for very long, “the results here are a substantial improvement on anything that has been published before.”
They emphasize the importance of discussing the issue of a placebo effect with patients when stopping antidepressants.
The editorialists pointed out that as antidepressants are prescribed to many millions of people, the relatively uncommon severe withdrawal symptoms will still affect a substantial number of people. However, for individual clinicians, severe withdrawal symptoms will seem uncommon, and most patients will probably not be troubled by antidepressant withdrawal, especially when medication is tapered over a few weeks.
They noted that cessation of antidepressants can lead to an increase in depressive and anxious symptoms, and distinguishing between relapsing symptoms and withdrawal is difficult.
“Short-term symptoms that reduce quickly, without intervention, are best thought of as a form of withdrawal, even if those symptoms might be similar or identical to the symptoms of depression and anxiety. More serious and longer-term symptoms might best be managed by tapering more slowly, or even deciding to remain on the antidepressant,” the editorialists wrote.
There was no funding source for this study. The authors declare no competing interests. Dr. Kendrick led the NIHR REDUCE trial of internet and telephone support for antidepressant discontinuation and was a member of the guideline committee for the NICE 2022 Depression Guideline.
A version of this article appeared on Medscape.com.
FROM THE LANCET
Teen Cannabis Use Tied to Dramatically Increased Risk for Psychosis
Cannabis use during adolescence is associated with a dramatically increased risk of developing a psychotic disorder, new research showed.
Investigators at the University of Toronto, The Centre for Addiction and Mental Health (CAMH), and the Institute for Clinical Evaluative Sciences (ICES), in Canada, linked recent population-based survey data from more than 11,000 youngsters to health service use records, including hospitalizations, emergency department (ED) visits, and outpatient visits.
“We found a very strong association between cannabis use and risk of psychotic disorder in adolescence [although] surprisingly, we didn’t find evidence of association in young adulthood,” lead author André J. McDonald, PhD, currently a postdoctoral fellow at the Peter Boris Centre for Addictions Research and the Michael G. DeGroote Centre for Medicinal Cannabis Research, McMaster University, Hamilton, Ontario, Canada, said in a news release.
“These findings are consistent with the neurodevelopmental theory that teens are especially vulnerable to the effects of cannabis,” said Dr. McDonald, who conducted the research while at the University of Toronto.
The study was published online in Psychological Medicine.
Increased Potency
“Epidemiologic research suggests that cannabis use may be a significant risk factor for psychotic disorders,” the authors wrote. However, methodological limitations of previous studies make it difficult to estimate the strength of association, with the current evidence base relying largely on cannabis use during the twentieth century, when the drug was “significantly less potent.” It’s plausible that the strength of association has increased due to increased cannabis potency.
The researchers believe the link between youth cannabis use and psychotic disorders is “a critical public health issue,” especially as more jurisdictions liberalize cannabis use and the perception of harm declines among youth.
To estimate the association between cannabis use during youth and the risk for a psychotic disorder diagnosis using recent population-based data, the researchers linked the 2009-2012 cycles of the Canadian Community Health Survey (CCHS) to administrative health data held at ICES, studying noninstitutionalized Ontario residents aged 12-24 years who had completed the CCHS during that period.
They excluded respondents who used health services for psychotic disorders during the 6 years prior to their CCHS interview date.
Respondents (n = 11,363; 51% men; median age [IQR], 18.3 [15.2-21.3] years) were followed for 6-9 years, with days to first hospitalization, ED visit, or outpatient visit related to a psychotic disorder as the primary outcome.
The researchers estimated age-specific hazard ratios during adolescence (12-19 years) and young adulthood (20-33 years) and conducted sensitivity analyses to explore alternative model conditions, including restricting the outcome to hospitalizations and ED visits, to increase specificity.
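For readers curious about the mechanics, the following is a minimal, hypothetical sketch of how age-specific hazard ratios of this kind can be estimated with a Cox proportional hazards model, written in Python with the lifelines library. The file and column names are invented for illustration; this is not the investigators' actual code, and a real analysis would also adjust for the sociodemographic covariates behind the reported adjusted estimates.

import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical linked cohort: one row per survey respondent.
#   time_to_event: days from CCHS interview to first psychosis-related
#                  hospitalization, ED visit, or outpatient visit (or censoring)
#   event:         1 if such a contact occurred during follow-up, else 0
#   cannabis_use:  1 if the respondent reported cannabis use, else 0
#   adolescent:    1 if aged 12-19 years at interview, 0 if aged 20-24 years
df = pd.read_csv("cchs_linked_cohort.csv")  # invented file name

# An exposure-by-age-group interaction term lets each age stratum carry
# its own hazard ratio within a single model.
df["cannabis_x_adolescent"] = df["cannabis_use"] * df["adolescent"]

cph = CoxPHFitter()
cph.fit(
    df[["time_to_event", "event", "cannabis_use", "adolescent", "cannabis_x_adolescent"]],
    duration_col="time_to_event",
    event_col="event",
)
cph.print_summary()
# exp(coef) on cannabis_use approximates the young-adult hazard ratio;
# multiplying in exp(coef) on cannabis_x_adolescent gives the adolescent one.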
Compared with no cannabis use, cannabis use was significantly associated with an 11-fold increased risk for psychotic disorders during adolescence, although not during young adulthood (adjusted hazard ratio [aHR], 11.2; 95% CI, 4.6-27.3 and aHR, 1.3; 95% CI, 0.6-2.6, respectively).
Perception of Harm Declining
When the researchers restricted the outcome to hospitalizations and ED visits only, the strength of association “increased markedly” during adolescence, with a 26-fold increased risk among cannabis users relative to nonusers (aHR, 26.7; 95% CI, 7.7-92.8). However, there was no meaningful change during young adulthood (aHR, 1.8; 95% CI, 0.6-5.4).
“Many have hypothesized that adolescence is a more sensitive risk period than adulthood for the effect of cannabis use on psychotic disorder development, yet prior to this study, little epidemiologic evidence existed to support this view,” the authors wrote.
The data also suggest that cannabis use is “more strongly associated with more severe psychotic outcomes, as the strength of association during adolescence increased markedly when we restricted the outcome to hospitalizations and ED visits (the most severe types of health service use),” the investigators noted.
The authors noted several limitations. For instance, it is unclear to what extent unmeasured confounders, including genetic predisposition, family history of psychotic disorders, and trauma, might have biased the results. The possibility of reverse causality also cannot be ruled out: individuals with “psychotic dispositions,” they noted, may self-medicate or show a greater disposition to cannabis use.
Moreover, the dataset did not capture important factors regarding the cannabis itself, including delta-9-tetrahydrocannabinol potency, mode of use, product type, and cannabis dependence, nor did it capture institutionalized or homeless youth.
Nevertheless, they pointed to the findings as supporting a “precautionary principle” — as more jurisdictions move to liberalize cannabis use and the perception of harm declines among youth, evidence-based cannabis prevention strategies for adolescents are warranted.
This study was supported by CAMH, the University of Toronto, and ICES, which is funded by an annual grant from the Ontario Ministry of Health and the Ministry of Long-Term Care. The authors declared no relevant financial relationships.
A version of this article appeared on Medscape.com.