Ferric carboxymaltose calms restless legs
Treatment with intravenous ferric carboxymaltose significantly improved symptoms in restless legs syndrome (RLS) patients with iron-deficiency anemia (IDA), data from 29 adults show.
RLS occurs among individuals with normal iron levels, but its prevalence is at least six times higher among individuals with IDA, Hyoeun Bae, MD, of Keimyung University, Daegu, South Korea, and colleagues wrote. Previous studies have explored iron treatments for RLS patients with IDA; however, treatment guidelines have not yet been published.
In a study published in Sleep Medicine, the researchers randomized 29 RLS patients with IDA to either 1,500 mg of IV ferric carboxymaltose (FCM) or placebo in a 6-week short-term phase, followed by a 52-week phase 2 study for responders. Baseline characteristics, including age, gender, iron parameters, and sleep and mood scales, were similar between the groups.
At 6 weeks, patients in the FCM group showed significant improvement in RLS symptom severity based on changes from baseline International Restless Legs Syndrome Study Group scale (IRLS) scores, compared with placebo patients (–13.47 vs. 1.36, P < .001). A secondary outcome of sleep quality also improved significantly in the FCM group, compared with the placebo group.
After 6 weeks, 11 of the 14 patients in the placebo group also received 1,500 mg FCM for an open-label study. These patients also showed significant improvement in IRLS scores from baseline to 6 weeks.
All 23 responders from the short-term studies (13 who received FCM initially and 10 from the postplacebo group) enrolled in a phase 2 long-term study that lasted for 52 weeks; 14 of these completed the full 52-week study period.
Overall, 61% of participants in phase 2 of the study remained off their RLS medications at 52 weeks. During phase 2, 10 participants received one additional dose of FCM and 4 received more than one additional dose; the median change in IRLS score at 4 weeks after treatment was –4.00, compared with the score prior to treatment. No serious adverse events were reported during the study period.
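As a quick arithmetic check of the figures above (a sketch only; it assumes the 61% corresponds to 14 of the 23 phase 2 enrollees, a mapping the report does not state explicitly):

```python
# Hypothetical reading of the reported figures -- not study code.
# Assumption: 61% = 14 participants off RLS medications / 23 phase 2 enrollees.
enrolled = 23   # responders who entered the phase 2 study
off_meds = 14   # assumed numerator for the 61% figure
print(f"{off_meds / enrolled:.0%}")  # -> 61%
```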
The study is the first of its design to show benefits of intravenous iron therapy for RLS in patients with IDA, the researchers said, noting that the findings of improved, but not cured, RLS symptoms might suggest that more than 1,500 mg of iron is needed to fully treat RLS in this patient population. “A second interpretation is that the RLS and IDA were separate events: a patient with idiopathic RLS who subsequently developed anemia,” they said. “Treating the IDA might improve symptoms but may not eliminate the symptoms.”
The study findings were limited by several factors, including the relatively small study population and inability to know the time frame for the development of IDA, the researchers noted. However, the results support the use of intravenous iron therapy for relief of RLS in IDA patients.
“Since IDA could result in epigenetic changes leading to irreversible state of RLS, then urgent and adequate management of the IDA in RLS patients would seem a very prudent and important clinical approach to this specific clinical condition,” they concluded.
The study received no outside funding. The researchers had no disclosures.
FROM SLEEP MEDICINE
Premenopausal bilateral oophorectomy linked to later cognitive impairment
Women whose ovaries were surgically removed before the age of 46 had a higher risk of mild cognitive impairment (MCI) around 30 years later, compared with those who did not undergo bilateral oophorectomy, according to a population-based linkage study published in JAMA Network Open.
The findings suggest that “physicians treating women with premenopausal bilateral oophorectomy need to be aware of their patients’ risk of cognitive impairment or MCI and should consider implementing treatment-monitoring plans,” noted lead author Walter A. Rocca, MD, MPH, from the division of epidemiology, department of quantitative health sciences, at the Mayo Clinic, Rochester, Minn., and colleagues.
The results may particularly “help women at mean risk levels of ovarian cancer to better evaluate the risk-to-benefit ratio of undergoing bilateral oophorectomy prior to spontaneous menopause for the prevention of ovarian cancer,” they emphasized.
While the link between premenopausal bilateral oophorectomy and higher risk of cognitive impairment has been previously suggested, this new study “contributes valuable new data to a major public health importance issue and addresses a number of important shortcomings of existing literature,” Marios K. Georgakis, MD, PhD, and Eleni T. Petridou, MD, PhD, noted in an accompanying commentary.
“As bilateral oophorectomy is still a common procedure at least in well-resourced countries, the results of these studies should alert clinicians about its potential public health consequences. Given that the abrupt cessation of ovarian hormones might be accompanied by previously underestimated long-term adverse effects, treating physicians proposing the operation should weigh its benefits against potential long-term harmful effects, especially among women without an absolute indication,” noted Dr. Georgakis and Dr. Petridou, respectively from the Center for Genomic Medicine at Massachusetts General Hospital in Boston and the National and Kapodistrian University of Athens.
The case-control cross-sectional study used data from the Mayo Clinic Study of Aging (MCSA), a prospective, population-based study examining the prevalence, incidence, and risk factors of cognitive decline and MCI in a representative sample of women in Olmsted County, Minn. It included 2,732 women aged 50-89 years who participated in the MCSA from 2004 to 2019 and underwent a clinical evaluation and comprehensive cognitive testing comprising nine tests across four cognitive domains. Almost all of the subjects (98.4%) were White. The mean age at cognitive evaluation was 74 years, at which time 283 women (10.4%) were diagnosed with MCI (197 with amnestic and 86 with nonamnestic MCI).

Data from the Rochester Epidemiology Project medical record–linkage system showed that a total of 625 women (22.9%) had a history of bilateral oophorectomy. Of these, 161 underwent the procedure both before age 46 and before menopause; within this subgroup, 46 (28.6%) received oral conjugated equine estrogen (unopposed) and 95 (59.0%) received no estrogen therapy.
The study found that, compared with women who did not undergo bilateral oophorectomy, those who did so before age 46, but not those who did so at later ages, had statistically significantly increased odds of MCI (adjusted odds ratio, 2.21; P < .001). When type of MCI was examined, the increased risk was statistically significant both for nonamnestic MCI (aOR, 2.96; P < .001) and for amnestic MCI (aOR, 1.87; P = .03). The study also found no evidence that estrogen therapy was associated with decreased risk of MCI among women who underwent the procedure before age 46, with an aOR of 2.56 in those who received estrogen therapy and 2.05 in those who did not (P = .01 for both).
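For readers less familiar with odds ratios, the sketch below shows how an unadjusted odds ratio and its 95% confidence interval are computed from a 2×2 table. The counts are hypothetical, not taken from the study, and the paper’s estimates are adjusted figures from a multivariable model, so this is illustrative only:

```python
# Illustration of an unadjusted odds ratio from a 2x2 table.
# Counts are hypothetical; the study reports covariate-adjusted ORs.
import math

a, b = 30, 131     # oophorectomy before age 46: with MCI, without MCI (hypothetical)
c, d = 253, 2318   # no early oophorectomy: with MCI, without MCI (hypothetical)

or_ = (a / b) / (c / d)                # ratio of the odds of MCI in the two groups
se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of ln(OR)
lo, hi = (math.exp(math.log(or_) + z * se) for z in (-1.96, 1.96))
print(f"OR = {or_:.2f} (95% CI, {lo:.2f}-{hi:.2f})")
```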
Finally, in women who had bilateral oophorectomy before menopause and before age 50, the surgical indication for the procedure affected the association with MCI. Indications of either cancer or “no ovarian condition” (i.e., performed at the time of hysterectomy) were associated with no increased risk, whereas there was a statistically significantly increased risk associated with benign indications such as an adnexal mass, cyst, or endometriosis (aOR, 2.43; P = .003). “This is important,” noted the commentators, “because in many of those cases removal of both ovaries could be avoided.”
The study also found that women who had undergone bilateral oophorectomy had an increased frequency of cardiovascular risk factors, heart disease, and stroke at the time of their cognitive evaluation, compared with women who had not. “Additional research is needed to clarify the biological explanation of the association,” the investigators said.
The prevailing hypothesis for why premenopausal bilateral oophorectomy is associated with cognitive decline “is that the abrupt endocrine cessation of exposure to ovarian hormones accelerates the aging process,” the commentators noted. “Most important from a clinical perspective is whether these women would benefit from specific hormone replacement therapy schemes. Observational studies cannot reliably answer this question, and possibly it is time to rethink designing trials in specific groups of women who underwent bilateral oophorectomy before 46 years of age starting treatment immediately thereafter.”
In an interview, Dr. Georgakis elaborated on this point, saying that, while the Women’s Health Initiative clearly showed no benefit of hormone replacement therapy for preventing dementia, it recruited women who were aged 65 years or older and had therefore undergone menopause more than 10-15 years earlier. “A hypothesis suggests that a critical vulnerability window exists shortly after menopause during which hormone replacement therapy might be needed to ameliorate any elevated risk,” he said. “Thus, it might make sense to reconsider a trial focused on this group of premenopausal women, who need to undergo oophorectomy at a young age (<46 years). Early initiation would be important. Unfortunately, such a trial would be difficult to conduct, because these women would need to be followed up for very long periods, as cognitive decline usually does not occur before the age of 65.”
Asked to comment on the study, Meadow Good, DO, an ob.gyn., female pelvic medicine and reconstructive surgeon, and physician adviser for Winnie Palmer Hospital for Women & Babies in Orlando, said this study adds credibility to previous studies showing the cognitive risk associated with premenopausal bilateral oophorectomy. “The literature is now pointing to a need to refrain from elective bilateral oophorectomy in women less than 60,” she said in an interview. “It should not be common that a woman receives a bilateral oophorectomy before 60 for benign reasons.”
She added that cognition is not the only thing at stake. “Bilateral oophorectomy before the age of 60 has a higher risk of incident heart disease, stroke, lung cancer and total cancers,” she said, citing a prospective cohort study within the Nurses’ Health Study.
Dr. Rocca reported financial support from the Mayo Clinic Research Committee during the conduct of the study. One coauthor reported unrestricted grants from Biogen and consulting fees from Brain Protection outside the submitted work. No other disclosures were reported from the authors. Dr. Georgakis, Dr. Petridou, and Dr. Good reported no conflicts of interest. The study was funded by the National Institute on Aging. It also used resources of the Rochester Epidemiology Project medical record–linkage system, which is supported by the NIA, the Mayo Clinic Research Committee, and user fees. Dr. Rocca was partly funded by the Ralph S. and Beverley E. Caulkins Professorship of Neurodegenerative Diseases Research of the Mayo Clinic.
FROM JAMA NETWORK OPEN
Intranasal oxytocin for autism promising – then came the data
When parents of children with autism spectrum disorder (ASD) participating in the largest clinical trial of intranasal oxytocin to date came in for follow-up visits with investigators, they reported marked improvement in the children’s social functioning.
Kids who rarely communicated with their families began to interact more. Those who usually preferred to isolate themselves started joining their parents for meals. It all seemed so promising – until the data came in.
“Those sounded like real improvements to me, and it seemed like they increased over the period of the study,” lead investigator Linmarie Sikich, MD, an associate clinical professor of psychiatry with Duke University School of Medicine and the Duke Center for Autism and Brain Development, Durham, N.C., told this news organization. “Turns out it wasn’t oxytocin that was making that difference.”
Researchers found that after 24 weeks of daily treatment with intranasal oxytocin, there were no significant differences in social functioning between children who received active treatment and those in the placebo group.
The much-anticipated results were published online in The New England Journal of Medicine. To say that they are disappointing, Dr. Sikich said, is an understatement.
Increase in off-label use
Studies in mouse models of ASD and small trials in children produced conflicting results, although some associated intranasal oxytocin with modest improvements in social functioning. Some clinicians were already prescribing it off label.
On the basis of this research and early feedback from parents, Dr. Sikich and colleagues were hopeful.
However, the results of the rigorous, 5-year, $11.4 million randomized trial were negative. Yet parents were convinced their children improved during the study, and off-label prescribing of a treatment Dr. Sikich’s research says doesn’t work rose significantly. What’s next for oxytocin?
Known as the “love hormone,” oxytocin is a hormone and neurotransmitter synthesized primarily in the hypothalamus. It plays a role in childbirth and lactation and is also involved in the regulation of social functioning and emotions. Research suggests low oxytocin levels are associated with diminished social functioning, regardless of ASD status.
Its potential as an autism therapy for children has been under study for a decade. Some findings link its use to improvements in core deficits associated with ASD, including repetitive behaviors, fixated or restricted interest, and social communication. A study published in 2020 showed that the treatment improved symptoms in high-functioning adults with ASD.
These were mostly small studies and were underpowered to reliably detect an effect of the therapy on social functioning. They often involved only a single dose of oxytocin. Some studies showed improvements, but others did not.
Still, interest in the treatment grew. Physicians began prescribing it for children with ASD, and parents began buying products containing oxytocin on the internet. Researchers feared this off-label use was becoming widespread, despite inconclusive evidence of efficacy.
High hopes
With support from a National Institutes of Health grant, Dr. Sikich and her team designed a phase 2, multicenter, randomized, double-blind, placebo-controlled study to determine whether the use of oxytocin in children with ASD works and is safe.
The challenges began before they even enrolled a single child. A number of behavioral assessment tools are used to measure social function in ASD, but there is no consensus on which one is best.
A simple blood test could determine how much oxytocin from the nasal spray was absorbed into the blood, but identifying how much reached the brain would require fMRI, which is expensive and challenging to use in this study population. Then there was the acquisition of the drug itself.
The Food and Drug Administration has approved intravenous oxytocin for inducing labor. Intranasal oxytocin is not approved for any indication and isn’t available commercially in the United States. Patients or researchers must secure the drug from a manufacturer in a country where it is approved or order it from a U.S. pharmacy that is capable of compounding IV oxytocin into an intranasal formulation.
The pharmacy in Switzerland that Dr. Sikich planned to use couldn’t make enough for the study. Contracting with a compounding pharmacy in the United States was significantly more expensive and time-consuming, but it was the researchers’ only option.
“If it hadn’t been something we expected to have a major benefit, I think we would have given up the project at multiple points along the line due to all of these challenges,” said Dr. Sikich.
In August 2014, with all the pieces finally in place, researchers began enrolling children aged 3-17 years. The final cohort included 290 participants with ASD, 146 in the oxytocin group and 144 in the placebo group. Of these, 48% had minimal verbal fluency, and 52% had fluent verbal speech.
Participants received daily synthetic oxytocin or placebo via a nasal spray for 24 weeks. The daily oxytocin dose was 48 IU for the first 7 weeks. After that, the dosage could be titrated to a maximum of 80 IU/d. The mean maximal total daily dose of oxytocin throughout the study was 67.6 ± 16.9 IU.
‘It just didn’t work’
Both study groups showed improvement in social withdrawal beginning at 4 weeks and continuing throughout the trial, as measured by caregivers’ responses on the Aberrant Behavior Checklist Modified Social Withdrawal Subscale, the study’s primary outcome measure.
Sociability and social motivation also improved in both groups, as measured by the Pervasive Developmental Disorders Behavior Inventory and the Social Responsiveness Scale.
But by the end of the trial, the between-group difference in improvement of social function wasn’t significant (difference, –0.2 points; P = .61) after adjustment for age, verbal fluency, and baseline oxytocin level.
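To make the phrase “after adjustment for …” concrete, here is a minimal sketch of a covariate-adjusted group comparison on synthetic data. Every name and value in it is invented, and the trial’s actual statistical model may differ:

```python
# Sketch of a covariate-adjusted treatment comparison on synthetic data.
# All variable names and values are invented; this is not the trial's analysis code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 290
df = pd.DataFrame({
    "oxytocin": rng.integers(0, 2, n),        # 1 = active, 0 = placebo
    "age": rng.uniform(3, 17, n),
    "verbal_fluent": rng.integers(0, 2, n),   # 1 = fluent verbal speech
    "baseline_oxt": rng.normal(0.0, 1.0, n),  # baseline oxytocin level (arbitrary units)
})
# Synthetic change score with no true treatment effect, mimicking a null result
df["social_withdrawal_change"] = -4.0 + rng.normal(0.0, 5.0, n)

model = smf.ols(
    "social_withdrawal_change ~ oxytocin + age + verbal_fluent + baseline_oxt",
    data=df,
).fit()
# The 'oxytocin' coefficient is the adjusted between-group difference
print(model.params["oxytocin"], model.pvalues["oxytocin"])
```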
“We were so convinced that it would work,” Dr. Sikich said, “but it just didn’t.”
Parents, too, were convinced from their own observations that the therapy was working. At the trial’s conclusion, fewer than half of caregivers correctly guessed whether their child was in the treatment group or the placebo group.
A lot of developmental change can happen in a child over 6 months. It’s possible the improvements would have occurred regardless of the trial, Dr. Sikich said. Parents’ perceptions could also reflect a placebo effect: Their child was in a clinical trial of a drug they believed could improve social functioning, so in their minds, it did.
Caregivers received training in how to identify certain behavioral changes, which may have helped them spot an existing positive change they had previously overlooked. Or they may have worked with their child more intently as a result of their participation in the trial.
“People may start doing more things or doing them more intensively or purposefully, consciously or subconsciously, to try to help their child improve the skills or behaviors targeted by the active therapy in the study,” Dr. Sikich said. “These are things that might really help the child move forward which are completely separate from the medication being studied.”
The safety analysis offered more hopeful results. Only one serious adverse event from the treatment was reported: A 17-year-old participant taking a daily dose of 48 IU experienced a sedating effect while driving and had an accident.
Too soon to walk away?
Perhaps the most important takeaway from the study is that even if it’s safe, intranasal oxytocin as it is currently used doesn’t work and clinicians shouldn’t prescribe it, said Daniel Geschwind, MD, PhD, director of the University of California, Los Angeles (UCLA) Center for Autism Research, who penned a commentary on the study and discussed the findings with this news organization.
“This study shows that using oxytocin the way it’s used in the community right now is not helping anybody, so why put a child through that?” added Dr. Geschwind, who also is a professor of genetics, neurology, and psychiatry at UCLA.
The trial highlights areas that need to be addressed in order to improve research in the field, he said. Establishing a consensus process to measure social functioning and figuring out a better way to access intranasal oxytocin would lead to studies that are more conclusive, comparable, and less expensive. Dr. Sikich agrees.
Despite the findings, Dr. Geschwind and other autism researchers say it’s too soon to walk away from oxytocin altogether, although it may be time to change the approach to autism research.
“We have to take a page from the playbook of modern medicine in other areas and begin to recognize that these syndromes are incredibly heterogeneous,” Dr. Geschwind says. “We can surmise, although we don’t know, that there might be different biological forms of autism that have different pathways involved that are going to respond differently to different medications.”
Calling the researchers’ efforts “heroic,” Karen Parker, PhD, an associate professor and associate chair of psychiatry and behavioral sciences at Stanford (Calif.) University, says efficacy trials such as this one are critical. However, Dr. Parker said in an interview, there are a number of questions that the study didn’t address.
The majority of medication dispensed in a standard intranasal device is sprayed into the back of the throat. Regular blood tests confirmed that oxytocin was getting into participants’ systems, but, given how quickly oxytocin degrades in the blood, Dr. Parker said it’s hard to know just how much reached the brain.
It’s also unclear whether the results would have been different had the treatment been paired with behavioral therapy, an approach Dr. Parker suggests might benefit a subset of children with ASD.
A 2017 study from Dr. Parker’s lab found that children with ASD who had low baseline oxytocin levels derived greater benefit from synthetic oxytocin, something the new study failed to find. Still, Dr. Parker said, it’s possible oxytocin might increase social motivation and increase a child’s receptiveness to behavioral therapy.
“When you see a negative trial like this, it decreases enthusiasm for the therapy for autism in this context,” Dr. Parker said. “I hope people who are studying these syndromes will continue to explore oxytocin as a therapy.”
The study was funded by the Eunice Kennedy Shriver National Institute of Child Health and Human Development through the Autism Centers of Excellence Program and the Department of Psychiatry and Behavioral Sciences at Duke University. Full disclosures of the authors’ possible conflicts of interest are available online.
A version of this article first appeared on Medscape.com.
When parents of children with autism spectrum disorder (ASD) participating in the largest clinical trial of intranasal oxytocin to date came in for follow-up visits with investigators, they reported marked improvement in the children’s social functioning.
Kids who rarely communicated with their families began to interact more. Those who usually preferred to isolate themselves started joining their parents for meals. It all seemed so promising – until the data came in.
“Those sounded like real improvements to me, and it seemed like they increased over the period of the study,” lead investigator Linmarie Sikich, MD, an associate clinical professor of psychiatry with Duke University School of Medicine and the Duke Center for Autism and Brain Development, Durham, N.C., told this news organization. “Turns out it wasn’t oxytocin that was making that difference.”
Researchers found that after 24 weeks of daily treatment with intranasal oxytocin, there were no significant differences in social functioning between children who received active treatment and those in the placebo group.
The much-anticipated results were published online in The New England Journal of Medicine. To say that they are disappointing, Dr. Sikich said, is an understatement.
Increase in off-label use
Most studies in mouse models of ASD and small trials in children produced conflicting results, although there were modest improvements in social functioning associated with the use of intranasal oxytocin. Some clinicians were already prescribing it off label.
On the basis of this research and early feedback from parents of children, Dr. Sikich and colleagues were hopeful.
However, results from a rigorous, 5-year, $11.4 million randomized trial were negative. Yet, parents were convinced their child improved during the study, and there was a significant increase in off-label prescribing of a treatment her research says doesn’t work. What’s next for oxytocin?
Known as the “love hormone,” oxytocin is a neurotransmitter that is primarily synthesized in the hypothalamus. It plays a role in childbirth and lactation and is also involved in the regulation of social functioning and emotions. Research suggests low oxytocin levels are associated with diminished social functioning, regardless of ASD status.
Its potential as an autism therapy for children has been under study for a decade. Some findings link its use to improvements in core deficits associated with ASD, including repetitive behaviors, fixated or restricted interest, and social communication. A study published in 2020 showed that the treatment improved symptoms in high-functioning adults with ASD.
These were mostly small studies and were underpowered to reliably detect an effect of the therapy on social functioning. They often involved only a single dose of oxytocin. Some studies showed improvements, but others did not.
Still, interest in the treatment grew. Physicians began prescribing it for children with ASD, and parents began buying products containing oxytocin on the internet. Researchers feared this off-label use was becoming widespread, despite inconclusive evidence of efficacy.
High hopes
With support from a National Institutes of Health grant, Dr. Sikich and her team designed a phase 2, multicenter, randomized, double-blind, placebo-controlled study to determine whether the use of oxytocin in children with ASD works and is safe.
The challenges began before they even enrolled a single child. A number of behavioral assessment tools are used to measure social function in ASD, but there is no consensus on which one is best.
A simple blood test could determine how much oxytocin from the nasal spray was absorbed in the blood, but identifying how much made it to the brain would require fMRI, which is expensive and is challenging to use in this study population. Then there was the acquisition of the drug itself.
The Food and Drug Administration has approved intravenous oxytocin for inducing labor. Intranasal oxytocin is not approved for any indication and isn’t available commercially in the United States. Patients or researchers must secure the drug from a manufacturer in a country where it is approved or order it from a U.S. pharmacy that is capable of compounding IV oxytocin into an intranasal formulation.
The pharmacy in Switzerland Dr. Sikich planned to use couldn’t make enough for the study. Contracting with a compounding pharmacy in the United States was significantly more expensive and time consuming, but it was the researchers’ only option.
“If it hadn’t been something we expected to have a major benefit, I think we would have given up the project at multiple points along the line due to all of these challenges,” said Dr. Sikich.
In August 2014, with all the pieces finally in place, researchers began enrolling children aged 3-17 years. The final cohort included 290 participants with ASD, 146 in the oxytocin group and 144 in the placebo group. Of these, 48% had minimal verbal fluency, and 52% had fluent verbal speech.
Participants received daily synthetic oxytocin or placebo via a nasal spray for 24 weeks. The daily oxytocin dose was 48 IU for the first 7 weeks. After that, the dosage could be titrated to a maximum of 80 IU/d. The mean maximal total daily dose of oxytocin throughout the study was 67.6 ± 16.9 IU.
‘It just didn’t work’
Both study groups showed improvement in social withdrawal beginning at 4 weeks and continuing throughout the trial, as determined on the basis of caretakers’ responses on the Aberrant Behavior Checklist Modified Social Withdrawal Subscale, the study’s primary outcome measure.
Sociability and social motivation also improved in both groups, as measured by the Pervasive Developmental Disorders Behavior Inventory and the Social Responsiveness Scale.
But by the end of the trial, the difference between the groups in improvement of social function wasn’t significant (difference, -0.2 points; P = .61) after adjusting for age, verbal fluency, and baseline oxytocin level.
“We were so convinced that it would work,” Dr. Sikich said, “but it just didn’t.”
From observation, parents were also convinced the therapy was working. At the trial’s conclusion, fewer than half of caregivers correctly guessed whether their child was in the treatment group or the placebo group.
A lot of development changes can happen in a child over 6 months. It’s possible the improvements would have occurred regardless of the trial, Dr. Sikich said. Parents’ perceptions could also be a placebo effect. Their child was in a clinical trial of a drug they believed could improve social functioning, so in their mind, it did.
Caregivers received training in how to identify certain behavioral changes, which may have helped them spot an existing positive change they had previously overlooked. Or they may have worked with their child more intently as a result of their participation in the trial.
“People may start doing more things or doing them more intensively or purposefully, consciously or subconsciously, to try to help their child improve the skills or behaviors targeted by the active therapy in the study,” Dr. Sikich said. “These are things that might really help the child move forward which are completely separate from the medication being studied.”
The safety analysis offered more hopeful results. Only one serious adverse event from the treatment was reported: A 17-year-old participant taking a daily dose of 48 IU experienced a sedating effect while driving and had an accident.
Too soon to walk away?
Perhaps the most important take-away from the study is that even if it’s safe, intranasal oxytocin as it is currently used doesn’t work and clinicians shouldn’t prescribe it, said Daniel Geschwind, MD, PhD, director of the University of California, Los Angeles (UCLA) Center for Autism Research, who penned a commentary on the study and discussed the findings with this news organization.
“This study shows that using oxytocin the way it’s used in the community right now is not helping anybody, so why put a child through that?” added Dr. Geschwind, who also is a professor of genetics, neurology, and psychiatry at UCLA.
The trial highlights areas that need to be addressed in order to improve research in the field, he said. Establishing a consensus process to measure social functioning and figuring out a better way to access intranasal oxytocin would lead to studies that are more conclusive, comparable, and less expensive. Dr. Sikich agrees.
Despite the findings, Dr. Geschwind and other autism researchers say it’s too soon to walk away from oxytocin altogether, although it may be time to change the approach to autism research.
“We have to take a page from the playbook of modern medicine in other areas and begin to recognize that these syndromes are incredibly heterogeneous,” Dr. Geschwind says. “We can surmise, although we don’t know, that there might be different biological forms of autism that have different pathways involved that are going to respond differently to different medications.”
Calling the researchers’ efforts “heroic,” Karen Parker, PhD, an associate professor and associate chair of psychiatry and behavioral sciences at Stanford (Calif.) University, says efficacy trials such as this one are critical. However, Dr. Parker said in an interview, there are a number of questions that the study didn’t address.
The majority of medication dispensed in a standard intranasal device is sprayed into the back of the throat. Regular blood tests confirmed that oxytocin was getting into participants’ system, but, given how quickly oxytocin degrades in the blood, Dr. Parker said it’s hard to know just how much reached the brain.
It’s also unclear whether the results would have been different had the treatment been paired with behavioral therapy, an approach Dr. Parker suggests might benefit a subset of children with ASD.
A 2017 study from Dr. Parker’s lab found that children with ASD whose use of oxytocin at baseline was low derived greater benefit from synthetic oxytocin, something the new study failed to find. Still, Dr. Parker said, it’s possible oxytocin might increase social motivation and increase a child’s receptiveness to behavioral therapy.
“When you see a negative trial like this, it decreases enthusiasm for the therapy for autism in this context,” Dr. Parker said. “I hope people who are studying these syndromes will continue to explore oxytocin as a therapy.”
The study was funded by the Eunice Kennedy Shriver National Institute of Child Health and Human Development through the Autism Centers of Excellence Program and the Department of Psychiatry and Behavioral Sciences at Duke University. Full disclosures of the authors’ possible conflicts of interest are available online.
A version of this article first appeared on Medscape.com.
Distance learning may cause convergence insufficiency
NEW ORLEANS – The increased use of digital screens for school during the COVID-19 pandemic may be causing convergence insufficiency in children, researchers say.
Although the long-term implications for current schoolchildren are not clear, convergence insufficiency sometimes persists for a lifetime, said Kammi Gunton, MD, interim chief of pediatric ophthalmology and strabismus at Wills Eye Hospital, Philadelphia.
“It’s important, if we use digital technology for education, that we are aware that it might contribute to increased eye symptoms in children,” Dr. Gunton told this news organization.
Dr. Gunton’s colleague, Jordan Hamburger, an MD candidate at Sidney Kimmel Medical College, Philadelphia, presented the finding at the American Academy of Ophthalmology 2021 Annual Meeting.
Convergence insufficiency is an impairment of binocularity. Symptoms include headaches while reading, words that seem to move around the page, blurriness, diplopia, and eye fatigue. It can be treated with eye exercises, prism glasses, or, rarely, surgery.
“We have some kids who improve with either time or maturity, then we have other patients who suffer from it for their entire lives,” Dr. Gunton said.
Previous research has linked the use of digital screens to convergence insufficiency, so when many schools shifted to distance learning for the pandemic, Dr. Gunton and her colleagues wanted to see whether it would have this effect on the students’ eyes.
They surveyed 110 healthy schoolchildren and adolescent students regarding eye symptoms before and after a day of virtual school. The mean age of the participants was 14 years (range, 10-17 years). The participants spent an average of 6.96 hours per day in virtual school. Forty-one percent also attended school in person part time. These students filled out the survey on days when they were in virtual school.
The participants answered questions on the Convergence Insufficiency Symptom Survey (CISS). The survey consists of 15 questions about eye complaints. On each question, the students rated symptoms from 0 to 4, with 4 indicating a severe symptom.
The average sum of the CISS scores rose from 5.17 before school to 9.82 after school, a statistically significant change (P < .001). Sixty-one percent of the participants reported an increase in convergence insufficiency symptoms.
Seventeen percent scored a total of at least 16, which is the threshold score considered suggestive of convergence insufficiency.
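For readers who want to see the arithmetic behind those scores, here is a minimal sketch of CISS tallying in Python; the 15-item count, the 0-4 ratings, and the threshold of 16 are taken from the survey as described above, while the item responses themselves are hypothetical.

```python
# Minimal sketch of CISS scoring as described above: 15 items, each
# rated 0 (never) to 4 (always); the total score is the simple sum,
# and a total of 16 or more is considered suggestive of convergence
# insufficiency. The responses below are hypothetical.

def ciss_total(responses):
    """Sum a 15-item CISS questionnaire; each item must be rated 0-4."""
    if len(responses) != 15:
        raise ValueError("the CISS has exactly 15 items")
    if any(not 0 <= r <= 4 for r in responses):
        raise ValueError("each item is rated 0-4")
    return sum(responses)

# Hypothetical before- and after-school responses for one student.
before = [0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0]  # sums to 5
after  = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0]  # sums to 11

for label, resp in (("before school", before), ("after school", after)):
    total = ciss_total(resp)
    flag = " (suggestive of convergence insufficiency)" if total >= 16 else ""
    print(f"{label}: CISS total = {total}{flag}")
```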
The researchers also found that, on average, the more hours each student spent in virtual school, the higher their CISS scores.
This makes sense, because reading requires convergence, Dr. Gunton said. The same problem might occur in traditional school if the students were looking at books all day instead of focusing on objects at various distances in their classrooms, such as the teacher or the whiteboard. “So, in the past, if you read a book, maybe you wouldn’t read for several hours, but now we’re asking children during virtual learning to stay on a device with the camera on,” she said.
Previous research has shown that people blink less when reading or using electronic devices, probably because of their increased concentration. This might explain symptoms such as burning and itching. Fifty-three percent of the students reported an increase in asthenopia symptoms.
The researchers would have liked to have compared the students in virtual school to a matched group of students in traditional school. However, almost all students were enrolled in virtual school when the study was conducted, making such a control difficult.
Although previous research has related virtual learning to myopia, as reported by this news organization, this study did not investigate myopia, and the researchers do not believe that convergence insufficiency causes myopia or vice versa.
Parents can help prevent convergence insufficiency during school by reminding their children to take breaks, Dr. Gunton said. She recommends the 20/20/20 rule: After 20 minutes of work that involves looking at objects nearby, students should take a 20-second break and look at something 20 feet away.
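As a purely illustrative aside, the 20/20/20 rule is simple enough to encode as a reminder timer. The sketch below is not from the study; it merely assumes the intervals Dr. Gunton describes: 20 minutes of near work, then a 20-second break looking about 20 feet away.

```python
# Illustrative 20/20/20 reminder timer (not from the study): after
# every 20 minutes of near work, prompt a 20-second break spent
# looking at something about 20 feet away.
import time

NEAR_WORK_SECONDS = 20 * 60  # 20 minutes of near work
BREAK_SECONDS = 20           # 20-second break

def reminder_loop(cycles=3):
    """Run a few 20/20/20 cycles, printing prompts to the console."""
    for _ in range(cycles):
        time.sleep(NEAR_WORK_SECONDS)
        print("Break time: look at something about 20 feet away.")
        time.sleep(BREAK_SECONDS)
        print("Back to work.")

if __name__ == "__main__":
    reminder_loop()
```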
“I also think the take-home message is for parents to ask students if they’re having symptoms,” she said, “and if they hear complaints while kids are on the computers, to have them see an eye doctor and have an evaluation.”
Stephen Lipsky, MD, who wasn’t involved in the study, said he is seeing more cases of eye strain at Children’s Healthcare of Atlanta, where he is a consulting ophthalmologist.
“The study is very valuable in that it shines a light on the fact that these children do have symptoms, such as asthenopia or convergence insufficiency,” he told this news organization. “But I’m optimistic that with a return to more traditional learning, we will return to the more traditional incidence of these problems.”
Dr. Gunton and Dr. Lipsky have disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM AAO 2021
Moms’ cannabis use in pregnancy tied to anxiety and hyperactivity in offspring
Mothers who use cannabis during pregnancy risk disrupting immune gene networks in the placenta and potentially increasing the risk of anxiety and hyperactivity in their children.
These findings emerged from a study led by Yasmin Hurd, PhD, a professor of psychiatry and director of the Addiction Institute at the Icahn School of Medicine at Mount Sinai, New York, and Yoko Nomura, PhD, a professor of behavioral neuroscience at Queens College, City University of New York, that was published online in Proceedings of the National Academy of Sciences.
The analysis assessed the effects of gestational maternal cannabis use on psychosocial and physiological measures in young children as well as its potentially immunomodulatory effect on the in utero environment as reflected in the placental transcriptome.
Participants were drawn from a larger cohort in a study launched in 2012; the investigators evaluated offspring aged 3-6 years for hair hormone levels, neurobehavioral traits on the Behavioral Assessment System for Children survey, and heart rate variability (HRV) at rest and during auditory startle.
The cohort consisted of 322 mother-child dyads; children with prenatal exposure to cannabis were compared with those having no exposure. There were 251 non–cannabis-using mothers and 71 cannabis-using mothers, with mean maternal ages in the two groups of 28.46 years and 25.91 years, respectively. The mothers gave birth at Mount Sinai, and they and their children were assessed annually at affiliated medical centers in Mount Sinai’s catchment area.
For a subset of children with behavioral assessments, placental specimens collected at birth were processed for RNA sequencing.
Among the findings:
- Maternal cannabis use was associated with younger maternal and paternal age; more single-mother pregnancies; higher levels of state anxiety, trait anxiety, depression, and cigarette smoking; and African American race.
- Hair hormone analysis revealed increased cortisol levels in the children of cannabis-using mothers; the elevated cortisol was associated with greater anxiety, aggression, and hyperactivity.
- Affected children showed a reduction in the high-frequency component of HRV at baseline, reflecting reduced vagal tone.
- In the placenta, there was reduced expression of many genes involved in immune system function. These included genes for type I interferon, neutrophil, and cytokine-signaling pathways.
Several of these genes formed coexpression networks that correlated with child anxiety and hyperactivity.
The principal active component of cannabis, tetrahydrocannabinol (THC), targets the endocannabinoid system in placental tissue and the developing brain, the authors noted. Exposure during pregnancy is associated with a range of adverse outcomes from fetal growth restriction to low birth weight and preterm birth.
“There are cannabinoid receptors on immune cells, and it is known that cannabinoids can alter immune function, which is important for maintaining maternal tolerance and protecting the fetus,” Dr. Hurd said. “It’s not surprising that something that affects the immune cells can have an impact on the developing fetus.”
“Overall, our findings reveal a relationship between [maternal cannabis use] and immune response gene networks in the placenta as a potential mediator of risk for anxiety-related problems in early childhood,” Dr. Hurd and colleagues wrote, adding that the results have significant implications for defining mental health issues in the children gestated by cannabis-smoking mothers.
Their results align with previous research indicating a greater risk for psychiatric illness in children with prenatal cannabis exposure from maternal use.
“While data are pretty limited in this realm, there are other studies that demonstrate a relationship between early child developmental and behavioral measures and cannabis use during pregnancy,” Camille Hoffman, MD, MSc, a high-risk obstetrics specialist and an associate professor at the University of Colorado at Denver, Aurora, said in an interview. “Our research group found children exposed to cannabis in utero at 10 weeks’ gestation and beyond were less interactive and more withdrawn than children who were not exposed.”
And THC remains in maternal breast milk even 6 weeks after usage stops.
The long-term effects of prenatal cannabis exposure remain to be determined and it is unknown whether the effects of gestational THC might attenuate as a child grows older. “We use early childhood measures in research as a proxy for the later development of diagnosed mental health conditions or behavioral problems,” Dr. Hoffman explained. “We know when we do this that not every child with an abnormal score early will go on to develop an actual condition. Fortunately, or unfortunately, other factors and exposures during childhood can change the trajectory for the better or worse.”
According to Dr. Hurd, child development is a dynamic process and epigenetic events in utero need not be deterministic. “The important thing is to identify children at risk early and to be able to go in and try to improve the environment they’re being raised in – not in terms of impoverishment but in terms of positive nurturing and giving the mother and family support.”
At the prenatal level, what’s the best advice for cannabis-using mothers-to-be? “If a woman doesn’t know she’s pregnant and has been using cannabis, taking extra choline for the remainder of the pregnancy can help buffer the potential negative impact of the cannabis exposure,” Dr. Hoffman said. The Food and Drug Administration and the American Medical Association recommend a dose of 550 mg daily. “The same is true for alcohol, which we know is also very bad for fetal brain development. This is not to say go ahead and use these substances and just take choline. The choline is more to try and salvage damage to the fetal brain that may have already occurred.”
This study was supported by the National Institute of Mental Health and the National Institute on Drug Abuse. The authors declared no competing interests. Dr. Hoffman disclosed no conflicts of interest with respect to her comments.
FROM PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES
Erenumab beats topiramate for migraine in first head-to-head trial
Erenumab is more effective and better tolerated than topiramate for migraine prevention, according to data from almost 800 patients in the first head-to-head trial of its kind.
The findings suggest that erenumab may help overcome longstanding issues with migraine medication adherence, and additional supportive data may alter treatment sequencing, reported lead author Uwe Reuter, MD, professor at Charité Universitätsmedizin Berlin, and colleagues.
“So far, no study has been done in order to compare the efficacy of a monoclonal antibody targeting the CGRP pathway to that of a standard of care oral preventive drug,” the investigators wrote in Cephalalgia.
The phase 4 HER-MES trial aimed to address this knowledge gap by enrolling 777 adult patients with a history of migraine. All patients reported migraine with or without aura for at least 1 year prior to screening. At baseline, most patients (65%) reported 8-14 migraine days per month, followed by 4-7 days (24.0%) and at least 15 days (11.0%). No patients had previously received topiramate or a CGRP-targeting agent.
“HER-MES includes a broad migraine population with two-thirds of the patients in the high-frequency migraine spectrum,” the investigators noted. “Despite a mean disease duration of about 20 years, almost 60% of the patients had not received previous prophylactic treatment, which underlines the long-standing problem of undertreatment in migraine.”
The trial had a double-dummy design; patients were randomized in a 1:1 ratio to receive either subcutaneous erenumab (70 or 140 mg/month) plus oral placebo, or oral topiramate (50-100 mg/day) plus subcutaneous placebo. The topiramate dose was uptitrated over the first 6 weeks. Treatments were given for a total of 24 weeks or until discontinuation due to adverse events, which was the primary endpoint. The secondary endpoint was efficacy over months 4-6, defined as at least 50% reduction in monthly migraine days, compared with baseline. Other patient-reported outcomes were also evaluated.
After 24 weeks, 95.1% of patients were still enrolled in the trial. Discontinuations due to adverse events were almost four times as common in the topiramate group as in the erenumab group (38.9% vs. 10.6%; odds ratio [OR], 0.19; 95% confidence interval [CI], 0.13-0.27; P < .001). Efficacy findings followed suit: 55.4% of patients in the erenumab group reported at least a 50% reduction in monthly migraine days, compared with 31.2% of patients in the topiramate group (OR, 2.76; 95% CI, 2.06-3.71; P < .001).
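To see how those odds ratios line up with the reported percentages, here is a quick back-of-the-envelope check in Python. It recomputes crude, unadjusted ORs from the published proportions alone; the trial reported ORs from an adjusted model, so small differences from these values are expected.

```python
# Back-of-the-envelope check of the reported odds ratios, recomputed
# from the published percentages alone. The trial's ORs come from an
# adjusted model, so small differences from these crude values are expected.

def odds(p):
    """Convert a proportion to odds."""
    return p / (1.0 - p)

# Discontinuation due to adverse events: erenumab 10.6% vs. topiramate 38.9%.
or_discontinuation = odds(0.106) / odds(0.389)
print(f"crude OR, discontinuation: {or_discontinuation:.2f}")  # ~0.19

# >=50% reduction in monthly migraine days: erenumab 55.4% vs. topiramate 31.2%.
or_response = odds(0.554) / odds(0.312)
print(f"crude OR, 50% responder: {or_response:.2f}")  # ~2.74 vs. reported 2.76
```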
Erenumab also significantly improved monthly migraine days, Headache Impact Test (HIT-6) scores, and Short Form-36 Health Survey version 2 (SF-36v2) scores, including the physical and mental components (P < .001 for all).
Safety profiles aligned with previous findings.
“Compared to topiramate, treatment with erenumab has a superior tolerability profile and a significantly higher efficacy,” the investigators concluded. “HER-MES supports the potential of erenumab in overcoming issues of low adherence in clinical practice observed with topiramate, lessening migraine burden, and improving quality of life in a broad migraine population.”
Superior tolerability
Commenting on the study, Alan Rapoport, MD, clinical professor of neurology at the University of California, Los Angeles, and editor-in-chief of Neurology Reviews, said this is “a very important, very well conducted trial that documents what many of us already suspected: erenumab clearly has better tolerability than topiramate as well as better efficacy.”
Dr. Rapoport, a past president of the International Headache Society, said the study highlights an area of unmet need in neurology practice.
“Despite most patients in the trial having chronic headaches for 20 years, 60% of them had never received preventive treatment,” he said, noting that this reflects current practice in the United States.
Dr. Rapoport said primary care providers in the United States prescribe preventive migraine medications to 10%-15% of eligible patients. Prescribing rates for general neurologists are slightly higher, he said, ranging from 35% to 40%, while headache specialists prescribe 70%-90% of the time.
“How can we improve this situation?” Dr. Rapoport asked. “For years we have tried to improve it with education, but we need to do a better job. We need to educate our primary care physicians in more practical ways. We have to teach them how to make a diagnosis of high frequency migraine and chronic migraine and strongly suggest that those patients be put on appropriate preventive medications.”
Barriers to care may be systemic, according to Dr. Rapoport.
“One issue in the U.S. is that patients with commercial insurance are almost always required to fail two or three categories of older oral preventive migraine medications before they can get a monoclonal antibody or gepants for prevention,” he said. “It would be good if we could change that system so that patients that absolutely need the better tolerated, more effective preventive medications could get them sooner rather than later. This will help them feel and function better, with less pain, and eventually bring down the cost of migraine therapy.”
While Dr. Reuter and colleagues concluded that revised treatment sequencing may be warranted after more trials show similar results, Dr. Rapoport suggested that “this was such a large, well-performed, 6-month study with few dropouts, that further trials to confirm these findings are unnecessary, in my opinion.”
The HER-MES trial was funded by Novartis. Dr. Reuter and colleagues disclosed additional relationships with Eli Lilly, Teva Pharmaceutical, Allergan, and others. Dr. Rapoport was involved in early topiramate trials for migraine prevention and is a speaker for Amgen.
, according to data from almost 800 patients in the first head-to-head trial of its kind.
The findings suggest that erenumab may help overcome longstanding issues with migraine medication adherence, and additional supportive data may alter treatment sequencing, reported lead author Uwe Reuter, MD, professor at Charité Universitätsmedizin Berlin, and colleagues.
“So far, no study has been done in order to compare the efficacy of a monoclonal antibody targeting the CGRP pathway to that of a standard of care oral preventive drug,” the investigators wrote in Cephalalgia.
The phase 4 HER-MES trial aimed to address this knowledge gap by enrolling 777 adult patients with a history of migraine. All patients reported migraine with or without aura for at least 1 year prior to screening. At baseline, most patients (65%) reported 8-14 migraine days per months, followed by 4-7 days (24.0%), and at least 15 days (11.0%). No patients had previously received topiramate or a CGRP-targeting agent.
“HER-MES includes a broad migraine population with two-thirds of the patients in the high-frequency migraine spectrum,” the investigators noted. “Despite a mean disease duration of about 20 years, almost 60% of the patients had not received previous prophylactic treatment, which underlines the long-standing problem of undertreatment in migraine.”
The trial had a double-dummy design; patients were randomized in a 1:1 ratio to receive either subcutaneous erenumab (70 or 140 mg/month) plus oral placebo, or oral topiramate (50-100 mg/day) plus subcutaneous placebo. The topiramate dose was uptitrated over the first 6 weeks. Treatments were given for a total of 24 weeks or until discontinuation due to adverse events, which was the primary endpoint. The secondary endpoint was efficacy over months 4-6, defined as at least 50% reduction in monthly migraine days, compared with baseline. Other patient-reported outcomes were also evaluated.
After 24 weeks, 95.1% of patients were still enrolled in the trial. Discontinuations due to adverse events were almost four times as common in the topiramate group than the erenumab group (38.9% vs. 10.6%; odds ratio [OR], 0.19; confidence interval, 0.13-0.27; P less than .001). Efficacy findings followed suit, with 55.4% of patients in the erenumab group reporting at least 50% reduction in monthly migraine days, compared with 31.2% of patients in the topiramate group (OR, 2.76; 95% CI, 2.06-3.71; P less than.001).
Erenumab significantly improved monthly migraine days, headache impact test (HIT-6) scores, and short form health survey version (SF-35v2) scores, including physical and mental components (P less than .001 for all).
Safety profiles aligned with previous findings.
“Compared to topiramate, treatment with erenumab has a superior tolerability profile and a significantly higher efficacy,” the investigators concluded. “HER-MES supports the potential of erenumab in overcoming issues of low adherence in clinical practice observed with topiramate, lessening migraine burden, and improving quality of life in a broad migraine population.”
Superior tolerability
Commenting on the study, Alan Rapoport, MD, clinical professor of neurology at the University of California, Los Angeles, and editor-in-chief of Neurology Reviews, said this is “a very important, very well conducted trial that documents what many of us already suspected; erenumab clearly has better tolerability than topiramate as well as better efficacy.”
Dr. Rapoport, a past president of the International Headache Society, said the study highlights an area of unmet need in neurology practice.
“Despite most patients in the trial having chronic headaches for 20 years, 60% of them had never received preventive treatment,” he said, noting that this reflects current practice in the United States.
Dr. Rapoport said primary care providers in the United States prescribe preventive migraine medications to 10%-15% of eligible patients. Prescribing rates for general neurologists are slightly higher, he said, ranging from 35% to 40%, while headache specialists prescribe 70%-90% of the time.
“How can we improve this situation?” Dr. Rapoport asked. “For years we have tried to improve it with education, but we need to do a better job. We need to educate our primary care physicians in more practical ways. We have to teach them how to make a diagnosis of high frequency migraine and chronic migraine and strongly suggest that those patients be put on appropriate preventive medications.”
Barriers to care may be systemic, according to Dr. Rapoport.
“One issue in the U.S. is that patients with commercial insurance are almost always required to fail two or three categories of older oral preventive migraine medications before they can get a monoclonal antibody or gepants for prevention,” he said. “It would be good if we could change that system so that patients that absolutely need the better tolerated, more effective preventive medications could get them sooner rather than later. This will help them feel and function better, with less pain, and eventually bring down the cost of migraine therapy.”
While Dr. Reuter and colleagues concluded that revised treatment sequencing may be warranted after more trials show similar results, Dr. Rapoport suggested that “this was such a large, well-performed, 6-month study with few dropouts, that further trials to confirm these findings are unnecessary, in my opinion.”
The HER-MES trial was funded by Novartis. Dr. Reuter and colleagues disclosed additional relationships with Eli Lilly, Teva Pharmaceutical, Allergan, and others. Dr. Rapoport was involved in early topiramate trials for prevention and migraine, and is a speaker for Amgen.
, according to data from almost 800 patients in the first head-to-head trial of its kind.
The findings suggest that erenumab may help overcome longstanding issues with migraine medication adherence, and additional supportive data may alter treatment sequencing, reported lead author Uwe Reuter, MD, professor at Charité Universitätsmedizin Berlin, and colleagues.
“So far, no study has been done in order to compare the efficacy of a monoclonal antibody targeting the CGRP pathway to that of a standard of care oral preventive drug,” the investigators wrote in Cephalalgia.
The phase 4 HER-MES trial aimed to address this knowledge gap by enrolling 777 adult patients with a history of migraine. All patients reported migraine with or without aura for at least 1 year prior to screening. At baseline, most patients (65%) reported 8-14 migraine days per months, followed by 4-7 days (24.0%), and at least 15 days (11.0%). No patients had previously received topiramate or a CGRP-targeting agent.
“HER-MES includes a broad migraine population with two-thirds of the patients in the high-frequency migraine spectrum,” the investigators noted. “Despite a mean disease duration of about 20 years, almost 60% of the patients had not received previous prophylactic treatment, which underlines the long-standing problem of undertreatment in migraine.”
The trial had a double-dummy design; patients were randomized in a 1:1 ratio to receive either subcutaneous erenumab (70 or 140 mg/month) plus oral placebo, or oral topiramate (50-100 mg/day) plus subcutaneous placebo. The topiramate dose was uptitrated over the first 6 weeks. Treatments were given for a total of 24 weeks or until discontinuation due to adverse events, which was the primary endpoint. The secondary endpoint was efficacy over months 4-6, defined as at least 50% reduction in monthly migraine days, compared with baseline. Other patient-reported outcomes were also evaluated.
After 24 weeks, 95.1% of patients were still enrolled in the trial. Discontinuations due to adverse events were almost four times as common in the topiramate group than the erenumab group (38.9% vs. 10.6%; odds ratio [OR], 0.19; confidence interval, 0.13-0.27; P less than .001). Efficacy findings followed suit, with 55.4% of patients in the erenumab group reporting at least 50% reduction in monthly migraine days, compared with 31.2% of patients in the topiramate group (OR, 2.76; 95% CI, 2.06-3.71; P less than.001).
Erenumab significantly improved monthly migraine days, headache impact test (HIT-6) scores, and short form health survey version (SF-35v2) scores, including physical and mental components (P less than .001 for all).
Safety profiles aligned with previous findings.
“Compared to topiramate, treatment with erenumab has a superior tolerability profile and a significantly higher efficacy,” the investigators concluded. “HER-MES supports the potential of erenumab in overcoming issues of low adherence in clinical practice observed with topiramate, lessening migraine burden, and improving quality of life in a broad migraine population.”
Superior tolerability
Commenting on the study, Alan Rapoport, MD, clinical professor of neurology at the University of California, Los Angeles, and editor-in-chief of Neurology Reviews, said this is “a very important, very well conducted trial that documents what many of us already suspected; erenumab clearly has better tolerability than topiramate as well as better efficacy.”
Dr. Rapoport, a past president of the International Headache Society, said the study highlights an area of unmet need in neurology practice.
“Despite most patients in the trial having chronic headaches for 20 years, 60% of them had never received preventive treatment,” he said, noting that this reflects current practice in the United States.
Dr. Rapoport said primary care providers in the United States prescribe preventive migraine medications to 10%-15% of eligible patients. Prescribing rates for general neurologists are slightly higher, he said, ranging from 35% to 40%, while headache specialists prescribe 70%-90% of the time.
“How can we improve this situation?” Dr. Rapoport asked. “For years we have tried to improve it with education, but we need to do a better job. We need to educate our primary care physicians in more practical ways. We have to teach them how to make a diagnosis of high frequency migraine and chronic migraine and strongly suggest that those patients be put on appropriate preventive medications.”
Barriers to care may be systemic, according to Dr. Rapoport.
“One issue in the U.S. is that patients with commercial insurance are almost always required to fail two or three categories of older oral preventive migraine medications before they can get a monoclonal antibody or gepants for prevention,” he said. “It would be good if we could change that system so that patients that absolutely need the better tolerated, more effective preventive medications could get them sooner rather than later. This will help them feel and function better, with less pain, and eventually bring down the cost of migraine therapy.”
While Dr. Reuter and colleagues concluded that revised treatment sequencing may be warranted after more trials show similar results, Dr. Rapoport suggested that “this was such a large, well-performed, 6-month study with few dropouts, that further trials to confirm these findings are unnecessary, in my opinion.”
The HER-MES trial was funded by Novartis. Dr. Reuter and colleagues disclosed additional relationships with Eli Lilly, Teva Pharmaceutical, Allergan, and others. Dr. Rapoport was involved in early trials of topiramate for migraine prevention, and is a speaker for Amgen.
FROM CEPHALALGIA
Deer, COVID, how?
Usually humans cannot get close enough to a deer to really be face-to-face, so it’s easy to question how on Earth deer are contracting COVID-19. Well, stranger things have happened, and honestly, we’ve just stopped questioning most of them.
Exhibit A comes to us from a Penn State University study: Eighty percent of deer sampled in Iowa in December 2020 and January 2021 – as part of the state’s chronic wasting disease surveillance program – were found to be positive for COVID-19.
A statement from the university said that “white-tailed deer may be a reservoir for the virus to continually circulate and raise concerns about the emergence of new strains that may prove a threat to wildlife and, possibly, to humans.” The investigators also suggested that deer probably caught the virus from humans and then transmitted it to other deer.
If you or someone you know is a hunter or a white-tailed deer, it’s best to proceed with caution. There’s no evidence that COVID-19 has jumped from deer to humans, but hunters should wear masks and gloves while working with deer, worrying not just about the deer’s face, but also … you know, the gastrointestinal parts, Robert Salata, MD, of University Hospitals Cleveland Medical Center, told Syracuse.com. It also shouldn’t be too risky to eat venison, he said, just make sure the meat is cooked thoroughly.
The more you know!
The neurological super powers of grandma are real
What is it about grandmothers that makes them seem almost magical at times? They somehow always know how you feel. And they can almost always tell when something is wrong. They also seem to be the biggest ally a child will have against his or her parents.
So what makes these super matriarchs? The answer is in the brain.
Apparently there’s a function in the brains of grandmothers geared toward “emotional empathy.” James Rilling, PhD, of Emory University, lead author of a recent study focused on looking at the brain function of grandmothers, suggested that they’re neurologically tapped into feeling how their grandchildren feel: “If their grandchild is smiling, they’re feeling the child’s joy. And if their grandchild is crying, they’re feeling the child’s pain and distress.”
And then there’s the cute factor. Never underestimate a child’s ability to manipulate his or her grandmother’s brain.
So how do the researchers know this? Functional MRI showed more brain activity in the parts of the brain that deal with emotional empathy and movement in the participating grandmas when shown pictures of their grandchildren. Images of their own adult children lit up areas more associated with cognitive empathy. So less emotional and more mental/logical understanding.
Kids, don’t tell Mom about the secret midnight snacks with grandma. She wouldn’t get it.
Then there’s the grandmother hypothesis, which suggests that women tend to live longer to provide some kind of evolutionary benefit to their children and grandchildren. Evidence also exists that children with positive engagement from their grandmothers tend to have better social and academic outcomes, behavior, and physical health.
A lot of credit on how children turn out, of course, goes to parents, but more can be said about grandmas. Don’t let the age and freshly baked cookies fool you. They have neurologic superpowers within.
Brain cleanup on aisle 5
You’ve got your local grocery store down. You know the ins and outs; you know where everything is. Last week you did your trip in record time. This week, however, you have to stop at a different store. Same chain, but a different location. You stroll in, confidently walk toward the first aisle for your fruits and veggies, and ... it’s all ice cream. Oops.
There’s a lot we don’t understand about the brain, including how it remembers familiar environments to avoid confusion. Or why it fails to do so, as with our grocery store example. However, thanks to a study from the University of Arizona, we may have an answer.
For the experiment, a group of participants watched a video tour of three virtual cities. Those cities were very similar, being laid out in basically identical fashion. Stores could be found in the same places, but the identity of those stores varied. Some stores were in all three cities, some were in two, and some were unique. Participants were asked to memorize the layouts, and those who got things more than 80% correct ran through the test again, only this time their brain activity was monitored through MRI.
In general, brain activity was similar for the participants; after all, they were recalling similar environments. However, when asked about stores that appeared in multiple cities, brain activity varied dramatically. This indicated to the researchers that the brain was recalling shared stores as if they were more dissimilar than two completely disparate and unique stores, a concept often known to brain scientists as “repulsion.” It also indicates that the memories regarding shared environments are stored in the prefrontal cortex, not the hippocampus, which typically handles memory.
The researchers plan to apply this information to questions about diseases such as Alzheimer’s, so the next time you get turned around in a weirdly unfamiliar grocery store, just think: “It’s okay, I’m helping to solve a terrible brain disease.”
The real endgame: Friction is the winner
Spoiler alert! If you haven’t seen “Avengers: Infinity War” yet, we’re about to ruin it for you.
For those still with us, here’s the spoiler: Thanos would not have been able to snap his fingers while wearing the Infinity Gauntlet.
Saad Bhamla, PhD, of Georgia Tech's School of Chemical and Biomolecular Engineering, and several colleagues had been studying powerful, ultrafast motions in living organisms before the movie came out in 2018, and the finger-snapping scene got them wondering.
Being scientists of course, they had no choice. They got out their high-speed imaging equipment, automated image processing software, and dynamic force sensors and analyzed finger snaps, paying close attention to friction by covering fingers with “different materials, including metallic thimbles to simulate the effects of trying to snap while wearing a metallic gauntlet, much like Thanos,” according to a statement on Eurekalert.
With finger snaps, it’s all about the rotational velocity. The angular acceleration involved is the fastest ever measured in a human, with a professional baseball pitcher’s throwing arm a distant second.
Dr. Bhamla’s reaction to their work explains why scientists are the ones doing science. “When I first saw the data, I jumped out of my chair,” he said in the written statement.
Rotational velocities dropped dramatically when the friction-reducing thimbles were used, so there was no snap. Which means that billions and billions of fictional lives could have been saved if the filmmakers had just talked to the right scientist.
That scientist, clearly, is Dr. Bhamla, who said that “this is the only scientific project in my lab in which we could snap our fingers and get data.”
Coffee or tea? Drinking both tied to lower stroke, dementia risk
Drinking coffee or tea is associated with reduced risk for stroke and dementia, with the biggest benefit associated with consuming both beverages, new research suggests.
Investigators found that individuals who drank two to three cups of coffee and two to three cups of tea per day had a 32% lower incidence of stroke and a 28% lower risk for dementia, compared with those who drank neither beverage.
“From a public health perspective, because regular tea and coffee drinkers comprise such a large proportion of the population and because these beverages tend to be consumed habitually throughout adult life, even small potential health benefits or risks associated with tea and coffee intake may have important public health implications,” the investigators wrote.
The study was published online Nov. 16 in PLOS Medicine.
Synergistic effect?
Whereas earlier studies have shown significant health benefits from moderate coffee and tea intake separately, few have examined the effect of drinking both.
Researchers enrolled 365,682 participants from the UK Biobank for the analysis of coffee and tea consumption and stroke and dementia risk and 13,352 participants for the analysis of poststroke dementia.
During a median follow-up of 11.4 years, 2.8% of participants experienced a stroke and 1.4% developed dementia.
After adjustment for confounders, stroke risk was 10% lower in those who drank a half-cup to a cup of coffee per day (P < .001) and 8% lower in those who had more than two cups a day (P = .009). Tea drinkers who had more than two cups a day saw a 16% reduction in stroke (P < .001).
Those who drank both coffee and tea during the day saw the greatest benefit. Drinking two to three cups of coffee and two to three cups of tea lowered stroke risk by 32% (P < .001) and dementia risk by 28% (P = .002).
Drinking both beverages offered significantly greater benefits than drinking coffee or tea alone, with an 11% lower risk for stroke (P < .001), an 8% lower risk for dementia (P = .001), and an 18% lower risk for vascular dementia (P = .001).
Among participants who experienced a stroke during the follow-up period, drinking two to three cups of coffee per day was associated with a 20% lower risk for poststroke dementia (P = .044); for those who drank both coffee and tea (half to one cup of coffee and two to three cups of tea per day), the risk for poststroke dementia was 50% lower (P = .006).
There was no significant association between coffee and tea consumption and risk for hemorrhagic stroke or Alzheimer’s disease.
The study was funded by the National Natural Science Foundation of China. The authors have disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
ASCEND: Aspirin shows hint of dementia protection in T2D
A regimen of daily, low-dose aspirin failed to produce a significant reduction in the incidence of dementia or cognitive impairment in ASCEND, a randomized, multicenter trial with more than 15,000 people with diabetes followed for an average of more than 9 years, but the results hinted at enough of a benefit to warrant further study, some experts said.
“The question remains open,” said Jane Armitage, MBBS, FRCP, as she presented the findings at the American Heart Association scientific sessions. “The rate ratios suggest some benefit. It’s encouraging,” added Dr. Armitage, professor of clinical trials and epidemiology at Oxford (England) University.
The study tallied dementia outcomes three different ways: It applied a narrow definition that relied on a specific diagnosis of dementia in a person’s EHR or in their death record. (Dr. Armitage and her associates tracked outcomes for 99% of the enrolled participants by linking to their U.K. national health records and death records.)
A second metric used a broader outcome definition that tracked EHR entries for not only dementia but also diagnoses of cognitive impairment, delirium, confusion, prescription of dementia medications, and referral to a memory clinic or geriatric psychiatry. The third assessment was a cognitive-function test given to participants at the end of follow-up, but only 58% of enrolled participants completed this part of the study, and it’s also possible that some subjects missed this assessment because of dementia onset. These limitations hamper clear interpretation of this third metric, Dr. Armitage said.
The main findings for the other two, more reliable measures of incident dementia or cognitive deterioration showed a nonsignificant 9% relative risk reduction linked with aspirin use compared with placebo for the more inclusive endpoint, and a nonsignificant 11% relative risk reduction with aspirin using the narrow definition for dementia only, she reported. The third method, a directly administered assessment of dementia and cognition, also showed a small, nonsignificant effect from daily aspirin use relative to placebo.
Results can’t rule out modest aspirin effect
Dr. Armitage highlighted that the two more reliable measures both appeared to rule out risk for neurologic harm from aspirin because the upper limit of the 95% confidence interval for relative effect reached only 1.02 using the broad outcomes, and 1.06 for the narrower endpoint of dementia only. On the other hand, focus on the low end of the 95% confidence interval suggested potentially meaningful benefits, with a possible reduction by aspirin in events relative to placebo of as much as 19% by the broad outcome definition and by 25% with the narrow definition.
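To make that arithmetic explicit (a worked illustration based only on the figures quoted above, with relative risk reduction taken as 1 minus the rate ratio):

# Illustration in Python: convert the 95% CI bounds for each rate ratio
# into the range of relative risk reductions discussed in the text. The
# lower bounds (0.81 and 0.75) are implied by the quoted maximum
# reductions of 19% and 25%.
for label, lower, upper in [("broad outcome", 0.81, 1.02),
                            ("dementia only", 0.75, 1.06)]:
    best_case = 1 - lower   # largest reduction consistent with the CI
    worst_case = 1 - upper  # a negative value means possible harm
    print(f"{label}: RRR from {worst_case:+.0%} to {best_case:+.0%}")

For the broad outcome, the interval spans everything from a 2% increase in risk to a 19% reduction, which is why the trial can neither confirm a modest benefit nor rule one out.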
“Even if it was only a 15% relative risk reduction, that would be important,” given the high dementia incidence worldwide, Dr. Armitage said during a press briefing. “It’s entirely possible, with our results, that a modest benefit exists.”
This take on the findings won some support. Further studies with more people, longer follow-up, and perhaps enrolling a more selected, higher risk cohort may better address potential neurologic benefit from aspirin, suggested Amytis Towfighi, MD, a stroke neurologist and professor of neurology at the University of Southern California, Los Angeles, and a designated discussant for the report.
The result “was rather encouraging. I was a little surprised” by the findings, commented Chrystie M. Ballantyne, MD, professor and director of the Center for Cardiometabolic Disease Prevention at Baylor College of Medicine, Houston, also a discussant.
The results “don’t mean that no one benefits from aspirin. Perhaps certain people at risk would benefit from dementia protection. It’s an open question,” commented Erin D. Michos, MD, director of Women’s Cardiovascular Health at Johns Hopkins Medicine, Baltimore.
But others saw the findings as more unequivocally neutral. “This gives us an early, preliminary answer, that aspirin does not seem to improve dementia,” commented Amit Khera, MD, professor and director of Preventive Cardiology at UT Southwestern Medical Center, Dallas, and a third discussant at the meeting.
Evidence against routine, widespread primary prevention with aspirin
ASCEND had the primary goal of assessing the safety and efficacy of a daily, 100-mg aspirin dose for preventing vascular events such as MIs and ischemic strokes in 15,480 people with diabetes who were at least 40 years old at enrollment and had no history of cardiovascular disease. The main results, reported in 2018, showed that while aspirin produced a significant benefit by reducing thrombotic events, it also resulted in significantly more major bleeding events compared with placebo; overall, the magnitude of benefit roughly matched the magnitude of risk.
These findings, along with similar results from two other high-profile aspirin studies reported at about the same time (ASPREE and ARRIVE), led to recommendations from groups such as the U.S. Preventive Services Task Force, the American College of Cardiology, and the American Heart Association that caution against widespread, routine aspirin use for primary prevention of atherosclerotic cardiovascular disease events in most adults.
The groups instead endorsed a tailored strategy of targeting aspirin to people with a higher than average risk for ischemic thrombotic events and a lower than average bleeding risk. (The most recent aspirin recommendations from the USPSTF, currently in draft form, substantially curtail aspirin’s appropriate use, eliminating it in those over age 60 years.)
However, experts and prevailing practice recommendations continue to endorse routine aspirin use for secondary prevention in patients with an established history of cardiovascular disease.
The new findings reported by Dr. Armitage came from additional analyses of dementia and cognitive impairment overlaid on the main ASCEND outcome analyses. ASCEND actively treated and followed study participants for an average of 7.4 years, then researchers tracked further dementia outcomes based on medical-record entries for an average of another 1.8 years.
ASCEND received partial funding or support from Abbott, Bayer, Mylan, and Solvay. Dr. Armitage had no disclosures. Dr. Towfighi, Dr. Khera, and Dr. Michos had no disclosures. Dr. Ballantyne has had financial relationships with numerous companies.
FROM AHA 2021
Vegetable fats tied to lower stroke risk, animal fats to higher risk
Higher intake of vegetable fats from foods such as olive oil and nuts is associated with a lower risk for stroke, whereas people who eat more animal fats, especially processed red meats, may have a higher stroke risk, observational findings suggest.
In a study of more than 117,000 health professionals who were followed for 27 years, those whose diet was in the highest quintile for intake of vegetable fat had a 12% lower risk for stroke, compared with those who consumed the least amount of vegetable fats.
Conversely, having the highest intake of animal fat from nondairy sources was associated with a 16% increased risk of stroke.
Fenglei Wang, PhD, presented these results at the American Heart Association scientific sessions.
“Our findings support the Dietary Guidelines for Americans and dietary recommendations by AHA,” Dr. Wang, a postdoctoral fellow in the department of nutrition at Harvard University’s T.H. Chan School of Public Health in Boston, told this news organization.
“The main sources of vegetable fat overlap substantially with sources of polyunsaturated fat, such as vegetable oils, walnuts and other nuts, and peanut butter,” Dr. Wang noted, adding that fish, especially fatty fish, is a main source of polyunsaturated fat and is recommended for cardiovascular health.
“We would recommend that people reduce consumption of red and processed meat, minimize fatty parts of unprocessed meat if consumed, and replace lard or tallow (beef fat) with nontropical vegetable oils, such as olive oil, corn, or soybean oils in cooking, to lower their stroke risk,” she said.
Moreover, although the results from this study of dietary fat are informative, Dr. Wang continued, “there are other dietary factors (fruits, vegetables, salt, alcohol, et cetera), and lifestyle factors (physical activity, smoking, et cetera), that are associated with stroke risk and worthy of attention as well.”
“Many processed meats are high in salt and saturated fat, and low in vegetable fat,” Alice H. Lichtenstein, DSc, an AHA spokesperson who was not involved with this research, noted in a press release.
“Research shows that replacing processed meat with other protein sources, particularly plant sources, is associated with lower death rates,” added Dr. Lichtenstein, the Stanley N. Gershoff professor of nutrition science and policy at Tufts University in Boston, and lead author of the AHA’s 2021 scientific statement, Dietary Guidance to Improve Cardiovascular Health.
“Key features of a heart-healthy diet pattern,” she summarized, “are to balance calorie intake with calorie needs to achieve and maintain a healthy weight; choose whole grains, lean and plant-based protein, and a variety of fruits and vegetables; limit salt, sugar, animal fat, processed foods, and alcohol; and apply this guidance regardless of where the food is prepared or consumed.”
Replace processed meat with plant proteins
The focus on stroke in this study “is important” because, traditionally, studies of diet and cardiovascular health have focused on coronary heart disease, Andrew Mente, PhD, who also was not involved in this research, said in an email to this news organization.
“Overall, the take-home message from the study is that replacing processed meat with plant sources of protein in the diet is probably beneficial,” Dr. Mente, associate professor, health research methods, evidence, and impact, Faculty of Health Sciences, McMaster University, Hamilton, Ont., said.
The finding that people who ate the most vegetable fat had a modest 12% lower risk of stroke than those who ate the least vegetable fat “points to protective effects of foods like seeds, nuts, vegetables, and olive oil, which has been shown previously,” he continued.
The highest quintile of total red meat intake was associated with an 8% higher risk for stroke, but this was driven mainly by processed red meat (which was associated with a 12% higher risk for stroke). These findings are “generally consistent with cohort studies showing that processed meat, as with most highly processed foods for that matter, are associated with an increased risk of cardiovascular events,” Dr. Mente noted.
“Surprisingly, dairy products (such as cheese, butter, or milk) in the study were not connected with the risk of stroke,” he added. This finding differs from results of meta-analyses of multiple cohort studies of dairy intake and stroke and the recent large international PURE study, which showed that dairy intake was associated with a lower risk for stroke.
“What is needed to move the field forward,” according to Dr. Mente, “is to employ new methods that use cutting-edge technology to study nutritional biomarkers and health outcomes.”
“When dealing with modest associations as usually encountered in nutrition, it is a challenge to make causal connections based on dietary questionnaires, which are fraught with measurement error,” he added. “The use of novel methods is where the field is headed.”
Total dietary fat, different types, and different food sources
Dr. Wang and colleagues investigated how total dietary fat, different types of fat, and fats from different foods were associated with incident stroke in 73,867 women in the 1984-2016 Nurses’ Health Study and 43,269 men who participated in the 1986-2016 Health Professionals Follow-up Study.
The participants had an average age of 50 years, 63% were women, and 97% were White. They replied to food-frequency questionnaires every 4 years.
Total red meat included beef, pork, or lamb (as a main dish or in sandwiches or mixed dishes) as well as processed red meats (such as bacon, sausage, bologna, hot dogs, and salami).
Animal fat sources included meat, beef tallow, lard, and full-fat dairy products, such as full-fat milk and cheese.
The median percentage of total daily calories from different sources of fat ranged from 10% to 20% for vegetable fat, 3% to 10% for dairy fat, and 7% to 17% for nondairy animal fat (for lowest to highest quintiles).
The median percentage of total daily calories from different types of fat ranged from 5% to 8% for polyunsaturated fat, 4% to 7% for n-6 polyunsaturated fat, 9% to 15% for monounsaturated fat, 8% to 14% for saturated fat, and 1% to 2% for trans fat.
During follow-up, there were 6,189 incident strokes, including 2,967 ischemic strokes and 814 hemorrhagic strokes.
The researchers found that intake in the highest quintile of vegetable fat was associated with a lower risk for total stroke, compared with the lowest quintile (hazard ratio, 0.88; 95% confidence interval, 0.81-0.96; P for trend < .001).
Similarly, the highest quintile of polyunsaturated fat intake was associated with a lower risk for total stroke (HR, 0.88; 95% CI, 0.80-0.96; P for trend = .002).
Highest intake of nondairy animal fat, however, was associated with an increased risk for total stroke (HR, 1.16; 95% CI, 1.05-1.29; P for trend < .001). The researchers observed “similar associations” for ischemic stroke, whereas for hemorrhagic stroke the only positive association was with nondairy animal fat, the abstract notes.
The risk for stroke was 9% lower per daily serving of vegetable oil but 8% and 12% higher per daily serving of total red meat and processed red meat, respectively.
The association for vegetable oil was attenuated after adjustment for vegetable fat or polyunsaturated fat, whereas adjustment for nondairy animal fat rendered the associations for total red meat and processed red meat nonsignificant.
The study was funded by the National Heart, Lung, and Blood Institute of the National Institutes of Health. Dr. Wang has no relevant financial disclosures. Dr. Mente has received research funding from the Dairy Farmers of Canada and the National Dairy Council to analyze data on dairy consumption and health outcomes in the PURE study, which is funded by the Population Health Research Institute, Hamilton Health Sciences Research Institute, and more than 70 other sources (government and pharmaceutical).
A version of this article first appeared on Medscape.com.
FROM AHA 2021