Unique twin study sheds new light on TBI and risk of cognitive decline
The research, which included almost 9,000 individuals, showed that twins who had experienced a traumatic brain injury (TBI) were more likely to have lower cognitive function at age 70 than their twin who did not experience a TBI, especially if they had lost consciousness or were older than age 24 at the time of injury. In addition, their cognitive decline occurred at a more rapid rate.
“We know that TBI increases the risk of developing Alzheimer’s disease and other dementias in later life, but we haven’t known about TBI’s effect on cognitive decline that does not quite meet the threshold for dementia,” study investigator Marianne Chanti-Ketterl, PhD, Duke University, Durham, N.C., said in an interview.
“We know that TBI increases the risk of dementia in later life, but we haven’t known whether TBI affects cognitive function, causing cognitive decline that has not progressed to the severity of Alzheimer’s disease or dementia,” she added.
Being able to study the impact of TBI in monozygotic twins gives this study a unique strength, she noted.
“The important thing about this is that they are monozygotic twins, and we know they shared a lot of early life exposure, and almost 100% genetics,” Dr. Chanti-Ketterl said.
The study was published online in Neurology.
For the study, the investigators assessed 8,662 participants born between 1917 and 1927 who were part of the National Academy of Sciences National Research Council’s Twin Registry, which comprises male veterans of World War II. A history of TBI was reported by the veterans themselves or by a caregiver.
The men were followed up for many years as part of the registry, but cognitive assessment began only in the 1990s. They were assessed at four time points with the modified Telephone Interview for Cognitive Status (TICS-m), an alternative to the in-person Mini-Mental State Examination.
A total of 25% of participants had experienced a concussion in their lifetime. Within this cohort, there were 589 pairs of monozygotic twins who were discordant for TBI (one twin had experienced a TBI and the other had not).
Among the monozygotic twin cohort, a history of any TBI and being older than age 24 at the time of TBI were associated with lower TICS-m scores.
A twin who experienced TBI after age 24 scored 0.59 points lower on the TICS-m at age 70 than his twin with no TBI, and cognitive function declined faster, by 0.05 points per year.
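Taken at face value, those two figures imply a within-pair gap that widens with age. The sketch below is illustrative arithmetic only, and it assumes (purely for illustration, not as a study finding) that the reported linear trend continues beyond the observed follow-up:

```python
# Illustrative only: projects the within-pair TICS-m gap from the study's
# two reported figures (0.59 points at age 70; 0.05 points/year faster
# decline), assuming the linear trend continues -- an assumption on our
# part, not a result from the study.
def projected_gap(age: float, gap_at_70: float = 0.59,
                  extra_decline_per_year: float = 0.05) -> float:
    """Within-pair TICS-m difference at a given age (age >= 70)."""
    return round(gap_at_70 + extra_decline_per_year * (age - 70), 2)

print(projected_gap(70))  # 0.59
print(projected_gap(80))  # 1.09
```

Under that assumption, the 0.59-point gap at age 70 would roughly double by age 80.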
First study of its kind
Holly Elser, MD, PhD, MPH, an epidemiologist and resident physician in neurology at the University of Pennsylvania, Philadelphia, and coauthor of an accompanying editorial, said in an interview that the study’s twin design was a definite strength.
“There are lots of papers that have remarked on the apparent association between head injury and subsequent dementia or cognitive decline, but to my knowledge, this is one of the first, if not the first, to use a twin study design, which has the unique advantage of having better control over early life and genetic factors than would ever typically be possible in a dataset of unrelated adults,” said Dr. Elser.
She added that the study findings “strengthen our understanding of the relationship between TBI and later cognitive decline, so I think there is an etiologic value to the study.”
However, Dr. Elser noted that the composition of the study population may limit the extent to which the results apply to contemporary populations.
“This was a population of White male twins born between 1917 and 1927,” she noted. “However, does the experience of people who were in the military generalize to civilian populations? Are twins representative of the general population or are they unique in terms of their risk factors?”
It is always important to emphasize inclusivity in clinical research, and in dementia research in particular, Dr. Elser added.
“There are many examples of instances where racialized and otherwise economically marginalized groups have been excluded from analysis, which is problematic because there are already economically and socially marginalized groups who disproportionately bear the brunt of dementia.
“This is not a criticism of the authors’ work, that their data didn’t include a more diverse patient base, but I think it is an important reminder that we should always interpret study findings within the limitations of the data. It’s a reminder to be thoughtful about taking explicit steps to include more diverse groups in future research,” she said.
The study was funded by the National Institute on Aging/National Institutes of Health and the Department of Defense. Dr. Chanti-Ketterl and Dr. Elser have reported no relevant financial relationships.
A version of this article appeared on Medscape.com.
FROM NEUROLOGY
Weekly insulin with dosing app beneficial in type 2 diabetes
TOPLINE:
Once-weekly insulin icodec, dosed with the help of a dosing guide app, lowered A1c more than once-daily basal insulin analogs in insulin-naive adults with type 2 diabetes, with improved treatment satisfaction and compliance scores and similarly low hypoglycemia rates.
METHODOLOGY:
- A 52-week, randomized, open-label, parallel-group, phase 3a trial with real-world elements was conducted at 176 sites in seven countries.
- A total of 1,085 insulin-naive patients with type 2 diabetes were randomly assigned to receive icodec with a dosing guide app or once-daily analogs (glargine U100, glargine U300, or degludec).
TAKEAWAY:
- A1c levels dropped from 8.96% at baseline to 7.24% at week 52 with icodec and from 8.88% to 7.61% with the daily analog, a treatment difference of 0.37 percentage point (P < .001 for noninferiority and P = .009 for superiority in favor of icodec plus the app).
- Patient-reported outcomes were more favorable with icodec plus the app vs. daily analogs, with estimated treatment differences that were significant for the Treatment Related Impact Measure for Diabetes (3.04) but not the Diabetes Treatment Satisfaction Questionnaire (0.78).
- Observed rates of combined clinically significant or severe hypoglycemia were low (0.19 event per patient-year of exposure for icodec plus the app vs. 0.14 for daily analogs; estimated rate ratio, 1.17).
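As a rough sanity check, the crude differences can be recomputed from the summary figures above. Note that the published effects (the 0.37-point A1c difference and the 1.17 rate ratio) are model-based estimates, so they do not exactly equal these unadjusted calculations:

```python
# Unadjusted arithmetic on the reported summary figures. The trial's
# published estimates come from statistical models, which is why they
# differ slightly from these crude values.
a1c_drop_icodec = round(8.96 - 7.24, 2)  # 1.72 percentage points
a1c_drop_daily = round(8.88 - 7.61, 2)   # 1.27 percentage points
crude_a1c_diff = round(a1c_drop_icodec - a1c_drop_daily, 2)  # 0.45

crude_rate_ratio = round(0.19 / 0.14, 2)  # 1.36 (reported model-based: 1.17)

print(crude_a1c_diff, crude_rate_ratio)  # 0.45 1.36
```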
IN PRACTICE:
“Once-weekly icodec with a dosing guide app could conceivably address several challenges seen in everyday practice, including inadequate dose titration and nonadherence to prescribed treatment regimens.”
SOURCE:
The study was conducted by Harpreet S. Bajaj, MD, MPH, of LMC Diabetes and Endocrinology, Brampton, Ontario, and colleagues. It was published online in Annals of Internal Medicine.
LIMITATIONS:
The research could not differentiate between the effects of icodec and those of the dosing guide app. The study had an open-label design. A 1-year duration is insufficient to assess long-term diabetes- and cardiovascular-related outcomes.
DISCLOSURES:
The study was funded by Novo Nordisk A/S.
A version of this article appeared on Medscape.com.
FROM ANNALS OF INTERNAL MEDICINE
Worm pulled from woman’s brain in case that ‘stunned’
When they started the open biopsy, surgeons didn’t know what they were going to find, but they certainly didn’t expect this.
The stringlike worm was about 3 inches (8 cm) long, was alive, and wiggled.
“It stunned everyone in that operating theater,” Sanjaya Senanayake, MBBS, an associate professor of infectious disease at Australian National University, Canberra, and senior author of the case report, said in an interview. “When you operate on a brain, you don’t expect to find anything alive.”
Helminths like this parasitic worm can usually be seen with the naked eye and are typically found in the intestines after being transmitted by soil and infecting the gastrointestinal tract. But this one made it into a woman’s brain in a first-of-its-kind case reported in the journal Emerging Infectious Diseases.
“We weren’t suspecting a worm at all,” Dr. Senanayake said. “There was something abnormal there. Was it going to be granulomatous lesion? Was it going to be cancer? Who knows, but it needed to be biopsied, and a worm was the last thing at the back of anyone’s mind,” he said.
A year of inexplicable symptoms
The 64-year-old woman was diagnosed with pneumonia and had a high white blood cell count, low hemoglobin, high platelets, and a very high C-reactive protein of 102 mg/L.
She hadn’t fully recovered from her illness when the abdominal pain and diarrhea started. And then she had a dry cough and night sweats.
After 3 weeks of discomfort, she was admitted to the hospital. She had a history of diabetes, hypothyroidism, and depression, and doctors began looking for answers to her acute illness.
They tested for autoimmune diseases and parasitic infections and prescribed prednisolone to help ease symptoms.
But 3 weeks later, her fever and cough persisted, and she was readmitted to the hospital. Doctors ordered more tests, and her eosinophils were still high, plus there were lesions on her liver, spleen, and lungs.
But tests were negative for bacterial, fungal, and mycobacterial cultures. Her stools showed no evidence of parasites.
She was prescribed mycophenolate and then ivermectin in case her tests for roundworm were a false negative. Doctors suspected Strongyloides, but lesions remained on her spleen even as the liver and lung lesions improved.
Reducing the prednisolone dose worsened her respiratory symptoms, so by January 2022, a year after her initial symptoms began, the medical team added the monoclonal antibody mepolizumab. But her symptoms worsened further, and she developed forgetfulness and deepening depression.
The specimen pulled from her brain was Ophidascaris robertsi, an intestinal roundworm typically found in the carpet python. It had never before been seen in a human; the only other animals in its known life cycle are the small marsupials and mammals consumed by pythons.
A snake’s bug
Although this is the first case of an Ophidascaris infection in a human, other cases could occur, warn the doctors in their case report.
The best guess for how the patient contracted the infection was by inadvertently consuming larval eggs on wild vegetation that she collected near her home to eat. She lived near a lake known to be home to carpet pythons, so the eggs could have been on the plants she collected or on her hands or kitchen equipment.
“If you’re foraging or using native grasses or plants in recipes, it would be a good idea to cook those instead of having a salad,” Dr. Senanayake said. “That would make the chance of getting something really rare even less likely.”
It’s unclear how or why the worm, which usually stays in the gut, made its way into the patient’s brain, but her long course of immunosuppressing drugs may have played a role, the team points out. “If the normal immune barriers are reduced, then it’s easier for the parasite to move around between organ systems,” Dr. Senanayake said.
Doctors also wondered if she may have been getting re-infected when she went home between hospital admissions. After removing the worm, she received 4 weeks of treatment with albendazole to eliminate any other possible larvae in other organs, especially since Ophidascaris larvae have been known to survive for long periods – more than 4 years in laboratory rats. “The hope is that she’s been cured of this parasitic infection,” Dr. Senanayake said.
As people contend with the global COVID pandemic, they might not realize that new infections arise around the world every year, he explained.
Novel parasitic infections
“The reality is that 30 new infections appeared in the last 30 years, and three-quarters of them are zoonotic, animal infections spilling over into the human world,” Dr. Senanayake said.
Though some of that number is the result of improved surveillance and diagnostics, a real increase has been occurring as human settlements continue expanding.
“This is just a reflection of how burgeoning human populations are encroaching upon animal habitats, and we’re getting more interactions between humans and wild animals, domestic animals and wild animals, and humans and natural flora, which is increasing the risk of this type of infection being recognized,” he explained.
Related Ophidascaris worms are found in other snake species on different continents around the world, too. “Awareness of this case will hopefully lead to the diagnosis and treatment of other cases,” Dr. Senanayake added.
Though it’s certainly surprising to find this particular parasite in a human, finding a zoonotic organism in a person isn’t that strange, according to Janet Foley, DVM, PhD, a professor of veterinary medicine at the University of California, Davis. This is especially true if the usual host is closely related to humans, like primates, or spends a lot of time around them, like rats.
“There are still a lot of parasites and diseases out there in wildlife that haven’t been discovered, and we don’t know the risk,” said Dr. Foley. “But still, the risk would have to be low, generally, or we would see more human cases.”
In the United States, the roundworm common in raccoon feces is Baylisascaris procyonis and can be dangerous for people. “There have been deaths in people exposed to these worms, which do seem to prefer to travel to a human brain,” Dr. Foley said.
A 2016 Centers for Disease Control and Prevention report described seven U.S. cases identified between May 2013 and December 2015, including six that caused central nervous system disease. Another case report in 2018 involved a toddler who had eaten dirt and animal feces in his backyard.
And this past June, an Emerging Infectious Diseases case report described a B. procyonis infection in a 7-year-old with autism spectrum disorder and a history of pica. He had put material in his mouth from the ground near a tree where epidemiologists later found raccoon feces.
Still, Dr. Senanayake cautions against people jumping to conclusions about parasitic infections when they experience symptoms that aren’t otherwise immediately explainable.
The typical person who develops forgetfulness, depression, and a fever probably doesn’t have a worm in their brain or need an immediate MRI, he pointed out. “There may be other cases out there, but common things happen commonly, and this is likely to be rare,” Dr. Senanayake said.
This case demonstrates the challenge of picking a course of treatment when the differential diagnosis for hypereosinophilic syndromes is so broad.
Tricky hypereosinophilic syndromes
One of those differentials is parasitic infection, for which the treatment would be antiparasitic agents; another is an autoimmune condition, which would call for immunosuppression.
“Obviously, as with this case, you don’t want to give someone immunosuppressive treatment if they’ve got a parasite, so you want to look really hard for a parasite before you start them on immunosuppressive treatment for an immunological condition,” Dr. Senanayake said.
But all the blood tests for different antibodies came back negative for parasites, “and this parasite was simply difficult to find until they pulled it from her brain,” he said.
Infectious disease physicians are always looking for the unusual and exotic, Dr. Senanayake explained. But it’s important to exclude the common, easy things first, he added. It’s after exhausting all the likely culprits that “you have to start really thinking laterally and putting resources into unusual tests.”
A version of this article first appeared on Medscape.com.
Though it’s certainly surprising to find this particular parasite in a human, finding a zoonotic organism in a person isn’t that strange, according to Janet Foley, DVM, PhD, a professor of veterinary medicine at the University of California, Davis. This is especially true if the usual host is closely related to humans, like primates, or spends a lot of time around them, like rats.
“There are still a lot of parasites and diseases out there in wildlife that haven’t been discovered, and we don’t know the risk,” said Dr. Foley. “But still, the risk would have to be low, generally, or we would see more human cases.”
In the United States, the roundworm common in raccoon feces is Baylisascaris procyonis and can be dangerous for people. “There have been deaths in people exposed to these worms, which do seem to prefer to travel to a human brain,” Dr. Foley said.
A 2016 Centers for Disease Control and Prevention report described seven U.S. cases identified between May 2013 and December 2015, including six that caused central nervous system disease. Another case report in 2018 involved a toddler who had eaten dirt and animal feces in his backyard.
And this past June, an Emerging Infectious Diseases case report described a B. procyonis infection in a 7-year-old with autism spectrum disorder and a history of pica. He had put material in his mouth from the ground near a tree where epidemiologists later found raccoon feces.
Still, Dr. Senanayake cautions against people jumping to conclusions about parasitic infections when they experience symptoms that aren’t otherwise immediately explainable.
The typical person who develops forgetfulness, depression, and a fever probably doesn’t have a worm in their brain or need an immediate MRI, he pointed out. “There may be other cases out there, but common things happen commonly, and this is likely to be rare,” Dr. Senanayake said.
This case demonstrates the challenge in picking a course of treatment when the differential diagnoses for hypereosinophilic syndromes is so broad.
Tricky hypereosinophilic syndromes
One of those differentials for the syndromes is parasitic infections, for which treatment would be antiparasitic agents, but another differential is an autoimmune condition that would call for immunosuppression.
“Obviously, as with this case, you don’t want to give someone immunosuppressive treatment if they’ve got a parasite, so you want to look really hard for a parasite before you start them on immunosuppressive treatment for an immunological condition,” Dr. Senanayake said.
But all the blood tests for different antibodies came back negative for parasites, “and this parasite was simply difficult to find until they pulled it from her brain,” he said.
Infectious disease physicians are always looking for the unusual and exotic, Dr. Senanayake explained. But it’s important to exclude the common, easy things first, he added. It’s after exhausting all the likely culprits that “you have to start really thinking laterally and putting resources into unusual tests.”
A version of this article first appeared on Medscape.com.
When they started the open biopsy, surgeons didn’t know what they were going to find, but they certainly didn’t expect this.
The stringlike worm, five-sixteenths of an inch long, was alive and wiggling.
“It stunned everyone in that operating theater,” Sanjaya Senanayake, MBBS, an associate professor of infectious disease at Australian National University, Canberra, and senior author of the case report, said in an interview. “When you operate on a brain, you don’t expect to find anything alive.”
The parasitic worm was about half the width of a dime. Helminths like it are usually visible to the naked eye and are typically found in the intestines after being transmitted through soil and infecting the gastrointestinal tract. But this one made it into a woman’s brain, in a first-of-its-kind case reported in the journal Emerging Infectious Diseases.
“We weren’t suspecting a worm at all,” Dr. Senanayake said. “There was something abnormal there. Was it going to be a granulomatous lesion? Was it going to be cancer? Who knows, but it needed to be biopsied, and a worm was the last thing at the back of anyone’s mind.”
A year of inexplicable symptoms
The 64-year-old woman was diagnosed with pneumonia and had a high white blood cell count, low hemoglobin, high platelets, and a very high C-reactive protein of 102 mg/L.
She hadn’t fully recovered from her illness when the abdominal pain and diarrhea started. And then she had a dry cough and night sweats.
After 3 weeks of discomfort, she was admitted to the hospital. She had a history of diabetes, hypothyroidism, and depression, and doctors began looking for answers to her acute illness.
They tested for autoimmune diseases and parasitic infections and prescribed prednisolone to help ease symptoms.
But 3 weeks later, her fever and cough persisted, and she was readmitted to the hospital. Doctors ordered more tests, and her eosinophils were still high, plus there were lesions on her liver, spleen, and lungs.
But tests were negative for bacterial, fungal, and mycobacterial cultures. Her stools showed no evidence of parasites.
She was prescribed mycophenolate, and then ivermectin in case her negative roundworm tests were false negatives. Doctors suspected Strongyloides, but lesions remained on her spleen even as those on her liver and lungs improved.
Reducing the prednisolone dose aggravated her respiratory symptoms, so by January 2022, a year after her initial symptoms began, the medical team added the monoclonal antibody mepolizumab. But her symptoms worsened, and she developed forgetfulness and deepening depression.
The specimen retrieved from her brain was Ophidascaris robertsi, an intestinal roundworm typically found in the carpet python. It had never before been seen in a human; the only other animals in its life cycle are the small marsupials and mammals consumed by pythons.
A snake’s bug
Although this is the first case of an Ophidascaris infection in a human, other cases could occur, warn the doctors in their case report.
The best guess for how the patient contracted the infection was by inadvertently consuming larval eggs on wild vegetation that she collected near her home to eat. She lived near a lake known to be home to carpet pythons, so the eggs could have been on the plants she collected or on her hands or kitchen equipment.
“If you’re foraging or using native grasses or plants in recipes, it would be a good idea to cook those instead of having a salad,” Dr. Senanayake said. “That would make the chance of getting something really rare even less likely.”
It’s unclear how or why the worm, which usually stays in the gut, made its way into the patient’s brain, but her long course of immunosuppressing drugs may have played a role, the team points out. “If the normal immune barriers are reduced, then it’s easier for the parasite to move around between organ systems,” Dr. Senanayake said.
Doctors also wondered if she may have been getting re-infected when she went home between hospital admissions. After removing the worm, she received 4 weeks of treatment with albendazole to eliminate any other possible larvae in other organs, especially since Ophidascaris larvae have been known to survive for long periods – more than 4 years in laboratory rats. “The hope is that she’s been cured of this parasitic infection,” Dr. Senanayake said.
As people around the world contend with the global COVID pandemic, they might not realize that new infections emerge every year, he explained.
Novel parasitic infections
“The reality is that 30 new infections appeared in the last 30 years, and three-quarters of them are zoonotic, animal infections spilling over into the human world,” Dr. Senanayake said.
Though some of that number is the result of improved surveillance and diagnostics, a real increase has been occurring as human settlements continue expanding.
“This is just a reflection of how burgeoning human populations are encroaching upon animal habitats, and we’re getting more interactions between humans and wild animals, domestic animals and wild animals, and humans and natural flora, which is increasing the risk of this type of infection being recognized,” he explained.
The Ophidascaris worm found in this case also infects other snake species on other continents. “Awareness of this case will hopefully lead to the diagnosis and treatment of other cases,” Dr. Senanayake added.
Though it’s certainly surprising to find this particular parasite in a human, finding a zoonotic organism in a person isn’t that strange, according to Janet Foley, DVM, PhD, a professor of veterinary medicine at the University of California, Davis. This is especially true if the usual host is closely related to humans, like primates, or spends a lot of time around them, like rats.
“There are still a lot of parasites and diseases out there in wildlife that haven’t been discovered, and we don’t know the risk,” said Dr. Foley. “But still, the risk would have to be low, generally, or we would see more human cases.”
In the United States, Baylisascaris procyonis, a roundworm common in raccoon feces, can be dangerous for people. “There have been deaths in people exposed to these worms, which do seem to prefer to travel to a human brain,” Dr. Foley said.
A 2016 Centers for Disease Control and Prevention report described seven U.S. cases identified between May 2013 and December 2015, including six that caused central nervous system disease. Another case report in 2018 involved a toddler who had eaten dirt and animal feces in his backyard.
And this past June, an Emerging Infectious Diseases case report described a B. procyonis infection in a 7-year-old with autism spectrum disorder and a history of pica. He had put material in his mouth from the ground near a tree where epidemiologists later found raccoon feces.
Still, Dr. Senanayake cautions against people jumping to conclusions about parasitic infections when they experience symptoms that aren’t otherwise immediately explainable.
The typical person who develops forgetfulness, depression, and a fever probably doesn’t have a worm in their brain or need an immediate MRI, he pointed out. “There may be other cases out there, but common things happen commonly, and this is likely to be rare,” Dr. Senanayake said.
This case demonstrates the challenge of choosing a course of treatment when the differential diagnosis for hypereosinophilic syndromes is so broad.
Tricky hypereosinophilic syndromes
One differential for the syndromes is parasitic infection, which calls for antiparasitic agents; another is an autoimmune condition, which calls for immunosuppression.
“Obviously, as with this case, you don’t want to give someone immunosuppressive treatment if they’ve got a parasite, so you want to look really hard for a parasite before you start them on immunosuppressive treatment for an immunological condition,” Dr. Senanayake said.
But all the antibody blood tests for parasites came back negative, “and this parasite was simply difficult to find until they pulled it from her brain,” he said.
Infectious disease physicians are always looking for the unusual and exotic, Dr. Senanayake explained. But it’s important to exclude the common, easy things first, he added. It’s after exhausting all the likely culprits that “you have to start really thinking laterally and putting resources into unusual tests.”
A version of this article first appeared on Medscape.com.
FROM EMERGING INFECTIOUS DISEASES
Are vitamin D levels key to canagliflozin’s fracture risk?
Sodium-glucose cotransporter 2 (SGLT2) inhibitors are beneficial for treating type 2 diabetes and reducing cardiovascular and kidney disease risk. However, some, but not all, trial data have linked the SLGT2 inhibitor canagliflozin to increased fracture risk. That particular agent has been reported to accelerate loss of bone mineral density, which could contribute to fracture risk. Other drugs in the class have also been implicated in worsening markers of bone health.
The new findings, from a small study of Amish adults with vitamin D deficiency (≤ 20 ng/mL) but without diabetes or osteoporosis, suggest that physicians consider screening for vitamin D deficiency prior to prescribing an SGLT2 inhibitor. Alternatively, these patients can simply be prescribed safe, inexpensive, over-the-counter (OTC) vitamin D supplements without being screened, Zhinous Shahidzadeh Yazdi, MD, of the division of endocrinology, diabetes, and nutrition at the University of Maryland, Baltimore, and colleagues wrote.
“Something as simple as OTC vitamin D might protect against bone fractures caused by chronic multiyear treatment with a drug,” study lead author Simeon I. Taylor, MD, PhD, professor of medicine at the University of Maryland, said in an interview.
In the study, published in the Journal of Clinical Endocrinology and Metabolism, 11 adults with vitamin D deficiency underwent two 5-day canagliflozin challenges of 300 mg/d, once before and once after vitamin D3 supplementation (50,000 IU once weekly for a body mass index < 30 kg/m2, or twice weekly for a BMI ≥ 30 kg/m2), dosed to achieve a 25(OH)D level of at least 30 ng/mL.
When the participants were vitamin D deficient, canagliflozin significantly decreased 1,25(OH)2D levels by 31.3%, from 43.8 pg/mL on day 1 to 29.1 pg/mL on day 3 (P = .0003). In contrast, after receiving the vitamin D3 supplements, canagliflozin reduced mean 1,25(OH)2D levels by a nonsignificant 9.3%, from 45 pg/mL on day 1 to 41 pg/mL on day 3 (P = .3).
“Thus, [vitamin D3] supplementation provided statistically significant protection from the adverse effect of canagliflozin to decrease mean plasma levels of 1,25(OH)2D (P = .04),” Yazdi and colleagues wrote.
Similarly, when the participants were vitamin D deficient, canagliflozin was associated with a significant 36.2% increase in mean parathyroid hormone (PTH) levels, from 47.5 pg/mL on day 1 to 58.5 pg/mL on day 6 (P = .0009). In contrast, after vitamin D3 supplementation, the increase in PTH was far less, from 48.4 pg/mL on day 1 to 53.3 pg/mL on day 6 (P = .02).
Therefore, the supplementation “significantly decreased the magnitude of the canagliflozin-induced increase in mean levels of PTH (P = .005),” they wrote.
Also, in the vitamin D deficient state, canagliflozin significantly increased mean serum phosphorous on day 3 in comparison with day 1 (P = .007), while after supplementation, that change was also insignificant (P = .8).
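One note on the figures above: the percent changes quoted in the paper do not exactly match the ratios of the group means (for example, 43.8 to 29.1 pg/mL is a 33.6% drop in the mean, versus the reported 31.3%), which plausibly reflects averaging each participant's own percent change rather than taking the percent change of the means. The short sketch below (illustrative arithmetic only; the function name is ours, not from the paper) computes the change-in-means for the values quoted above.

```python
def pct_change(before, after):
    """Percent change from `before` to `after` (negative means a decrease)."""
    return (after - before) / before * 100

# 1,25(OH)2D while vitamin D deficient: mean fell from 43.8 to 29.1 pg/mL
d_drop = pct_change(43.8, 29.1)    # about -33.6% (paper reports -31.3%)

# PTH while vitamin D deficient: mean rose from 47.5 to 58.5 pg/mL
pth_rise = pct_change(47.5, 58.5)  # about +23.2% (paper reports +36.2%)

print(round(d_drop, 1), round(pth_rise, 1))
```

In a small sample (n = 11), the mean of individual percent changes can diverge noticeably from the percent change of the means, so the discrepancy is not itself a red flag.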
“We are saying that SGLT2 inhibitors, when superimposed on vitamin D deficiency, is bad for bone health. This group of people have two important risk factors – vitamin D deficiency and SGLT2 inhibitors – and are distinct from the general population of people who are not vitamin D deficient,” Dr. Taylor noted.
The findings “raise interesting questions about how to proceed,” he said in an interview, since “the gold standard study – in this case, a fracture prevention study – will never be done because it would cost more than $100 million. Vitamin D costs only $10-$20 per year, and at appropriate doses, is extremely safe. At worst, vitamin D supplements are unnecessary. At best, vitamin D supplements can protect some patients against a serious drug toxicity, bone fracture.”
The study was funded by the National Institutes of Health. Dr. Taylor serves as a consultant for Ionis Pharmaceuticals and receives an inventor’s share of royalties from the National Institute of Diabetes, Digestive, and Kidney Diseases for metreleptin as a treatment for generalized lipodystrophy. Dr. Yazdi disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM THE JOURNAL OF CLINICAL ENDOCRINOLOGY AND METABOLISM
AAP advises against low-carb diets for children with diabetes
The American Academy of Pediatrics advises against low-carbohydrate diets for most children with diabetes, according to a new clinical report.
Citing a lack of high-quality data and potential for adverse effects with carbohydrate restriction among younger individuals, lead author Anna Neyman, MD, of Indiana University, Indianapolis, and colleagues suggested that pediatric patients with type 2 diabetes should focus on reducing nutrient-poor carbohydrate intake, while those with type 1 diabetes should only pursue broader carbohydrate restriction under close medical supervision.
“There are no guidelines for restricting dietary carbohydrate consumption to reduce risk for diabetes or improve diabetes outcomes in youth,” the investigators wrote in Pediatrics. “Thus, there is a need to provide practical recommendations for pediatricians regarding the use of low-carbohydrate diets in patients who elect to follow these diets, including those with type 1 diabetes and for patients with obesity, prediabetes, and type 2 diabetes.”
Their new report includes a summary of the various types of carbohydrate-restricted diets, a review of available evidence for these diets among pediatric patients with type 1 and type 2 diabetes, and several practical recommendations based on their findings.
Dr. Neyman and colleagues first noted a lack of standardization in describing the various tiers of carbohydrate restriction; however, they offered some rough guidelines. Compared with a typical, balanced diet, which includes 45%-65% of calories from carbohydrates, a moderately restrictive diet includes 26%-44% of calories from carbohydrates, while a low-carb diet includes less than 26% of calories from carbs. Further down the scale, very low-carb diets and ketogenic diets call for 20-50 g of carbs per day or less than 20 g of carbs per day, respectively.
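The tiers described above mix two units: the broader categories are defined by percent of calories from carbohydrate, while the very low-carb and ketogenic tiers are defined in grams per day. A small sketch can make the thresholds concrete; the function name and the standard 4 kcal/g carbohydrate conversion are our assumptions, not from the report.

```python
def classify_carb_restriction(carb_g_per_day, total_kcal):
    """Rough classifier for the carbohydrate-restriction tiers described
    in the report. Assumes carbohydrate provides 4 kcal per gram."""
    pct = carb_g_per_day * 4 / total_kcal * 100  # percent of calories from carbs

    # Gram-based tiers take precedence (how the strictest diets are defined)
    if carb_g_per_day < 20:
        return "ketogenic"
    if carb_g_per_day <= 50:
        return "very low-carbohydrate"

    # Percent-of-calories tiers
    if pct < 26:
        return "low-carbohydrate"
    if pct <= 44:
        return "moderately restrictive"
    if pct <= 65:
        return "typical balanced"
    return "above typical range"

# On a 2,000 kcal diet: 100 g of carbs is only 20% of calories
print(classify_carb_restriction(100, 2000))  # low-carbohydrate
```

For a typical 2,000 kcal intake, the 26% threshold works out to roughly 130 g of carbohydrate per day, which helps explain why even a "low-carb" diet by this definition sits well above the ketogenic range.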
“There is evidence from adult studies that these diets can be associated with significant weight loss, reduction in insulin levels or insulin requirements, and improvement in glucose control,” the investigators noted. “Nevertheless, there is a lack of long-term safety and efficacy outcomes in youth.”
They went on to cite a range of safety concerns, including “growth deceleration, nutritional deficiencies, poor bone health, nutritional ketosis that cannot be distinguished from ketosis resulting from insulin deficiency, and disordered eating behaviors.”
“Body dissatisfaction associated with restrictive dieting practices places children and adolescents at risk for inadequate dietary intake, excessive weight gain resulting from binge-eating after restricting food intake, and use of harmful weight-control strategies,” the investigators wrote. “Moreover, restrictive dieting practices may negatively impact mental health and self-concept and are directly associated with decreased mood and increased feelings of anxiety.”
Until more evidence is available, Dr. Neyman and colleagues advised adherence to a balanced diet, including increased dietary fiber and reduced consumption of ultra-processed carbohydrates.
“Eliminating sugary beverages and juices significantly improves blood glucose and weight management in children and adolescents,” they noted.
For pediatric patients with type 1 diabetes, the investigators suggested that low-carb and very low-carb diets should only be pursued “under close diabetes care team supervision utilizing safety guidelines.”
Lack of evidence is the problem
David Ludwig, MD, PhD, codirector of the New Balance Foundation Obesity Prevention Center, Boston Children’s Hospital, and professor of pediatrics at Harvard Medical School, also in Boston, said the review is “rather general” and “reiterates common, although not always fair, concerns about carbohydrate restriction.”
“The main issue they highlight is the lack of evidence, especially from clinical trials, for a low-carbohydrate diet in children, as related to diabetes,” Dr. Ludwig said in a written comment, noting that this is indeed an issue. “However, what needs to be recognized is that a conventional high-carbohydrate diet has never been shown to be superior in adults or children for diabetes. Furthermore, whereas a poorly formulated low-carb diet may have adverse effects and risks (e.g., nutrient deficiencies), so can a high-carbohydrate diet – including an increase in triglycerides and other risk factors comprising metabolic syndrome.”
He said that the “main challenge in diabetes is to control blood glucose after eating,” and a high-carb meal makes this more difficult: it requires more insulin than a low-carb meal would and increases the risk of subsequent hypoglycemia.
For those interested in an alternative perspective to the AAP clinical report, Dr. Ludwig recommended two of his recent review articles, one published in the Journal of Nutrition and another in the Journal of Clinical Investigation. In both, he notes the long history of carbohydrate restriction for patients with diabetes, with usage dating back to the 1700s. Although the diet fell out of favor with the introduction of insulin, Dr. Ludwig believes it deserves reconsideration and is more than a passing fad.
“Preliminary research suggests that this dietary approach might transform clinical management and perhaps normalize HbA1c for many people with diabetes, at substantially reduced treatment costs,” Dr. Ludwig and colleagues wrote in the JCI review. “High-quality randomized controlled trials, with intensive support for behavior changes, will be needed to address this possibility and assess long-term safety and sustainability. With total medical costs of diabetes in the United States approaching $1 billion a day, this research must assume high priority.”
This clinical report was commissioned by the AAP. Dr. Ludwig received royalties for books that recommend a carbohydrate-modified diet.
This article was updated 9/20/23.
according to a new clinical report.
Citing a lack of high-quality data and potential for adverse effects with carbohydrate restriction among younger individuals, lead author Anna Neyman, MD, of Indiana University, Indianapolis, and colleagues suggested that pediatric patients with type 2 diabetes should focus on reducing nutrient-poor carbohydrate intake, while those with type 1 diabetes should only pursue broader carbohydrate restriction under close medical supervision.
“There are no guidelines for restricting dietary carbohydrate consumption to reduce risk for diabetes or improve diabetes outcomes in youth,” the investigators wrote in Pediatrics. “Thus, there is a need to provide practical recommendations for pediatricians regarding the use of low-carbohydrate diets in patients who elect to follow these diets, including those with type 1 diabetes and for patients with obesity, prediabetes, and type 2 diabetes.”
Their new report includes a summary of the various types of carbohydrate-restricted diets, a review of available evidence for these diets among pediatric patients with type 1 and type 2 diabetes, and several practical recommendations based on their findings.
Dr. Neyman and colleagues first noted a lack of standardization in describing the various tiers of carbohydrate restriction; however, they offered some rough guidelines. Compared with a typical, balanced diet, which includes 45%-65% of calories from carbohydrates, a moderately restrictive diet includes 26%-44% of calories from carbohydrates, while a low-carb diet includes less than 26% of calories from carbs. Further down the scale, very low-carb diets and ketogenic diets call for 20-50 g of carbs per day or less than 20 g of carbs per day, respectively.
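To put those percentage tiers in concrete terms, here is a quick arithmetic sketch. It assumes the standard conversion of 4 kcal per gram of carbohydrate and an illustrative 2,000-kcal daily intake; neither figure comes from the report itself.

```python
# Convert percent-of-calories carbohydrate tiers into grams per day.
# Assumptions (illustrative, not from the AAP report): 4 kcal per gram
# of carbohydrate and a 2,000-kcal reference diet.
KCAL_PER_G_CARB = 4

def carb_grams_per_day(total_kcal: float, carb_fraction: float) -> float:
    """Grams of carbohydrate per day implied by a calorie fraction."""
    return total_kcal * carb_fraction / KCAL_PER_G_CARB

# Lower bound of a "balanced" diet (45% of calories) vs. the low-carb
# threshold (26% of calories) on a 2,000-kcal diet:
balanced_floor = carb_grams_per_day(2000, 0.45)
low_carb_ceiling = carb_grams_per_day(2000, 0.26)
print(balanced_floor, low_carb_ceiling)
```

On these assumptions, "low carb" (under 26% of calories) means under roughly 130 g of carbohydrate per day, which sits consistently above the 20-50 g/day range the report gives for very low-carb diets.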
“There is evidence from adult studies that these diets can be associated with significant weight loss, reduction in insulin levels or insulin requirements, and improvement in glucose control,” the investigators noted. “Nevertheless, there is a lack of long-term safety and efficacy outcomes in youth.”
They went on to cite a range of safety concerns, including “growth deceleration, nutritional deficiencies, poor bone health, nutritional ketosis that cannot be distinguished from ketosis resulting from insulin deficiency, and disordered eating behaviors.”
“Body dissatisfaction associated with restrictive dieting practices places children and adolescents at risk for inadequate dietary intake, excessive weight gain resulting from binge-eating after restricting food intake, and use of harmful weight-control strategies,” the investigators wrote. “Moreover, restrictive dieting practices may negatively impact mental health and self-concept and are directly associated with decreased mood and increased feelings of anxiety.”
Until more evidence is available, Dr. Neyman and colleagues advised adherence to a balanced diet, including increased dietary fiber and reduced consumption of ultra-processed carbohydrates.
“Eliminating sugary beverages and juices significantly improves blood glucose and weight management in children and adolescents,” they noted.
For pediatric patients with type 1 diabetes, the investigators suggested that low-carb and very low-carb diets should only be pursued “under close diabetes care team supervision utilizing safety guidelines.”
Lack of evidence is the problem
David Ludwig, MD, PhD, codirector of the New Balance Foundation Obesity Prevention Center, Boston Children’s Hospital, and professor of pediatrics at Harvard Medical School, also in Boston, said the review is “rather general” and “reiterates common, although not always fair, concerns about carbohydrate restriction.”
“The main issue they highlight is the lack of evidence, especially from clinical trials, for a low-carbohydrate diet in children, as related to diabetes,” Dr. Ludwig said in a written comment, noting that this is indeed an issue. “However, what needs to be recognized is that a conventional high-carbohydrate diet has never been shown to be superior in adults or children for diabetes. Furthermore, whereas a poorly formulated low-carb diet may have adverse effects and risks (e.g., nutrient deficiencies), so can a high-carbohydrate diet – including an increase in triglycerides and other risk factors comprising metabolic syndrome.”
He said that the “main challenge in diabetes is to control blood glucose after eating,” and a high-carb meal makes this more difficult, as it requires more insulin than a low-carb meal would and increases the risk of subsequent hypoglycemia.
For those interested in an alternative perspective to the AAP clinical report, Dr. Ludwig recommended two of his recent review articles, one published in the Journal of Nutrition and another in the Journal of Clinical Investigation. In both, he notes the long history of carbohydrate restriction for patients with diabetes, with usage dating back to the 1700s. Although the diet fell out of favor with the introduction of insulin, Dr. Ludwig believes it deserves reconsideration and is more than a passing fad.
“Preliminary research suggests that this dietary approach might transform clinical management and perhaps normalize HbA1c for many people with diabetes, at substantially reduced treatment costs,” Dr. Ludwig and colleagues wrote in the JCI review. “High-quality randomized controlled trials, with intensive support for behavior changes, will be needed to address this possibility and assess long-term safety and sustainability. With total medical costs of diabetes in the United States approaching $1 billion a day, this research must assume high priority.”
This clinical report was commissioned by the AAP. Dr. Ludwig received royalties for books that recommend a carbohydrate-modified diet.
This article was updated 9/20/23.
FROM PEDIATRICS
How does lecanemab work in Alzheimer’s?
Lecanemab (Leqembi, Eisai), an amyloid-beta–directed antibody therapy, is approved by the Food and Drug Administration for the treatment of Alzheimer’s disease (AD). But exactly how the drug clears amyloid-beta wasn’t clear.
The investigators tested the effectiveness of various forms of amyloid-beta in activating the plasma contact system and found that amyloid-beta protofibrils, known to be the most toxic form of amyloid-beta, promoted the activation of this molecular cascade and that lecanemab inhibited pathway activation.
“In our study, we looked at lecanemab and found it can block the activation of the contact system, which could be one of the reasons that it works so well for AD,” study coinvestigator Erin Norris, PhD, research associate professor, Rockefeller University, New York, said in an interview.
The study was published online in the Proceedings of the National Academy of Sciences.
Unknown mechanism
“Many years ago, we started looking at the involvement of vascular dysfunction in AD,” Dr. Norris said. “We wanted to see whether or not irregular blood clotting or problems with blood flow was problematic in Alzheimer’s patients.”
The researchers found that fibrin, a major component involved in blood clotting, can extravasate into the brain.
“The blood-brain barrier can break down in Alzheimer’s, so things from the blood can move into the brain and deposit there,” she added. Fibrin then interacts with amyloid-beta, the major pathogenic protein in AD.
Dr. Norris explained that fibrin clots can form in two different ways. One is through the normal process that occurs when there’s an injury and bleeding. The second is through intrinsic clotting, which takes place through the contact system.
“We started looking into this system and found that the plasma of Alzheimer’s patients showed irregular levels of these enzymes and proteins that are part of the intrinsic clotting system compared to those of normal controls,” said Dr. Norris.
“This paper was an extension of years studying this pathway and these mechanisms. It was also inspired by the approval of lecanemab and its release for use in Alzheimer’s patients,” she added.
In previous research, the same researchers found that amyloid-beta has different forms. “It’s normally soluble, and it’s a very tiny molecule,” Dr. Norris said. “But over time, and in different situations, it can start to aggregate, becoming bigger and bigger.”
Implications beyond Alzheimer’s
Postmortem tissue analysis has found fibrillar plaques that are “clumped together.” These are insoluble and hard to get rid of, she said. “Protofibrils are the step before amyloid-beta forms fibrils and are considered to be the most toxic form, although the mechanism behind why it’s so toxic is not understood.”
Previous research has already shown that amyloid-beta can activate the contact system. The contact system has two “arms,” the first of which is involved with clotting, and the second with inflammation, Dr. Norris said. In fact, it’s the plasma contact system that links vascular and inflammatory pathways.
The plasma contact system leads to the clotting of fibrin, Dr. Norris continued. It activates factor XII, which leads to blood clotting by binding to coagulation factor XI.
The contact system also drives inflammation – the second “arm.” Bradykinin, a potent inflammatory molecule, is released through cleavage of high-molecular-weight kininogen (HK). In addition to inflammation, bradykinin can cause edema and blood-brain barrier permeability.
Although it’s been known that amyloid-beta can activate the contact system, the particular form of amyloid-beta implicated in this cascade has not been identified. And so, the researchers incubated amyloid-beta42 with human plasma, testing various types of amyloid-beta – monomers, oligomers, protofibrils, and fibrils – to see which would activate the contact system.
Amyloid-beta protofibrils promoted the activation of the contact system, as evidenced by several reactions, including activation of factor XII, while other forms of amyloid-beta did not. HK also “bound tightly” to amyloid-beta protofibrils, with “weaker” binding to other amyloid-beta species, the authors reported, confirming that amyloid-beta protofibrils bind to HK and factor XII.
Bradykinin levels were increased by amyloid-beta protofibrils, which also induced faster clotting, compared with other forms of amyloid-beta.
The researchers introduced lecanemab into the picture and found it “dramatically inhibited” contact system activation induced by amyloid-beta protofibrils. For example, it blocked the binding of factor XII to amyloid-beta. By contrast, human IgG (which the researchers used as a control) had no effect.
Additionally, lecanemab also prevented accelerated intrinsic coagulation in normal human plasma mediated by amyloid-beta protofibril.
Senior author Sidney Strickland, PhD, the Zachary and Elizabeth M. Fisher professor in Alzheimer’s and neurodegenerative disease, Rockefeller University, said in an interview: “One of the strong motivators for conducting this study was the fact that this drug, which is effective in AD, targets this specific form of amyloid-beta; but no one knows why it’s more toxic. We thought we could see if we could tie it to what we’re working on, and we found it ties in beautifully.”
The findings have implications that go beyond AD, Dr. Strickland said. “The contact system is implicated in lots of different pathologies, including sickle cell anemia, sepsis, inflammatory bowel disease, and so on.” Blocking the contact system might be a helpful approach in these conditions too.
Innovative, plausible, but still preliminary
In a comment, Heather M. Snyder, PhD, vice president of medical and scientific relations at the Alzheimer’s Association, called the investigation “innovative,” with ideas that are “certainly plausible.” However, “at this time, the work is preliminary and not conclusive.”
The hypothesized mechanisms for why amyloid (lecanemab’s target) is toxic to the brain “does incorporate important AD-related brain changes that have been observed in other studies, including inflammatory/immune changes and vascular-related changes,” said Dr. Snyder, who was not involved with the current study.
However, “additional studies that look both in model systems and in humans are needed to further illuminate these relationships,” Dr. Snyder said.
The study was supported by grants from the National Institutes of Health as well as the Robertson Therapeutic Development Fund, Samuel Newhouse Foundation, John A. Herrmann, and the May and Samuel Rudin Family Foundation. Dr. Norris, Dr. Strickland, and Dr. Snyder declared no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES
New evidence early treatment improves preclinical MS outcomes
TOPLINE:
Early treatment with teriflunomide reduces the risk of a first clinical demyelinating event in patients with radiologically isolated syndrome, the earliest detectable preclinical stage of multiple sclerosis, new research shows.
METHODOLOGY:
Early use of disease-modifying therapies (DMTs) is typically recommended for patients with established multiple sclerosis (MS), but mounting evidence, including the ARISE trial, which assessed dimethyl fumarate (Tecfidera), suggests these agents also benefit patients with radiologically isolated syndrome (RIS), the earliest detectable preclinical MS stage.
The new study, known as Teriflunomide in Radiologically Isolated Syndrome (TERIS), included 89 adult patients with RIS (mean age, 37.8 years) from centers in France, Switzerland, and Turkey. Participants were randomly assigned to receive placebo or teriflunomide 14 mg daily. Teriflunomide is an oral immunomodulator approved for treating relapsing-remitting MS.
Investigators performed MRI at baseline and at weeks 48, 96, and 144 and at any time during the study if warranted.
Researchers adjusted for potential confounders, including sex, age at RIS diagnosis, MS family history, brain T2-weighted hyperintense lesion volume, and presence of Gd+/− lesions.
The primary outcome was time to a first acute or progressive neurologic event resulting from central nervous system demyelination, expressed as a rate of conversion to clinical MS.
TAKEAWAY:
Eighteen participants – nine in each group – discontinued the study, resulting in a dropout rate of 20%.
The risk of a first clinical event was significantly reduced in the teriflunomide arm (mean time to event, 128.2 weeks; 8 clinical events: 6 acute, 2 progressive), compared with the placebo arm (mean time to event, 109.6 weeks; 20 clinical events: 18 acute, 2 progressive), with an adjusted hazard ratio of 0.28 (95% CI, 0.11-0.71; P = .007).
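A hazard ratio below 1 indicates a lower event rate in the treatment arm; the adjusted hazard ratio of 0.28 reported here corresponds to roughly a 72% relative reduction in the rate of a first clinical event. A minimal sketch of that arithmetic, using only the figures quoted above:

```python
# Relative risk reduction implied by the adjusted hazard ratio reported
# in TERIS (HR 0.28; 95% CI, 0.11-0.71). Values are copied from the text.
hr = 0.28
ci_low, ci_high = 0.11, 0.71

rrr = (1 - hr) * 100  # relative rate reduction, in percent
print(f"Relative reduction: {rrr:.0f}%")  # 72%
print(f"95% CI for the reduction: {(1 - ci_high) * 100:.0f}% to {(1 - ci_low) * 100:.0f}%")
```

Note that the confidence interval for the reduction simply inverts the ends of the hazard-ratio interval (the upper HR bound gives the smaller reduction).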
None of the secondary MRI measures, including the cumulative number of new and/or newly enlarging T2 lesions and the cumulative number of Gd+ lesions, reached statistical significance, although all were numerically lower in the teriflunomide arm, possibly because participants with early events switched to the treatment arm.
The most common adverse events that occurred more often in patients treated with teriflunomide were gastrointestinal disorders (11.4%), dysmenorrhea (9.1%), benign respiratory infections (6.8%), general disorders/conditions (6.8%), and transient increase of transaminases (4.5%).
IN PRACTICE:
“These results suggest that for the first time, we may have an opportunity to better identify those at risk for a primary progressive clinical course at this preclinical stage and prevent or delay clinical progression from the onset, which is a clear unmet need in MS clinical practice,” wrote the authors.
SOURCE:
The study was carried out by Christine Lebrun-Frénay, MD, PhD, head of the inflammatory neurological disorders clinical research unit and MS center at the University of Nice (France). It was published online in JAMA Neurology.
LIMITATIONS:
The investigators could not stratify at-risk subgroups according to risk factors for developing MS, mainly because of power issues. The study was prematurely discontinued by its financial sponsor (Sanofi), owing primarily to slow enrollment that resulted from national regulations on activating recruitment sites and the impact of the COVID-19 pandemic. Another challenge for the study was that some individuals with RIS had already been exposed to a DMT or hesitated to participate in a clinical trial. The financial sponsor, which provided the study drug and placebo tablets, terminated their availability, given the anticipated release of generic teriflunomide.
DISCLOSURES:
The study was supported by Sanofi, the University Hospital of Nice, University Cote d’Azur, and the Radiologically Isolated Syndrome Consortium. Lebrun-Frénay has no relevant conflicts of interest.
A version of this article first appeared on Medscape.com.
FROM JAMA NEUROLOGY
Blood transfusions linked to intracerebral hemorrhage risk
In an exploratory analysis, patients who received red blood cell transfusions from donors who later developed multiple spontaneous intracerebral hemorrhages (ICHs), and who were therefore assumed to have cerebral amyloid angiopathy (CAA), were themselves at significantly increased risk of developing spontaneous ICH.
“This may suggest a transfusion-transmissible agent associated with some types of spontaneous ICH, although the findings may be susceptible to selection bias and residual confounding, and further research is needed to investigate if transfusion transmission of CAA might explain this association,” the investigators noted.
“We do not think that the findings motivate a change in practice, and we should not let these results discourage otherwise indicated blood transfusion,” said lead author Jingcheng Zhao, MD, PhD, with Karolinska University Hospital Solna, Stockholm.
The study was published online in the Journal of the American Medical Association.
Novel finding
Recent evidence suggests that CAA exhibits “prion-like” transmissivity, with reports of transmission through cadaveric pituitary hormone contaminated with amyloid-beta and tau protein, dura mater grafts, and possibly neurosurgical instruments.
CAA, which is characterized by the deposition of amyloid protein in the brain, is the second most common cause of spontaneous ICH.
The researchers hypothesized that transfusion transmission of CAA may manifest through an increased risk for spontaneous ICH among transfusion recipients given blood from a donor with spontaneous ICH. To explore this hypothesis, they analyzed national registry data from Sweden and Denmark for ICH in recipients of red blood cell transfusion from donors who themselves had ICH over the years after their blood donations, with the assumption that donors with two or more ICHs would likely have CAA.
The cohort included nearly 760,000 individuals in Sweden (median age, 65 years; 59% women) and 330,000 in Denmark (median age, 64 years; 58% women), with a median follow-up of 5.8 and 6.1 years, respectively.
Receiving red blood cell transfusions from donors who later developed multiple spontaneous ICHs was associated with a greater than twofold increased risk of developing spontaneous ICH, compared with receiving a transfusion from donors without subsequent ICH (hazard ratio, 2.73; P < .001 in the Swedish cohort and HR, 2.32; P = .04 in the Danish cohort).
“The observed increased risk of spontaneous ICH associated with receiving a red blood cell transfusion from a donor who later developed multiple spontaneous ICHs, corresponding to a 30-year cumulative incidence difference of 2.3%, is a novel finding,” the researchers wrote.
There was no increase in post-transfusion ICH risk among recipients whose donors had a single post–blood-donation ICH.
The findings were robust to several of the sensitivity analyses.
A “negative” control analysis of post-transfusion ischemic stroke (instead of ICH) found no increased risk among recipients of blood from donors who had single or multiple ICHs.
This study provides “exploratory evidence of possible transfusion-transmission of a factor that causes ICHs, but more research is needed to confirm and to understand the mechanism,” said Dr. Zhao.
The researchers noted that they did not directly assess CAA but expect it would be more common among donors who develop multiple spontaneous ICHs, “as CAA-related ICH has been reported to have a 7-fold increase for recurrent ICHs, compared with non–CAA-related ICH.”
Worrisome finding or false alarm?
In an accompanying editorial, Steven Greenberg, MD, PhD, with the department of neurology, Harvard Medical School, Boston, said there are “good reasons to treat the possibility of CAA transmission via blood transfusion seriously – and good reasons to remain skeptical, at least for the present.”
“Powerful” arguments in support of the findings include the robust study methodology and the “striking” similarity in results from the two registries, which argues against a chance finding. Another is the negative control with ischemic stroke as the outcome, which argues against unsuspected confounding-causing associations with all types of stroke, Dr. Greenberg noted.
Arguments for remaining “unconvinced” of the association center on the weakness of evidence for a plausible biological mechanism for the finding, he points out. Another is the short time course of ICHs after blood transfusion, which is “quite challenging to explain,” Dr. Greenberg said. Nearly half of the ICHs among blood recipients occurred within 5 years of transfusion, which is “dramatically” faster than the 30- to 40-year interval reported between neurosurgical exposure to cadaveric tissue and first ICH, he added.
Another related “mechanistic reservation” is the plausibility that a transmissible species of amyloid-beta could travel from blood to brain in sufficient quantities to trigger advanced CAA or Alzheimer disease pathology, he wrote.
He added the current study leaves him “squarely at the corner of anxiety and skepticism.”
With more than 10 million units of blood transfused in the United States each year, even a modest increase in risk for future brain hemorrhages or dementia conferred by “an uncommon – but as of now undetectable – donor trait would represent a substantial public health concern,” Dr. Greenberg wrote.
“From the standpoint of scientific plausibility, however, even this well-conducted analysis is at risk of representing a false alarm,” he cautioned.
Looking ahead, Dr. Greenberg said one clear direction is independent replication, ideally with datasets in which donor and recipient dementia can be reliably ascertained to assess the possibility of Alzheimer’s disease as well as CAA transmissibility.
“The other challenge is for experimental biologists to consider the alternative possibility of transfusion-related acceleration of downstream steps in the CAA-ICH pathway, such as the vessel remodeling by which amyloid beta–laden vessels proceed to rupture and bleed.”
“The current study is not yet a reason for alarm, certainly not a reason to avoid otherwise indicated blood transfusion, but it is a strong call for more scientific digging,” Dr. Greenberg concluded.
The study was funded by grants from the Karolinska Institute, the Swedish Research Council, and Region Stockholm. Dr. Zhao and Dr. Greenberg report no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM JAMA
Sedentary lifestyle tied to increased dementia risk
The study of nearly 50,000 adults in the UK Biobank shows that dementia risk increased 8% with 10 hours of sedentary time and 63% with 12 hours. That’s particularly concerning because Americans spend an average of 9.5 hours a day sitting.
Sleep wasn’t factored into the sedentary time and how someone accumulated the 10 hours – either in one continuous block or broken up throughout the day – was irrelevant.
“Our analysis cannot determine whether there is a causal link, so prescriptive conclusions are not really possible. However, I think it is very reasonable to conclude that sitting less and moving more may help reduce risk of dementia,” lead investigator David Raichlen, PhD, professor of biological sciences and anthropology, University of Southern California, Los Angeles, said in an interview.
The findings were published online in JAMA.
A surprising find?
The study is a retrospective analysis of prospectively collected data from the UK Biobank of 49,841 adults aged 60 years or older who wore an accelerometer on their wrists 24 hours a day for a week. Participants had no history of dementia when they wore the movement monitoring device.
Investigators used machine learning to determine sedentary time based on readings from the accelerometers. Sleep was not counted as sedentary behavior.
Over a mean follow-up of 6.72 years, 414 participants were diagnosed with dementia.
Investigators found that dementia risk rose by 8% at 10 hours a day (adjusted hazard ratio, 1.08; P < .001) and 63% at 12 hours a day (aHR, 1.63; P < .001), compared with 9.27 hours a day. Those who logged 15 hours of sedentary behavior a day had more than triple the dementia risk (aHR, 3.21; P < .001).
Although previous studies had found that breaking up sedentary periods with short bursts of activity helps offset some negative health effects of sitting, that wasn’t the case here. Dementia risk was elevated whether participants were sedentary for 10 uninterrupted hours or for multiple sedentary periods totaling 10 hours over the whole day.
“This was surprising,” Dr. Raichlen said. “We expected to find that patterns of sedentary behavior would play a role in risk of dementia, but once you take into account the daily volume of time spent sedentary, how you get there doesn’t seem to matter as much.”
The study did not examine how participants spent sedentary time, but an earlier study by Dr. Raichlen found that watching TV was associated with a greater risk of dementia in older adults, compared with working on a computer.
More research welcome
Dr. Raichlen noted that the number of dementia cases in the study was low and that the measure of sedentary behavior was based on a single week of accelerometer readings. A longitudinal study is needed to determine whether the findings hold over a longer period.
In a comment, Claire Sexton, DPhil, senior director of scientific programs and outreach for the Alzheimer’s Association, said that earlier studies reported an association between sedentary time and dementia, so these results aren’t “particularly surprising.”
“However, reports that did not find an association have also been published, so additional research on possible associations is welcome,” she said.
It’s also important to note that this observational study doesn’t establish a causal relationship between inactivity and cognitive function, which Dr. Sexton said means the influence of other dementia risk factors that are also exacerbated by sedentary behavior can’t be ruled out.
“Although results remained significant after adjusting for several of these factors, further research is required to better understand the various elements that may influence the observed relationship,” noted Dr. Sexton, who was not part of the study. “Reverse causality – that changes in the brain related to dementia are causing the sedentary behavior – cannot be ruled out.”
The study was funded by the National Institutes of Health, the state of Arizona, the Arizona Department of Health Services, and the McKnight Brain Research Foundation. Dr. Raichlen and Dr. Sexton report no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM JAMA
Abdominal fat linked to lower brain volume in midlife
In a large study of healthy middle-aged adults, greater visceral and subcutaneous abdominal fat on abdominal MRI predicted brain atrophy on imaging, especially in women.
“The study shows that excess fat is bad for the brain and worse in women, including in Alzheimer’s disease risk regions,” lead author Cyrus Raji, MD, PhD, with the Mallinckrodt Institute of Radiology, Washington University, St. Louis, Mo., said in an interview.
The study was published online in the journal Aging and Disease.
Modifiable risk factor
Multiple studies have suggested a connection between body fat accumulation and increased dementia risk. But few have examined the relationship between types of fat (visceral and subcutaneous) and brain volume.
For the new study, 10,000 healthy adults aged 20-80 years (mean age, 52.9 years; 53% men) underwent a short whole-body MRI protocol. Regression analyses evaluated the relationships between abdominal fat types and normalized brain volumes, controlling for age and sex.
The research team found that higher amounts of both visceral and subcutaneous abdominal fat predicted lower total gray and white matter volume, as well as lower volume in the hippocampus, frontal cortex, and temporal, parietal, and occipital lobes.
“The findings are quite dramatic,” Dr. Raji told this news organization. “Overall, we found that both subcutaneous and visceral fat has similar levels of negative relationships with brain volumes.”
Women had a higher burden of brain atrophy with increased visceral fat than men. However, it’s difficult to place the sex differences in context because of the lack of prior work specifically investigating visceral fat, brain volume loss, and sex differences, the researchers caution.
They also note that while statistically significant relationships were observed between visceral fat levels and gray matter volume changes, their effect sizes were generally small.
“Thus, the statistical significance of this work is influenced by the large sample size and less so by large effect size in any given set of regions,” the investigators write.
Other limitations include the cross-sectional nature of the study, which precludes conclusions about causality. The analysis also did not account for other lifestyle factors such as physical activity, diet, and genetic variables.
The researchers call for further investigation “to better elucidate underlying mechanisms and discover possible interventions targeting abdominal fat reduction as a strategy to maintain brain health.”
‘Helpful addition to the literature’
In a comment, Claire Sexton, DPhil, Alzheimer’s Association senior director of scientific programs and outreach, noted that “previous studies have linked obesity with cognitive decline and increased risk of dementia. Rather than using BMI as a proxy for body fat, the current study examined visceral and subcutaneous fat directly using imaging techniques.”
Dr. Sexton, who was not associated with this study, said the finding that increased body fat was associated with reduced brain volumes suggests “a possible mechanism to explain the previously reported associations between obesity and cognition.”
“Though some degree of atrophy and brain shrinkage is common with old age, awareness of this association is important because reduced brain volume may be associated with problems with thinking, memory, and performing everyday tasks, and because rates of obesity continue to rise in the United States, along with obesity-related conditions including heart disease, stroke, type 2 diabetes and certain types of cancer,” she added.
“While a helpful addition to the literature, the study does have important limitations. As an observational study, it cannot establish whether higher levels of body fat directly causes reduced brain volumes,” Dr. Sexton cautioned.
In addition, the study did not take into account important related factors like physical activity and diet, which may influence any relationship between body fat and brain volumes, she noted. “Overall, it is not just one factor that is important to consider when considering risk for cognitive decline and dementia, but multiple factors.
“Obesity and the location of body fat must be considered in combination with one’s total lived experience and habits, including physical activity, education, head injury, sleep, mental health, and the health of your heart/cardiovascular system and other key bodily systems,” Dr. Sexton said.
The Alzheimer’s Association is leading a 2-year clinical trial known as U.S. POINTER to see whether combining physical activity, healthy nutrition, social and intellectual challenges, and improved self-management of medical conditions can protect cognitive function in older adults who are at increased risk for cognitive decline.
This work was supported in part by Providence St. Joseph Health in Seattle; Saint John’s Health Center Foundation; Pacific Neuroscience Institute and Foundation; Will and Cary Singleton; and the McLoughlin family. Dr. Raji is a consultant for Brainreader, Apollo Health, Voxelwise, Neurevolution, Pacific Neuroscience Institute Foundation, and Icometrix. Dr. Sexton reports no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM AGING AND DISEASE