A Common Pancreatic Condition That Few Have Heard Of


The most common pathology affecting the pancreas is excess intra-pancreatic fat deposition (IPFD), often called fatty pancreas disease (FPD) — a disorder experienced by roughly one fifth of the world’s population. Although it is more common than type 2 diabetes, pancreatitis, and pancreatic cancer combined, it has remained relatively obscure.

By contrast, fatty liver — once called nonalcoholic fatty liver disease and recently renamed metabolic dysfunction–associated steatotic liver disease (MASLD) — is well-known.

“When it comes to diseases of the liver and pancreas, the liver is the big brother that has gotten all the attention, while the pancreas is the neglected little stepbrother that’s not sufficiently profiled in most medical textbooks and gets very little attention,” Max Petrov, MD, MPH, PhD, professor of pancreatology, University of Auckland, New Zealand, said in an interview. “The phenomenon of fatty pancreas has been observed for decades, but it is underappreciated and underrecognized.”

 


As early as 1926, fat depositions were identified during autopsies, but the condition remained relatively unknown, Mohammad Bilal, MD, associate professor of medicine-gastroenterology, University of Colorado Anschutz Medical Campus, Aurora, said in an interview. “Fortunately, FPD has recently been receiving more focus.”

Generally, healthy individuals have small amounts of fat in their pancreas. IPFD is defined as “the diffuse presence of fat in the pancreas, measured on a continuous scale,” and FPD refers to IPFD above the upper limit of normal. There is no clear consensus on where that upper limit lies; studies have placed it anywhere from 1.8% to 10.4% pancreatic fat content.
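Because IPFD is measured on a continuous scale against a study-dependent cutoff, the definitions above can be sketched in a few lines of code. This is a minimal illustration only; the function name is hypothetical, and the cutoff must come from whichever study or imaging protocol is being applied:

```python
def classify_ipfd(fat_fraction_pct: float, upper_limit_pct: float) -> str:
    """Map a continuous pancreatic fat fraction (%) to a category.

    upper_limit_pct is study-dependent: reported cutoffs for the
    upper limit of normal range from 1.8% to 10.4%.
    """
    if fat_fraction_pct < 0:
        raise ValueError("fat fraction cannot be negative")
    if fat_fraction_pct <= upper_limit_pct:
        return "normal IPFD"
    return "FPD (IPFD above upper limit of normal)"


# A 12% fat fraction exceeds even the highest reported cutoff.
print(classify_ipfd(12.0, 10.4))
```

Note that with the cutoff ambiguity described above, a patient with, say, 5% pancreatic fat would be classified as normal under one study's threshold and as having FPD under another's, which is one reason the condition remains underrecognized.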

FPD’s “most important implication is that it can be a precursor for more challenging and burdensome diseases of the pancreas,” Petrov said.

Fatty changes in the pancreas affect both its endocrine and exocrine systems. FPD is associated with type 2 diabetes, the most common disease of the endocrine pancreas, as well as pancreatitis and pancreatic cancer, the most common diseases of the exocrine pancreas. It’s also implicated in the development of carotid atherosclerosis, pancreatic fistula following surgery, and exocrine pancreatic insufficiency (EPI).

 

A ‘Pandora’s Box’

Up to half of people with fatty pancreas are lean. The condition is not merely an overflow of fat from the liver into the pancreas in people who consume more calories than they burn, Petrov said. Robust postmortem and biopsy studies have found no statistically significant association between fat deposition in the pancreas and fat in the liver.

Compared with the way people accumulate liver fat, the development of FPD is more complex, Petrov said.

“Hepatic fat is a relatively simple process: Lipid droplets accumulate in the hepatocytes; but, in the pancreas, there are several ways by which fat may accumulate,” he said.

One relates to the location of the pancreas within visceral, retroperitoneal fat, Petrov said. That fat can migrate and build up between pancreatic lobules.

Fat also can accumulate inside the lobules. This process can involve a buildup of fat droplets in acinar and stellate cells on the exocrine side and in the islets of Langerhans on the endocrine side. Additionally, when functional pancreatic cells die, particularly acinar cells, adult stem cells may replace them with adipocytes. Transformation of acinar cells into fat cells — a process called acinar-to-adipocyte transdifferentiation — may be another way fat accumulates inside the lobules, Petrov said.

The accumulation of fat is a response to a wide array of insults to the pancreas over time. For example, obesity and metabolic syndrome lead to the accumulation of adipocytes and fat infiltration, whereas alcohol abuse and viral infections may lead to the death of acinar cells, which produce digestive enzymes.

Ultimately, the negative changes produced by excess fat in the pancreas are the origin of all common noninherited pancreatic diseases, bringing them under one umbrella, Petrov maintained. He dubbed this hypothesis PANcreatic Diseases Originating from intRapancreatic fAt (PANDORA).

The type of cells involved has implications for which disease may arise. For example, fat infiltration in stellate cells may promote pancreatic cancer, whereas its accumulation in the islets of Langerhans, which produce insulin and glucagon, is associated with type 2 diabetes.

The PANDORA hypothesis has eight foundational principles:

  • Fatty pancreas is a key driver of pancreatic diseases in most people.
  • Inflammation within the pancreatic microenvironment results from overwhelming lipotoxicity fueled by fatty pancreas.
  • Aberrant communication between acinar cells involving lipid droplets drives acute pancreatitis.
  • The pancreas responds to lipotoxicity with fibrosis and calcification — the hallmarks of chronic pancreatitis.
  • Fat deposition affects signaling between stellate cells and other components of the microenvironment in ways that raise the risk for pancreatic cancer.
  • The development of diabetes of the exocrine pancreas and EPI is affected by the presence of fatty pancreas.
  • The higher risk for pancreatic disease in older adults is influenced by fatty pancreas.
  • The multipronged nature of intrapancreatic fat deposition accounts for the common development of one pancreatic disease after another.

The idea that all common pancreatic diseases are the result of pathways emanating from FPD could “explain the bidirectional relationship between diabetes and pancreatitis or pancreatic cancer,” Petrov said.

 

Risk Factors, Symptoms, and Diagnosis

A variety of risk factors are involved in the accumulation of fat that may lead to pancreatic diseases, including aging, cholelithiasis, dyslipidemia, drugs/toxins (eg, steroids), genetic predisposition, iron overload, diet (eg, fatty foods, ultraprocessed foods), heavy alcohol use, overweight/obesity, pancreatic duct obstruction, tobacco use, viral infection (eg, hepatitis B, COVID-19), severe malnutrition, prediabetes, and dysglycemia.

Petrov described FPD as a “silent disease” that’s often asymptomatic, with its presence emerging as an incidental finding during abdominal ultrasonography for other reasons. However, patients may sometimes experience stomach pain or nausea if they have concurrent diseases of the pancreas, he said.

There are no currently available lab tests that can definitively detect the presence of FPD. Rather, the gold standard for a noninvasive diagnosis of FPD is MRI, with CT as the second-best choice, Petrov said.

In countries where advanced imaging is not available, a low-cost alternative might be a simple abdominal ultrasound, but it is not definitive, he said. “It’s operator-dependent and can be subjective.”

Some risk factors, such as derangements of glucose and lipid metabolism, especially in the presence of heavy alcohol use and a high-fat diet, can “be detected on lab tests,” Petrov said. “This, in combination with the abdominal ultrasound, might suggest the patients will benefit from deeper investigation, including MRI.”

Because the exocrine pancreas helps with digestion of fatty food, intralobular fatty deposits or replacement of pancreatic exocrine cells with adipose cells can lead to steatorrhea, Bilal said.

“Fat within the stool or oily diarrhea is a clue to the presence of FPD,” Bilal said.

Although this symptom isn’t unique to FPD and is found in other types of pancreatic conditions, its presence suggests that further investigation for FPD is warranted, he added.

 

Common-Sense Treatment Approaches

At present, there are no US Food and Drug Administration–approved treatments for FPD, Petrov said.

“What might be recommended is something along the lines of treatment of MASLD — appropriate diet and physical activity,” he said. Petrov hopes that as the disease entity garners more research attention, more clinical drug trials will be initiated and new medications will be found and approved.

Petrov suggested that there could be a “theoretical rationale” for the use of glucagon-like peptide 1 receptor agonists (GLP-1 RAs) as a treatment, given their effectiveness in multiple conditions, including MASLD, but no human trials have robustly shown specific benefits of these drugs for FPD.

Petrov added that, to date, 12 classes of drugs have been investigated for reducing IPFD: biguanides, sulfonylureas, GLP-1 RAs, thiazolidinediones, dipeptidyl peptidase–4 (DPP-4) inhibitors, sodium-glucose cotransporter 2 inhibitors, statins, fibrates, pancreatic lipase inhibitors, angiotensin II receptor blockers, somatostatin receptor agonists, and antioxidants.

Of these, most have shown promise in preclinical animal models. But only thiazolidinediones, GLP-1 RAs, DPP-4 inhibitors, and somatostatin receptor agonists have been investigated in randomized controlled trials in humans. The findings have been inconsistent, with the active treatment often not achieving statistically significant improvements.

“At this stage of our knowledge, we can’t recommend a specific pharmacotherapy,” Petrov said. Clinicians can, however, suggest lifestyle changes: reducing saturated fat, alcohol, and ultraprocessed food; quitting smoking; exercising; and addressing obesity and other drivers of metabolic disease.

Bilal, who is also a spokesperson for the American Gastroenterological Association (AGA), suggested that pancreatic enzyme replacement therapy, often used to treat EPI, may relieve some symptoms of FPD, such as diarrhea.

Bariatric surgery has also shown promise for FPD: It can decrease body mass, potentially reduce fat in the pancreas, and improve metabolic disease and hyperlipidemia. One study showed that it significantly decreased IPFD, fatty acid uptake, and blood flow, and that these improvements were associated with more favorable glucose homeostasis and beta-cell function.

However, bariatric surgery is only appropriate for certain patients; is associated with potentially adverse sequelae including malnutrition, anemia, and digestive tract stenosis; and is currently not indicated for FPD.

Bilal advises clinicians to “keep an eye on FPD” if it’s detected incidentally and to screen patients more carefully for MASLD, metabolic disease, and diabetes.

“Although there are no consensus guidelines and recommendations for managing FPD at present, these common-sense approaches will benefit the patient’s overall health and hopefully will have a beneficial impact on pancreatic health as well,” he said.

Petrov reported no relevant financial relationships. Bilal reported being a consultant for Boston Scientific, Steris Endoscopy, and Cook Medical.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

The most common pathology affecting the pancreas is excess intra-pancreatic fat deposition (IPFD), often called fatty pancreas disease (FPD) — a disorder experienced by roughly one fifth of the world’s population. Although it is more common than type 2 diabetes, pancreatitis, and pancreatic cancer combined, it has remained relatively obscure.

By contrast, fatty liver — once called nonalcoholic fatty liver disease and recently renamed metabolic dysfunction–associated steatotic liver disease (MASLD) — is well-known.

“When it comes to diseases of the liver and pancreas, the liver is the big brother that has gotten all the attention, while the pancreas is the neglected little stepbrother that’s not sufficiently profiled in most medical textbooks and gets very little attention,” Max Petrov, MD, MPH, PhD, professor of pancreatology, University of Auckland, New Zealand, said in an interview. “The phenomenon of fatty pancreas has been observed for decades, but it is underappreciated and underrecognized.”

 

Dr. Mohammad Bilal

As early as 1926, fat depositions were identified during autopsies, but the condition remained relatively unknown, Mohammad Bilal, MD, associate professor of medicine-gastroenterology, University of Colorado Anschutz Medical Campus, Aurora, said in an interview. “Fortunately, FPD has recently been receiving more focus.”

Generally, healthy individuals have small amounts of fat in their pancreas. IPFD is defined as “the diffuse presence of fat in the pancreas, measured on a continuous scale,” and FPD refers to IPFD above the upper limit of normal. While there is no clear consensus as to what the normal range is, studies suggest it’s a pancreatic fat content ranging from 1.8% to 10.4%.

FPD’s “most important implication is that it can be a precursor for more challenging and burdensome diseases of the pancreas,” Petrov said.

Fatty changes in the pancreas affect both its endocrine and exocrine systems. FPD is associated with type 2 diabetes, the most common disease of the endocrine pancreas, as well as pancreatitis and pancreatic cancer, the most common diseases of the exocrine pancreas. It’s also implicated in the development of carotid atherosclerosis, pancreatic fistula following surgery, and exocrine pancreatic insufficiency (EPI).

 

A ‘Pandora’s Box’

Up to half of people with fatty pancreas are lean. The condition isn’t merely caused by an overflow of fat from the liver into the pancreas in people who consume more calories than they burn, Petrov said. Neither robust postmortem nor biopsy studies have found a statistically significant association between fatty deposition in the pancreas and liver fat.

Compared with the way people accumulate liver fat, the development of FPD is more complex, Petrov said.

“Hepatic fat is a relatively simple process: Lipid droplets accumulate in the hepatocytes; but, in the pancreas, there are several ways by which fat may accumulate,” he said.

One relates to the location of the pancreas within visceral, retroperitoneal fat, Petrov said. That fat can migrate and build up between pancreatic lobules.

Fat also can accumulate inside the lobes. This process can involve a buildup of fat droplets in acinar and stellate cells on the exocrine side and in the islets of Langerhans on the endocrine side. Additionally, when functional pancreatic cells die, particularly acinar cells, adult stem cells may replace them with adipocytes. Transformation of acinar cells into fat cells — a process called acinar-to-adipocyte transdifferentiation — also may be a way fat accumulates inside the lobes, Petrov said.

The accumulation of fat is a response to a wide array of insults to the pancreas over time. For example, obesity and metabolic syndrome lead to the accumulation of adipocytes and fat infiltration, whereas alcohol abuse and viral infections may lead to the death of acinar cells, which produce digestive enzymes.

Ultimately, the negative changes produced by excess fat in the pancreas are the origin of all common noninherited pancreatic diseases, bringing them under one umbrella, Petrov maintained. He dubbed this hypothesis PANcreatic Diseases Originating from intRapancreatic fAt (PANDORA).

The type of cells involved has implications for which disease may arise. For example, fat infiltration in stellate cells may promote pancreatic cancer, whereas its accumulation in the islets of Langerhans, which produce insulin and glucagon, is associated with type 2 diabetes.

The PANDORA hypothesis has eight foundational principles:

  • Fatty pancreas is a key driver of pancreatic diseases in most people.
  • Inflammation within the pancreatic microenvironment results from overwhelming lipotoxicity fueled by fatty pancreas.
  • Aberrant communication between acinar cells involving lipid droplets drives acute pancreatitis.
  • The pancreas responds to lipotoxicity with fibrosis and calcification — the hallmarks of chronic pancreatitis.
  • Fat deposition affects signaling between stellate cells and other components of the microenvironment in ways that raise the risk for pancreatic cancer.
  • The development of diabetes of the exocrine pancreas and EPI is affected by the presence of fatty pancreas.
  • The higher risk for pancreatic disease in older adults is influenced by fatty pancreas.
  • The multipronged nature of intrapancreatic fat deposition accounts for the common development of one pancreatic disease after another.

The idea that all common pancreatic diseases are the result of pathways emanating from FPD could “explain the bidirectional relationship between diabetes and pancreatitis or pancreatic cancer,” Petrov said.

 

Risk Factors, Symptoms, and Diagnosis

A variety of risk factors are involved in the accumulation of fat that may lead to pancreatic diseases, including aging, cholelithiasis, dyslipidemia, drugs/toxins (eg, steroids), genetic predisposition, iron overload, diet (eg, fatty foods, ultraprocessed foods), heavy alcohol use, overweight/obesity, pancreatic duct obstruction, tobacco use, viral infection (eg, hepatitis B, COVID-19), severe malnutrition, prediabetes, and dysglycemia.

Petrov described FPD as a “silent disease” that’s often asymptomatic, with its presence emerging as an incidental finding during abdominal ultrasonography for other reasons. However, patients may sometimes experience stomach pain or nausea if they have concurrent diseases of the pancreas, he said.

There are no currently available lab tests that can definitively detect the presence of FPD. Rather, the gold standard for a noninvasive diagnosis of FPD is MRI, with CT as the second-best choice, Petrov said.

In countries where advanced imaging is not available, a low-cost alternative might be a simple abdominal ultrasound, but it is not definitive, he said. “It’s operator-dependent and can be subjective.”

Some risk factors, such as derangements of glucose and lipid metabolism, especially in the presence of heavy alcohol use and a high-fat diet, can “be detected on lab tests,” Petrov said. “This, in combination with the abdominal ultrasound, might suggest the patients will benefit from deeper investigation, including MRI.”

Because the exocrine pancreas helps with digestion of fatty food, intralobular fatty deposits or replacement of pancreatic exocrine cells with adipose cells can lead to steatorrhea, Bilal said.

“Fat within the stool or oily diarrhea is a clue to the presence of FPD,” Bilal said.

Although this symptom isn’t unique to FPD and is found in other types of pancreatic conditions, its presence suggests that further investigation for FPD is warranted, he added.

 

Common-Sense Treatment Approaches

At present, there are no US Food and Drug Administration–approved treatments for FPD, Petrov said.

“What might be recommended is something along the lines of treatment of MASLD — appropriate diet and physical activity,” he said. Petrov hopes that as the disease entity garners more research attention, more clinical drug trials will be initiated, and new medications are found and approved.

Petrov suggested that there could be a “theoretical rationale” for the use of glucagon-like peptide 1 receptor agonists (GLP-1 RAs) as a treatment, given their effectiveness in multiple conditions, including MASLD, but no human trials have robustly shown specific benefits of these drugs for FPD.

Petrov added that, to date, 12 classes of drugs have been investigated for reducing IPFD: biguanides, sulfonylureas, GLP-1 RAs, thiazolidinediones, dipeptidyl peptidase–4 (DPP-4) inhibitors, sodium-glucose cotransporter 2 inhibitors, statins, fibrates, pancreatic lipase inhibitors, angiotensin II receptor blockers, somatostatin receptor agonists, and antioxidants.

Of these, most have shown promise in preclinical animal models. But only thiazolidinediones, GLP-1 RAs, DPP-4 inhibitors, and somatostatin receptor agonists have been investigated in randomized controlled trials in humans. The findings have been inconsistent, with the active treatment often not achieving statistically significant improvements.

“At this stage of our knowledge, we can’t recommend a specific pharmacotherapy,” Petrov said. But we can suggest dietary changes, such as saturated fat reduction, alcohol reduction, smoking cessation, reduction in consumption of ultraprocessed food, physical exercise, and addressing obesity and other drivers of metabolic disease.

Bilal, who is also a spokesperson for AGA, suggested that pancreatic enzyme replacement therapy, often used to treat pancreatic EPI, may treat some symptoms of FPD such as diarrhea.

Bariatric surgery has shown promise for FPD, in that it can decrease the patient’s body mass and potentially reduce the fat in the pancreas as well as it can improve metabolic diseases and hyperlipidemia. One study showed that it significantly decreased IPFD, fatty acid uptake, and blood flow, and these improvements were associated with more favorable glucose homeostasis and beta-cell function.

However, bariatric surgery is only appropriate for certain patients; is associated with potentially adverse sequelae including malnutrition, anemia, and digestive tract stenosis; and is currently not indicated for FPD.

Bilal advises clinicians to “keep an eye on FPD” if it’s detected incidentally and to screen patients more carefully for MASLD, metabolic disease, and diabetes.

“Although there are no consensus guidelines and recommendations for managing FPD at present, these common-sense approaches will benefit the patient’s overall health and hopefully will have a beneficial impact on pancreatic health as well,” he said.

Petrov reported no relevant financial relationships. Bilal reported being a consultant for Boston Scientific, Steris Endoscopy, and Cook Medical.

A version of this article first appeared on Medscape.com.

The most common pathology affecting the pancreas is excess intra-pancreatic fat deposition (IPFD), often called fatty pancreas disease (FPD) — a disorder experienced by roughly one fifth of the world’s population. Although it is more common than type 2 diabetes, pancreatitis, and pancreatic cancer combined, it has remained relatively obscure.

By contrast, fatty liver — once called nonalcoholic fatty liver disease and recently renamed metabolic dysfunction–associated steatotic liver disease (MASLD) — is well-known.

“When it comes to diseases of the liver and pancreas, the liver is the big brother that has gotten all the attention, while the pancreas is the neglected little stepbrother that’s not sufficiently profiled in most medical textbooks and gets very little attention,” Max Petrov, MD, MPH, PhD, professor of pancreatology, University of Auckland, New Zealand, said in an interview. “The phenomenon of fatty pancreas has been observed for decades, but it is underappreciated and underrecognized.”

 

Dr. Mohammad Bilal

As early as 1926, fat depositions were identified during autopsies, but the condition remained relatively unknown, Mohammad Bilal, MD, associate professor of medicine-gastroenterology, University of Colorado Anschutz Medical Campus, Aurora, said in an interview. “Fortunately, FPD has recently been receiving more focus.”

Generally, healthy individuals have small amounts of fat in their pancreas. IPFD is defined as “the diffuse presence of fat in the pancreas, measured on a continuous scale,” and FPD refers to IPFD above the upper limit of normal. While there is no clear consensus as to what the normal range is, studies suggest it’s a pancreatic fat content ranging from 1.8% to 10.4%.

FPD’s “most important implication is that it can be a precursor for more challenging and burdensome diseases of the pancreas,” Petrov said.

Fatty changes in the pancreas affect both its endocrine and exocrine systems. FPD is associated with type 2 diabetes, the most common disease of the endocrine pancreas, as well as pancreatitis and pancreatic cancer, the most common diseases of the exocrine pancreas. It’s also implicated in the development of carotid atherosclerosis, pancreatic fistula following surgery, and exocrine pancreatic insufficiency (EPI).

 

A ‘Pandora’s Box’

Up to half of people with fatty pancreas are lean. The condition isn’t merely caused by an overflow of fat from the liver into the pancreas in people who consume more calories than they burn, Petrov said. Neither robust postmortem nor biopsy studies have found a statistically significant association between fatty deposition in the pancreas and liver fat.

Compared with the way people accumulate liver fat, the development of FPD is more complex, Petrov said.

“Hepatic fat is a relatively simple process: Lipid droplets accumulate in the hepatocytes; but, in the pancreas, there are several ways by which fat may accumulate,” he said.

One relates to the location of the pancreas within visceral, retroperitoneal fat, Petrov said. That fat can migrate and build up between pancreatic lobules.

Fat also can accumulate inside the lobes. This process can involve a buildup of fat droplets in acinar and stellate cells on the exocrine side and in the islets of Langerhans on the endocrine side. Additionally, when functional pancreatic cells die, particularly acinar cells, adult stem cells may replace them with adipocytes. Transformation of acinar cells into fat cells — a process called acinar-to-adipocyte transdifferentiation — also may be a way fat accumulates inside the lobes, Petrov said.

The accumulation of fat is a response to a wide array of insults to the pancreas over time. For example, obesity and metabolic syndrome lead to the accumulation of adipocytes and fat infiltration, whereas alcohol abuse and viral infections may lead to the death of acinar cells, which produce digestive enzymes.

Ultimately, the negative changes produced by excess fat in the pancreas are the origin of all common noninherited pancreatic diseases, bringing them under one umbrella, Petrov maintained. He dubbed this hypothesis PANcreatic Diseases Originating from intRapancreatic fAt (PANDORA).

The type of cells involved has implications for which disease may arise. For example, fat infiltration in stellate cells may promote pancreatic cancer, whereas its accumulation in the islets of Langerhans, which produce insulin and glucagon, is associated with type 2 diabetes.

The PANDORA hypothesis has eight foundational principles:

  • Fatty pancreas is a key driver of pancreatic diseases in most people.
  • Inflammation within the pancreatic microenvironment results from overwhelming lipotoxicity fueled by fatty pancreas.
  • Aberrant communication between acinar cells involving lipid droplets drives acute pancreatitis.
  • The pancreas responds to lipotoxicity with fibrosis and calcification — the hallmarks of chronic pancreatitis.
  • Fat deposition affects signaling between stellate cells and other components of the microenvironment in ways that raise the risk for pancreatic cancer.
  • The development of diabetes of the exocrine pancreas and EPI is affected by the presence of fatty pancreas.
  • The higher risk for pancreatic disease in older adults is influenced by fatty pancreas.
  • The multipronged nature of intrapancreatic fat deposition accounts for the common development of one pancreatic disease after another.

The idea that all common pancreatic diseases are the result of pathways emanating from FPD could “explain the bidirectional relationship between diabetes and pancreatitis or pancreatic cancer,” Petrov said.

 

Risk Factors, Symptoms, and Diagnosis

A variety of risk factors are involved in the accumulation of fat that may lead to pancreatic diseases, including aging, cholelithiasis, dyslipidemia, drugs/toxins (eg, steroids), genetic predisposition, iron overload, diet (eg, fatty foods, ultraprocessed foods), heavy alcohol use, overweight/obesity, pancreatic duct obstruction, tobacco use, viral infection (eg, hepatitis B, COVID-19), severe malnutrition, prediabetes, and dysglycemia.

Petrov described FPD as a “silent disease” that’s often asymptomatic, with its presence emerging as an incidental finding during abdominal ultrasonography for other reasons. However, patients may sometimes experience stomach pain or nausea if they have concurrent diseases of the pancreas, he said.

There are no currently available lab tests that can definitively detect the presence of FPD. Rather, the gold standard for a noninvasive diagnosis of FPD is MRI, with CT as the second-best choice, Petrov said.

In countries where advanced imaging is not available, a low-cost alternative might be a simple abdominal ultrasound, but it is not definitive, he said. “It’s operator-dependent and can be subjective.”

Some risk factors, such as derangements of glucose and lipid metabolism, especially in the presence of heavy alcohol use and a high-fat diet, can “be detected on lab tests,” Petrov said. “This, in combination with the abdominal ultrasound, might suggest the patients will benefit from deeper investigation, including MRI.”

Because the exocrine pancreas helps with digestion of fatty food, intralobular fatty deposits or replacement of pancreatic exocrine cells with adipose cells can lead to steatorrhea, Bilal said.

“Fat within the stool or oily diarrhea is a clue to the presence of FPD,” Bilal said.

Although this symptom isn’t unique to FPD and is found in other types of pancreatic conditions, its presence suggests that further investigation for FPD is warranted, he added.

 

Common-Sense Treatment Approaches

At present, there are no US Food and Drug Administration–approved treatments for FPD, Petrov said.

“What might be recommended is something along the lines of treatment of MASLD — appropriate diet and physical activity,” he said. Petrov hopes that as the disease entity garners more research attention, more clinical drug trials will be initiated, and new medications are found and approved.

Petrov suggested that there could be a “theoretical rationale” for the use of glucagon-like peptide 1 receptor agonists (GLP-1 RAs) as a treatment, given their effectiveness in multiple conditions, including MASLD, but no human trials have robustly shown specific benefits of these drugs for FPD.

Petrov added that, to date, 12 classes of drugs have been investigated for reducing IPFD: biguanides, sulfonylureas, GLP-1 RAs, thiazolidinediones, dipeptidyl peptidase–4 (DPP-4) inhibitors, sodium-glucose cotransporter 2 inhibitors, statins, fibrates, pancreatic lipase inhibitors, angiotensin II receptor blockers, somatostatin receptor agonists, and antioxidants.

Of these, most have shown promise in preclinical animal models. But only thiazolidinediones, GLP-1 RAs, DPP-4 inhibitors, and somatostatin receptor agonists have been investigated in randomized controlled trials in humans. The findings have been inconsistent, with the active treatment often not achieving statistically significant improvements.

“At this stage of our knowledge, we can’t recommend a specific pharmacotherapy,” Petrov said. But we can suggest dietary changes, such as saturated fat reduction, alcohol reduction, smoking cessation, reduction in consumption of ultraprocessed food, physical exercise, and addressing obesity and other drivers of metabolic disease.

Bilal, who is also a spokesperson for AGA, suggested that pancreatic enzyme replacement therapy, often used to treat pancreatic EPI, may treat some symptoms of FPD such as diarrhea.

Bariatric surgery has shown promise for FPD, in that it can decrease the patient’s body mass and potentially reduce the fat in the pancreas as well as it can improve metabolic diseases and hyperlipidemia. One study showed that it significantly decreased IPFD, fatty acid uptake, and blood flow, and these improvements were associated with more favorable glucose homeostasis and beta-cell function.

However, bariatric surgery is only appropriate for certain patients; is associated with potentially adverse sequelae including malnutrition, anemia, and digestive tract stenosis; and is currently not indicated for FPD.

Bilal advises clinicians to “keep an eye on FPD” if it’s detected incidentally and to screen patients more carefully for MASLD, metabolic disease, and diabetes.

“Although there are no consensus guidelines and recommendations for managing FPD at present, these common-sense approaches will benefit the patient’s overall health and hopefully will have a beneficial impact on pancreatic health as well,” he said.

Petrov reported no relevant financial relationships. Bilal reported being a consultant for Boston Scientific, Steris Endoscopy, and Cook Medical.

A version of this article first appeared on Medscape.com.


Newborn Screening Programs: What Do Clinicians Need to Know?

Article Type
Changed
Mon, 09/30/2024 - 13:31

Newborn screening programs are public health services aimed at ensuring that the close to 4 million infants born each year in the United States are screened for certain serious disorders at birth. These disorders, albeit rare, are detected in roughly 12,500 newborn babies every year.

Newborn screening isn’t new, although it has expanded and transformed over the decades. The first newborn screening test was developed in the 1960s to detect phenylketonuria (PKU).1 Since then, the number of conditions screened for has increased, with programs in every US state and territory. “Newborn screening is well established now, not experimental or newfangled,” Wendy Chung, MD, PhD, professor of pediatrics, Harvard Medical School, Boston, Massachusetts, told Neurology Reviews.

Wendy Chung, MD, PhD, is professor of pediatrics, Harvard Medical School, Boston, Massachusetts.
Dr. Wendy Chung


In newborn screening, blood drawn from the baby’s heel is applied to specialized filter paper, which is then subjected to several analytical methods, including tandem mass spectrometry and molecular analyses to detect biomarkers for the diseases.2 More recently, genomic sequencing is being piloted as part of consented research studies.3

Newborn screening includes not only biochemical and genetic testing but also noninvasive screening for hearing loss and, using pulse oximetry, for critical congenital heart disease. And newborn screening goes beyond analysis of a single drop of blood. Rather, “it’s an entire system, with the goal of identifying babies with genetic disorders who otherwise have no obvious symptoms,” said Dr. Chung. Left undetected and untreated, these conditions can be associated with serious adverse outcomes and even death.

Dr. Chung described newborn screening as “one of the most successful public health programs, supporting health equity by screening almost every US baby after birth and then bringing timely treatments when relevant even before the baby develops symptoms of a disorder.” In this way, newborn screening has “saved lives and decreased disease burdens.”

There are at present 38 core conditions that the Department of Health and Human Services (HHS) regards as the most critical to screen for and 26 secondary conditions associated with these core disorders. Together these make up the Recommended Uniform Screening Panel (RUSP). Guidance regarding the most appropriate application of newborn screening tests, technologies, and standards is provided by the Advisory Committee on Heritable Disorders in Newborns and Children (ACHDNC).

Each state “independently determines which screening tests are performed and what follow-up is provided.”4 Information about which tests are provided by which states can be found on the “Report Card” of the National Organization for Rare Disorders (NORD).
 

Challenges in Expanding the Current Newborn Screening

One of the major drawbacks in the current system is that “we don’t screen for enough diseases,” according to Zhanzhi Hu, PhD, of the Department of Systems Biology and the Department of Biomedical Informatics, Columbia University, New York City. “There are over 10,000 rare genetic diseases, but we’re currently screening for fewer than 100,” he told Neurology Reviews. Although about 700-800 drugs are approved in the United States for genetic diseases, “we can’t identify patients with these diseases early enough for the ideal window when treatments are most effective.”

Moreover, it’s a “lengthy process” to add new diseases to RUSP. “New conditions are added at the pace of less than one per year, on average — even for the hundreds of diseases for which there are treatments,” he said. “If we keep going at the current pace, we won’t be able to screen for those diseases for another few hundred years.”

Zhanzhi Hu, PhD, is affiliated with the Department of Systems Biology and the Department of Biomedical Informatics, Columbia University, New York City.
Dr. Zhanzhi Hu


Speeding up the pace of including new diseases in newborn screening is challenging because “we have more diseases than we have development dollars for,” Dr. Hu said. “Big pharmaceutical companies are reluctant to invest in rare diseases because the population is so small and it’s hard and expensive to develop such drugs. So if we can identify patients first, there will be more interest in developing treatments down the road.”

On the other hand, for trials to take place, these babies have to be identified in a timely manner — which requires testing. “Right now, we have a deadlock,” Dr. Hu said. “To nominate a disease, you need an approved treatment. But to get a treatment developed, you need to identify patients suitable for a clinical trial. If you have to wait for the symptoms to show up, the damage has already manifested and is irreversible. Our chance is to recognize the disease before symptom onset and then start treatment. I would call this a ‘chicken-and-egg’ problem.”

Dr. Hu is passionate about expanding newborn screening, and he has a very personal reason. Two of his children have a rare genetic disease. “My younger son, now 13 years old, was diagnosed at a much earlier age than my older son, although he had very few symptoms at the time, because his older brother was known to have the disease. As a result of this, his outcome was much better.” By contrast, Dr. Hu’s oldest son — now age 16 — wasn’t diagnosed until he became symptomatic.

His quest led him to join forces with Dr. Chung in conducting the Genomic Uniform-screening Against Rare Disease in All Newborns (Guardian) study, which screens newborns for more than 450 genetic conditions not currently screened as part of the standard newborn screening. To date, the study — which focuses on babies born in New York City — has screened about 11,000 infants.

“To accumulate enough evidence requires screening at least 100,000 babies because one requirement for nominating a disease for national inclusion in RUSP is an ‘N of 1’ study — meaning, to identify at least one positive patient using the proposed screening method in a prospective study,” Dr. Hu explained. “Most are rare diseases with an incidence rate of around one in 100,000. So getting to that magic number of 100,000 participants should enable us to hit that ‘N of 1’ for most diseases.”
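Dr. Hu’s target can be sanity-checked with a quick back-of-the-envelope calculation. This is a minimal sketch, assuming a fixed incidence of one in 100,000 and independent births, both simplifications of the real screening problem:

```python
# Chance of detecting at least one case of a rare disease in a screening cohort,
# modeled as independent births with a fixed incidence (binomial approximation).
incidence = 1 / 100_000   # assumed: roughly one case per 100,000 births
cohort = 100_000          # the screening target Dr. Hu cites

# P(at least one case) = 1 - P(no cases among the whole cohort)
p_at_least_one = 1 - (1 - incidence) ** cohort
print(f"{p_at_least_one:.0%}")  # prints "63%"
```

Under these assumptions, each disease at that incidence has roughly a two-in-three chance of yielding an “N of 1” in a 100,000-baby cohort; with hundreds of conditions screened in parallel, most would be expected to produce at least one case.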

The most challenging part, according to Dr. Hu, is the requirement of a prospective study, which means that you have to conduct a large-scale study enrolling tens of thousands of families and babies. If done for individual diseases (as has been the case in the past), “this is a huge cost and very inefficient.”

In reality, he added, the true incidence of these diseases is unclear. “Incidence rates are based on historical data rather than prospective studies. We’ve already seen some diseases show up more frequently than previously recorded, while others have shown up less frequently.”

For example, in the 11,000 babies screened to date, at least three girls with Rett syndrome have been identified, which is “quite a bit higher” than what has previously been identified in the literature (ie, one in 10,000-12,000 births). “This is a highly unmet need for these families because if you can initiate early treatment — at age 1, or even younger — the outcome will be better.”

He noted that there is at least one clinical trial underway for treating Rett syndrome, which has yielded “promising” data.5 “We’re hoping that by screening for diseases like Rett and identifying patients early, this will go hand-in-hand with clinical drug development. It can speed both the approval of the treatment and the addition to the newborn screening list,” Dr. Hu stated.

Screening and Drug Development Working in Tandem

Sequencing technologies have become more sophisticated and less costly, so interest in expanding newborn screening through newborn genome sequencing has grown. In fact, many states have incorporated genetic testing into newborn screening for conditions without biochemical markers. Newborn genomic sequencing is also used for further testing in infants with abnormal biochemical screening results.6

Genomic sequencing “identifies nucleotide changes that are the underlying etiology of monogenic disorders.”6 Its use could potentially enable identification of over 500 genetic disorders for which a newborn screening assay is not currently available, said Dr. Hu.

“Molecular DNA analysis has been integrated into newborn testing either as a first- or second-tier test for several conditions, including cystic fibrosis, severe combined immunodeficiency, and spinal muscular atrophy (SMA),” Dr. Hu said.

Dr. Hu pointed to SMA to illustrate the power and potential of newborn screening working hand-in-hand with the development of new treatments. SMA is a neurodegenerative disorder caused by mutations in SMN1, which encodes survival motor neuron protein (SMN).7 Deficiency of SMN results in loss of motor neurons, with muscle weakness and, often, early death.7 A pilot study, on which Dr. Chung was the senior author, used both biochemical and genetic testing in close to 4000 newborns and found an SMA carrier frequency of 1.5%. One newborn was identified with a homozygous SMN1 gene deletion and two copies of SMN2, strongly suggesting a severe type 1 SMA phenotype.8

At age 15 days, the baby was treated with nusinersen, the first FDA-approved genetic treatment for SMA, administered by injection into the fluid surrounding the spinal cord. At the time of study publication, the baby was 12 months old, “meeting all developmental milestones and free of any respiratory issues,” the authors reported.

“Screening for SMA — which was added to the RUSP in 2018 — has dramatically transformed what used to be the most common genetic cause of death in children under the age of 2,” Dr. Chung said. “Now, a once-and-done IV infusion of genetic therapy right after screening has transformed everything, taking what used to be a lethal condition and allowing children to grow up healthy.”
 

Advocating for Inclusion of Diseases With No Current Treatment

At present, any condition included in the RUSP must have a treatment, whether dietary, surgical/procedural, or an FDA-approved drug. Unfortunately, a wide range of neurodevelopmental diseases still have no known treatment. But the lack of an available treatment shouldn’t disqualify a disease from inclusion in the RUSP, because even when there is no specific treatment for the condition itself, early intervention can still be initiated to prevent some of its manifestations, said Dr. Hu.

“For example, most patients with these diseases will sooner or later undergo seizures,” Dr. Hu remarked. “We know that repeated seizures can cause brain damage. If we can diagnose the disease before the seizures start to take place, we can put preventive seizure control interventions in place, even if there is no direct ‘treatment’ for the condition itself.”

Early identification can lead to early intervention, which can have other benefits, Dr. Hu noted. “If we train the brain at a young age, when the brain is most receptive, even though a disease may be progressive and will worsen, those abilities acquired earlier will last longer and remain in place longer. When these skills are acquired later, they’re forgotten sooner. This isn’t a ‘cure,’ but it will help with functional improvement.”

Moreover, parents are “interested in knowing that their child has a condition, even if no treatment is currently available for that disorder, according to our research,” Dr. Chung said. “We found that the parents we interviewed endorsed the nonmedical utility of having access to information, even in the absence of a ‘cure,’ so they could prepare for medical issues that might arise down the road and make informed choices.”9

Nina Gold, MD, director of Prenatal Medical Genetics and associate director for Research for Massachusetts General Brigham Personalized Medicine, Boston, obtained similar findings in her own research, which is currently under review for publication. “We conducted focus groups and one-on-one interviews with parents from diverse racial and socioeconomic backgrounds. At least one parent said they didn’t want to compare their child to other children if their child might have a different developmental trajectory. They stressed that the information would be helpful, even if there was no immediate clinical utility.”

Nina Gold, MD, is director of Prenatal Medical Genetics and associate director for Research for Mass General Brigham Personalized Medicine in Boston.
Dr. Nina Gold


Additionally, there are an “increasing number of fetal therapies for rare disorders, so information about a genetic disease in an older child can be helpful for parents who may go on to have another pregnancy,” Dr. Gold noted.

Dr. Hu detailed several other reasons for including a wider range of disorders in the RUSP. Doing so helps families avoid a “stressful and expensive diagnostic odyssey” and provides equitable access to a diagnosis. And if these patients are identified early, “we can connect the family with clinical trials already underway or connect them to an organization such as the Accelerating Medicines Partnership (AMP) Bespoke Gene Therapy Consortium (AMP BGTC),” which “brings together partners from the public, private, and nonprofit sectors to foster development of gene therapies intended to treat rare genetic diseases, which affect populations too small for viable commercial development.”


Next Steps Following Screening

Rebecca Sponberg, NP, of the Children’s Hospital of Orange County, UC Irvine School of Medicine, California, is part of a broader multidisciplinary team that interfaces with parents whose newborns have screened positive for a genetic disorder. The team also includes a biochemical geneticist, a pediatric neurologist, a pediatric endocrinologist, a genetic counselor, and a social worker.

Procedures for receiving test results differ by state and location, said Dr. Chung. In some, pediatricians receive the results and are responsible for ensuring that the children start getting appropriate care; these pediatricians are typically affiliated with centers of excellence that specialize in working with families around these conditions. Other facilities rely on multidisciplinary teams.

Rebecca Sponberg, NP, is a nurse practitioner at the Children's Hospital of Orange County, UC Irvine School of Medicine, California.
Ms. Rebecca Sponberg


Ms. Sponberg gave an example of how the process unfolded with X-linked adrenoleukodystrophy, a rare genetic disorder that affects the white matter of the nervous system and the adrenal cortex.10 “This is the most common peroxisomal disorder, affecting one in 20,000 males,” she said. “There are several different forms of the disorder, but males are most at risk for having the cerebral form, which can lead to neurological regression and hasten death. But the regression does not appear until 4 to 12 years of age.”

A baby who screens positive on the initial newborn screening undergoes repeat testing; if the result is confirmed, the family meets the entire team to help them understand what the disorder is, what to expect, and how it’s monitored and managed. “Children have to be followed closely with a brain MRI every 6 months to detect brain abnormalities quickly,” Ms. Sponberg explained. “And we do regular bloodwork to look for adrenocortical insufficiency.”

A child who shows concerning changes on the MRI or abnormal blood test findings is immediately seen by the relevant specialist. “So far, our center has had one patient who had MRI changes consistent with the cerebral form of the disease and the patient was immediately able to receive a bone marrow transplant,” she reported. “We don’t think this child’s condition would have been picked up so quickly or treatment initiated so rapidly if we hadn’t known about it through newborn screening.”
 

Educating and Involving Families

Part of the role of clinicians is to provide education regarding newborn screening to families, according to Ms. Sponberg. “In my role, I have to call parents to tell them their child screened positive for a genetic condition and that we need to proceed with confirmatory testing,” she said. “We let them know if there’s a high concern that this might be a true positive for the condition, and we offer them information so they know what to expect.”

Unfortunately, Ms. Sponberg said, in the absence of education, some families are skeptical. “When I call families directly, some think it’s a scam and it can be hard to earn their trust. We need to do a better job educating families, especially our pregnant individuals, that testing will occur and if anything is abnormal, they will receive a call.”

 

References

1. Levy HL. Robert Guthrie and the Trials and Tribulations of Newborn Screening. Int J Neonatal Screen. 2021 Jan 19;7(1):5. doi: 10.3390/ijns7010005.

2. Chace DH et al. Clinical Chemistry and Dried Blood Spots: Increasing Laboratory Utilization by Improved Understanding of Quantitative Challenges. Bioanalysis. 2014;6(21):2791-2794. doi: 10.4155/bio.14.237.

3. Gold NB et al. Perspectives of Rare Disease Experts on Newborn Genome Sequencing. JAMA Netw Open. 2023 May 1;6(5):e2312231. doi: 10.1001/jamanetworkopen.2023.12231.

4. Weismiller DG. Expanded Newborn Screening: Information and Resources for the Family Physician. Am Fam Physician. 2017 Jun 1;95(11):703-709. https://www.aafp.org/pubs/afp/issues/2017/0601/p703.html.

5. Neul JL et al. Trofinetide for the Treatment of Rett Syndrome: A Randomized Phase 3 Study. Nat Med. 2023 Jun;29(6):1468-1475. doi: 10.1038/s41591-023-02398-1.

6. Chen T et al. Genomic Sequencing as a First-Tier Screening Test and Outcomes of Newborn Screening. JAMA Netw Open. 2023 Sep 5;6(9):e2331162. doi: 10.1001/jamanetworkopen.2023.31162.

7. Mercuri E et al. Spinal Muscular Atrophy. Nat Rev Dis Primers. 2022 Aug 4;8(1):52. doi: 10.1038/s41572-022-00380-8.

8. Kraszewski JN et al. Pilot Study of Population-Based Newborn Screening for Spinal Muscular Atrophy in New York State. Genet Med. 2018 Jun;20(6):608-613. doi: 10.1038/gim.2017.152.

9. Timmins GT et al. Diverse Parental Perspectives of the Social and Educational Needs for Expanding Newborn Screening Through Genomic Sequencing. Public Health Genomics. 2022 Sep 15:1-8. doi: 10.1159/000526382.

10. Turk BR et al. X-linked Adrenoleukodystrophy: Pathology, Pathophysiology, Diagnostic Testing, Newborn Screening and Therapies. Int J Dev Neurosci. 2020 Feb;80(1):52-72. doi: 10.1002/jdn.10003.


Different states and locations have different procedures for receiving test results, said Dr. Chung. In some, pediatricians are the ones who receive the results, and they are tasked with the responsibility of making sure the children can start getting appropriate care. In particular, these pediatricians are associated with centers of excellence that specialize in working with families around these conditions. Other facilities have multidisciplinary teams.

Rebecca Sponberg, NP, is a nurse practitioner at the Children's Hospital of Orange County, UC Irvine School of Medicine, California.
Ms. Rebecca Sponberg


Ms. Sponberg gave an example of how the process unfolded with X-linked adrenoleukodystrophy, a rare genetic disorder that affects the white matter of the nervous system and the adrenal cortex.10 “This is the most common peroxisomal disorder, affecting one in 20,000 males,” she said. “There are several different forms of the disorder, but males are most at risk for having the cerebral form, which can lead to neurological regression and hasten death. But the regression does not appear until 4 to 12 years of age.”

A baby who screens positive on the initial newborn screening has repeat testing; and if it’s confirmed, the family meets the entire team to help them understand what the disorder is, what to expect, and how it’s monitored and managed. “Children have to be followed closely with a brain MRI every 6 months to detect brain abnormalities quickly,” Ms. Sponberg explained “And we do regular bloodwork to look for adrenocortical insufficiency.”

A child who shows concerning changes on the MRI or abnormal blood test findings is immediately seen by the relevant specialist. “So far, our center has had one patient who had MRI changes consistent with the cerebral form of the disease and the patient was immediately able to receive a bone marrow transplant,” she reported. “We don’t think this child’s condition would have been picked up so quickly or treatment initiated so rapidly if we hadn’t known about it through newborn screening.”
 

Educating and Involving Families

Part of the role of clinicians is to provide education regarding newborn screening to families, according to Ms. Sponberg. “In my role, I have to call parents to tell them their child screened positive for a genetic condition and that we need to proceed with confirmatory testing,” she said. “We let them know if there’s a high concern that this might be a true positive for the condition, and we offer them information so they know what to expect.”

Unfortunately, Ms. Sponberg said, in the absence of education, some families are skeptical. “When I call families directly, some think it’s a scam and it can be hard to earn their trust. We need to do a better job educating families, especially our pregnant individuals, that testing will occur and if anything is abnormal, they will receive a call.”

 

References

1. Levy HL. Robert Guthrie and the Trials and Tribulations of Newborn Screening. Int J Neonatal Screen. 2021 Jan 19;7(1):5. doi: 10.3390/ijns7010005.

2. Chace DH et al. Clinical Chemistry and Dried Blood Spots: Increasing Laboratory Utilization by Improved Understanding of Quantitative Challenges. Bioanalysis. 2014;6(21):2791-2794. doi: 10.4155/bio.14.237.

3. Gold NB et al. Perspectives of Rare Disease Experts on Newborn Genome Sequencing. JAMA Netw Open. 2023 May 1;6(5):e2312231. doi: 10.1001/jamanetworkopen.2023.12231.

4. Weismiller DG. Expanded Newborn Screening: Information and Resources for the Family Physician. Am Fam Physician. 2017 Jun 1;95(11):703-709. https://www.aafp.org/pubs/afp/issues/2017/0601/p703.html.

5. Neul JL et al. Trofinetide for the Treatment of Rett Syndrome: A Randomized Phase 3 Study. Nat Med. 2023 Jun;29(6):1468-1475. doi: 10.1038/s41591-023-02398-1.

6. Chen T et al. Genomic Sequencing as a First-Tier Screening Test and Outcomes of Newborn Screening. JAMA Netw Open. 2023 Sep 5;6(9):e2331162. doi: 10.1001/jamanetworkopen.2023.31162.

7. Mercuri E et al. Spinal Muscular Atrophy. Nat Rev Dis Primers. 2022 Aug 4;8(1):52. doi: 10.1038/s41572-022-00380-8.

8. Kraszewski JN et al. Pilot Study of Population-Based Newborn Screening for Spinal Muscular Atrophy in New York State. Genet Med. 2018 Jun;20(6):608-613. doi: 10.1038/gim.2017.152.

9. Timmins GT et al. Diverse Parental Perspectives of the Social and Educational Needs for Expanding Newborn Screening Through Genomic Sequencing. Public Health Genomics. 2022 Sep 15:1-8. doi: 10.1159/000526382.

10. Turk BR et al. X-linked Adrenoleukodystrophy: Pathology, Pathophysiology, Diagnostic Testing, Newborn Screening and Therapies. Int J Dev Neurosci. 2020 Feb;80(1):52-72. doi: 10.1002/jdn.10003.

Newborn screening programs are public health services that aim to ensure that the nearly 4 million infants born each year in the United States are screened at birth for certain serious disorders. Though individually rare, these disorders are detected in roughly 12,500 newborns every year.

Newborn screening isn’t new, although it has expanded and transformed over the decades. The first newborn screening test was developed in the 1960s to detect phenylketonuria (PKU).1 Since then, the number of conditions screened for has increased, with programs in every US state and territory. “Newborn screening is well established now, not experimental or newfangled,” Wendy Chung, MD, PhD, professor of pediatrics, Harvard Medical School, Boston, Massachusetts, told Neurology Reviews.

In newborn screening, blood drawn from the baby’s heel is applied to specialized filter paper, which is then subjected to several analytical methods, including tandem mass spectrometry and molecular analyses to detect biomarkers for the diseases.2 More recently, genomic sequencing is being piloted as part of consented research studies.3

Newborn screening includes not only biochemical and genetic testing but also noninvasive screening for hearing loss and, using pulse oximetry, for critical congenital heart disease. And newborn screening goes beyond analysis of a single drop of blood. Rather, “it’s an entire system, with the goal of identifying babies with genetic disorders who otherwise have no obvious symptoms,” said Dr. Chung. Left undetected and untreated, these conditions can be associated with serious adverse outcomes and even death.

Dr. Chung described newborn screening as “one of the most successful public health programs, supporting health equity by screening almost every US baby after birth and then bringing timely treatments when relevant even before the baby develops symptoms of a disorder.” In this way, newborn screening has “saved lives and decreased disease burdens.”

The Recommended Uniform Screening Panel (RUSP) currently includes 38 core conditions that the Department of Health and Human Services (HHS) regards as the most critical to screen for, along with 26 secondary conditions associated with these core disorders. Guidance on the most appropriate application of newborn screening tests, technologies, and standards is provided by the Advisory Committee on Heritable Disorders in Newborns and Children (ACHDNC).

Each state “independently determines which screening tests are performed and what follow-up is provided.”4 Information about which tests are provided by which states can be found on the “Report Card” of the National Organization for Rare Diseases (NORD).
 

Challenges in Expanding the Current Newborn Screening

One of the major drawbacks of the current system is that “we don’t screen for enough diseases,” according to Zhanzhi Hu, PhD, of the Department of Systems Biology and the Department of Biomedical Informatics, Columbia University, New York City. “There are over 10,000 rare genetic diseases, but we’re currently screening for fewer than 100,” he told Neurology Reviews. Although about 700-800 drugs are approved for genetic diseases in the United States, “we can’t identify patients with these diseases early enough for the ideal window when treatments are most effective.”

Moreover, it’s a “lengthy process” to add new diseases to the RUSP. “New conditions are added at the pace of less than one per year, on average — even for the hundreds of diseases for which there are treatments,” he said. “If we keep going at the current pace, we won’t be able to screen for those diseases for another few hundred years.”

Speeding up the pace of including new diseases in newborn screening is challenging because “we have more diseases than we have development dollars for,” Dr. Hu said. “Big pharmaceutical companies are reluctant to invest in rare diseases because the population is so small and it’s hard and expensive to develop such drugs. So if we can identify patients first, there will be more interest in developing treatments down the road.”

On the other hand, for trials to take place, these babies have to be identified in a timely manner — which requires testing. “Right now, we have a deadlock,” Dr. Hu said. “To nominate a disease, you need an approved treatment. But to get a treatment developed, you need to identify patients suitable for a clinical trial. If you have to wait for the symptoms to show up, the damage has already manifested and is irreversible. Our chance is to recognize the disease before symptom onset and then start treatment. I would call this a ‘chicken-and-egg’ problem.”

Dr. Hu is passionate about expanding newborn screening, and he has a very personal reason: two of his children have a rare genetic disease. “My younger son, now 13 years old, was diagnosed at a much earlier age than my older son, although he had very few symptoms at the time, because his older brother was known to have the disease. As a result of this, his outcome was much better.” By contrast, Dr. Hu’s older son — now age 16 — wasn’t diagnosed until he became symptomatic.

His quest led him to join forces with Dr. Chung on the Genomic Uniform-screening Against Rare Disease in All Newborns (Guardian) study, which screens newborns for more than 450 genetic conditions not covered by standard newborn screening. To date, the study — which focuses on babies born in New York City — has screened about 11,000 infants.

“To accumulate enough evidence requires screening at least 100,000 babies because one requirement for nominating a disease for national inclusion in RUSP is an ‘N of 1’ study — meaning, to identify at least one positive patient using the proposed screening method in a prospective study,” Dr. Hu explained. “Most are rare diseases with an incidence rate of around one in 100,000. So getting to that magic number of 100,000 participants should enable us to hit that ‘N of 1’ for most diseases.”
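The arithmetic behind the 100,000-participant target can be sanity-checked with a quick calculation (an illustration added here, not part of the study): for a disorder with incidence p, the probability of identifying at least one affected newborn among n screened is 1 - (1 - p)^n.

```python
# Back-of-the-envelope check of the "N of 1" math (illustrative, not from the study).
# For a disorder with incidence p, the probability of identifying at least one
# affected newborn among n screened is 1 - (1 - p)**n.

def p_at_least_one(p: float, n: int) -> float:
    """Probability of finding >= 1 affected newborn among n screened."""
    return 1 - (1 - p) ** n

incidence = 1 / 100_000            # "around one in 100,000"
screened = 100_000                 # the study's enrollment target

expected_cases = incidence * screened           # 1.0 expected case
chance = p_at_least_one(incidence, screened)    # ~0.63

print(f"Expected cases: {expected_cases:.1f}")
print(f"Chance of at least one case: {chance:.0%}")
```

Note that screening 100,000 babies for a one-in-100,000 disorder yields one expected case but only about a 63% chance of actually observing at least one, so the “magic number” is a reasonable floor for most diseases rather than a guarantee for the very rarest.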

The most challenging part, according to Dr. Hu, is the requirement of a prospective study, which means enrolling tens of thousands of families and babies. If done for individual diseases (as has been the case in the past), “this is a huge cost and very inefficient.”

In reality, he added, the true incidence of these diseases is unclear. “Incidence rates are based on historical data rather than prospective studies. We’ve already seen some diseases show up more frequently than previously recorded, while others have shown up less frequently.”

For example, in the 11,000 babies screened to date, at least three girls with Rett syndrome have been identified, which is “quite a bit higher” than the rate previously reported in the literature (ie, one in 10,000-12,000 births). “This is a highly unmet need for these families because if you can initiate early treatment — at age 1, or even younger — the outcome will be better.”

He noted that there is at least one clinical trial underway for treating Rett syndrome, which has yielded “promising” data.5 “We’re hoping that by screening for diseases like Rett and identifying patients early, this will go hand-in-hand with clinical drug development. It can speed both the approval of the treatment and the addition to the newborn screening list,” Dr. Hu stated.
 
Screening and Drug Development Working in Tandem

Sequencing technologies have become more sophisticated and less costly, so interest in expanding newborn screening through newborn genome sequencing has increased. In fact, many states have incorporated genetic testing into newborn screening for conditions without biochemical markers. Newborn genomic sequencing is also used for further testing in infants with abnormal biochemical screening results.6

Genomic sequencing “identifies nucleotide changes that are the underlying etiology of monogenic disorders.”6 Its use could potentially enable identification of over 500 genetic disorders for which a newborn screening assay is not currently available, said Dr. Hu.

“Molecular DNA analysis has been integrated into newborn testing either as a first- or second-tier test for several conditions, including cystic fibrosis, severe combined immunodeficiency, and spinal muscular atrophy (SMA),” Dr. Hu said.

Dr. Hu pointed to SMA to illustrate the power and potential of newborn screening working hand-in-hand with the development of new treatments. SMA is a neurodegenerative disorder caused by mutations in SMN1, which encodes the survival motor neuron (SMN) protein.7 Deficiency of SMN results in loss of motor neurons, with muscle weakness and, often, early death.7 A pilot study, on which Dr. Chung was the senior author, used both biochemical and genetic testing in close to 4000 newborns and found an SMA carrier frequency of 1.5%. One newborn was identified who had a homozygous SMN1 gene deletion and two copies of SMN2, strongly suggesting the presence of a severe type 1 SMA phenotype.8
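The reported 1.5% carrier frequency can be translated into a predicted disease incidence with a rough Hardy-Weinberg calculation (an illustration added here, assuming random mating and that every carrier is a heterozygote for the SMN1 deletion):

```python
# Rough Hardy-Weinberg check of the reported 1.5% SMA carrier frequency
# (illustrative only; assumes random mating and carrier = heterozygote).

carrier_freq = 0.015          # 2pq, as observed in the pilot study
q = carrier_freq / 2          # allele frequency; since p ~ 1, 2pq ~ 2q
affected_freq = q ** 2        # homozygous SMN1 deletion under Hardy-Weinberg

print(f"Deletion allele frequency q ~ {q:.4f}")
print(f"Predicted incidence ~ 1 in {1 / affected_freq:,.0f} births")
```

The predicted incidence of roughly 1 in 18,000 births is the same order of magnitude as commonly cited SMA incidence estimates of about 1 in 10,000 live births.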

At age 15 days, the baby was treated with nusinersen, an injection administered into the fluid surrounding the spinal cord and the first FDA-approved genetic treatment for SMA. At the time of study publication, the baby was 12 months old, “meeting all developmental milestones and free of any respiratory issues,” the authors reported.

“Screening for SMA — which was added to the RUSP in 2018 — has dramatically transformed what used to be the most common genetic cause of death in children under the age of 2,” Dr. Chung said. “Now, a once-and-done IV infusion of genetic therapy right after screening has transformed everything, taking what used to be a lethal condition and allowing children to grow up healthy.”
 

Advocating for Inclusion of Diseases With No Current Treatment

At present, any condition included in the RUSP must have a treatment, which can be dietary, surgical/procedural, or an FDA-approved drug. Unfortunately, a wide range of neurodevelopmental diseases still have no known treatments. But the lack of an available treatment shouldn’t disqualify a disease from inclusion in the RUSP, because even without a specific treatment for the condition itself, early intervention can still be initiated to prevent some of its manifestations, said Dr. Hu.

“For example, most patients with these diseases will sooner or later undergo seizures,” Dr. Hu remarked. “We know that repeated seizures can cause brain damage. If we can diagnose the disease before the seizures start to take place, we can put preventive seizure control interventions in place, even if there is no direct ‘treatment’ for the condition itself.”

Early identification can lead to early intervention, which can have other benefits, Dr. Hu noted. “If we train the brain at a young age, when the brain is most receptive, even though a disease may be progressive and will worsen, those abilities acquired earlier will last longer and remain in place longer. When these skills are acquired later, they’re forgotten sooner. This isn’t a ‘cure,’ but it will help with functional improvement.”

Moreover, parents are “interested in knowing that their child has a condition, even if no treatment is currently available for that disorder, according to our research,” Dr. Chung said. “We found that the parents we interviewed endorsed the nonmedical utility of having access to information, even in the absence of a ‘cure,’ so they could prepare for medical issues that might arise down the road and make informed choices.”9

Nina Gold, MD, director of Prenatal Medical Genetics and associate director for Research for Mass General Brigham Personalized Medicine, Boston, obtained similar findings in her own research, which is currently under review for publication. “We conducted focus groups and one-on-one interviews with parents from diverse racial and socioeconomic backgrounds. At least one parent said they didn’t want to compare their child to other children if their child might have a different developmental trajectory. They stressed that the information would be helpful, even if there was no immediate clinical utility.”

Additionally, there are an “increasing number of fetal therapies for rare disorders, so information about a genetic disease in an older child can be helpful for parents who may go on to have another pregnancy,” Dr. Gold noted.

Dr. Hu detailed several other reasons for including a wider range of disorders in the RUSP. Doing so helps families avoid a “stressful and expensive diagnostic odyssey” and provides equitable access to a diagnosis. And if these patients are identified early, “we can connect the family with clinical trials already underway or connect them to an organization such as the Accelerating Medicines Partnership (AMP) Program Bespoke Gene Therapy Consortium (AMP BGTC),” which “brings together partners from the public, private, and nonprofit sectors to foster development of gene therapies intended to treat rare genetic diseases, which affect populations too small for viable commercial development.”
 
Next Steps Following Screening

Rebecca Sponberg, NP, of the Children’s Hospital of Orange County, UC Irvine School of Medicine, California, is part of a broader multidisciplinary team that interfaces with parents whose newborns have screened positive for a genetic disorder. The team also includes a biochemical geneticist, a pediatric neurologist, a pediatric endocrinologist, a genetic counselor, and a social worker.

Different states and locations have different procedures for receiving test results, said Dr. Chung. In some, pediatricians receive the results and are responsible for making sure the children can start getting appropriate care; these pediatricians are often associated with centers of excellence that specialize in working with families around these conditions. Other facilities have multidisciplinary teams.

Ms. Sponberg gave an example of how the process unfolded with X-linked adrenoleukodystrophy, a rare genetic disorder that affects the white matter of the nervous system and the adrenal cortex.10 “This is the most common peroxisomal disorder, affecting one in 20,000 males,” she said. “There are several different forms of the disorder, but males are most at risk for having the cerebral form, which can lead to neurological regression and hasten death. But the regression does not appear until 4 to 12 years of age.”

A baby who screens positive on the initial newborn screening has repeat testing; if the result is confirmed, the family meets the entire team to help them understand what the disorder is, what to expect, and how it’s monitored and managed. “Children have to be followed closely with a brain MRI every 6 months to detect brain abnormalities quickly,” Ms. Sponberg explained. “And we do regular bloodwork to look for adrenocortical insufficiency.”

A child who shows concerning changes on the MRI or abnormal blood test findings is immediately seen by the relevant specialist. “So far, our center has had one patient who had MRI changes consistent with the cerebral form of the disease and the patient was immediately able to receive a bone marrow transplant,” she reported. “We don’t think this child’s condition would have been picked up so quickly or treatment initiated so rapidly if we hadn’t known about it through newborn screening.”
 

Educating and Involving Families

Part of the role of clinicians is to provide education regarding newborn screening to families, according to Ms. Sponberg. “In my role, I have to call parents to tell them their child screened positive for a genetic condition and that we need to proceed with confirmatory testing,” she said. “We let them know if there’s a high concern that this might be a true positive for the condition, and we offer them information so they know what to expect.”

Unfortunately, Ms. Sponberg said, in the absence of education, some families are skeptical. “When I call families directly, some think it’s a scam and it can be hard to earn their trust. We need to do a better job educating families, especially our pregnant individuals, that testing will occur and if anything is abnormal, they will receive a call.”

 

References

1. Levy HL. Robert Guthrie and the Trials and Tribulations of Newborn Screening. Int J Neonatal Screen. 2021 Jan 19;7(1):5. doi: 10.3390/ijns7010005.

2. Chace DH et al. Clinical Chemistry and Dried Blood Spots: Increasing Laboratory Utilization by Improved Understanding of Quantitative Challenges. Bioanalysis. 2014;6(21):2791-2794. doi: 10.4155/bio.14.237.

3. Gold NB et al. Perspectives of Rare Disease Experts on Newborn Genome Sequencing. JAMA Netw Open. 2023 May 1;6(5):e2312231. doi: 10.1001/jamanetworkopen.2023.12231.

4. Weismiller DG. Expanded Newborn Screening: Information and Resources for the Family Physician. Am Fam Physician. 2017 Jun 1;95(11):703-709. https://www.aafp.org/pubs/afp/issues/2017/0601/p703.html.

5. Neul JL et al. Trofinetide for the Treatment of Rett Syndrome: A Randomized Phase 3 Study. Nat Med. 2023 Jun;29(6):1468-1475. doi: 10.1038/s41591-023-02398-1.

6. Chen T et al. Genomic Sequencing as a First-Tier Screening Test and Outcomes of Newborn Screening. JAMA Netw Open. 2023 Sep 5;6(9):e2331162. doi: 10.1001/jamanetworkopen.2023.31162.

7. Mercuri E et al. Spinal Muscular Atrophy. Nat Rev Dis Primers. 2022 Aug 4;8(1):52. doi: 10.1038/s41572-022-00380-8.

8. Kraszewski JN et al. Pilot Study of Population-Based Newborn Screening for Spinal Muscular Atrophy in New York State. Genet Med. 2018 Jun;20(6):608-613. doi: 10.1038/gim.2017.152.

9. Timmins GT et al. Diverse Parental Perspectives of the Social and Educational Needs for Expanding Newborn Screening Through Genomic Sequencing. Public Health Genomics. 2022 Sep 15:1-8. doi: 10.1159/000526382.

10. Turk BR et al. X-linked Adrenoleukodystrophy: Pathology, Pathophysiology, Diagnostic Testing, Newborn Screening and Therapies. Int J Dev Neurosci. 2020 Feb;80(1):52-72. doi: 10.1002/jdn.10003.


Common Cognitive Test Falls Short for Concussion Diagnosis

Article Type
Changed
Mon, 07/01/2024 - 14:13

 

A tool routinely used to evaluate concussion in college athletes fails to accurately diagnose the condition in many cases, a new study showed.

Investigators found that almost half of athletes diagnosed with a concussion tested normally on the Sports Concussion Assessment Tool 5 (SCAT5), the recommended tool for measuring cognitive skills in concussion evaluations. The most accurate measure of concussion was symptoms reported by the athletes.

“If you don’t do well on the cognitive exam, it suggests you have a concussion. But many people who are concussed do fine on the exam,” lead author Kimberly Harmon, MD, professor of family medicine and section head of sports medicine at the University of Washington School of Medicine, Seattle, said in a news release.

The study was published online in JAMA Network Open.

Introduced in 2004, the SCAT was created to standardize the collection of information clinicians use to diagnose concussion, including evaluation of symptoms, orientation, and balance. It also uses a 10-word list to assess immediate memory and delayed recall.

Dr. Harmon’s own experiences as a team physician led her to wonder about the accuracy of the cognitive screening portion of the SCAT. She saw that “some people were concussed, and they did well on the recall test. Some people weren’t concussed, and they didn’t do well. So I thought we should study it,” she said.

Investigators compared 92 National Collegiate Athletic Association (NCAA) Division I athletes who had sustained a concussion between 2020 and 2022 and had a concussion evaluation within 48 hours with 92 matched nonconcussed teammates (overall cohort, 52% men). Most concussions occurred in football players, followed by volleyball players.

All athletes had previously completed NCAA-required baseline concussion screenings. Participants completed the SCAT5 screening test within 2 weeks of the incident concussion.

No significant differences were found between the baseline scores of athletes with and without concussion. Moreover, responses on the word recall section of the SCAT5 held little predictive value for concussion.

Nearly half (45%) of athletes with concussion performed at or even above their baseline cognitive scores, which the authors said highlights the limitations of the cognitive components of the SCAT5.

The most accurate predictor of concussion was participants’ responses to questions about their symptoms.

“If you get hit in the head and go to the sideline and say, ‘I have a headache, I’m dizzy, I don’t feel right,’ I can say with pretty good assurance that you have a concussion,” Dr. Harmon continued. “I don’t need to do any testing.”

Unfortunately, the problem is “that some athletes don’t want to come out. They don’t report their symptoms or may not recognize their symptoms. So then you need an objective, accurate test to tell you whether you can safely put the athlete back on the field. We don’t have that right now.”

The study did not control for concussion history, and the all–Division I cohort means the findings may not be generalizable to other athletes.

Nevertheless, investigators said the study “affirms that reported symptoms are the most sensitive indicator of concussion, and there are limitations to the objective cognitive testing included in the SCAT.” They concluded that concussion “remains a clinical diagnosis that should be based on a thorough review of signs, symptoms, and clinical findings.”

This study was funded in part by donations from University of Washington alumni Jack and Luellen Cherneski and the Chisholm Foundation. Dr. Harmon reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

 

A tool routinely used to evaluate concussion in college athletes fails to accurately diagnose the condition in many cases, a new study showed.

Investigators found that almost half of athletes diagnosed with a concussion tested normally on the Sports Concussion Assessment Tool 5 (SCAT5), the recommended tool for measuring cognitive skills in concussion evaluations. The most accurate measure of concussion was symptoms reported by the athletes.

“If you don’t do well on the cognitive exam, it suggests you have a concussion. But many people who are concussed do fine on the exam,” lead author Kimberly Harmon, MD, professor of family medicine and section head of sports medicine at the University of Washington School of Medicine, Seattle, said in a news release.

The study was published online in JAMA Network Open.

Introduced in 2004, the SCAT was created to standardize the collection of information clinicians use to diagnose concussion, including evaluation of symptoms, orientation, and balance. It also uses a 10-word list to assess immediate memory and delayed recall.

Dr. Harmon’s own experiences as a team physician led her to wonder about the accuracy of the cognitive screening portion of the SCAT. She saw that “some people were concussed, and they did well on the recall test. Some people weren’t concussed, and they didn’t do well. So I thought we should study it,” she said.

Investigators compared 92 National Collegiate Athletic Association (NCAA) Division 1 athletes who had sustained a concussion between 2020 and 2022 and had a concussion evaluation within 48 hours to 92 matched nonconcussed teammates (overall cohort, 52% men). Most concussions occurred in those who played football, followed by volleyball.

All athletes had previously completed NCAA-required baseline concussion screenings. Participants completed the SCAT5 screening test within 2 weeks of the incident concussion.

No significant differences were found between the baseline scores of athletes with and without concussion. Moreover, responses on the word recall section of the SCAT5 held little predictive value for concussion.

Nearly half (45%) of athletes with concussion performed at or even above their baseline cognitive scores, which the authors said highlights the limitations of the cognitive components of the SCAT5.

The most accurate predictor of concussion was participants’ responses to questions about their symptoms.

“If you get hit in the head and go to the sideline and say, ‘I have a headache, I’m dizzy, I don’t feel right,’ I can say with pretty good assurance that you have a concussion,” Dr. Harmon continued. “I don’t need to do any testing.”

Unfortunately, the problem is “that some athletes don’t want to come out. They don’t report their symptoms or may not recognize their symptoms. So then you need an objective, accurate test to tell you whether you can safely put the athlete back on the field. We don’t have that right now.”

The study did not control for concussion history, and the all–Division I cohort means the findings may not be generalizable to other athletes.

Nevertheless, investigators said the study “affirms that reported symptoms are the most sensitive indicator of concussion, and there are limitations to the objective cognitive testing included in the SCAT.” They concluded that concussion “remains a clinical diagnosis that should be based on a thorough review of signs, symptoms, and clinical findings.”

This study was funded in part by donations from University of Washington alumni Jack and Luellen Cherneski and the Chisholm Foundation. Dr. Harmon reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.

 

FROM JAMA NETWORK OPEN


Muscle fat: A new risk factor for cognitive decline?

Article Type
Changed
Wed, 06/14/2023 - 09:39

 

Muscle adiposity may be a novel risk factor for cognitive decline in older adults, new research suggests.

Investigators assessed muscle fat in more than 1,600 adults in their 70s and evaluated their cognitive function over a 10-year period. They found that increases in muscle adiposity from year 1 to year 6 were associated with greater cognitive decline over time, independent of total weight, other fat deposits, muscle characteristics, and traditional dementia risk factors.

The findings were similar between Black and White people and between men and women.

“Increasing adiposity – or fat deposition – in skeletal muscles predicted faster cognitive decline, irrespective of demographics or other disease, and this effect was distinct from that of other types of fat or other muscle characteristics, such as strength or mass,” study investigator Caterina Rosano, MD, MPH, professor of epidemiology at the University of Pittsburgh, said in an interview.

The study was published in the Journal of the American Geriatrics Society.
 

Biologically plausible

“There has been a growing recognition that overall adiposity and muscle measures, such as strength and mass, are individual indicators of future dementia risk and both strengthen the algorithms to predict cognitive decline,” said Dr. Rosano, associate director for clinical translation at the University of Pittsburgh’s Aging Institute. “However, adiposity in the muscle has not been examined.”

Some evidence supports a “biologically plausible link” between muscle adiposity and dementia risk. For example, muscle adiposity increases the risk for type 2 diabetes and hypertension, both of which are dementia risk factors.

Skeletal muscle adiposity increases with older age, even in older adults who lose weight, and is “highly prevalent” among older adults of African ancestry.

The researchers examined a large, biracial sample of older adults participating in the Health, Aging and Body Composition study, which enrolled men and women aged between 70 and 79 years. Participants were followed for an average of 9.0 ± 1.8 years.

During years 1 and 6, participants’ body composition was analyzed, including intermuscular adipose tissue (IMAT), visceral and subcutaneous adiposity, total fat mass, and muscle area.

In years 1, 3, 5, 8, and 10, participants’ cognition was measured using the modified Mini-Mental State (3MS) exam.

The main independent variable was 5-year change in thigh IMAT (year 6 minus year 1), and the main dependent variable was 3MS decline (from year 5 to year 10).

The researchers adjusted all the models for traditional dementia risk factors at baseline including 3MS, education, apo E4 allele, diabetes, hypertension, and physical activity and also calculated interactions between IMAT change by race or sex.

These models also accounted for change in muscle strength, muscle area, body weight, abdominal subcutaneous and visceral adiposity, and total body fat mass as well as cytokines related to adiposity.
 

‘Rich and engaging crosstalk’

The final sample included 1,634 participants (mean age, 73.38 years at baseline; 48% female; 35% Black; mean baseline 3MS score, 91.6).

Thigh IMAT increased by 39.0% in all participants from year 1 to year 6, which corresponded to an increase of 4.85 cm2 or 0.97 cm2/year. During the same time period, muscle strength decreased by 14.0% (P < .05), although thigh muscle area remained stable, decreasing less than 0.5%.

There were decreases in both abdominal subcutaneous and visceral adiposity of 3.92% and 6.43%, respectively (P < .05). There was a decrease of 3.3% in 3MS from year 5 to year 10.

Several variables were associated with 3MS decline, independent of any change in thigh IMAT: older age, less education, and having at least one copy of the apo E4 allele. These variables were included in the model of IMAT change predicting 3MS change.

A statistically significant association of IMAT increase with 3MS decline was found. The IMAT increase of 4.85 cm2 corresponded to a 3MS decline of an additional 3.6 points (P < .0001) from year 5 to year 10, “indicating a clinically important change.”

The association between increasing thigh IMAT with declining 3MS “remained statistically significant” after adjusting for race, age, education, and apo E4 (P < .0001) and was independent of changes in thigh muscle area, muscle strength, and other adiposity measures.

Among participants whose IMAT increased from year 1 to year 6, the mean 3MS score fell to approximately 87 points by year 10, compared with approximately 89 points among those without an IMAT increase.

Interactions by race and sex were not statistically significant (P > .08).

“Our results suggest that adiposity in muscles can predict cognitive decline, in addition to (not instead of) other traditional dementia risk factors,” said Dr. Rosano.

There is “a rich and engaging crosstalk between muscle, adipose tissue, and the brain all throughout our lives, happening through factors released in the bloodstream that can reach the brain, however, the specific identity of the factors responsible for the crosstalk of muscle adiposity and brain in older adults has not yet been discovered,” she noted.

Although muscle adiposity is “not yet routinely measured in clinical settings, it is being measured opportunistically on clinical CT scans obtained as part of routine patient care,” she added. “These CT measurements have already been validated in many studies of older adults; thus, clinicians could have access to this novel information without additional cost, time, or radiation exposure.”
 

Causality not proven

In a comment, Bruce Albala, PhD, professor, department of environmental and occupational health, University of California, Irvine, noted that the 3MS assessment is scored on a 100-point scale, with a score less than 78 “generally regarded as indicating cognitive impairment or approaching a dementia condition.” In the current study, the mean 3MS score of participants with increased IMAT was still “well above the dementia cut-off.”

Moreover, “even if there is a relationship or correlation between IMAT and cognition, this does not prove or even suggest causality, especially from a biological mechanistic approach,” said Dr. Albala, an adjunct professor of neurology, who was not involved in the study. “Clearly, more research is needed even to understand the relationship between these two factors.”

The study was supported by the National Institute on Aging. Dr. Rosano and coauthors and Dr. Albala declared no relevant financial relationships.

A version of this article originally appeared on Medscape.com.


FROM THE JOURNAL OF THE AMERICAN GERIATRICS SOCIETY


Link between bipolar disorder and CVD mortality explained?

Article Type
Changed
Fri, 06/09/2023 - 09:51

An early predictor of cardiovascular disease (CVD) has been found in youth with bipolar disorder (BD), in new findings that may explain the “excessive and premature mortality” related to heart disease in this patient population.

The investigators found that reactive hyperemia index (RHI) scores, a measure of endothelial function, tracked with mood state: higher RHI was tied to higher mania scores but not to depression scores. These findings persisted even after accounting for medications, obesity, and other cardiovascular risk factors (CVRFs).

“From a clinical perspective, these findings highlight the potential value of integrating vascular health in the assessment and management of youth with BD, and from a scientific perspective, these findings call for additional research focused on shared biological mechanisms linking vascular health and mood symptoms of BD,” senior investigator Benjamin Goldstein, MD, PhD, full professor of psychiatry, pharmacology, and psychological clinical science, University of Toronto, said in an interview.

The study was published online in the Journal of Clinical Psychiatry.
 

‘Excessively present’

BD is associated with “excessive and premature cardiovascular mortality” and CVD is “excessively present” in BD, exceeding what can be explained by traditional cardiovascular risk factors, psychiatric medications, and substance use, the researchers noted.

“In adults, more severe mood symptoms increase the risk of future CVD. Our focus on endothelial function rose due to the fact that CVD is rare in youth, whereas endothelial dysfunction – considered a precursor of CVD – can be assessed in youth,” said Dr. Goldstein, who holds the RBC Investments Chair in children’s mental health and developmental psychopathology at the Centre for Addiction and Mental Health, Toronto, where he is director of the Centre for Youth Bipolar Disorder.

For this reason, he and his colleagues were “interested in researching whether endothelial dysfunction is associated with mood symptoms in youth with BD.” Ultimately, the motivation was to “inspire new therapeutic opportunities that may improve both cardiovascular and mental health simultaneously.”

To investigate the question, the researchers studied 209 youth aged 13-20 years (n = 114 with BD and 94 healthy controls [HCs]).

The BD group comprised 34 euthymic, 36 depressed, and 44 hypomanic/mixed participants; among those with depressive or hypomanic/mixed features, 72 were experiencing clinically significant depression.

Participants had to be free of chronic inflammatory illness, medications targeting traditional CVRFs, recent infectious disease, and neurologic conditions.

Participants’ bipolar symptoms, psychosocial functioning, and family history were assessed. In addition, they were asked about treatment, physical and/or sexual abuse, smoking status, and socioeconomic status. Height, weight, waist circumference, blood pressure, and blood tests to assess CVRFs, including C-reactive protein (CRP), were also assessed. RHI was measured via pulse amplitude tonometry, with lower values indicating poorer endothelial function.
 

Positive affect beneficial?

Compared with HCs, there were fewer White participants in the BD group (78% vs. 55%; P < .001). The BD group also had higher Tanner stage development scores (stage 5: 65% vs. 35%; P = .03; V = 0.21), higher body mass index (BMI, 24.4 ± 4.6 vs. 22.0 ± 4.2; P < .001; d = 0.53), and higher CRP (1.94 ± 3.99 vs. 0.76 ± 0.86; P = .009; d = –0.40).

After controlling for age, sex, and BMI (F(3,202) = 4.47; P = .005; ηp² = 0.06), the researchers found significant between-group differences in RHI.

Post hoc pairwise comparisons showed RHI to be significantly lower in the BD-depressed versus the HC group (P = .04; d = 0.4). Moreover, the BD-hypomanic/mixed group had significantly higher RHI, compared with the other BD groups and the HC group.

RHI was associated with higher mania scores (beta, 0.26; P = .006), but there was no similar significant association with depression mood scores (beta, 0.01; P = .90).

The mood state differences in RHI and the RHI-mania association remained significant in sensitivity analyses examining the effect of current medication use as well as CVRFs, including lipids, CRP, and blood pressure on RHI.

“We found that youth with BD experiencing a depressive episode had lower endothelial function, whereas youth with BD experiencing a hypomanic/mixed episode had higher endothelial function, as compared to healthy youth,” Dr. Goldstein said.

There are several mechanisms potentially underlying the association between endothelial function and hypomania, the investigators noted. For example, positive affect is associated with increased endothelial function in normative samples, so hypomanic symptoms, including elation, may have similar beneficial associations, although those benefits likely do not extend to mania, which has been associated with cardiovascular risk.

They also point to several limitations in the study. The cross-sectional design “precludes making inferences regarding the temporal relationship between RHI and mood.” Moreover, the study focused only on hypomania, so “we cannot draw conclusions about mania.” In addition, the HC group had a “significantly higher proportion” of White participants, and a lower Tanner stage, so it “may not be a representative control sample.”

Nevertheless, the researchers concluded that the study “adds to the existing evidence for the potential value of integrating cardiovascular-related therapeutic approaches in BD,” noting that further research is needed to elucidate the mechanisms of the association.
Observable changes in youth

In a comment, Jess G. Fiedorowicz, MD, PhD, head and chief, department of mental health, Ottawa Hospital Research Institute, noted that individuals with BD “have a much higher risk of CVD, which tends to develop earlier and shortens life expectancy by more than a decade.”

This cardiovascular risk “appears to be acquired over the long-term course of illness and proportionate to the persistence and severity of mood symptoms, which implies that mood syndromes, such as depression and mania, themselves may induce changes in the body relevant to CVD,” said Dr. Fiedorowicz, who is also a professor in the department of psychiatry and senior research chair in adult psychiatry at the Brain and Mind Research Institute, University of Ottawa, and was not involved with the study.

The study “adds to a growing body of evidence that mood syndromes may enact physiological changes that may be relevant to risk of CVD. One important aspect of this study is that this can even be observed in [a] young sample,” he said.

This study was funded by the Canadian Institutes of Health Research and a Miner’s Lamp Innovation Fund from the University of Toronto. Dr. Goldstein and coauthors declare no relevant financial relationships. Dr. Fiedorowicz receives an honorarium from Elsevier for his work as editor-in-chief of the Journal of Psychosomatic Research.

A version of this article first appeared on Medscape.com.


FROM THE JOURNAL OF CLINICAL PSYCHIATRY


PTSD, anxiety linked to out-of-hospital cardiac arrest

Article Type
Changed
Wed, 05/31/2023 - 10:55

Stress-related disorders and anxiety are associated with a higher risk of out-of-hospital cardiac arrest (OHCA), a new case-control study suggests.

Investigators compared more than 35,000 OHCA case patients with a similar number of matched control persons and found an almost 1.5 times higher hazard of long-term stress conditions among OHCA case patients, compared with control persons, with a similar hazard for anxiety. Posttraumatic stress disorder was associated with an almost twofold higher risk of OHCA.

The findings applied equally to men and women and were independent of the presence of cardiovascular disease (CVD).

“This study raises awareness of the higher risks of OHCA and early risk monitoring to prevent OHCA in patients with stress-related disorders and anxiety,” write Talip Eroglu, of the department of cardiology, Copenhagen University Hospital, and colleagues.

The study was published online in BMJ Open Heart.
 

Stress disorders and anxiety overrepresented

OHCA “predominantly arises from lethal cardiac arrhythmias ... that occur most frequently in the setting of coronary heart disease,” the authors write. However, increasing evidence suggests that rates of OHCA may also be increased in association with noncardiac diseases.

Individuals with stress-related disorders and anxiety are “overrepresented” among victims of cardiac arrest as well as those with multiple CVDs. But previous studies of OHCA have been limited by small numbers of cardiac arrests. In addition, those studies involved only data from selected populations or used in-hospital diagnosis to identify cardiac arrest, thereby potentially omitting OHCA patients who died prior to hospital admission.

The researchers therefore turned to data from Danish health registries that include a large, unselected cohort of patients with OHCA to investigate whether long-term stress conditions (that is, PTSD and adjustment disorder) or anxiety disorder were associated with OHCA.

They stratified the cohort according to sex, age, and CVD to identify which risk factor confers the highest risk of OHCA in patients with long-term stress conditions or anxiety, and they conducted sensitivity analyses of potential confounders, such as depression.

The design was a nested case-control model in which individual-level patient records were cross-linked with data from other national registries and compared with matched control persons from the general population (35,195 OHCA case patients and 351,950 matched control persons; median [IQR] age, 72 [62-81] years; 66.82% men).

The prevalence of comorbidities and use of cardiovascular drugs were higher among OHCA case patients than among non-OHCA control persons.
 

Keep aware of stress and anxiety as risk factors

Among OHCA and non-OHCA participants, long-term stress conditions were diagnosed in 0.92% and 0.45%, respectively. Anxiety was diagnosed in 0.85% of OHCA case patients and in 0.37% of non-OHCA control persons.
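Taken at face value, these prevalences imply roughly twofold higher unadjusted odds of each exposure among case patients. A minimal sketch (illustrative only; the study's reported hazards of roughly 1.5 come from adjusted, matched analyses and so are smaller):

```python
def odds_ratio(p_cases: float, p_controls: float) -> float:
    """Unadjusted odds ratio from exposure prevalence in cases vs. controls."""
    return (p_cases / (1 - p_cases)) / (p_controls / (1 - p_controls))

# Exposure prevalences reported in the article
or_stress = odds_ratio(0.0092, 0.0045)   # long-term stress conditions, ~2.1
or_anxiety = odds_ratio(0.0085, 0.0037)  # anxiety, ~2.3
```

The gap between these crude ratios and the adjusted estimates illustrates why the investigators' adjustment for common OHCA risk factors matters.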

These conditions were associated with a higher rate of OHCA after adjustment for common OHCA risk factors.

There were no significant differences in results when the researchers adjusted for the use of anxiolytics and antidepressants.

When they examined the prevalence of concomitant medication use or comorbidities, they found that depression was more frequent among patients with long-term stress and anxiety, compared with individuals with neither of those diagnoses. Additionally, patients with long-term stress and anxiety more often used anxiolytics, antidepressants, and QT-prolonging drugs.

Stratification of the analyses according to sex revealed that the OHCA rate was increased in both women and men with long-term stress and anxiety, with no significant differences between the sexes. The association also did not differ significantly across age groups, or between patients with and those without CVD, ischemic heart disease, or heart failure.

Previous research has shown associations of stress-related disorders or anxiety with cardiovascular outcomes, including myocardial infarction, heart failure, and cerebrovascular disease. These cardiovascular conditions might be “biological mediators in the causal pathway of OHCA” and contribute to the increased OHCA rate associated with stress-related disorders and anxiety, the authors suggest.

Nevertheless, they note, stress-related disorders and anxiety remained significantly associated with OHCA after controlling for these variables, “suggesting that it is unlikely that traditional risk factors of OHCA alone explain this relationship.”

They suggest several potential mechanisms. One is that the relationship is likely mediated by the activity of the sympathetic autonomic nervous system, which “leads to an increase in heart rate, release of neurotransmitters into the circulation, and local release of neurotransmitters in the heart.”

Each of these factors “may potentially influence cardiac electrophysiology and facilitate ventricular arrhythmias and OHCA.”

In addition to a biological mechanism, behavioral and psychosocial factors may also contribute to OHCA risk, since stress-related disorders and anxiety “often lead to unhealthy lifestyle, such as smoking and lower physical activity, which in turn may increase the risk of OHCA.” Given the absence of data on these features in the registries the investigators used, they were unable to account for them.

However, “it is unlikely that knowledge of these factors would have altered our conclusions considering that we have adjusted for all the relevant cardiovascular comorbidities.”

Similarly, other psychiatric disorders, such as depression, can contribute to OHCA risk, but they adjusted for depression in their multivariable analyses.

“Awareness of the higher risks of OHCA in patients with stress-related disorders and anxiety is important when treating these patients,” they conclude.


Detrimental to the heart, not just the psyche

Glenn Levine, MD, master clinician and professor of medicine, Baylor College of Medicine, Houston, called it an “important study in that it is a large, nationwide cohort study and thus provides important information to complement much smaller, focused studies.”

Like those other studies, “it finds that negative psychological health, specifically, long-term stress (as well as anxiety), is associated with a significantly increased risk of out-of-hospital cardiac arrest,” continued Dr. Levine, who is the chief of the cardiology section at Michael E. DeBakey VA Medical Center, Houston, and was not involved with the study.

Dr. Levine thinks the study “does a good job, as best one can for such a study, in trying to control for other factors, and zeroing in specifically on stress (and anxiety), trying to assess their independent contributions to the risk of developing cardiac arrest.”

The take-home message for clinicians and patients “is that negative psychological stress factors, such as stress and anxiety, are not only detrimental to one’s psychological health but likely increase one’s risk for adverse cardiac events, such as cardiac arrest,” he stated.

No specific funding for the study was disclosed. Mr. Eroglu has disclosed no relevant financial relationships. The other authors’ disclosures are listed in the original article. Dr. Levine reports no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

Stress-related disorders and anxiety are associated with a higher risk of out-of-hospital cardiac arrest (OHCA), a new case-control study suggests.

Investigators compared more than 35,000 OHCA case patients with a similar number of matched control persons and found an almost 1.5 times higher hazard of long-term stress conditions among OHCA case patients, compared with control persons, with a similar hazard for anxiety. Posttraumatic stress disorder was associated with an almost twofold higher risk of OHCA.

The findings applied equally to men and women and were independent of the presence of cardiovascular disease (CVD).

“This study raises awareness of the higher risks of OHCA and early risk monitoring to prevent OHCA in patients with stress-related disorders and anxiety,” write Talip Eroglu, of the department of cardiology, Copenhagen University Hospital, and colleagues.

The study was published online  in BMJ Open Heart.
 

Stress disorders and anxiety overrepresented

OHCA “predominantly arises from lethal cardiac arrhythmias ... that occur most frequently in the setting of coronary heart disease,” the authors write. However, increasing evidence suggests that rates of OHCA may also be increased in association with noncardiac diseases.

Individuals with stress-related disorders and anxiety are “overrepresented” among victims of cardiac arrest as well as those with multiple CVDs. But previous studies of OHCA have been limited by small numbers of cardiac arrests. In addition, those studies involved only data from selected populations or used in-hospital diagnosis to identify cardiac arrest, thereby potentially omitting OHCA patients who died prior to hospital admission.

The researchers therefore turned to data from Danish health registries that include a large, unselected cohort of patients with OHCA to investigate whether long-term stress conditions (that is, PTSD and adjustment disorder) or anxiety disorder were associated with OHCA.

They stratified the cohort according to sex, age, and CVD to identify which risk factor confers the highest risk of OHCA in patients with long-term stress conditions or anxiety, and they conducted sensitivity analyses of potential confounders, such as depression.

The design was a nested-case control model in which records at an individual patient level across registries were cross-linked to data from other national registries and were compared to matched control persons from the general population (35,195 OHCAs and 351,950 matched control persons; median IQR age, 72 [62-81] years; 66.82% men).

The prevalence of comorbidities and use of cardiovascular drugs were higher among OHCA case patients than among non-OHCA control persons.
 

Keep aware of stress and anxiety as risk factors

Among OHCA and non-OHCA participants, long-term stress conditions were diagnosed in 0.92% and 0.45%, respectively. Anxiety was diagnosed in 0.85% of OHCA case patients and in 0.37% of non-OHCA control persons.

These conditions were associated with a higher rate of OHCA after adjustment for common OHCA risk factors.



There were no significant differences in results when the researchers adjusted for the use of anxiolytics and antidepressants.

When they examined the prevalence of concomitant medication use or comorbidities, they found that depression was more frequent among patients with long-term stress and anxiety, compared with individuals with neither of those diagnoses. Additionally, patients with long-term stress and anxiety more often used anxiolytics, antidepressants, and QT-prolonging drugs.

Stratification of the analyses according to sex revealed that the OHCA rate was increased in both women and men with long-term stress and anxiety. There were no significant differences between the sexes. There were also no significant differences between the association among different age groups, nor between patients with and those without CVD, ischemic heart disease, or heart failure.

Previous research has shown associations of stress-related disorders or anxiety with cardiovascular outcomes, including myocardial infarction, heart failure, and cerebrovascular disease. These disorders might be “biological mediators in the causal pathway of OHCA” and contribute to the increased OHCA rate associated with stress-related disorders and anxiety, the authors suggest.

Nevertheless, they note, stress-related disorders and anxiety remained significantly associated with OHCA after controlling for these variables, “suggesting that it is unlikely that traditional risk factors of OHCA alone explain this relationship.”

They suggest several potential mechanisms. One is that the relationship is likely mediated by the activity of the sympathetic autonomic nervous system, which “leads to an increase in heart rate, release of neurotransmitters into the circulation, and local release of neurotransmitters in the heart.”

Each of these factors “may potentially influence cardiac electrophysiology and facilitate ventricular arrhythmias and OHCA.”

In addition to a biological mechanism, behavioral and psychosocial factors may also contribute to OHCA risk, since stress-related disorders and anxiety “often lead to unhealthy lifestyle, such as smoking and lower physical activity, which in turn may increase the risk of OHCA.” Given the absence of data on these features in the registries the investigators used, they were unable to account for them.

However, “it is unlikely that knowledge of these factors would have altered our conclusions considering that we have adjusted for all the relevant cardiovascular comorbidities.”

Similarly, other psychiatric disorders, such as depression, can contribute to OHCA risk, but they adjusted for depression in their multivariable analyses.

“Awareness of the higher risks of OHCA in patients with stress-related disorders and anxiety is important when treating these patients,” they conclude.

 

 

Detrimental to the heart, not just the psyche

Glenn Levine, MD, master clinician and professor of medicine, Baylor College of Medicine, Houston, called it an “important study in that it is a large, nationwide cohort study and thus provides important information to complement much smaller, focused studies.”

Like those other studies, “it finds that negative psychological health, specifically, long-term stress (as well as anxiety), is associated with a significantly increased risk of out-of-hospital cardiac arrest,” continued Dr. Levine, who is the chief of the cardiology section at Michael E. DeBakey VA Medical Center, Houston, and was not involved with the study.

Dr. Levine thinks the study “does a good job, as best one can for such a study, in trying to control for other factors, and zeroing in specifically on stress (and anxiety), trying to assess their independent contributions to the risk of developing cardiac arrest.”

The take-home message for clinicians and patients “is that negative psychological stress factors, such as stress and anxiety, are not only detrimental to one’s psychological health but likely increase one’s risk for adverse cardiac events, such as cardiac arrest,” he stated.

No specific funding for the study was disclosed. Mr. Eroglu has disclosed no relevant financial relationships. The other authors’ disclosures are listed in the original article. Dr. Levine reports no relevant financial relationships.

A version of this article first appeared on Medscape.com.


Article Source

FROM BMJ OPEN HEART

Widespread prescribing of stimulants with other CNS-active meds

Article Type
Changed
Mon, 05/08/2023 - 16:15

 

A large proportion of U.S. adults who are prescribed schedule II stimulants are simultaneously receiving other CNS-active agents including benzodiazepines, opioids, and antidepressants – a potentially dangerous practice.

Investigators analyzed prescription drug claims for over 9.1 million U.S. adults over a 1-year period and found that 276,223 (3%) had used a schedule II stimulant, such as methylphenidate and amphetamines, during that time. Of these 276,223 patients, 45% combined these agents with one or more additional CNS-active drugs and almost 25% were simultaneously using two or more additional CNS-active drugs.

Close to half of the stimulant users were taking an antidepressant, while close to one-third filled prescriptions for anxiolytic/sedative/hypnotic medications, and one-fifth received opioid prescriptions.

The widespread, often off-label use of these stimulants in combination therapy with antidepressants, anxiolytics, opioids, and other psychoactive drugs, “reveals new patterns of utilization beyond the approved use of stimulants as monotherapy for ADHD, but because there are so few studies of these kinds of combination therapy, both the advantages and additional risks [of this type of prescribing] remain unknown,” study investigator Thomas J. Moore, AB, faculty associate in epidemiology, Johns Hopkins Bloomberg School of Public Health and Johns Hopkins Medicine, Baltimore, told this news organization.

The study was published online in BMJ Open.
 

‘Dangerous’ substances

Amphetamines and methylphenidate are CNS stimulants that have been in use for almost a century. Like opioids and barbiturates, they’re considered “dangerous” and classified as schedule II Controlled Substances because of their high potential for abuse.

Over many years, these stimulants have been used for multiple purposes, including nasal congestion, narcolepsy, appetite suppression, binge eating, depression, senile behavior, lethargy, and ADHD, the researchers note.

Observational studies suggest medical use of these agents has been increasing in the United States. The investigators conducted previous research that revealed a 79% increase from 2013 to 2018 in the number of adults who self-report their use. The current study, said Mr. Moore, explores how these stimulants are being used.

For the study, data were extracted from the MarketScan 2019 and 2020 Commercial Claims and Encounters Databases, focusing on 9.1 million adults aged 19-64 years who were continuously enrolled in an included commercial benefit plan from Oct. 1, 2019, to Dec. 31, 2020.

The primary outcome consisted of an outpatient prescription claim, service date, and days’ supply for the CNS-active drugs.

The researchers defined “combination-2” therapy as 60 or more days of combination treatment with a schedule II stimulant and at least one additional CNS-active drug. “Combination-3” therapy was defined as the addition of at least two additional CNS-active drugs.

The researchers used service date and days’ supply to examine the number of stimulant and other CNS-active drugs for each of the days of 2020.

CNS-active drug classes included antidepressants, anxiolytics/sedatives/hypnotics, antipsychotics, opioids, anticonvulsants, and other CNS-active drugs.
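The methods described above amount to expanding each claim into the calendar days it covers and then counting days on which a stimulant overlapped other CNS-active drugs. A minimal sketch of that logic, using hypothetical claims and illustrative class labels rather than the actual MarketScan schema:

```python
from datetime import date, timedelta
from collections import defaultdict

# Hypothetical claims: (drug_class, service_date, days_supply).
# Class labels are illustrative, not MarketScan coding.
claims = [
    ("stimulant", date(2020, 1, 1), 90),
    ("antidepressant", date(2020, 1, 15), 90),
    ("opioid", date(2020, 2, 1), 30),
]

# Expand each claim into the calendar days it covers.
exposure = defaultdict(set)  # day -> set of drug classes on that day
for drug_class, start, days_supply in claims:
    for i in range(days_supply):
        exposure[start + timedelta(days=i)].add(drug_class)

# Days on a stimulant plus >=1 (or >=2) additional CNS-active drugs.
combo2_days = sum(1 for drugs in exposure.values()
                  if "stimulant" in drugs and len(drugs) >= 2)
combo3_days = sum(1 for drugs in exposure.values()
                  if "stimulant" in drugs and len(drugs) >= 3)

# Per the study's definitions, combination therapy requires
# 60 or more such days.
is_combination2 = combo2_days >= 60
is_combination3 = combo3_days >= 60
print(combo2_days, combo3_days, is_combination2, is_combination3)
```

With these hypothetical claims, the patient qualifies as a combination-2 user (stimulant plus antidepressant for more than 60 days) but not combination-3, since all three drugs overlap for only about a month.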
 

Prescribing cascade

Of the total number of adults enrolled, 3% (n = 276,223) were taking schedule II stimulants during 2020, with a median of 8 (interquartile range, 4-11) prescriptions. These drugs provided 227 (IQR, 110-322) treatment days of exposure.

Among those taking stimulants, 45.5% combined the use of at least one additional CNS-active drug for a median of 213 (IQR, 126-301) treatment days, and 24.3% used at least two additional CNS-active drugs for a median of 182 (IQR, 108-276) days.

“Clinicians should beware of the prescribing cascade. Sometimes it begins with an antidepressant that causes too much sedation, so a stimulant gets added, which leads to insomnia, so alprazolam gets added to the mix,” Mr. Moore said.

He cautioned that this “leaves a patient with multiple drugs, all with discontinuation effects of different kinds and clashing effects.”

These new findings, the investigators note, “add new public health concerns to those raised by our previous study. ... this more-detailed profile reveals several new patterns.”

Most patients become “long-term users” once treatment has started, with 75% continuing for a 1-year period.

“This underscores the possible risks of nonmedical use and dependence that have warranted the classification of these drugs as having high potential for psychological or physical dependence and their prominent appearance in toxicology drug rankings of fatal overdose cases,” they write.

They note that the data “do not indicate which intervention may have come first – a stimulant added to compensate for excess sedation from the benzodiazepine, or the alprazolam added to calm excessive CNS stimulation and/or insomnia from the stimulants or other drugs.”

Several limitations cited by the authors include the fact that, although the population encompassed 9.1 million people, it “may not represent all commercially insured adults,” and it doesn’t include people who aren’t covered by commercial insurance.

Moreover, the MarketScan dataset included up to four diagnosis codes for each outpatient and emergency department encounter; therefore, it was not possible to directly link the diagnoses to specific prescription drug claims, and thus the diagnoses were not evaluated.

“Since many providers will not accept a drug claim for a schedule II stimulant without an on-label diagnosis of ADHD,” the authors suspect that “large numbers of this diagnosis were present.”
 

 

 

Complex prescribing regimens

Mark Olfson, MD, MPH, professor of psychiatry, medicine, and law and professor of epidemiology, Columbia University Irving Medical Center, New York, said the report “highlights the pharmacological complexity of adults who are treated with stimulants.”


Dr. Olfson, who is a research psychiatrist at the New York State Psychiatric Institute, New York, and was not involved with the study, observed there is “evidence to support stimulants as an adjunctive therapy for treatment-resistant unipolar depression in older adults.”

However, he added, “this indication is unlikely to fully explain the high proportion of nonelderly, stimulant-treated adults who also receive antidepressants.”

These new findings “call for research to increase our understanding of the clinical contexts that motivate these complex prescribing regimens as well as their effectiveness and safety,” said Dr. Olfson.

The authors have not declared a specific grant for this research from any funding agency in the public, commercial, or not-for-profit sectors. Mr. Moore declares no relevant financial relationships. Coauthor G. Caleb Alexander, MD, is past chair and a current member of the Food and Drug Administration’s Peripheral and Central Nervous System Advisory Committee; is a cofounding principal and equity holder in Monument Analytics, a health care consultancy whose clients include the life sciences industry as well as plaintiffs in opioid litigation, for whom he has served as a paid expert witness; and is a past member of OptumRx’s National P&T Committee. Dr. Olfson declares no relevant financial relationships.

A version of this article first appeared on Medscape.com.


Article Source

FROM BMJ OPEN

Clozapine may curb schizophrenia’s ‘most dreaded outcome’

Article Type
Changed
Wed, 04/05/2023 - 11:37

The antipsychotic clozapine appears to guard against suicide for patients with treatment-resistant schizophrenia, results of an autopsy study suggest.

Investigators reviewed more than 53,000 autopsy records, including more than 600 in which postmortem toxicology detected the antipsychotic clozapine or olanzapine, and found that decedents who had taken clozapine were significantly less likely to have died by suicide than their counterparts who had taken olanzapine.

“Clozapine is an important and effective antisuicide medicine and should be strongly considered for treatment-resistant psychotic disorders, especially when the patient may be at risk for suicide,” study investigator Paul Nestadt, MD, associate professor, department of psychiatry and behavioral sciences, Johns Hopkins School of Medicine, Baltimore, told this news organization.

The study was published online in The Journal of Clinical Psychiatry.
 

Underutilized medication

Clozapine is the only medication indicated for treatment-resistant schizophrenia and is considered “the most efficacious antipsychotic,” the investigators note. Unfortunately, it has “long been underutilized” for several reasons, including prescriber hesitancy and concerns about side effects.

The authors note that its mechanism of action and the basis for superior efficacy are “still poorly understood” but “may extend beyond neurotransmitter receptor binding.”

Importantly, it may have a beneficial impact on domains other than positive symptoms of schizophrenia, including suicidality. Several studies have shown that it’s beneficial in this regard, but it is “unclear whether the unique antisuicidal properties of clozapine are related to better symptom control ... or to the closer monitoring and follow-up mandated for clozapine use,” they note.

A previous trial, the International Suicide Prevention Trial (InterSePT), demonstrated that clozapine is associated with a greater reduction in suicidality than olanzapine, and the findings “led to an FDA indication for clozapine in reducing the risk of recurrent suicidal behavior.”

However, the authors note, “in the severely ill populations in these studies, it is difficult to be certain about patients’ adherence to prescribed clozapine.”

“Other studies, such as InterSePT, have shown some evidence of clozapine working to reduce suicide-related outcomes, such as attempts or suicidal ideation, but few have been sufficiently powered to measure an effect on actual suicide deaths,” said Dr. Nestadt.

Dr. Paul Nestadt


“As a suicidologist, I feel it is very important that we understand what treatments and interventions can actually prevent suicide deaths, as most suicides are not associated with past attempts or ideation, with suicide decedents usually looking very different from characteristic nonfatal attempters, from a clinical or epidemiological standpoint,” he added.

“If we could show that clozapine actually decreases the likelihood of suicide deaths in our patients, it gives us more reason to choose it over less effective neuroleptics in our clinics – especially for patients at high risk of suicide,” he said.

For the study, the researchers reviewed 19 years of statewide autopsy records from Maryland’s Office of the Chief Medical Examiner, which “performs uniquely comprehensive death investigations.” These investigations include full toxicologic panels with postmortem blood levels of antipsychotics.

The researchers compared decedents who tested positive for clozapine and decedents who tested positive for olanzapine. They evaluated demographics, clinical features, and manner-of-death outcomes.
 

‘Untapped resource’

Of 53,133 decedents, olanzapine or clozapine was detected in the blood of 621 persons (n = 571 and n = 50, respectively).

There were no significant differences in age, sex, race, or urban residence between the decedents who were treated with olanzapine and those who received clozapine.

The odds of a death by suicide in those treated with clozapine were less than half of the odds among decedents who had been treated with olanzapine (odds ratio, 0.47; 95% confidence interval, 0.26-0.84; P = .011).

In sensitivity analyses, the investigators reanalyzed the data to compare clozapine with other antipsychotics, including chlorpromazine, thioridazine, quetiapine, and olanzapine, and the results were similar. The odds of suicide (compared with accident) in those taking clozapine were much lower than in those taking any other tested antipsychotics individually or in combination (OR, 0.42; 95% CI, 0.24-0.73; P = .002).
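The odds ratios above are easier to parse with the arithmetic behind them spelled out. The sketch below computes an odds ratio and a Wald 95% confidence interval from a 2×2 table of manner-of-death counts. The counts are purely hypothetical (the article does not report the raw cell counts); they are chosen only so the ratio lands near the reported 0.47.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table.

    a, b: suicides and accidents in group 1 (e.g., clozapine decedents)
    c, d: suicides and accidents in group 2 (e.g., olanzapine decedents)
    """
    or_ = (a / b) / (c / d)
    half_width = z * math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # on the log scale
    lo = math.exp(math.log(or_) - half_width)
    hi = math.exp(math.log(or_) + half_width)
    return or_, lo, hi

# Hypothetical counts, NOT the study's data: 10 suicides vs 30 accidents among
# clozapine decedents; 200 suicides vs 280 accidents among olanzapine decedents.
or_, lo, hi = odds_ratio_ci(10, 30, 200, 280)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")  # OR = 0.47, 95% CI 0.22-0.98
```

An odds ratio below 1 with a confidence interval that excludes 1 is what corresponds, in the study, to suicide (vs accident) being significantly less likely among clozapine-positive decedents.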

Dr. Nestadt outlined several hypotheses regarding the mechanism of clozapine’s antisuicidal properties.

“Most theories stem from the differences in its receptor affinity, compared [with] the other neuroleptics,” he said. “In addition to the more typical dopaminergic blockade seen in neuroleptics, clozapine enhances serotonin release and greatly increases peripheral norepinephrine.”

This has been shown to “grant clozapine a greater antidepressant effect than other neuroleptics while also potentially decreasing aggression and impulsivity, which are both strongly associated with suicide risk,” he said.

Clozapine may also “work to reduce the inflammation-triggered activation of the kynurenine pathway, which otherwise contributes to serotonin depletion,” he added.

He noted that some studies have shown that as many as 1 in 10 patients with schizophrenia die by suicide, “so addressing this risk is paramount,” and that clozapine can play an important role in this.

The authors note that the findings “also highlight the utility of state-wide autopsy records, an untapped resource for investigating the potential protective effect of psychiatric medications on suicide at a population level.”

“Importantly, we can be certain that this was not an issue of nonadherence to treatment in either group, which is a common issue in the use of these drugs because, instead of prescription records or self-report, we used actual measurements of drug presence in decedents’ blood at death,” said Dr. Nestadt.
 

‘Strongly suggestive’ data

Commenting on the study, Maria Oquendo, MD, PhD, Ruth Meltzer Professor and chair of psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, said most work on antisuicidal psychopharmacologic approaches “focuses on suicidal ideation or suicide attempts, due to the rarity of suicide death, even in high-risk populations.”

“Showing that clozapine may decrease risk for the most dreaded outcome of schizophrenia – suicide – is critically important,” said Dr. Oquendo, past president of the American Psychiatric Association.

Nevertheless, some questions remain, said Dr. Oquendo, who was not involved with the study. “Comparison of suicides to only accidental deaths has limitations. Many individuals who die due to accidents, like many suicides, are not similar to the general population,” she added.

However, she acknowledged, the data are strongly suggestive that clozapine protects against suicide.

“While not definitive, ideally these findings will stimulate changes in prescribing practices which may be lifesaving both literally – in terms of preventing suicides – and figuratively, given the drug’s effect on symptoms that impact quality of life and functioning,” said Dr. Oquendo.

The study received no funding or support. Dr. Nestadt is supported by the American Foundation for Suicide Prevention and the National Institute on Drug Abuse. The other authors’ disclosures are listed in the original article. Dr. Oquendo receives royalties from the Research Foundation for Mental Hygiene for the commercial use of the Columbia Suicide Severity Rating Scale. She serves as an advisor to Alkermes, Mind Medicine, Sage Therapeutics, St. George’s University, and Fundacion Jimenez Diaz. Her family owns stock in Bristol-Myers Squibb.

A version of this article first appeared on Medscape.com.

Article Source

FROM THE JOURNAL OF CLINICAL PSYCHIATRY


Magnesium-rich diet linked to lower dementia risk

Article Type
Changed
Fri, 04/07/2023 - 14:04

A magnesium-rich diet has been linked to better brain health, an outcome that may help lower dementia risk, new research suggests.

Investigators studied more than 6,000 cognitively healthy individuals, aged 40-73, and found that those who consumed more than 550 mg of magnesium daily had a brain age approximately 1 year younger by age 55 years, compared with a person who consumed a normal magnesium intake (~360 mg per day).

“This research highlights the potential benefits of a diet high in magnesium and the role it plays in promoting good brain health,” lead author Khawlah Alateeq, a PhD candidate in neuroscience at Australian National University’s National Centre for Epidemiology and Population Health, said in an interview.

Clinicians “can use [the findings] to counsel patients on the benefits of increasing magnesium intake through a healthy diet and monitoring magnesium levels to prevent deficiencies,” she stated.

The study was published online in the European Journal of Nutrition.
 

Promising target

The researchers were motivated to conduct the study because of “the growing concern over the increasing prevalence of dementia,” Ms. Alateeq said.

“Since there is no cure for dementia, and the development of pharmacological treatment for dementia has been unsuccessful over the last 30 years, prevention has been suggested as an effective approach to address the issue,” she added.

Nutrition, Ms. Alateeq said, is a “modifiable risk factor that can influence brain health and is highly amenable to scalable and cost-effective interventions.” It represents “a promising target” for risk reduction at a population level.

Previous research shows individuals with lower magnesium levels are at higher risk for Alzheimer’s disease, while those with higher dietary magnesium intake may be at lower risk of progressing from normal aging to cognitive impairment.

Most previous studies, however, included participants older than age 60 years, and it’s “unclear when the neuroprotective effects of dietary magnesium become detectable,” the researchers note.

Moreover, dietary patterns change and fluctuate, potentially leading to changes in magnesium intake over time. These changes may have as much impact as absolute magnesium intake at any single point in time.

In light of the “current lack of understanding of when and to what extent dietary magnesium exerts its protective effects on the brain,” the researchers examined the association between magnesium trajectories over time, brain matter, and white matter lesions.

They also examined the association between magnesium and several different blood pressure measures (mean arterial pressure, systolic blood pressure, diastolic blood pressure, and pulse pressure).

Since cardiovascular health, neurodegeneration, and brain shrinkage patterns differ between men and women, the researchers stratified their analyses by sex.
 

Brain volume differences

The researchers analyzed the dietary magnesium intake of 6,001 individuals (mean age, 55.3 years) selected from the UK Biobank – a prospective cohort study of participants aged 37-73 at baseline, who were assessed between 2005 and 2023.

For the current study, only participants with baseline DBP and SBP measurements and structural MRI scans were included. Participants were also required to be free of neurologic disorders and to have an available record of dietary magnesium intake.

Covariates included age, sex, education, health conditions, smoking status, body mass index, amount of physical activity, and alcohol intake.

Over a 16-month period, participants completed an online questionnaire five times. Their responses were used to calculate daily magnesium intake. Foods of particular interest included leafy green vegetables, legumes, nuts, seeds, and whole grains, all of which are magnesium rich.

They used latent class analysis (LCA) to “identify mutually exclusive subgroup[s] (classes) of magnesium intake trajectory separately for men and women.”

Men had a slightly higher prevalence of BP medication and diabetes, compared with women, and postmenopausal women had a higher prevalence of BP medication and diabetes, compared with premenopausal women.

Compared with lower baseline magnesium intake, higher baseline dietary intake of magnesium was associated with larger brain volumes in several regions in both men and women.

The latent class analysis identified three classes of magnesium intake: a “high-decreasing” trajectory, a “normal-stable” trajectory, and a “low-increasing” trajectory.

In women in particular, the “high-decreasing” trajectory was significantly associated with larger brain volumes, compared with the “normal-stable” trajectory, while the “low-increasing” trajectory was associated with smaller brain volumes.
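Latent class analysis is a model-based clustering method, so the following is a loose illustration only: it labels a person's repeated intake reports with the three trajectory names using simple start-level and slope rules. The thresholds (550 and 350 mg/day, echoing intake levels quoted elsewhere in the article) and the rule itself are my assumptions, not the study's algorithm.

```python
def classify_trajectory(intakes, high=550.0, low=350.0):
    """Crude rule-based stand-in for latent class analysis: bucket a series
    of repeated daily magnesium intakes (mg) by starting level and trend.

    Thresholds are illustrative, not taken from the paper.
    """
    start, end = intakes[0], intakes[-1]
    slope = (end - start) / (len(intakes) - 1)  # mg change per questionnaire wave
    if start >= high and slope < 0:
        return "high-decreasing"
    if start <= low and slope > 0:
        return "low-increasing"
    return "normal-stable"

# Five questionnaire waves, matching the study design:
print(classify_trajectory([600, 580, 560, 540, 520]))  # high-decreasing
print(classify_trajectory([360, 358, 362, 359, 361]))  # normal-stable
print(classify_trajectory([300, 320, 340, 360, 380]))  # low-increasing
```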



Even an increase of 1 mg of magnesium per day (above 350 mg/day) made a difference in brain volume, especially in women. The changes associated with every 1-mg increase are found in the table below:



Associations between magnesium and BP measures were “mostly nonsignificant,” the researchers say, and the neuroprotective effect of higher magnesium intake in the high-decreasing trajectory was greater in postmenopausal versus premenopausal women.

“Our models indicate that compared to somebody with a normal magnesium intake (~350 mg per day), somebody in the top quartile of magnesium intake (≥ 550 mg per day) would be predicted to have a ~0.20% larger GM [gray matter volume] and ~0.46% larger RHC [right hippocampal volume],” the authors summarize.

“In a population with an average age of 55 years, this effect corresponds to ~1 year of typical aging,” they note. “In other words, if this effect is generalizable to other populations, a 41% increase in magnesium intake may lead to significantly better brain health.”

Although the exact mechanisms underlying magnesium’s protective effects are “not yet clearly understood,” there is “considerable evidence that magnesium levels are related to better cardiovascular health. Magnesium supplementation has been found to decrease blood pressure – and high blood pressure is a well-established risk factor for dementia,” said Ms. Alateeq.
 

Association, not causation

Yuko Hara, PhD, director of Aging and Prevention, Alzheimer’s Drug Discovery Foundation, noted that the study is observational and therefore shows an association, not causation.

“People eating a high-magnesium diet may also be eating a brain-healthy diet and getting high levels of nutrients/minerals other than magnesium alone,” suggested Dr. Hara, who was not involved with the study.

She noted that many foods are good sources of magnesium, including spinach, almonds, cashews, legumes, yogurt, brown rice, and avocados.

“Eating a brain-healthy diet (for example, the Mediterranean diet) is one of the Seven Steps to Protect Your Cognitive Vitality that ADDF’s Cognitive Vitality promotes,” she said.

Open Access funding was enabled and organized by the Council of Australian University Librarians and its Member Institutions. Ms. Alateeq, her co-authors, and Dr. Hara declare no relevant financial relationships.

A version of this article originally appeared on Medscape.com.




Even an increase of 1 mg of magnesium per day (above 350 mg/day) made a difference in brain volume, especially in women. The changes associated with every 1-mg increase are found in the table below:



Associations between magnesium and BP measures were “mostly nonsignificant,” the researchers say, and the neuroprotective effect of higher magnesium intake in the high-decreasing trajectory was greater in postmenopausal versus premenopausal women.

“Our models indicate that compared to somebody with a normal magnesium intake (~350 mg per day), somebody in the top quartile of magnesium intake (≥ 550 mg per day) would be predicted to have a ~0.20% larger GM and ~0.46% larger RHC,” the authors summarize.

“In a population with an average age of 55 years, this effect corresponds to ~1 year of typical aging,” they note. “In other words, if this effect is generalizable to other populations, a 41% increase in magnesium intake may lead to significantly better brain health.”

Although the exact mechanisms underlying magnesium’s protective effects are “not yet clearly understood, there’s considerable evidence that magnesium levels are related to better cardiovascular health. Magnesium supplementation has been found to decrease blood pressure – and high blood pressure is a well-established risk factor for dementia,” said Ms. Alateeq.
 

 

 

Association, not causation

Yuko Hara, PhD, director of Aging and Prevention, Alzheimer’s Drug Discovery Foundation, noted that the study is observational and therefore shows an association, not causation.

“People eating a high-magnesium diet may also be eating a brain-healthy diet and getting high levels of nutrients/minerals other than magnesium alone,” suggested Dr. Hara, who was not involved with the study.

She noted that many foods are good sources of magnesium, including spinach, almonds, cashews, legumes, yogurt, brown rice, and avocados.

“Eating a brain-healthy diet (for example, the Mediterranean diet) is one of the Seven Steps to Protect Your Cognitive Vitality that ADDF’s Cognitive Vitality promotes,” she said.

Open Access funding was enabled and organized by the Council of Australian University Librarians and its Member Institutions. Ms. Alateeq, her co-authors, and Dr. Hara declare no relevant financial relationships.

A version of this article originally appeared on Medscape.com.

A magnesium-rich diet has been linked to better brain health, an outcome that may help lower dementia risk, new research suggests.

Investigators studied more than 6,000 cognitively healthy individuals, aged 40-73, and found that those who consumed more than 550 mg of magnesium daily had a brain age approximately 1 year younger by age 55 years, compared with a person who consumed a normal magnesium intake (~360 mg per day).

“This research highlights the potential benefits of a diet high in magnesium and the role it plays in promoting good brain health,” lead author Khawlah Alateeq, a PhD candidate in neuroscience at Australian National University’s National Centre for Epidemiology and Population Health, said in an interview.

Clinicians “can use [the findings] to counsel patients on the benefits of increasing magnesium intake through a healthy diet and monitoring magnesium levels to prevent deficiencies,” she stated.

The study was published online in the European Journal of Nutrition.
 

Promising target

The researchers were motivated to conduct the study because of “the growing concern over the increasing prevalence of dementia,” Ms. Alateeq said.

“Since there is no cure for dementia, and the development of pharmacological treatment for dementia has been unsuccessful over the last 30 years, prevention has been suggested as an effective approach to address the issue,” she added.

Nutrition, Ms. Alateeq said, is a “modifiable risk factor that can influence brain health and is highly amenable to scalable and cost-effective interventions.” It represents “a promising target” for risk reduction at a population level.

Previous research shows individuals with lower magnesium levels are at higher risk for Alzheimer’s disease (AD), while those with higher dietary magnesium intake may be at lower risk of progressing from normal aging to cognitive impairment.

Most previous studies, however, included participants older than age 60 years, and it’s “unclear when the neuroprotective effects of dietary magnesium become detectable,” the researchers note.

Moreover, dietary patterns change and fluctuate, potentially leading to changes in magnesium intake over time. These changes may have as much impact as absolute magnesium intake at any point in time.

In light of the “current lack of understanding of when and to what extent dietary magnesium exerts its protective effects on the brain,” the researchers examined the association between magnesium intake trajectories over time, brain volumes, and white matter lesions.

They also examined the association between magnesium and several different blood pressure measures (mean arterial pressure, systolic blood pressure, diastolic blood pressure, and pulse pressure).

Since cardiovascular health, neurodegeneration, and brain shrinkage patterns differ between men and women, the researchers stratified their analyses by sex.
 

Brain volume differences

The researchers analyzed the dietary magnesium intake of 6,001 individuals (mean age, 55.3 years) selected from the UK Biobank – a prospective cohort study of participants aged 37-73 at baseline, who were assessed between 2005 and 2023.

For the current study, only participants with baseline diastolic and systolic blood pressure measurements and structural MRI scans were included. Participants were also required to be free of neurologic disorders and to have an available record of dietary magnesium intake.

Covariates included age, sex, education, health conditions, smoking status, body mass index, amount of physical activity, and alcohol intake.

Over a 16-month period, participants completed an online questionnaire five times. Their responses were used to calculate daily magnesium intake. Foods of particular interest included leafy green vegetables, legumes, nuts, seeds, and whole grains, all of which are magnesium rich.
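In code, this intake estimate reduces to a weighted sum of servings times magnesium content per serving for each questionnaire, averaged across the repeated waves. A minimal sketch, with hypothetical serving counts and approximate magnesium values for illustration (not the study’s actual nutrient database):

```python
# Sketch: estimating average daily magnesium intake from repeated diet
# questionnaires. Serving counts and mg-per-serving values below are
# hypothetical/approximate, for illustration only.

MG_PER_SERVING = {  # approximate magnesium content, mg per serving
    "spinach_cooked_cup": 157,
    "almonds_oz": 80,
    "black_beans_cup": 120,
    "brown_rice_cup": 84,
    "whole_wheat_bread_slice": 23,
}

def daily_intake(servings: dict[str, float]) -> float:
    """Magnesium (mg) for one day's reported servings."""
    return sum(MG_PER_SERVING[food] * n for food, n in servings.items())

def average_intake(questionnaires: list[dict[str, float]]) -> float:
    """Average daily magnesium intake across repeated questionnaires."""
    return sum(daily_intake(q) for q in questionnaires) / len(questionnaires)

# Five repeated questionnaires for one hypothetical participant
responses = [
    {"spinach_cooked_cup": 1, "almonds_oz": 1, "brown_rice_cup": 1},
    {"black_beans_cup": 1, "whole_wheat_bread_slice": 2},
    {"spinach_cooked_cup": 0.5, "almonds_oz": 2, "black_beans_cup": 1},
    {"brown_rice_cup": 2, "whole_wheat_bread_slice": 2},
    {"spinach_cooked_cup": 1, "black_beans_cup": 1, "brown_rice_cup": 1},
]
print(round(average_intake(responses), 1))  # → 284.1
```

Averaging across waves, rather than using a single questionnaire, is what lets the researchers characterize intake trajectories rather than a one-time snapshot.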

The researchers used latent class analysis (LCA) to “identify mutually exclusive subgroups (classes) of magnesium intake trajectory separately for men and women.”
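As a rough illustration of what trajectory classification does, the sketch below summarizes each participant’s repeated intake measurements by baseline level and slope, then labels the trajectory with the class names used in the study. The thresholds are invented for illustration, and this rule-based stand-in is not the study’s actual latent class analysis:

```python
# Crude stand-in for latent class analysis of intake trajectories:
# fit a least-squares line to each participant's repeated measurements,
# then label by baseline level and trend. Thresholds are arbitrary
# illustrations, not the study's.

def linear_fit(ys: list[float]) -> tuple[float, float]:
    """Least-squares slope and intercept over time points 0..n-1."""
    n = len(ys)
    xs = range(n)
    xbar, ybar = (n - 1) / 2, sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum(
        (x - xbar) ** 2 for x in xs
    )
    return slope, ybar - slope * xbar

def classify(intakes: list[float]) -> str:
    """Label a trajectory (mg/day at each wave) with a class name."""
    slope, intercept = linear_fit(intakes)
    if intercept > 400 and slope < -2:
        return "high-decreasing"
    if intercept < 300 and slope > 2:
        return "low-increasing"
    return "normal-stable"

print(classify([550, 530, 510, 480, 460]))  # falling from a high baseline
print(classify([250, 270, 300, 320, 340]))  # rising from a low baseline
print(classify([350, 355, 348, 352, 350]))  # stable near 350 mg/day
```

A true LCA would instead fit a probabilistic mixture model and assign each participant to the class with the highest posterior probability, but the grouping intuition is the same.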

Men had a slightly higher prevalence of blood pressure (BP) medication use and diabetes, compared with women, and postmenopausal women had a higher prevalence of BP medication use and diabetes, compared with premenopausal women.

Compared with lower baseline magnesium intake, higher baseline dietary intake of magnesium was associated with larger brain volumes in several regions in both men and women.

The latent class analysis identified three classes of magnesium intake: “high-decreasing,” “normal-stable,” and “low-increasing.”
In women in particular, the “high-decreasing” trajectory was significantly associated with larger brain volumes, compared with the “normal-stable” trajectory, while the “low-increasing” trajectory was associated with smaller brain volumes.



Even an increase of 1 mg of magnesium per day (above 350 mg/day) made a difference in brain volume, especially in women. The changes associated with every 1-mg increase are found in the table below:



Associations between magnesium and BP measures were “mostly nonsignificant,” the researchers say, and the neuroprotective effect of higher magnesium intake in the high-decreasing trajectory was greater in postmenopausal versus premenopausal women.

“Our models indicate that compared to somebody with a normal magnesium intake (~350 mg per day), somebody in the top quartile of magnesium intake (≥ 550 mg per day) would be predicted to have a ~0.20% larger GM [gray matter volume] and ~0.46% larger RHC [right hippocampal volume],” the authors summarize.

“In a population with an average age of 55 years, this effect corresponds to ~1 year of typical aging,” they note. “In other words, if this effect is generalizable to other populations, a 41% increase in magnesium intake may lead to significantly better brain health.”

Although the exact mechanisms underlying magnesium’s protective effects are “not yet clearly understood, there’s considerable evidence that magnesium levels are related to better cardiovascular health. Magnesium supplementation has been found to decrease blood pressure – and high blood pressure is a well-established risk factor for dementia,” said Ms. Alateeq.
 

 

 

Association, not causation

Yuko Hara, PhD, director of Aging and Prevention, Alzheimer’s Drug Discovery Foundation, noted that the study is observational and therefore shows an association, not causation.

“People eating a high-magnesium diet may also be eating a brain-healthy diet and getting high levels of nutrients/minerals other than magnesium alone,” suggested Dr. Hara, who was not involved with the study.

She noted that many foods are good sources of magnesium, including spinach, almonds, cashews, legumes, yogurt, brown rice, and avocados.

“Eating a brain-healthy diet (for example, the Mediterranean diet) is one of the Seven Steps to Protect Your Cognitive Vitality that ADDF’s Cognitive Vitality promotes,” she said.

Open Access funding was enabled and organized by the Council of Australian University Librarians and its Member Institutions. Ms. Alateeq, her co-authors, and Dr. Hara declare no relevant financial relationships.

A version of this article originally appeared on Medscape.com.

Article Source

FROM EUROPEAN JOURNAL OF NUTRITION


HRT may prevent Alzheimer’s in high-risk women

Article Type
Changed
Thu, 01/19/2023 - 16:26

 

Hormone replacement therapy (HRT) introduced early during the menopausal transition may protect against Alzheimer’s dementia in women carrying the APOE4 gene, new research suggests.

Results from a cohort study of almost 1,200 women showed that use of HRT was associated with higher delayed memory scores and larger entorhinal and hippocampal brain volumes – areas that are affected early by Alzheimer’s disease (AD) pathology.

HRT was also found to be most effective, as seen by larger hippocampal volume, when introduced during early perimenopause.

“Clinicians are very much aware of the susceptibility of women to cognitive disturbances during menopause,” lead author Rasha Saleh, MD, senior research associate, University of East Anglia (England), said in an interview.

“Identifying the at-risk APOE4 women and early HRT introduction can be of benefit. Confirming our findings in a clinical trial would be the next step forward,” Dr. Saleh said.

The findings were published online in Alzheimer’s Research & Therapy.
 

Personalized approaches

Dr. Saleh noted that estrogen receptors are localized in various areas of the brain, including cognition-related areas. Estrogen regulates such things as neuroinflammatory status, glucose utilization, and lipid metabolism.

“The decline of estrogen during menopause can lead to disturbance in these functions, which can accelerate AD-related pathology,” she said.

HRT during the menopausal transition and afterward is “being considered as a strategy to mitigate cognitive decline,” the investigators wrote. Early observational studies have suggested that oral estrogen “may be protective against dementia,” but results of clinical trials have been inconsistent, and some have even shown “harmful effects.”

The current researchers were “interested in the personalized approaches in the prevention of AD,” Dr. Saleh said. Preclinical and pilot data from her group have shown that women with APOE4 have “better cognitive test scores with nutritional and hormonal interventions.”

This led Dr. Saleh to hypothesize that HRT would be of more cognitive benefit for those with versus without APOE4, particularly when introduced early during the menopausal transition.

To investigate this hypothesis, the researchers analyzed baseline data from participants in the European Prevention of Alzheimer’s Dementia (EPAD) cohort. This project was initiated in 2015 with the aim of developing longitudinal models over the entire course of AD prior to dementia clinical diagnosis.

Participants were recruited from 10 European countries. All were required to be at least 50 years old, to have not been diagnosed with dementia at baseline, and to have no medical or psychiatric illness that could potentially exclude them from further research.

The current study included 1,178 women (mean age, 65.1 years), who were divided by genotype into non-APOE4 and APOE4 groups. HRT treatment for current or previous users included estrogen alone or estrogen plus progestogens via oral or transdermal administration routes, and at different doses.

The four tests used to assess cognition were the Mini-Mental State Examination dot counting to evaluate verbal working memory, the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS) total score, the Four Mountain Test, and the supermarket trolley virtual reality test.

Brain MRI data were collected. The researchers focused on the medial temporal lobe as the “main brain region regulating cognition and memory processing.” This lobe includes the hippocampus, the parahippocampus, the entorhinal cortex, and the amygdala.
 

‘Critical window’

The researchers found a “trend” toward an APOE-HRT interaction (P-interaction = .097) for the total RBANS score. In particular, it was significant for the RBANS delayed memory index, where scores were consistently higher for women with APOE4 who had received HRT, compared with all other groups (P-interaction = .009).

Within-genotype group comparisons showed that HRT users had a higher RBANS total scale score and delayed memory index (P = .045 and P = .002, respectively), but only among APOE4 carriers. Effect size analyses showed a large effect of HRT use on the Four Mountain Test score and the supermarket trolley virtual reality test score (Cohen’s d = 0.988 and 1.2, respectively).
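Cohen’s d, the effect-size measure quoted above, is the standardized mean difference between two groups (here, HRT users vs. non-users) scaled by their pooled standard deviation. A minimal sketch with synthetic scores, not the study’s data:

```python
# Cohen's d for two independent groups, using the pooled standard
# deviation. The score lists below are synthetic, for illustration only.
from statistics import mean, stdev

def cohens_d(a: list[float], b: list[float]) -> float:
    """Standardized mean difference (pooled-SD denominator)."""
    na, nb = len(a), len(b)
    pooled_var = (
        (na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2
    ) / (na + nb - 2)
    return (mean(a) - mean(b)) / pooled_var ** 0.5

hrt = [12, 14, 15, 13, 16, 14]     # synthetic test scores, HRT users
no_hrt = [10, 11, 12, 10, 13, 11]  # synthetic test scores, non-users
print(round(cohens_d(hrt, no_hrt), 2))  # → 2.18
```

By common convention, d ≈ 0.2 is a small effect, 0.5 moderate, and 0.8 or above large, which is why the d values of 0.988 and 1.2 reported in APOE4 carriers are described as large.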

“This large effect was found only in APOE4 carriers,” the investigators noted.

Similarly, a moderate to large effect of HRT on the left entorhinal volume was observed in APOE4 carriers (Cohen’s d = 0.63).

In members of the APOE4 group who received HRT, the left entorhinal and left and right amygdala volumes were larger, compared with both non-APOE4 carriers and non-HRT users (P-interaction = .002, .003, and .005, respectively). Similar trends were observed for the right entorhinal volume (P = .074).

In addition, among HRT users, the left entorhinal volume was larger (P = .03); the right and left anterior cingulate gyrus volumes were smaller (P = .003 and .062, respectively); and the left superior frontal gyrus volume was larger (P = .009) in comparison with women who did not receive HRT, independently of their APOE genotype.

Early use of HRT among APOE4 carriers was associated with larger right and left hippocampal volume (P = .035 and P = .028, respectively) – an association not found in non-APOE4 carriers. The association was also not significant when participants were not stratified by APOE genotype.

“The key important point here is the timing, or the ‘critical window,’ when HRT can be of most benefit,” Dr. Saleh said. “This is most beneficial when introduced early, before the neuropathology becomes irreversible.”

Study limitations include its cross-sectional design, which precludes the establishment of a causal relationship, and the fact that information regarding the type and dose of estrogen was not available for all participants.

HRT is not without risk, Dr. Saleh noted. She recommended that clinicians “carry out various screening tests to make sure that a woman is eligible for HRT and not at risk of hypercoagulability, for instance.”
 

Risk-benefit ratio

In a comment, Howard Fillit, MD, cofounder and chief science officer at the Alzheimer’s Drug Discovery Foundation, called the study “exactly the kind of work that needs to be done.”

Dr. Fillit, who was not involved with the current research, is a clinical professor of geriatric medicine, palliative care medicine, and neuroscience at Mount Sinai Hospital, New York.

He compared the process with that of osteoporosis. “We know that if women are treated [with HRT] at the time of the menopause, you can prevent the rapid bone loss that occurs with rapid estrogen loss. But if you wait 5, 10 years out, once the bone loss has occurred, the HRT doesn’t really have any impact on osteoporosis risk because the horse is already out of the barn,” he said.

Although HRT carries risks, “they can clearly be managed; and if it’s proven that estrogen or hormone replacement around the time of the menopause can be protective [against AD], the risk-benefit ratio of HRT could be in favor of treatment,” Dr. Fillit added.

The study was conducted as part of the Medical Research Council NuBrain Consortium. The investigators and Dr. Fillit reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

 

Hormone replacement therapy (HRT) introduced early during the menopausal transition may protect against Alzheimer’s dementia in women carrying the APOE4 gene, new research suggests.

Results from a cohort study of almost 1,200 women showed that use of HRT was associated with higher delayed memory scores and larger entorhinal and hippocampal brain volumes – areas that are affected early by Alzheimer’s disease (AD) pathology.

HRT was also found to be most effective, as seen by larger hippocampal volume, when introduced during early perimenopause.

“Clinicians are very much aware of the susceptibility of women to cognitive disturbances during menopause,” lead author Rasha Saleh, MD, senior research associate, University of East Anglia (England), said in an interview.

“Identifying the at-risk APOE4 women and early HRT introduction can be of benefit. Confirming our findings in a clinical trial would be the next step forward,” Dr. Saleh said.

The findings were published online in Alzheimer’s Research and Therapy.
 

Personalized approaches

Dr. Saleh noted that estrogen receptors are localized in various areas of the brain, including cognition-related areas. Estrogen regulates such things as neuroinflammatory status, glucose utilization, and lipid metabolism.

“The decline of estrogen during menopause can lead to disturbance in these functions, which can accelerate AD-related pathology,” she said.

HRT during the menopausal transition and afterward is “being considered as a strategy to mitigate cognitive decline,” the investigators wrote. Early observational studies have suggested that oral estrogen “may be protective against dementia,” but results of clinical trials have been inconsistent, and some have even shown “harmful effects.”

The current researchers were “interested in the personalized approaches in the prevention of AD,” Dr. Saleh said. Preclinical and pilot data from her group have shown that women with APOE4 have “better cognitive test scores with nutritional and hormonal interventions.”

This led Dr. Saleh to hypothesize that HRT would be of more cognitive benefit for those with versus without APOE4, particularly when introduced early during the menopausal transition.

To investigate this hypothesis, the researchers analyzed baseline data from participants in the European Prevention of Alzheimer’s Dementia (EPAD) cohort. This project was initiated in 2015 with the aim of developing longitudinal models over the entire course of AD prior to dementia clinical diagnosis.

Participants were recruited from 10 European countries. All were required to be at least 50 years old, to have not been diagnosed with dementia at baseline, and to have no medical or psychiatric illness that could potentially exclude them from further research.

The current study included 1,178 women (mean age, 65.1 years), who were divided by genotype into non-APOE4 and APOE4 groups. HRT treatment for current or previous users included estrogen alone or estrogen plus progestogens via oral or transdermal administration routes, and at different doses.

The four tests used to assess cognition were the Mini-Mental State Examination dot counting to evaluate verbal working memory, the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS) total score, the Four Mountain Test, and the supermarket trolley virtual reality test.

Brain MRI data were collected. The researchers focused on the medial temporal lobe as the “main brain region regulating cognition and memory processing.” This lobe includes the hippocampus, the parahippocampus, the entorhinal cortex, and the amygdala.
 

‘Critical window’

The researchers found a “trend” toward an APOE-HRT interaction (P-interaction = .097) for the total RBANS score. In particular, it was significant for the RBANS delayed memory index, where scores were consistently higher for women with APOE4 who had received HRT, compared with all other groups (P-interaction = .009).

Within-genotype group comparisons showed that HRT users had a higher RBANS total scale score and delayed memory index (P = .045 and P = .002, respectively), but only among APOE4 carriers. Effect size analyses showed a large effect of HRT use on the Four Mountain Test score and the supermarket trolley virtual reality test score (Cohen’s d = 0.988 and 1.2, respectively).

“This large effect was found only in APOE4 carriers,” the investigators noted.

Similarly, a moderate to large effect of HRT on the left entorhinal volume was observed in APOE4 carriers (Cohen’s d = 0.63).

In members of the APOE4 group who received HRT, the left entorhinal and left and right amygdala volumes were larger, compared with both no-APOE4 and non-HRT users (P-interaction = .002, .003, and .005, respectively). Similar trends were observed for the right entorhinal volume (P = .074).

In addition, among HRT users, the left entorhinal volume was larger (P = .03); the right and left anterior cingulate gyrus volumes were smaller (P = .003 and .062, respectively); and the left superior frontal gyrus volume was larger (P = .009) in comparison with women who did not receive HRT, independently of their APOE genotype.

Early use of HRT among APOE4 carriers was associated with larger right and left hippocampal volume (P = .035 and P = .028, respectively) – an association not found in non-APOE4 carriers. The association was also not significant when participants were not stratified by APOE genotype.

“The key important point here is the timing, or the ‘critical window,’ when HRT can be of most benefit,” Dr. Saleh said. “This is most beneficial when introduced early, before the neuropathology becomes irreversible.”

Study limitations include its cross-sectional design, which precludes the establishment of a causal relationship, and the fact that information regarding the type and dose of estrogen was not available for all participants.

HRT is not without risk, Dr. Saleh noted. She recommended that clinicians “carry out various screening tests to make sure that a woman is eligible for HRT and not at risk of hypercoagulability, for instance.”
 

Risk-benefit ratio

In a comment, Howard Fillit, MD, cofounder and chief science officer at the Alzheimer’s Drug Discovery Foundation, called the study “exactly the kind of work that needs to be done.”

Dr. Fillit, who was not involved with the current research, is a clinical professor of geriatric medicine, palliative care medicine, and neuroscience at Mount Sinai Hospital, New York.

He compared the process with that of osteoporosis. “We know that if women are treated [with HRT] at the time of the menopause, you can prevent the rapid bone loss that occurs with rapid estrogen loss. But if you wait 5, 10 years out, once the bone loss has occurred, the HRT doesn’t really have any impact on osteoporosis risk because the horse is already out of the barn,” he said.

Although HRT carries risks, “they can clearly be managed; and if it’s proven that estrogen or hormone replacement around the time of the menopause can be protective [against AD], the risk-benefit ratio of HRT could be in favor of treatment,” Dr. Fillit added.

The study was conducted as part of the Medical Research Council NuBrain Consortium. The investigators and Dr. Fillit reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.

 

Hormone replacement therapy (HRT) introduced early during the menopausal transition may protect against Alzheimer’s dementia in women carrying the APOE4 gene, new research suggests.

Results from a cohort study of almost 1,200 women showed that use of HRT was associated with higher delayed memory scores and larger entorhinal and hippocampal brain volumes – areas that are affected early by Alzheimer’s disease (AD) pathology.

HRT was also found to be most effective, as seen by larger hippocampal volume, when introduced during early perimenopause.

“Clinicians are very much aware of the susceptibility of women to cognitive disturbances during menopause,” lead author Rasha Saleh, MD, senior research associate, University of East Anglia (England), said in an interview.

“Identifying the at-risk APOE4 women and early HRT introduction can be of benefit. Confirming our findings in a clinical trial would be the next step forward,” Dr. Saleh said.

The findings were published online in Alzheimer’s Research and Therapy.
 

Personalized approaches

Dr. Saleh noted that estrogen receptors are localized in various areas of the brain, including cognition-related areas. Estrogen regulates such things as neuroinflammatory status, glucose utilization, and lipid metabolism.

“The decline of estrogen during menopause can lead to disturbance in these functions, which can accelerate AD-related pathology,” she said.

HRT during the menopausal transition and afterward is “being considered as a strategy to mitigate cognitive decline,” the investigators wrote. Early observational studies have suggested that oral estrogen “may be protective against dementia,” but results of clinical trials have been inconsistent, and some have even shown “harmful effects.”

The current researchers were “interested in the personalized approaches in the prevention of AD,” Dr. Saleh said. Preclinical and pilot data from her group have shown that women with APOE4 have “better cognitive test scores with nutritional and hormonal interventions.”

This led Dr. Saleh to hypothesize that HRT would be of more cognitive benefit for those with versus without APOE4, particularly when introduced early during the menopausal transition.

To investigate this hypothesis, the researchers analyzed baseline data from participants in the European Prevention of Alzheimer’s Dementia (EPAD) cohort. This project was initiated in 2015 with the aim of developing longitudinal models over the entire course of AD prior to dementia clinical diagnosis.

Participants were recruited from 10 European countries. All were required to be at least 50 years old, to have not been diagnosed with dementia at baseline, and to have no medical or psychiatric illness that could potentially exclude them from further research.

The current study included 1,178 women (mean age, 65.1 years), who were divided by genotype into non-APOE4 and APOE4 groups. HRT treatment for current or previous users included estrogen alone or estrogen plus progestogens via oral or transdermal administration routes, and at different doses.

The four tests used to assess cognition were the Mini-Mental State Examination dot counting to evaluate verbal working memory, the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS) total score, the Four Mountain Test, and the supermarket trolley virtual reality test.

Brain MRI data were collected. The researchers focused on the medial temporal lobe as the “main brain region regulating cognition and memory processing.” This lobe includes the hippocampus, the parahippocampus, the entorhinal cortex, and the amygdala.
 

‘Critical window’

The researchers found a “trend” toward an APOE-HRT interaction (P-interaction = .097) for the total RBANS score. In particular, it was significant for the RBANS delayed memory index, where scores were consistently higher for women with APOE4 who had received HRT, compared with all other groups (P-interaction = .009).

Within-genotype group comparisons showed that HRT users had a higher RBANS total scale score and delayed memory index (P = .045 and P = .002, respectively), but only among APOE4 carriers. Effect size analyses showed a large effect of HRT use on the Four Mountain Test score and the supermarket trolley virtual reality test score (Cohen’s d = 0.988 and 1.2, respectively).

“This large effect was found only in APOE4 carriers,” the investigators noted.

Similarly, a moderate to large effect of HRT on the left entorhinal volume was observed in APOE4 carriers (Cohen’s d = 0.63).

In members of the APOE4 group who received HRT, the left entorhinal and left and right amygdala volumes were larger, compared with both no-APOE4 and non-HRT users (P-interaction = .002, .003, and .005, respectively). Similar trends were observed for the right entorhinal volume (P = .074).

In addition, among HRT users, the left entorhinal volume was larger (P = .03); the right and left anterior cingulate gyrus volumes were smaller (P = .003 and .062, respectively); and the left superior frontal gyrus volume was larger (P = .009) in comparison with women who did not receive HRT, independently of their APOE genotype.

Early use of HRT among APOE4 carriers was associated with larger right and left hippocampal volume (P = .035 and P = .028, respectively) – an association not found in non-APOE4 carriers. The association was also not significant when participants were not stratified by APOE genotype.

“The key important point here is the timing, or the ‘critical window,’ when HRT can be of most benefit,” Dr. Saleh said. “This is most beneficial when introduced early, before the neuropathology becomes irreversible.”

Study limitations include its cross-sectional design, which precludes the establishment of a causal relationship, and the fact that information regarding the type and dose of estrogen was not available for all participants.

HRT is not without risk, Dr. Saleh noted. She recommended that clinicians “carry out various screening tests to make sure that a woman is eligible for HRT and not at risk of hypercoagulability, for instance.”

Risk-benefit ratio

In a comment, Howard Fillit, MD, cofounder and chief science officer at the Alzheimer’s Drug Discovery Foundation, called the study “exactly the kind of work that needs to be done.”

Dr. Fillit, who was not involved with the current research, is a clinical professor of geriatric medicine, palliative care medicine, and neuroscience at Mount Sinai Hospital, New York.

He compared the process with that of osteoporosis. “We know that if women are treated [with HRT] at the time of the menopause, you can prevent the rapid bone loss that occurs with rapid estrogen loss. But if you wait 5, 10 years out, once the bone loss has occurred, the HRT doesn’t really have any impact on osteoporosis risk because the horse is already out of the barn,” he said.

Although HRT carries risks, “they can clearly be managed; and if it’s proven that estrogen or hormone replacement around the time of the menopause can be protective [against AD], the risk-benefit ratio of HRT could be in favor of treatment,” Dr. Fillit added.

The study was conducted as part of the Medical Research Council NuBrain Consortium. The investigators and Dr. Fillit reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.

FROM ALZHEIMER’S RESEARCH AND THERAPY
