Facial Temperature Can Reveal Age and Disease

This transcript has been edited for clarity. 

My oldest daughter is at sleepaway camp for a couple of weeks, and the camp has a photographer who goes around all day taking pictures of the kids, which get uploaded to a private Facebook group. In the past, I would go online every day (or, okay, several times a day) and scroll through all those pictures looking for one that features my kid. 

I don’t have to do that anymore. This year, I simply uploaded a picture of my daughter to an app and artificial intelligence (AI) takes care of the rest, recognizing her face amidst the sea of smiling children, and flagging just those photos for me to peruse. It’s amazing, really. And a bit scary.

The fact that facial recognition has penetrated the summer camp market should tell you that the tech is truly ubiquitous. But today we’re going to think a bit more about what AI can do with a picture of your face, because the power of facial recognition is not just skin deep.

What’s got me hot and bothered about facial images is this paper, appearing in Cell Metabolism, which adds a new layer to the standard facial-analysis playbook: facial temperature.

To understand this paper, you need to understand a whole field of research that is developing various “clocks” for age.

It turns out that age really is just a number. Our cells, our proteins, our biochemistry can be analyzed to give different numbers. These “clocks,” as distinct from the calendar we usually use to measure our age, might have more predictive power than the number itself. 

There are numerous molecular clocks, such as telomere length, that not only correlate with calendar age but are superior to calendar age in predicting age-related complications. Testing telomere length typically requires a blood sample — and remains costly. But we can use other sources to estimate age; how about a photo?

I mean, we do this all the time when we meet someone new or, as a physician, when we meet a new patient. I have often written that a patient “appears younger than their stated age,” and we’ve all had the experience of hearing how old someone is and being shocked. I mean, have you seen Sharon Stone recently? She’s 66 years old. Okay — to be fair, there might be some outside help there. But you get the point.

Back to the Cell Metabolism paper. Researchers report on multiple algorithms to obtain an “age” from a picture of an individual’s face. 

The first algorithm is pretty straightforward. Researchers collected 2811 images, all of Han Chinese individuals ranging in age from 20 to 90 years, and reconstructed a 3D facial map from those. 

[Figure: Cell Metabolism]


They then trained a convolutional neural network to predict the individuals’ ages from the pictures. It was quite accurate, as you can see here.

[Figure: Cell Metabolism]
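For readers curious what “training a convolutional neural network to predict age” looks like in code, here is a minimal, illustrative sketch in Python (PyTorch). It is not the paper’s model: the architecture, image size, and ages below are invented for demonstration, and the real study trained on its 3D facial data with far more examples.

```python
# Toy sketch of CNN-based age regression from face images (illustrative only;
# the Cell Metabolism paper's actual architecture and training are not shown here).
import torch
import torch.nn as nn

class FaceAgeCNN(nn.Module):
    """Small convolutional network mapping a face image to one age estimate."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.regressor = nn.Linear(32, 1)  # single output: predicted age in years

    def forward(self, x):
        return self.regressor(self.features(x).flatten(1)).squeeze(1)

model = FaceAgeCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Hypothetical batch: 8 RGB face crops (64x64 pixels) with known calendar ages.
images = torch.rand(8, 3, 64, 64)
ages = torch.tensor([23., 31., 45., 52., 60., 68., 74., 81.])

predicted = model(images)
loss = loss_fn(predicted, ages)   # squared error between predicted and true age
loss.backward()
optimizer.step()
print(f"training loss after one step: {loss.item():.1f}")
```

The point is only the shape of the pipeline: images in, a single age estimate out, trained against known calendar ages.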


In the AI age, this may not seem that impressive. A brief search online turned up dozens of apps that promised to guess my age from a photo.

I sent this rather unflattering picture of myself to ChatGPT, which, after initially demurring and saying it was not designed to guess ages, pegged me at somewhere between 35 and 45, which I am taking as a major victory.

[Photo: Dr. Wilson]


But the Cell Metabolism paper goes deeper. Literally. They added a new dimension to facial image analysis by taking an individual’s temperature using a thermal scanning camera that provided temperatures at 54 different landmarks across the face.

[Figure: Cell Metabolism]


And this is where things start to get interesting. Because sure, the visible part of your face can change depending on makeup, expression, plastic surgery, and the like. But the temperature? That’s harder to fake.

It turns out that the temperature distribution in your face changes as you get older. There is a cooling of the nose and the cheeks, for example.

[Figure: Cell Metabolism]


And the researchers could combine all this temperature data to guess someone’s calendar age fairly accurately, though notably not as accurately as the model that just looks at the pictures.

[Figure: Cell Metabolism]


But guessing your age is not really the interesting part of thermal imaging of the face. It’s guessing — or, rather, predicting — the state of your metabolism. All these study participants had extensive metabolic testing performed, as well as detailed analysis of their lifestyle behaviors. And facial images could be used to predict those factors.

For example, the 3D reconstruction of the faces could predict who ate seafood (they tend to look younger than their actual age) compared with who ate poultry and meat (they tend to look older). The thermal imaging could predict who got more sleep (they look younger from a temperature perspective) and who ate more yogurt (also younger-appearing, temperature-wise). Facial temperature patterns could identify those with higher BMI, higher blood pressure, higher fasting glucose. 

The researchers used the difference between actual and predicted age as a metric to measure illness as well. You can see here how, on average, individuals with hypertension, diabetes, and even liver cysts are “older,” at least by face temperature.

[Figure: Cell Metabolism]
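As a rough illustration of that idea (not the authors’ actual model), the sketch below fits a simple regression from 54 facial landmark temperatures to calendar age on synthetic data, then computes a “thermal age gap” (predicted minus actual age) of the kind the researchers use as a health marker. Every number and the data-generating assumption are invented for the example.

```python
# Illustrative only: predict calendar age from facial landmark temperatures on
# synthetic data, then compute a "thermal age gap" (predicted minus actual age).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_subjects, n_landmarks = 500, 54            # 54 facial temperature landmarks, as in the study

ages = rng.uniform(20, 90, n_subjects)
# Toy assumption: landmark temperatures drift slightly cooler with age, plus noise.
temps = 34.0 - 0.02 * ages[:, None] + rng.normal(0, 0.3, (n_subjects, n_landmarks))

X_train, X_test, y_train, y_test = train_test_split(temps, ages, random_state=0)
model = Ridge(alpha=1.0).fit(X_train, y_train)

predicted_age = model.predict(X_test)
thermal_age_gap = predicted_age - y_test     # positive gap = "older" than calendar age
print(f"mean absolute error: {np.mean(np.abs(thermal_age_gap)):.1f} years")
```

In the study, that gap is what gets compared across groups (for example, people with hypertension vs without), which is how a thermal camera ends up saying someone looks “older” than their birth certificate.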


It may even be possible to use facial temperature as biofeedback. In a small study, the researchers measured the difference between facial temperature age and real age before and after 2 weeks of jump-roping. It turns out that 2 weeks of jump-roping can make you look about 5 years younger, at least as judged by a thermal camera. Or like the Predator.

[Figure: Cell Metabolism]


Okay, this is all very cool, but I’m not saying we’ll all be doing facial temperature tests in the near future. No; what this study highlights for me is how much information about ourselves is available to those who know how to decode it. Maybe those data come from the wrinkles in our faces, or the angles of our smiles, or the speed with which we type, or the temperature of our elbows. The data have always been there, actually, but we’ve never had the tools powerful enough to analyze them until now.

When I was a kid, I was obsessed with Star Trek — I know, you’re shocked — and, of course, the famous tricorder, a scanner that could tell everything about someone’s state of health in 5 seconds from 3 feet away. That’s how I thought medicine really would be in the future. Once I got to medical school, I was disabused of that notion. But the age of data, the age of AI, may mean the tricorder age is not actually that far away.
 

Dr. Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Conn. He has disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.


Cardiovascular Health Becoming a Major Risk Factor for Dementia


In a shifting landscape of dementia risk factors, cardiovascular health is now taking precedence.

That’s according to researchers from University College London (UCL) in the United Kingdom who analyzed 27 papers about dementia that had data collected over more than 70 years. They calculated what share of dementia cases were due to different risk factors. Their findings were recently published in the Lancet Public Health.

Top risk factors for dementia over the years have been hypertension, obesity, diabetes, education, and smoking, according to a news release on the findings. But the prevalence of risk factors has changed over the decades.

Researchers said smoking and education have become less important risk factors because of “population-level interventions,” such as stop-smoking campaigns and compulsory public education. On the other hand, obesity and diabetes rates have increased and become bigger risk factors.

Hypertension remains the greatest risk factor, even though doctors and public health groups are putting more emphasis on managing the condition, the study said.

“Cardiovascular risk factors may have contributed more to dementia risk over time, so these deserve more targeted action for future dementia prevention efforts,” said Naaheed Mukadam, PhD, an associate professor at UCL and the lead author of the study.

Eliminating modifiable risk factors could theoretically prevent 40% of dementia cases, the release said. 

The CDC says that an estimated 5.8 million people in the United States have Alzheimer’s disease and related dementias, including 5.6 million people ages 65 and older and about 200,000 under age 65. The UCL release said an estimated 944,000 people in the U.K. have dementia.

A version of this article first appeared on WebMD.com.


Similar Outcomes With Labetalol, Nifedipine for Chronic Hypertension in Pregnancy


Treatment for chronic hypertension in pregnancy with labetalol showed no significant differences in maternal or neonatal outcomes, compared with treatment with nifedipine, new research indicates.

The open-label, multicenter, randomized CHAP (Chronic Hypertension in Pregnancy) trial showed that treating mild chronic hypertension was better than delaying treatment until severe hypertension developed, but it remained unclear whether, or to what extent, the choice of first-line treatment affected outcomes.

Researchers, led by Ayodeji A. Sanusi, MD, MPH, with the Division of Maternal and Fetal Medicine at the University of Alabama at Birmingham, conducted a secondary analysis of CHAP to compare the primary treatments. Mild chronic hypertension in the study was defined as blood pressure of 140-159/90-104 mmHg before 20 weeks of gestation.
 

Three Comparisons

Three comparisons were performed in 2292 participants based on medications prescribed at enrollment: 720 (31.4%) received labetalol; 417 (18.2%) initially received nifedipine; and 1155 (50.4%) had standard care. Labetalol was compared with standard care; nifedipine was compared with standard care; and labetalol was compared with nifedipine.

The primary outcome was occurrence of superimposed preeclampsia with severe features; preterm birth before 35 weeks of gestation; placental abruption; or fetal or neonatal death. The key secondary outcome was a small-for-gestational age neonate. Researchers also compared adverse effects between groups.

Among the results were the following:

  • The primary outcome occurred in 30.1% in the labetalol group; 31.2% in the nifedipine group; and 37% in the standard care group.
  • Risk of the primary outcome was lower among those receiving treatment. For labetalol vs standard care, the adjusted relative risk (RR) was 0.82; 95% confidence interval (CI), 0.72-0.94. For nifedipine vs standard care, the adjusted RR was 0.84; 95% CI, 0.71-0.99. There was no significant difference in risk when labetalol was compared with nifedipine (adjusted RR, 0.98; 95% CI, 0.82-1.18). (A worked example of an unadjusted RR calculation appears after this list.)
  • There were no significant differences in numbers of small-for-gestational age neonates or serious adverse events between those who received labetalol and those using nifedipine.
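To make those relative risk figures easier to read, here is a small worked example. It computes a crude, unadjusted RR with a Wald-type 95% CI from hypothetical 2x2 counts back-calculated from the reported group sizes and event rates; the trial’s published estimates are adjusted values from regression models, so this shows only the basic arithmetic.

```python
# Illustrative only: unadjusted relative risk with a log-scale (Wald) 95% CI.
# The CHAP analysis reports adjusted RRs, so these counts are hypothetical.
import math

def relative_risk(events_tx, n_tx, events_ctrl, n_ctrl):
    """Unadjusted RR of the outcome, treatment vs control, with a 95% CI."""
    p_tx, p_ctrl = events_tx / n_tx, events_ctrl / n_ctrl
    rr = p_tx / p_ctrl
    # Standard error of log(RR) for two independent proportions
    se_log_rr = math.sqrt(1/events_tx - 1/n_tx + 1/events_ctrl - 1/n_ctrl)
    lower = math.exp(math.log(rr) - 1.96 * se_log_rr)
    upper = math.exp(math.log(rr) + 1.96 * se_log_rr)
    return rr, lower, upper

# Hypothetical counts roughly matching the reported rates (30.1% of 720 vs 37% of 1155)
rr, lower, upper = relative_risk(events_tx=217, n_tx=720, events_ctrl=427, n_ctrl=1155)
print(f"unadjusted RR {rr:.2f} (95% CI, {lower:.2f}-{upper:.2f})")
```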

Adverse events of any kind were significantly more common with nifedipine than with labetalol (35.7% vs 28.3%, P = .009) or with standard care (35.7% vs 26.3%, P = .0003). Adverse event rates were not significantly higher with labetalol than with standard care (28.3% vs 26.3%, P = .34). The most frequently reported adverse events were headache, medication intolerance, dizziness, nausea, dyspepsia, neonatal jaundice, and vomiting.

“Thus, labetalol compared with nifedipine appeared to have fewer adverse events and to be better tolerated,” the authors write. They note that labetalol, a third-generation mixed alpha- and beta-adrenergic antagonist, is contraindicated for those who have obstructive pulmonary disease, and that nifedipine, a dihydropyridine calcium channel blocker, is contraindicated in people with tachycardia.

The authors write that their results align with other studies that have not found differences between labetalol and nifedipine. “[O]ur findings support the use of either labetalol or nifedipine as initial first-line agents for the management of mild chronic hypertension in pregnancy to reduce the risk of adverse maternal and other perinatal outcomes with no increased risk of fetal harm,” the authors write.

Dr. Sanusi reports no relevant financial relationships. Full coauthor disclosures are available with the full text of the paper.

FROM OBSTETRICS & GYNECOLOGY


BP Disorder in Pregnancy Tied to Young-Onset Dementia Risk


 

TOPLINE:

A new analysis showed that preeclampsia is associated with an increased risk for young-onset dementia.

METHODOLOGY:

  • Researchers analyzed data from the French Conception study, a nationwide prospective cohort study of more than 1.9 million pregnancies.
  • Mothers were followed for an average of 9 years.

TAKEAWAY:

  • Nearly 3% of the mothers had preeclampsia, and 128 developed young-onset dementia.
  • Preeclampsia was associated with a 2.65-fold increased risk for young-onset dementia after adjusting for obesity, diabetes, smoking, drug or alcohol addiction, and social deprivation.
  • The risk was greater when preeclampsia occurred before 34 weeks of gestation (hazard ratio [HR], 4.15) or was superimposed on chronic hypertension (HR, 4.76).
  • Prior research has found an association between preeclampsia and vascular dementia, but this analysis “is the first to show an increase in early-onset dementia risk,” the authors of the study wrote.

IN PRACTICE:

“Individuals who have had preeclampsia should be reassured that young-onset dementia remains a very rare condition. Their absolute risk increases only imperceptibly,” Stephen Tong, PhD, and Roxanne Hastie, PhD, both with the University of Melbourne, Melbourne, Australia, wrote in a related commentary about the findings.

“Individuals who have been affected by preeclampsia in a prior pregnancy might instead focus on reducing their risk of developing the many chronic health ailments that are far more common,” they added. “Although it is yet to be proven in clinical trials, it is plausible that after an episode of preeclampsia, adopting a healthy lifestyle may improve vascular health and reduce the risk of many serious cardiovascular conditions.”

SOURCE:

Valérie Olié, PhD, of the Santé Publique France in Saint-Maurice, France, was the corresponding author on the paper. The research letter was published online in JAMA Network Open.

LIMITATIONS:

The investigators relied on hospital records to identify cases of dementia, which may have led to underestimation of incidence of the disease.

DISCLOSURES:

The study was funded by the French Hypertension Society, the French Hypertension Research Foundation, and the French Cardiology Federation. A co-author disclosed personal fees from pharmaceutical companies.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.


Women with Autoimmune Liver Diseases Still Face Increased CVD Risks


Women with autoimmune liver diseases (AILD) may face increased risks for major adverse cardiovascular outcomes, according to a study presented at the annual Digestive Disease Week® (DDW).

In particular, women with autoimmune hepatitis (AIH) and primary biliary cholangitis (PBC) appear to have higher risks than women without AIH or PBC. Those with primary sclerosing cholangitis (PSC) don’t seem to have increased risks.

“We know that cardiovascular disease remains the number one cause of death, but the mortality rate for women over the last decade has plateaued, whereas in men it’s actually declining due to interventions,” said lead author Rachel Redfield, MD, a transplant hepatology fellow at Thomas Jefferson University Hospital in Philadelphia.

“This is likely because we don’t have adequate risk stratification, especially for women,” she said. “We know that immune-mediated diseases — such as rheumatoid arthritis and psoriasis — carry a higher risk of cardiovascular disease, but there’s not a lot of data on our autoimmune liver disease patients.”

[Photo: Dr. Rachel Redfield]

Although being female can offer protection against some CVD risks, the atherosclerotic cardiovascular disease (ASCVD) 10-year risk score calculator recommended by the American College of Cardiology doesn’t include chronic inflammatory diseases associated with increased CVD risk, including AILD.

Dr. Redfield and colleagues conducted a multicenter, retrospective cohort study of patients with AIH, PBC, and PSC from 1999-2019. Using TriNetX data, the researchers looked at women with AILD who also had diabetes mellitus, hypertension, and hyperlipidemia, as well as a control group of men and women with these same disorders, excluding those who used biologics, immune modulators, and steroids or had other autoimmune disorders.

The research team used 1:1 propensity-score matching for women in the study group and in the control group based on age, race, ethnicity, ASCVD risk factors, and tobacco use. Women in the study group and men in the control group were matched for age, race, ethnicity, and tobacco use.
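For readers unfamiliar with the method, the sketch below shows what 1:1 propensity-score matching can look like in code. It uses synthetic data and a simplified nearest-neighbor match (with replacement); the study’s actual implementation, software, and covariate definitions are only summarized in the text above.

```python
# Illustrative sketch of 1:1 propensity-score matching on synthetic data
# (not the study's actual implementation or covariates).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
n = 2000
X = np.column_stack([
    rng.normal(55, 10, n),        # age
    rng.integers(0, 2, n),        # tobacco use (0/1)
    rng.integers(0, 4, n),        # count of ASCVD risk factors
])
has_aild = rng.integers(0, 2, n).astype(bool)   # True = AILD cohort, False = control pool

# Step 1: estimate each person's probability of being in the AILD cohort (the propensity score)
ps = LogisticRegression(max_iter=1000).fit(X, has_aild).predict_proba(X)[:, 1]

# Step 2: match each AILD patient to the control with the closest propensity score
controls = np.where(~has_aild)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[controls].reshape(-1, 1))
_, nearest = nn.kneighbors(ps[has_aild].reshape(-1, 1))
matched_controls = controls[nearest.ravel()]

print(f"matched {has_aild.sum()} AILD patients to {len(set(matched_controls))} unique controls")
```

Real analyses typically match without replacement and check covariate balance afterward; the idea, though, is exactly this pairing on the estimated probability of group membership.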

The primary outcome was summative cardiovascular risk, including unstable angina, acute myocardial infarction, presence of coronary angioplasty implant, coronary artery bypass, percutaneous coronary intervention, and cerebral infarction.

Overall, women with AIH had a significantly higher cardiovascular risk compared to women without AIH, at 25.4% versus 20.6% (P = .0007).

Specifically, women with PBC had a significantly higher cardiovascular risk compared to women without PBC, at 27.05% versus 20.9% (P < .0001).

There wasn’t a significant difference in risk between women with and without PSC, at 27.5% versus 21.8% (P = .27).

When compared to men without disease, women with AIH didn’t have a statistically significant higher risk, at 25.3% versus 24.2% (P = .44). Similarly, there didn’t appear to be a significant difference between women with PBC and men without PBC, at 26.9% versus 25.9% (P = .52), or between women with PSC and men without PSC, at 27.7% versus 26.2% (P = .78).

Dr. Redfield and colleagues then compared the ASCVD-calculated risk versus database risk, finding that in each group of women with AILD — including AIH, PBC, and PSC — the ASCVD-calculated risk was around 11%, compared with database risk scores of 25% for AIH, 27% for PBC, and 28% for PSC. These database risks appeared similar to both the ASCVD and database risk percentages for men.

“So potentially there’s an oversight in women with any kind of inflammatory disease, but specifically here, autoimmune liver diseases,” she said. “We really need to enhance our risk assessment strategies to take into account their risk and optimize patient outcomes.”

Dr. Redfield noted the limitations with using TriNetX data, including coding consistency among providers and healthcare organizations, unknown patient follow-up dates, and the inability to capture various inflammatory disease phenotypes, such as autoimmune hepatitis with multiple flares, which may be associated with higher cardiovascular risks.

As an attendee of the DDW session, Kenneth Kelson, MD, a gastroenterologist with Fremont Medical Group and Washington Hospital Healthcare System in Fremont, California, noted the importance of investigating the effects of different types of statins in these patients. Although the research team looked at top-level differences among statin users, finding that women with AILD were more likely to be on a statin, they didn’t incorporate statin therapy in the propensity-score matching model.

“Lipid-soluble statins are known to cause more liver trouble, even though it’s pretty low,” Dr. Kelson said. “Whereas the water-soluble statins have a lower incidence of liver issues.”

Dr. Redfield and Dr. Kelson reported no relevant disclosures.

Meeting/Event
Publications
Topics
Sections
Meeting/Event
Meeting/Event

Women with autoimmune liver diseases (AILD) may face increased risks for major adverse cardiovascular outcomes, according to a study presented at the annual Digestive Disease Week® (DDW).

In particular, women with autoimmune hepatitis (AIH) and primary biliary cholangitis (PBC) appear to have higher risks than women without AIH or PBC. Those with primary sclerosing cholangitis (PSC) don’t seem to have increased risks.

“We know that cardiovascular disease remains the number one cause of death, but the mortality rate for women over the last decade has plateaued, whereas in men it’s actually declining due to interventions,” said lead author Rachel Redfield, MD, a transplant hepatology fellow at Thomas Jefferson University Hospital in Philadelphia.

“This is likely because we don’t have adequate risk stratification, especially for women,” she said. “We know that immune-mediated diseases — such as rheumatoid arthritis and psoriasis — carry a higher risk of cardiovascular disease, but there’s not a lot of data on our autoimmune liver disease patients.”

Dr. Redfield
Dr. Rachel Redfield

Although being a female can offer protection against some CVD risks, the atherosclerotic cardiovascular disease (ASCVD) 10-year risk score calculator recommended by the American College of Cardiology doesn’t include chronic inflammatory diseases associated with increased CVD risk, including AILD.

Dr. Redfield and colleagues conducted a multicenter, retrospective cohort study of patients with AIH, PBC, and PSC from 1999-2019. Using TriNetX data, the researchers looked at women with AILD who also had diabetes mellitus, hypertension, and hyperlipidemia, as well as a control group of men and women with these same disorders, excluding those who used biologics, immune modulators, and steroids or had other autoimmune disorders.

The research team used 1:1 propensity-score matching for women in the study group and in the control group based on age, race, ethnicity, ASCVD risk factors, and tobacco use. Women in the study group and men in the control group were matched for age, race, ethnicity, and tobacco use.

The primary outcome was summative cardiovascular risk, including unstable angina, acute myocardial infarction, presence of coronary angioplasty implant, coronary artery bypass, percutaneous coronary intervention, and cerebral infarction.

Overall, women with AIH had a significantly higher cardiovascular risk compared to women without AIH, at 25.4% versus 20.6% (P = .0007).

Specifically, women with PBC had a significantly higher cardiovascular risk compared to women without PBC, at 27.05% versus 20.9% (P < .0001).

There wasn’t a significant difference in risk between women with and without PSC, at 27.5% versus 21.8% (P = .27).

When compared to men without disease, women with AIH didn’t have a statistically significant higher risk, at 25.3% versus 24.2% (P = .44). Similarly, there didn’t appear to be a significant difference between women with PBC and men without PBC, at 26.9% versus 25.9% (P = .52), or between women with PSC and men without PSC, at 27.7% versus 26.2% (P = .78).

Dr. Redfield and colleagues then compared the ASCVD-calculated risk versus database risk, finding that in each group of women with AILD — including AIH, PBC, and PSC — the ASCVD-calculated risk was around 11%, compared with database risk scores of 25% for AIH, 27% for PBC, and 28% for PSC. These database risks appeared similar to both the ASCVD and database risk percentages for men.

“So potentially there’s an oversight in women with any kind of inflammatory disease, but specifically here, autoimmune liver diseases,” she said. “We really need to enhance our risk assessment strategies to take into account their risk and optimize patient outcomes.”

Dr. Redfield noted the limitations with using TriNetX data, including coding consistency among providers and healthcare organizations, unknown patient follow-up dates, and the inability to capture various inflammatory disease phenotypes, such as autoimmune hepatitis with multiple flares, which may be associated with higher cardiovascular risks.

As an attendee of the DDW session, Kenneth Kelson, MD, a gastroenterologist with Fremont Medical Group and Washington Hospital Healthcare System in Fremont, California, noted the importance of investigating the effects of different types of statins in these patients. Although the research team looked at top-level differences among statin users, finding that women with AILD were more likely to be on a statin, they didn’t incorporate statin therapy in the propensity-score matching model.

“Lipid-soluble statins are known to cause more liver trouble, even though it’s pretty low,” Dr. Kelson said. “Whereas the water-soluble statins have a lower incidence of liver issues.”

Dr. Redfield and Dr. Kelson reported no relevant disclosures.

Women with autoimmune liver diseases (AILD) may face increased risks for major adverse cardiovascular outcomes, according to a study presented at the annual Digestive Disease Week® (DDW).

In particular, women with autoimmune hepatitis (AIH) and primary biliary cholangitis (PBC) appear to have higher risks than women without AIH or PBC. Those with primary sclerosing cholangitis (PSC) don’t seem to have increased risks.

“We know that cardiovascular disease remains the number one cause of death, but the mortality rate for women over the last decade has plateaued, whereas in men it’s actually declining due to interventions,” said lead author Rachel Redfield, MD, a transplant hepatology fellow at Thomas Jefferson University Hospital in Philadelphia.

“This is likely because we don’t have adequate risk stratification, especially for women,” she said. “We know that immune-mediated diseases — such as rheumatoid arthritis and psoriasis — carry a higher risk of cardiovascular disease, but there’s not a lot of data on our autoimmune liver disease patients.”

Dr. Rachel Redfield

Although female sex can offer some protection against cardiovascular disease (CVD), the atherosclerotic cardiovascular disease (ASCVD) 10-year risk score calculator recommended by the American College of Cardiology doesn’t include chronic inflammatory diseases associated with increased CVD risk, including AILD.

Dr. Redfield and colleagues conducted a multicenter, retrospective cohort study of patients with AIH, PBC, and PSC from 1999-2019. Using TriNetX data, the researchers looked at women with AILD who also had diabetes mellitus, hypertension, and hyperlipidemia, as well as a control group of men and women with these same disorders, excluding those who used biologics, immune modulators, and steroids or had other autoimmune disorders.

The research team used 1:1 propensity-score matching for women in the study group and in the control group based on age, race, ethnicity, ASCVD risk factors, and tobacco use. Women in the study group and men in the control group were matched for age, race, ethnicity, and tobacco use.
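
For readers curious about the mechanics, the sketch below shows what 1:1 propensity-score matching of this kind can look like in practice. It is a minimal illustration only: the greedy nearest-neighbor rule, the caliper, and the column names are assumptions made for the example, not the approach used in TriNetX or by the study team.

```python
# Minimal, illustrative 1:1 propensity-score matching (greedy, without replacement).
# Column names, the caliper, and the matching rule are assumptions for this sketch,
# not the method implemented in TriNetX or by the study authors.
import pandas as pd
from sklearn.linear_model import LogisticRegression

def match_one_to_one(df: pd.DataFrame, treated_col: str, covariate_cols: list, caliper: float = 0.05):
    X = df[covariate_cols].to_numpy()
    t = df[treated_col].to_numpy()
    # Propensity score: probability of being in the study group given covariates.
    ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
    control_ps = {idx: ps[pos] for pos, idx in enumerate(df.index) if t[pos] == 0}
    pairs = []
    for pos, idx in enumerate(df.index):
        if t[pos] == 1 and control_ps:
            best = min(control_ps, key=lambda c: abs(control_ps[c] - ps[pos]))
            if abs(control_ps[best] - ps[pos]) <= caliper:   # accept only close matches
                pairs.append((idx, best))
                del control_ps[best]                         # each control used at most once
    return pairs

# Hypothetical usage: match women with AILD to controls on age, tobacco use,
# and ASCVD risk factors (diabetes, hypertension, hyperlipidemia).
# matched_pairs = match_one_to_one(cohort, "aild", ["age", "tobacco", "dm", "htn", "hld"])
```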

The primary outcome was summative cardiovascular risk, including unstable angina, acute myocardial infarction, presence of coronary angioplasty implant, coronary artery bypass, percutaneous coronary intervention, and cerebral infarction.

Overall, women with AIH had a significantly higher cardiovascular risk compared to women without AIH, at 25.4% versus 20.6% (P = .0007).

Specifically, women with PBC had a significantly higher cardiovascular risk compared to women without PBC, at 27.05% versus 20.9% (P < .0001).

There wasn’t a significant difference in risk between women with and without PSC, at 27.5% versus 21.8% (P = .27).

When compared to men without disease, women with AIH didn’t have a statistically significant higher risk, at 25.3% versus 24.2% (P = .44). Similarly, there didn’t appear to be a significant difference between women with PBC and men without PBC, at 26.9% versus 25.9% (P = .52), or between women with PSC and men without PSC, at 27.7% versus 26.2% (P = .78).

Dr. Redfield and colleagues then compared the ASCVD-calculated risk versus database risk, finding that in each group of women with AILD — including AIH, PBC, and PSC — the ASCVD-calculated risk was around 11%, compared with database risk scores of 25% for AIH, 27% for PBC, and 28% for PSC. These database risks appeared similar to both the ASCVD and database risk percentages for men.

“So potentially there’s an oversight in women with any kind of inflammatory disease, but specifically here, autoimmune liver diseases,” she said. “We really need to enhance our risk assessment strategies to take into account their risk and optimize patient outcomes.”

Dr. Redfield noted the limitations of using TriNetX data, including inconsistent coding among providers and healthcare organizations, unknown patient follow-up dates, and the inability to capture certain inflammatory disease phenotypes, such as autoimmune hepatitis with multiple flares, which may be associated with higher cardiovascular risk.

As an attendee of the DDW session, Kenneth Kelson, MD, a gastroenterologist with Fremont Medical Group and Washington Hospital Healthcare System in Fremont, California, noted the importance of investigating the effects of different types of statins in these patients. Although the research team looked at top-level differences among statin users, finding that women with AILD were more likely to be on a statin, they didn’t incorporate statin therapy in the propensity-score matching model.

“Lipid-soluble statins are known to cause more liver trouble, even though it’s pretty low,” Dr. Kelson said. “Whereas the water-soluble statins have a lower incidence of liver issues.”

Dr. Redfield and Dr. Kelson reported no relevant disclosures.


The Tyranny of Beta-Blockers

Article Type
Changed
Thu, 06/13/2024 - 16:54

Beta-blockers are excellent drugs. They’re cheap and effective; feature prominently in hypertension guidelines; and remain a sine qua non for coronary artery disease, myocardial infarction, and heart failure treatment. They’ve been around forever, and we know they work. Good luck finding an adult medicine patient who isn’t on one.

Beta-blockers act by slowing the resting heart rate and blunting the heart rate response to exercise. The latter is a pernicious cause of activity intolerance that often goes unchecked. Even when the adverse effects of beta-blockers are appreciated, providers are loath to alter dosing, much less stop the drug. After all, beta-blockers are an integral part of guideline-directed medical therapy (GDMT), and GDMT saves lives.

Balancing Heart Rate and Stroke Volume Effects

The pulmonologist sees beta-blockers differently. To augment cardiac output and optimize oxygen uptake (VO2) during exercise, we need the heart rate response. In fact, the heart rate response contributes more to cardiac output than augmenting stroke volume (SV) does, and more to VO2 than the increase in arteriovenous (AV) oxygen difference. An inability to increase the heart rate commensurate with physiologic work is called chronotropic incompetence (CI). That’s what beta-blockers do: they cause CI.

Physiology dictates that CI will cause activity intolerance. That said, it’s hard to quantify the impact from beta-blockers at the individual patient level. Data suggest the heart rate effect is profound. A study in patients without heart failure found that 22% of participants on beta-blockers had CI, and the investigators used a conservative CI definition (≤ 62% of heart rate reserve used). A recent report published in JAMA Cardiology found that stopping beta-blockers in patients with heart failure allowed for an extra 30 beats/min at max exercise.
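
To make that definition concrete, here is a minimal sketch of the underlying arithmetic. The 220 minus age estimate of maximal heart rate and the example values are conventions and hypothetical numbers used only for illustration, not data from the studies cited above; the 0.62 threshold simply mirrors the definition quoted in the preceding paragraph.

```python
# Fraction of heart rate reserve used at peak exercise (chronotropic index).
# The 220 - age estimate of maximal heart rate is a common convention used here
# only for illustration; the 0.62 threshold mirrors the definition cited above.
def fraction_of_hr_reserve_used(age: int, resting_hr: int, peak_hr: int) -> float:
    predicted_max_hr = 220 - age
    return (peak_hr - resting_hr) / (predicted_max_hr - resting_hr)

# Hypothetical 60-year-old on a beta-blocker: resting HR 60, peak HR 115.
index = fraction_of_hr_reserve_used(age=60, resting_hr=60, peak_hr=115)
print(f"{index:.2f} of heart rate reserve used")  # 0.55 -> meets the <= 0.62 CI definition
```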

Wasserman and Whipp’s textbook, the last word on all things exercise, presents a sample subject who undergoes two separate cardiopulmonary exercise tests (CPETs). Before the first, he’s given a placebo, and before the second, he gets an intravenous beta-blocker. He’s a 23-year-old otherwise healthy male — the perfect test case for isolating beta-blocker impact without confounding by comorbid diseases, other medications, or deconditioning. His max heart rate dropped by 30 beats/min after the beta-blocker, identical to what we saw in the JAMA Cardiology study (with the heart rate increasing by 30 beats/min following withdrawal). Case closed. Stop the beta-blockers on your patients so they can meet their exercise goals and get healthy!

Such pithy enthusiasm discounts physiology’s complexities. When blunting our patient’s heart rate response with beta-blockers, we also increase diastolic filling time, which increases SV. For the 23-year-old in Wasserman and Whipp’s physiology textbook, the beta-blocker increased O2 pulse (the product of SV and AV difference). Presumably, this is mediated by the increased SV. There was a net reduction in VO2 peak, but it was nominal, suggesting that the drop in heart rate was largely offset by the increase in O2 pulse. For the patients in the JAMA Cardiology study, the entire group had a small increase in VO2 peak with beta-blocker withdrawal, but the effect differed by left ventricular function. Across different studies, the beta-blocker effect on heart rate is consistent but the change in overall exercise capacity is not. 
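
A back-of-the-envelope Fick calculation shows how these opposing effects can nearly cancel. The numbers below are hypothetical, chosen only to illustrate the arithmetic, not taken from the textbook case or the JAMA Cardiology cohort.

```python
# Fick principle: VO2 = heart rate x stroke volume x arteriovenous O2 difference.
# O2 pulse = VO2 / HR = SV x AV difference, so a lower peak heart rate can be
# largely offset if stroke volume (and hence O2 pulse) rises. Hypothetical numbers.
def vo2_ml_per_min(hr: float, sv_ml: float, av_diff_ml_per_dl: float) -> float:
    # Convert SV to L/beat and AV difference to mL O2 per L of blood.
    return hr * (sv_ml / 1000.0) * (av_diff_ml_per_dl * 10.0)

off_blocker = vo2_ml_per_min(hr=190, sv_ml=110, av_diff_ml_per_dl=14)  # ~2926 mL/min
on_blocker = vo2_ml_per_min(hr=160, sv_ml=125, av_diff_ml_per_dl=14)   # ~2800 mL/min
print(f"Change in peak VO2: {100 * (on_blocker - off_blocker) / off_blocker:.1f}%")  # ~ -4%
```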

Patient Variability in Beta-Blocker Response

In addition to left ventricular function, there are other factors likely to drive variability at the patient level. We’ve treated the response to beta-blockers as a class effect — an obvious oversimplification. The impact on exercise and the heart will vary by dose and drug (eg, atenolol vs metoprolol vs carvedilol, and so on). Beta-blockers can also affect the lungs, and we’re still debating how cautious to be in the presence of asthma or chronic obstructive pulmonary disease.

In a world of infinite time, resources, and expertise, we’d CPET everyone before and after beta-blocker use. Our current reality requires the unthinkable: We’ll have to talk to each other and our patients. For example, heart failure guidelines recommend titrating drugs to match the dose from trials that proved efficacy. These doses are quite high. Simple discussion with the cardiologist and the patient may allow for an adjustment back down with careful monitoring and close attention to activity tolerance. With any luck, you’ll preserve the benefits from GDMT while optimizing your patient’s ability to meet their exercise goals.
 

Dr. Holley, professor in the department of medicine, Uniformed Services University, Bethesda, Maryland, and a pulmonary/sleep and critical care medicine physician at MedStar Washington Hospital Center, Washington, disclosed ties with Metapharm, CHEST College, and WebMD.

A version of this article appeared on Medscape.com.


Helping Patients Cut Down on Sodium: Useful Substitutes and Strategies

Article Type
Changed
Tue, 06/11/2024 - 12:38

Humans have used salt for centuries, to preserve or cure food before refrigeration was readily available, and even as currency in some cultures. Though modern food preservation efforts have decreased our reliance on salt, we still heavily incorporate it as a flavor enhancer. 

It’s only relatively recently that we’ve begun limiting salt in our diets, as research has linked high sodium intake with chronic, preventable conditions like hypertension, heart disease, and kidney disease.
 

How to Recommend Restriction in a Helpful Way 

The US Department of Agriculture’s Dietary Guidelines for Americans recommends intake of no more than 2300 mg of sodium daily for adults and children aged 14 years or older. This echoes similar recommendations for people at risk for heart disease, kidney disease, and hypertension. However, the sodium intake of the average American still sits at a whopping 3400 mg daily. 

High sodium intake is primarily the result of modern commercial food processing. Food prepared outside the home accounts for up to 70% of sodium intake in the United States, whereas only about 10% comes from salt that is added to food either during or after cooking. For this reason, I hesitate to recommend withholding salt as a primary focus when counseling on a low-sodium diet. 

To many people, certain foods just taste better with salt. Many of my patients in the southern United States simply will not eat foods like eggs and tomatoes if they cannot salt them. We can spend every moment of patient interaction time explaining why excess sodium is unhealthy, but the fact remains that humans prefer food that tastes good. This is why I try to avoid counseling a “no-added-salt” diet; instead, I recommend a low-sodium diet with a focus on fresh, whole foods and limiting salt to only a few food items. 

Patients should be counseled to restrict their salt intake gradually and to expect that it will take some time for their palate to adjust to less salty foods. But it is also important for them to know that it will adjust. The surest way to kill progress is for an unprepared patient to believe that their food will taste bland forever. A prepared patient understands that their food may taste different for a couple of weeks, but that the change will not last forever.
 

Types of Salt 

I have often worked with patients who insist that their sodium intake is acceptable because they are using sea salt instead of table salt. This is the result of exceptional marketing and misinformation. 

Specialty salts like sea salt and Himalayan pink salt contain about 560 mg and 590 mg of sodium, respectively, per quarter teaspoon. These products do have a slightly different mineral content, with sea salt typically having a negligible amount of calcium, magnesium, or potassium. The very small amount of these minerals offers no obvious health benefits compared with more affordable table salt. 

The sodium content of iodized table salt is comparable to these products, with about 590 mg of sodium per quarter teaspoon. Though its high sodium content will put some practitioners off, it is also an excellent source of iodine, at about 75 mcg per serving. It has been estimated that upward of 35% of the US population has iodine deficiency, most commonly due to pregnancy, avoidance of dairy products, increasing rates of vegetarianism, intake of highly processed foods, and avoidance of added salt. For this reason, and its relative affordability, I find table salt to be far more appropriate for the average American than specialty salts.
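
To put those figures in context, the quick calculation below expresses each quarter-teaspoon serving as a share of the 2300-mg daily ceiling, using the sodium values quoted in this article; it is illustrative only.

```python
# Sodium per quarter teaspoon, using the figures quoted in this article,
# expressed as a share of the 2300 mg/day ceiling recommended for adults.
DAILY_LIMIT_MG = 2300
sodium_mg_per_quarter_tsp = {
    "sea salt": 560,
    "Himalayan pink salt": 590,
    "iodized table salt": 590,
}
for salt, mg in sodium_mg_per_quarter_tsp.items():
    share = 100 * mg / DAILY_LIMIT_MG
    print(f"{salt}: {mg} mg per 1/4 tsp = {share:.0f}% of the daily limit")
# Each quarter teaspoon supplies roughly a quarter of the recommended daily maximum.
```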
 

 

 

Salt Substitutes 

Monosodium glutamate (MSG). MSG was previously at the center of public health concern owing to reports of “Chinese restaurant syndrome” that have since been debunked. I often recommend MSG to people trying to decrease sodium intake because the US Food and Drug Administration has designated it as GRAS (“generally recognized as safe”), and it has about one quarter of the sodium content of table salt at 125 mg per quarter teaspoon. Its crystalline structure makes it a lower-sodium salt substitute in savory applications like soups, stews, and gravies. 

Hot sauce. These sauces are generally composed of peppers, vinegar, salt, and sugar. There may be some variation and occasionally added ingredients depending upon the brand. However, I find most hot sauces to be a low-sodium seasoning option that works especially well on proteins like eggs, chicken, and pork. 

Potassium-based substitutes. Salt alternatives such as Nu-Salt and Morton Salt Substitute are sodium-free options with a significant amount of potassium, at 525 mg per quarter-teaspoon serving. These alternatives may not be ideal for patients with kidney problems, but they can be very helpful for those with potassium deficiency. 

Herb-based seasonings. Garlic and onion powder are both sodium-free seasonings that many of my patients have found help to increase palatability while decreasing salt use. Black pepper; lemon and lime juice; salt-free herb mixes like Mrs. Dash; and spices like cumin, paprika, dill, chili powder, and ginger are also sodium-free or low-sodium alternatives that can help to alleviate blandness for someone new to a low-sodium diet. I recommend them often and use them in my own cooking at home.

Plant-based diet. If the goal of care is to improve cardiovascular or kidney health, I find working with patients to increase their intake of plant foods to be a helpful option. This way of eating encourages replacing highly processed foods that may be high in sodium and sugar with plants that tend to be higher in potassium and calcium. The Dietary Approaches to Stop Hypertension (DASH), Mediterranean, and other plant-based diets have been shown to improve cardiovascular and metabolic health by significantly decreasing serum lipids, blood pressure, and hemoglobin A1c and by promoting weight loss. They have also been shown to improve the gut microbiome and support cognitive function. 

I rarely encourage the use of added salt. However, research shows that putting down the salt shaker is probably not the most effective option to restrict sodium intake. For those who can cut back, these options can help keep food flavorful and patients compliant. 

Ms. Winfree is a renal dietitian in private practice in Mary Esther, Florida. She has disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.


Chronotherapy: Why Timing Drugs to Our Body Clocks May Work

Article Type
Changed
Mon, 06/10/2024 - 16:37

Do drugs work better if taken by the clock?

A new analysis published in the Lancet journal eClinicalMedicine suggests: Yes, they do — if you consider the patient’s individual body clock. The study is the first to find that timing blood pressure drugs to a person’s personal “chronotype” — that is, whether they are a night owl or an early bird — may reduce the risk for a heart attack.

The findings represent a significant advance in the field of circadian medicine or “chronotherapy” — timing drug administration to circadian rhythms. A growing stack of research suggests this approach could reduce side effects and improve the effectiveness of a wide range of therapies, including vaccines, cancer treatments, and drugs for depression, glaucoma, pain, seizures, and other conditions. Still, despite decades of research, time of day is rarely considered in writing prescriptions.

“We are really just at the beginning of an exciting new way of looking at patient care,” said Kenneth A. Dyar, PhD, whose lab at Helmholtz Zentrum München’s Institute for Diabetes and Cancer focuses on metabolic physiology. Dr. Dyar is co-lead author of the new blood pressure analysis.

“Chronotherapy is a rapidly growing field,” he said, “and I suspect we are soon going to see more and more studies focused on ‘personalized chronotherapy,’ not only in hypertension but also potentially in other clinical areas.”
 

The ‘Missing Piece’ in Chronotherapy Research

Blood pressure drugs have long been chronotherapy’s battleground. After all, blood pressure follows a circadian rhythm, peaking in the morning and dropping at night.

That healthy overnight dip can disappear in people with diabetes, kidney disease, and obstructive sleep apnea. Some physicians have suggested a bedtime dose to restore that dip. But studies have had mixed results, so “take at bedtime” has become a less common recommendation in recent years.

But the debate continued. After a large 2019 Spanish study found that bedtime doses had benefits so big that the results drew questions, an even larger, 2022 randomized, controlled trial from the University of Dundee in Dundee, Scotland — called the TIME study — aimed to settle the question.

Researchers assigned over 21,000 people to take morning or night hypertension drugs for several years and found no difference in cardiovascular outcomes.

“We did this study thinking nocturnal blood pressure tablets might be better,” said Thomas MacDonald, MD, professor emeritus of clinical pharmacology and pharmacoepidemiology at the University of Dundee and principal investigator for the TIME study and the recent chronotype analysis. “But there was no difference for heart attacks, strokes, or vascular death.”

So, the researchers then looked at participants’ chronotypes, sorting outcomes based on whether the participants were late-to-bed, late-to-rise “night owls” or early-to-bed, early-to-rise “morning larks.”

Their analysis of these 5358 TIME participants found the following results: Risk for hospitalization for a heart attack was at least 34% lower for “owls” who took their drugs at bedtime. By contrast, owls’ heart attack risk was at least 62% higher with morning doses. For “larks,” the opposite was true. Morning doses were associated with an 11% lower heart attack risk and night doses with an 11% higher risk, according to supplemental data.

The personalized approach could explain why some previous chronotherapy studies have failed to show a benefit. Those studies did not individualize drug timing as this one did. But personalization could be key to circadian medicine’s success.

“Our ‘internal personal time’ appears to be an important variable to consider when dosing antihypertensives,” said co-lead author Filippo Pigazzani, MD, PhD, clinical senior lecturer and honorary consultant cardiologist at the University of Dundee School of Medicine. “Chronotherapy research has been going on for decades. We knew there was something important with time of day. But researchers haven’t considered the internal time of individual people. I think that is the missing piece.”

The analysis has several important limitations, the researchers said. A total of 95% of participants were White. And it was an observational study, not a true randomized comparison. “We started it late in the original TIME study,” Dr. MacDonald said. “You could argue we were reporting on those who survived long enough to get into the analysis.” More research is needed, they concluded.
 

 

 

Looking Beyond Blood Pressure

What about the rest of the body? “Almost all the cells of our body contain ‘circadian clocks’ that are synchronized by daily environmental cues, including light-dark, activity-rest, and feeding-fasting cycles,” said Dr. Dyar.

An estimated 50% of prescription drugs hit targets in the body that have circadian patterns. So, experts suspect that syncing a drug with a person’s body clock might increase effectiveness of many drugs.

A handful of US Food and Drug Administration–approved drugs already have time-of-day recommendations on the label for effectiveness or to limit side effects, including bedtime or evening for the insomnia drug Ambien, the HIV antiviral Atripla, and cholesterol-lowering Zocor. Others are intended to be taken with or after your last meal of the day, such as the long-acting insulin Levemir and the cardiovascular drug Xarelto. A morning recommendation comes with the proton pump inhibitor Nexium and the attention-deficit/hyperactivity disorder drug Ritalin.

Interest is expanding. About one third of the papers published about chronotherapy in the past 25 years have come out in the past 5 years. The May 2024 meeting of the Society for Research on Biological Rhythms featured a day-long session aimed at bringing clinicians up to speed. An organization called the International Association of Circadian Health Clinics is trying to bring circadian medicine findings to clinicians and their patients and to support research.

Moreover, while recent research suggests minding the clock could have benefits for a wide range of treatments, ignoring it could cause problems.

In a Massachusetts Institute of Technology study published in April in Science Advances, researchers looked at engineered livers made from human donor cells and found more than 300 genes that operate on a circadian schedule, many with roles in drug metabolism. They also found that circadian patterns affected the toxicity of acetaminophen and atorvastatin. Identifying the time of day to take these drugs could maximize effectiveness and minimize adverse effects, the researchers said.
 

Timing and the Immune System

Circadian rhythms are also seen in immune processes. In a 2023 study in The Journal of Clinical Investigation of vaccine data from 1.5 million people in Israel, researchers found that children and older adults who got their second dose of the Pfizer mRNA COVID vaccine earlier in the day were about 36% less likely to be hospitalized with SARS-CoV-2 infection than those who got an evening shot.

“The sweet spot in our data was somewhere around late morning to late afternoon,” said lead researcher Jeffrey Haspel, MD, PhD, associate professor of medicine in the division of pulmonary and critical care medicine at Washington University School of Medicine in St. Louis.

In a multicenter, 2024 analysis of 13 studies of immunotherapy for advanced cancers in 1663 people, researchers found treatment earlier in the day was associated with longer survival time and longer survival without cancer progression.

“Patients with selected metastatic cancers seemed to largely benefit from early [time of day] infusions, which is consistent with circadian mechanisms in immune-cell functions and trafficking,” the researchers noted. But “retrospective randomized trials are needed to establish recommendations for optimal circadian timing.”

Other research suggests or is investigating possible chronotherapy benefits for depression, glaucoma, respiratory diseases, stroke treatment, epilepsy, and sedatives used in surgery. So why aren’t healthcare providers adding time of day to more prescriptions? “What’s missing is more reliable data,” Dr. Dyar said.
 

 

 

Should You Use Chronotherapy Now?

Experts emphasize that more research is needed before doctors use chronotherapy and before medical organizations include it in treatment recommendations. But for some patients, circadian dosing may be worth a try:

Night owls whose blood pressure isn’t well controlled. Dr. Dyar and Dr. Pigazzani said night-time blood pressure drugs may be helpful for people with a “late chronotype.” Of course, patients shouldn’t change their medication schedule on their own, they said. And doctors may want to consider other concerns, like more overnight bathroom visits with evening diuretics.

In their study, the researchers determined participants’ chronotype with a few questions from the Munich Chronotype Questionnaire about what time they fell asleep and woke up on workdays and days off and whether they considered themselves “morning types” or “evening types.” (The questions can be found in supplementary data for the study.)
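
For illustration only, the sketch below shows one crude way self-reported bed and wake times could be reduced to an early/late split. The mid-sleep calculation and the 4 AM cutoff are assumptions made for this example; they are not the Munich Chronotype Questionnaire’s scoring or the classification used in the TIME analysis.

```python
# Illustrative chronotype split from self-reported sleep times on free days.
# The mid-sleep rule and the 04:00 cutoff are assumptions for this sketch, not
# the MCTQ scoring or the TIME study's classification.
from datetime import datetime, timedelta

def mid_sleep(bed: str, wake: str) -> float:
    """Mid-sleep clock time in hours; assumes mid-sleep falls after midnight."""
    fmt = "%H:%M"
    start, end = datetime.strptime(bed, fmt), datetime.strptime(wake, fmt)
    if end <= start:                       # sleep period crosses midnight
        end += timedelta(days=1)
    mid = start + (end - start) / 2
    return mid.hour + mid.minute / 60

def chronotype(bed: str, wake: str, cutoff: float = 4.0) -> str:
    """Crude split: later mid-sleep on free days suggests a 'night owl'."""
    return "night owl" if mid_sleep(bed, wake) >= cutoff else "morning lark"

print(chronotype("23:00", "07:00"))   # mid-sleep 03:00 -> morning lark
print(chronotype("01:30", "09:30"))   # mid-sleep 05:30 -> night owl
```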

If a physician thinks matching the timing of a dose with chronotype would help, they can consider it, Dr. Pigazzani said. “However, I must add that this was an observational study, so I would advise healthcare practitioners to wait for our data to be confirmed in new RCTs of personalized chronotherapy of hypertension.”

Children and older adults getting vaccines. Timing COVID shots and possibly other vaccines from late morning to mid-afternoon could have a small benefit for individuals and a bigger public-health benefit, Dr. Haspel said. But the most important thing is getting vaccinated. “If you can only get one in the evening, it’s still worthwhile. Timing may add oomph at a public-health level for more vulnerable groups.”
 

A version of this article appeared on Medscape.com.

Do drugs work better if taken by the clock?

A new analysis published in the Lancet journal eClinicalMedicine suggests: Yes, they do, provided you consider the patient's individual body clock. The study is the first to find that timing blood pressure drugs to a person's "chronotype" (that is, whether they are a night owl or an early bird) may reduce the risk for a heart attack.

The findings represent a significant advance in the field of circadian medicine or “chronotherapy” — timing drug administration to circadian rhythms. A growing stack of research suggests this approach could reduce side effects and improve the effectiveness of a wide range of therapies, including vaccines, cancer treatments, and drugs for depression, glaucoma, pain, seizures, and other conditions. Still, despite decades of research, time of day is rarely considered in writing prescriptions.

“We are really just at the beginning of an exciting new way of looking at patient care,” said Kenneth A. Dyar, PhD, whose lab at Helmholtz Zentrum München’s Institute for Diabetes and Cancer focuses on metabolic physiology. Dr. Dyar is co-lead author of the new blood pressure analysis.

“Chronotherapy is a rapidly growing field,” he said, “and I suspect we are soon going to see more and more studies focused on ‘personalized chronotherapy,’ not only in hypertension but also potentially in other clinical areas.”
 

The ‘Missing Piece’ in Chronotherapy Research

Blood pressure drugs have long been chronotherapy’s battleground. After all, blood pressure follows a circadian rhythm, peaking in the morning and dropping at night.

That healthy overnight dip can disappear in people with diabetes, kidney disease, and obstructive sleep apnea. Some physicians have suggested a bedtime dose to restore that dip. But studies have had mixed results, so "take at bedtime" has become a less common recommendation in recent years.

But the debate continued. After a large 2019 Spanish study reported bedtime-dose benefits so large that the results drew questions, an even larger randomized controlled trial from the University of Dundee in Dundee, Scotland, the 2022 TIME study, aimed to settle the question.

Researchers assigned over 21,000 people to take morning or night hypertension drugs for several years and found no difference in cardiovascular outcomes.

“We did this study thinking nocturnal blood pressure tablets might be better,” said Thomas MacDonald, MD, professor emeritus of clinical pharmacology and pharmacoepidemiology at the University of Dundee and principal investigator for the TIME study and the recent chronotype analysis. “But there was no difference for heart attacks, strokes, or vascular death.”

So, the researchers then looked at participants’ chronotypes, sorting outcomes based on whether the participants were late-to-bed, late-to-rise “night owls” or early-to-bed, early-to-rise “morning larks.”

Their analysis of these 5358 TIME participants found that the risk for hospitalization for a heart attack was at least 34% lower for "owls" who took their drugs at bedtime; by contrast, owls' heart attack risk was at least 62% higher with morning doses. For "larks," the opposite was true: Morning doses were associated with an 11% lower heart attack risk and night doses with an 11% higher risk, according to supplemental data.

This personalized approach could explain why some previous chronotherapy studies failed to show a benefit: they did not individualize drug timing as this one did. Personalization, in other words, could be key to circadian medicine's success.

“Our ‘internal personal time’ appears to be an important variable to consider when dosing antihypertensives,” said co-lead author Filippo Pigazzani, MD, PhD, clinical senior lecturer and honorary consultant cardiologist at the University of Dundee School of Medicine. “Chronotherapy research has been going on for decades. We knew there was something important with time of day. But researchers haven’t considered the internal time of individual people. I think that is the missing piece.”

The analysis has several important limitations, the researchers said. A total of 95% of participants were White. And it was an observational study, not a true randomized comparison. “We started it late in the original TIME study,” Dr. MacDonald said. “You could argue we were reporting on those who survived long enough to get into the analysis.” More research is needed, they concluded.
 

 

 

Looking Beyond Blood Pressure

What about the rest of the body? “Almost all the cells of our body contain ‘circadian clocks’ that are synchronized by daily environmental cues, including light-dark, activity-rest, and feeding-fasting cycles,” said Dr. Dyar.

An estimated 50% of prescription drugs hit targets in the body that have circadian patterns. So, experts suspect that syncing a drug with a person’s body clock might increase effectiveness of many drugs.

A handful of US Food and Drug Administration–approved drugs already have time-of-day recommendations on the label for effectiveness or to limit side effects, including bedtime or evening for the insomnia drug Ambien, the HIV antiviral Atripla, and cholesterol-lowering Zocor. Others are intended to be taken with or after your last meal of the day, such as the long-acting insulin Levemir and the cardiovascular drug Xarelto. A morning recommendation comes with the proton pump inhibitor Nexium and the attention-deficit/hyperactivity disorder drug Ritalin.

Interest is expanding. About one third of the papers published about chronotherapy in the past 25 years have come out in the past 5 years. The May 2024 meeting of the Society for Research on Biological Rhythms featured a day-long session aimed at bringing clinicians up to speed. An organization called the International Association of Circadian Health Clinics is trying to bring circadian medicine findings to clinicians and their patients and to support research.

Moreover, while recent research suggests minding the clock could have benefits for a wide range of treatments, ignoring it could cause problems.

In a Massachusetts Institute of Technology study published in April in Science Advances, researchers looked at engineered livers made from human donor cells and found more than 300 genes that operate on a circadian schedule, many with roles in drug metabolism. They also found that circadian patterns affected the toxicity of acetaminophen and atorvastatin. Identifying the time of day to take these drugs could maximize effectiveness and minimize adverse effects, the researchers said.
 

Timing and the Immune System

Circadian rhythms are also seen in immune processes. In a 2023 study in The Journal of Clinical Investigation of vaccine data from 1.5 million people in Israel, researchers found that children and older adults who got their second dose of the Pfizer mRNA COVID vaccine earlier in the day were about 36% less likely to be hospitalized with SARS-CoV-2 infection than those who got an evening shot.

“The sweet spot in our data was somewhere around late morning to late afternoon,” said lead researcher Jeffrey Haspel, MD, PhD, associate professor of medicine in the division of pulmonary and critical care medicine at Washington University School of Medicine in St. Louis.

In a 2024 multicenter analysis of 13 studies of immunotherapy for advanced cancers in 1663 people, researchers found that treatment earlier in the day was associated with longer survival time and longer survival without cancer progression.

“Patients with selected metastatic cancers seemed to largely benefit from early [time of day] infusions, which is consistent with circadian mechanisms in immune-cell functions and trafficking,” the researchers noted. But “retrospective randomized trials are needed to establish recommendations for optimal circadian timing.”

Other research suggests or is investigating possible chronotherapy benefits for depression, glaucoma, respiratory diseases, stroke treatment, epilepsy, and sedatives used in surgery. So why aren't healthcare providers adding time of day to more prescriptions? "What's missing is more reliable data," Dr. Dyar said.
 

 

 

Should You Use Chronotherapy Now?

Experts emphasize that more research is needed before doctors use chronotherapy and before medical organizations include it in treatment recommendations. But for some patients, circadian dosing may be worth a try:

Night owls whose blood pressure isn’t well controlled. Dr. Dyar and Dr. Pigazzani said night-time blood pressure drugs may be helpful for people with a “late chronotype.” Of course, patients shouldn’t change their medication schedule on their own, they said. And doctors may want to consider other concerns, like more overnight bathroom visits with evening diuretics.

In their study, the researchers determined participants’ chronotype with a few questions from the Munich Chronotype Questionnaire about what time they fell asleep and woke up on workdays and days off and whether they considered themselves “morning types” or “evening types.” (The questions can be found in supplementary data for the study.)
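For readers curious how a chronotype can be scored from bed and wake times, below is a minimal, illustrative Python sketch based on the mid-sleep idea behind the Munich Chronotype Questionnaire (mid-sleep on free days, corrected for weekday sleep debt). The function names, example times, and the suggestion that larger values indicate a later chronotype are assumptions for illustration only; this is not the scoring actually used in the TIME analysis.

```python
def to_hours(hhmm: str) -> float:
    """Convert 'HH:MM' clock time to hours past midnight (e.g., '23:30' -> 23.5)."""
    h, m = map(int, hhmm.split(":"))
    return h + m / 60

def sleep_duration(onset: float, wake: float) -> float:
    """Hours slept, handling sleep that spans midnight."""
    return (wake - onset) % 24

def msf_sc(onset_work: float, wake_work: float, onset_free: float, wake_free: float) -> float:
    """Mid-sleep on free days, corrected for weekday sleep debt (MSFsc), in hours past midnight."""
    sd_w = sleep_duration(onset_work, wake_work)
    sd_f = sleep_duration(onset_free, wake_free)
    mid_sleep_free = (onset_free + sd_f / 2) % 24
    weekly_avg = (5 * sd_w + 2 * sd_f) / 7
    # Correct only when free-day sleep is longer than workday sleep (i.e., catching up on lost sleep)
    return mid_sleep_free - (sd_f - weekly_avg) / 2 if sd_f > sd_w else mid_sleep_free

# Example: asleep 23:30-06:30 on workdays and 01:00-09:30 on free days
score = msf_sc(to_hours("23:30"), to_hours("06:30"), to_hours("01:00"), to_hours("09:30"))
print(f"MSFsc is about {score:.2f} hours past midnight")  # larger values suggest a later ('owl') chronotype
```

A study would typically split participants into "larks" and "owls" using a prespecified threshold or quantile of such a score, or, as here, combine it with self-identification.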

If a physician thinks matching the timing of a dose with chronotype would help, they can consider it, Dr. Pigazzani said. “However, I must add that this was an observational study, so I would advise healthcare practitioners to wait for our data to be confirmed in new RCTs of personalized chronotherapy of hypertension.”

Children and older adults getting vaccines. Timing COVID shots and possibly other vaccines from late morning to mid-afternoon could have a small benefit for individuals and a bigger public-health benefit, Dr. Haspel said. But the most important thing is getting vaccinated. “If you can only get one in the evening, it’s still worthwhile. Timing may add oomph at a public-health level for more vulnerable groups.”
 

A version of this article appeared on Medscape.com.


Arterial Stiffness May Predict Risk for Glaucoma

Article Type
Changed
Tue, 06/04/2024 - 15:12

 

TOPLINE:

Arterial stiffness is associated with an increased risk for developing glaucoma, a new study found.

METHODOLOGY:

  • To study the link between arterial stiffness and glaucoma, the researchers evaluated 4713 individuals (mean age, 66 years; 58% men) without the eye condition at baseline between April 2011 and November 2012.
  • They assessed arterial stiffness by measuring aortic pulse wave velocity, estimated carotid-femoral pulse wave velocity, and aortic pulse pressure.
  • The primary outcome was incident glaucoma, identified from prescriptions for eye drops or hospital records.

TAKEAWAY:

  • Overall, 301 people in the study developed glaucoma over a mean follow-up period of 10.5 years.
  • For every standard deviation increase in aortic pulse wave velocity, participants had a 32% higher risk for developing glaucoma (standardized hazard ratio [sHR], 1.32; 95% CI, 1.10-1.60), while estimated carotid-femoral pulse wave velocity was associated with a 37% higher risk (sHR, 1.37; 95% CI, 1.11-1.70). (A short note on reading these per-SD hazard ratios appears after this list.)
  • Incident glaucoma increased across all quartiles of arterial stiffness, with the highest risk observed in the fourth quartile for aortic pulse wave velocity (HR, 2.41; 95% CI, 1.36-4.26), estimated carotid-femoral pulse wave velocity (HR, 2.29; 95% CI, 1.27-4.13), and aortic pulse pressure (HR, 1.76; 95% CI, 1.10-2.82).
  • The cumulative incidence of glaucoma rose with increases in arterial stiffness. This trend was statistically significant for both aortic and estimated carotid-femoral pulse wave velocity (P < .0001) and for aortic pulse pressure (P = .02).
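As a brief aside of our own (not from the paper), the per-standard-deviation hazard ratios above come from rescaling a Cox model's per-unit coefficient by the sample standard deviation, and differences of several standard deviations compound multiplicatively, assuming proportional hazards:

$$
\mathrm{sHR} = e^{\beta\sigma}, \qquad
\frac{h(t \mid x + k\sigma)}{h(t \mid x)} = e^{k\beta\sigma} = \mathrm{sHR}^{\,k}.
$$

On that reading, an sHR of 1.32 for aortic pulse wave velocity implies roughly 1.32 squared, or about 1.74 times the hazard, across a 2-SD difference, other covariates held equal.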

IN PRACTICE:

“Arterial stiffness…which can be easily and accurately measured, could be used as a tool in clinical practice [as part of routine blood pressure measurement] to help identify people at risk of glaucoma and as a therapeutic target to prevent glaucoma progression,” the authors wrote.

SOURCE:

This study was led by Angela L. Beros, MPH, of the School of Population Health at the University of Auckland, Auckland, New Zealand, and published online in the American Journal of Ophthalmology.

LIMITATIONS:

The cohort study did not clinically assess for glaucoma, potentially leading to the inclusion of individuals with the condition. Not all participants with incident glaucoma, particularly those unaware of their diagnosis, may have been identified. Intraocular pressure and central corneal thickness, which are common risk factors for glaucoma, were not included in the multivariate analysis.

DISCLOSURES:

The study did not receive any funding. The authors declared no conflicts of interest.

A version of this article appeared on Medscape.com.


Another Reason to Control Lp(a): To Protect the Kidneys Too

Article Type
Changed
Tue, 06/04/2024 - 11:12

High levels of lipoprotein(a) [Lp(a)] in the blood are associated with a significantly increased risk for chronic kidney disease, report investigators who are studying the link in a two-part study of more than 100,000 people.

There is already genetic evidence showing that Lp(a) can cause cardiovascular conditions, including myocardial infarction, aortic valve stenosis, peripheral artery disease, and ischemic stroke.

Now, researchers presenting at the European Atherosclerosis Society (EAS) 2024 Congress are adding new organs – the kidneys – to the list of those that can be damaged by elevated Lp(a).

“This is very important,” said lead investigator Anne Langsted, MD, PhD, DMSc, from the Department of Clinical Biochemistry at the Rigshospitalet in Denmark. And “hopefully, we’ll have a treatment for Lp(a) on the market very soon. Until then, I think individuals who have kidney disease would benefit a lot from reducing other risk factors, if they also have high levels” of Lp(a).

The study used data from the Copenhagen General Population Study on 108,439 individuals who had undergone a range of tests, including estimated glomerular filtration rate (eGFR), plasma Lp(a) measurement, and LPA genotyping. Participants were then linked to a series of national registries to study outcomes.

The researchers conducted two separate analyses: an observational study of Lp(a) levels in 70,040 individuals and a Mendelian randomization study of LPA kringle IV–type 2 domain repeats in 106,624 individuals. The number of those repeats is inversely associated with median Lp(a) plasma levels.

The observational study showed that eGFR decreased with increasing median plasma Lp(a) levels; the Mendelian randomization study indicated that eGFR decreased as KIV-2 repeat numbers dropped.

Across both parts of the study, each 50 mg/dL increase in plasma Lp(a) was associated with at least a 25% increase in the risk for chronic kidney disease.
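To make the logic of the Mendelian randomization arm concrete, here is a hedged Python sketch of the classic Wald-ratio estimator on simulated data. Every variable, effect size, and distribution below is invented for illustration (the Copenhagen analysis used measured LPA genotypes, registry outcomes, and more sophisticated models); the aim is only to show how an inherited instrument such as the KIV-2 repeat number can recover an exposure-outcome effect that a plain observational regression misestimates because of confounding.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Simulated data (illustration only, not Copenhagen study data):
# fewer KIV-2 repeats -> higher Lp(a); Lp(a) lowers eGFR; a shared confounder affects both.
kiv2 = rng.normal(30, 8, n)                                      # instrument: KIV-2 repeat number
confounder = rng.normal(0, 1, n)
lpa = 80 - 1.5 * kiv2 + 5 * confounder + rng.normal(0, 10, n)    # exposure: Lp(a), mg/dL
egfr = 90 - 0.10 * lpa - 3 * confounder + rng.normal(0, 8, n)    # outcome: eGFR

def slope(x, y):
    """Ordinary least-squares slope of y on x."""
    xc = x - x.mean()
    return (xc @ (y - y.mean())) / (xc @ xc)

beta_obs = slope(lpa, egfr)        # confounded observational estimate
beta_gx = slope(kiv2, lpa)         # instrument -> exposure
beta_gy = slope(kiv2, egfr)        # instrument -> outcome
beta_mr = beta_gy / beta_gx        # Wald ratio: genetically proxied effect of Lp(a) on eGFR

print(f"Observational slope: {beta_obs:+.3f} eGFR units per mg/dL of Lp(a) (confounded)")
print(f"Mendelian randomization (Wald ratio): {beta_mr:+.3f} (built-in causal effect is -0.10)")
```

In this toy example, the observational slope is pulled away from the built-in effect by the shared confounder, while the ratio of the two instrument regressions lands close to it; that is the core intuition behind using genetics to argue for causality.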
 

Lp(a) and Chronic Kidney Disease

When high plasma levels of Lp(a) had been spotted in patients with kidney disease in the past, "we've kind of assumed that it was probably the kidney disease that caused the higher levels," Dr. Langsted said. But her team hypothesized that the opposite was at play: Lp(a) levels are genetically determined, and increased plasma Lp(a) may be causally associated with a rising risk for chronic kidney disease.

Gerald F. Watts, MD, PhD, DSc, Winthrop Professor of cardiometabolic and internal medicine at the University of Western Australia in Perth, and co-chair of the study, said in an interview that “although Mendelian randomization is a technique that allows you to infer causality, it’s probably a little bit more complex than that in reality,” adding that there is likely a bidirectional relationship between Lp(a) and chronic kidney disease.

Having increased Lp(a) levels on their own is not sufficient to trigger chronic kidney disease. “You probably need another event and then you get into a vicious cycle,” Dr. Watts said.

The mechanism linking Lp(a) with chronic kidney disease remains unclear, but Dr. Watts explained that the lipoprotein probably damages the renal tubules when it is reabsorbed after it dissociates from low-density lipoprotein cholesterol.

The next step will be to identify the people who are most susceptible and to figure out what treatment might help. Dr. Watts suggested that gene silencing, in which Lp(a) is "completely obliterated," would lead to an improvement in renal function.

A version of this article appeared on Medscape.com.
