On Second Thought: Aspirin for Primary Prevention — What We Really Know
This transcript has been edited for clarity.
Our recommendations vis-à-vis aspirin have evolved at a dizzying pace. The young’uns watching us right now don’t know what things were like in the 1980s. The Reagan era was a wild, heady time where nuclear war was imminent and we didn’t prescribe aspirin to patients.
That only started in 1988, which was a banner year in human history. Not because a number of doves were incinerated by the lighting of the Olympic torch at the Seoul Olympics — look it up if you don’t know what I’m talking about — but because 1988 saw the publication of the ISIS-2 trial, which first showed a mortality benefit to prescribing aspirin post–myocardial infarction (MI).
Giving patients aspirin during or after a heart attack is not controversial. It’s one of the few things in this business that isn’t, but that’s secondary prevention — treating somebody after they develop a disease. Primary prevention, treating them before they have their incident event, is a very different ballgame. Here, things are messy.
For one thing, the doses used have been very inconsistent. We should point out that the reason for 81 mg of aspirin is very arbitrary and is rooted in the old apothecary system of weights and measures. A standard dose of aspirin was 5 grains, where 20 grains made 1 scruple, 3 scruples made 1 dram, 8 drams made 1 oz, and 12 oz made 1 lb, because screw you, metric system. Therefore, 5 grains was 325 mg of aspirin, and one quarter of the standard dose became 81 mg if you rounded off the decimal.
People have tried all kinds of dosing structures with aspirin prophylaxis. The Physicians’ Health Study used full-dose aspirin, 325 mg every 2 days, while the Hypertension Optimal Treatment (HOT) trial tested 75 mg daily and the Women’s Health Study tested 100 mg, but every other day.
Ironically, almost no one has studied 81 mg every day, which is weird if you think about it. The bigger problem here is not the variability of doses used, but the discrepancy when you look at older vs newer studies.
Older studies, like the Physicians’ Health Study, did show a benefit, at least in the subgroup of patients over age 50 years, which is probably where the “everybody over 50 should be taking an aspirin” idea comes from, at least as near as I can tell.
More recent studies, like the Women’s Health Study, ASPREE, or ARRIVE, didn’t show a benefit. I know what you’re thinking: Newer stuff is always better. That’s why you should never trust anybody over age 40 years. The context of primary prevention studies has changed. In the ‘80s and ‘90s, people smoked more and we didn’t have the same medications that we have today. We talked about all this in the beta-blocker video to explain why beta-blockers don’t seem to have a benefit post MI.
We have a similar issue here. The magnitude of the benefit with aspirin primary prevention has decreased because we’re all just healthier overall. So, yay! Progress! Here’s where the numbers matter. No one is saying that aspirin doesn’t help. It does.
If we look at the 2019 meta-analysis published in JAMA, there is a cardiovascular benefit. The numbers bear that out. I know you’re all here for the math, so here we go. Aspirin reduced the composite cardiovascular endpoint from 65.2 to 60.2 events per 10,000 patient-years or, to put it more meaningfully in absolute risk reduction terms, because that’s my jam, an absolute risk reduction of 0.41%, which means a number needed to treat of 241, which is okay-ish. It’s not super-great, but it may be justifiable for something that costs next to nothing.
The tradeoff is bleeding. Major bleeding increased from 16.4 to 23.1 bleeds per 10,000 patient-years, or an absolute risk increase of 0.47%, which is a number needed to harm of 210. That’s the problem. Aspirin does prevent heart disease. The benefit is small, for sure, but the real problem is that it’s outweighed by the risk of bleeding, so you’re not really coming out ahead.
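The arithmetic behind those two numbers is just the reciprocal of the absolute risk difference. A minimal sketch of the calculation, using the percentages quoted above (note that the published figures of 241 and 210 come from the meta-analysis's unrounded event rates, so reciprocals of the rounded percentages land a few patients off):

```python
def nnt_from_arr(arr: float) -> float:
    """Number needed to treat (or harm): 1 / absolute risk difference.

    `arr` is the absolute risk reduction (or increase) as a proportion,
    e.g. 0.41% -> 0.0041.
    """
    return 1 / arr

arr_cv = 0.0041     # 0.41% absolute reduction in cardiovascular events
ari_bleed = 0.0047  # 0.47% absolute increase in major bleeding

# ~244 vs the article's 241, and ~213 vs 210; the small gaps are
# rounding in the quoted percentages, not a different formula.
print(round(nnt_from_arr(arr_cv)))     # 244
print(round(nnt_from_arr(ari_bleed)))  # 213
```

Because the number needed to harm (210) is smaller than the number needed to treat (241), you cause a bleed slightly more often than you prevent an event, which is the whole tradeoff in two numbers.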
The real tragedy here is that the public is locked into this idea that everyone over age 50 years should be taking an aspirin. Even today, even though guidelines have recommended against aspirin for primary prevention for some time, data from the National Health Interview Survey found that nearly one in three older adults take aspirin for primary prevention when they shouldn’t. That’s a large number of people. That’s millions of Americans — and Canadians, but nobody cares about us. It’s fine.
That’s the point. We’re not debunking aspirin. It does work. The benefits are just really small in a primary prevention population and offset by the admittedly also really small risks of bleeding. It’s a tradeoff that doesn’t really work in your favor.
But that’s aspirin for cardiovascular disease. When it comes to cancer or DVT prophylaxis, that’s another really interesting story. We might have to save that for another time. Do I know how to tease a sequel or what?
Labos, a cardiologist at Kirkland Medical Center, Montreal, Quebec, Canada, has disclosed no relevant financial relationships.
A version of this article appeared on Medscape.com.
Higher Doses of Vitamin D3 Do Not Reduce Cardiac Biomarkers in Older Adults
TOPLINE:
Higher doses of vitamin D3 supplementation did not significantly reduce cardiac biomarkers in older adults with low serum vitamin D levels. The STURDY trial found no significant differences in high-sensitivity cardiac troponin I (hs-cTnI) and N-terminal pro-B-type natriuretic peptide (NT-proBNP) between low- and high-dose groups.
METHODOLOGY:
- A total of 688 participants aged 70 years or older with low serum 25-hydroxy vitamin D levels (10-29 ng/mL) were included in the STURDY trial.
- Participants were randomized to receive one of four doses of vitamin D3 supplementation: 200, 1000, 2000, or 4000 IU/d, with 200 IU/d as the reference dose.
- Cardiac biomarkers, including hs-cTnI and NT-proBNP, were measured at baseline, 3 months, 12 months, and 24 months.
- The trial was conducted at two community-based research institutions in the United States between July 2015 and March 2019.
- The effects of vitamin D3 dose on biomarkers were assessed via mixed-effects tobit models, with participants followed up to 24 months or until study termination.
TAKEAWAY:
- Higher doses of vitamin D3 supplementation did not significantly affect hs-cTnI levels compared with the low-dose group (1.6% difference; 95% CI, −5.3 to 8.9).
- No significant differences were observed in NT-proBNP levels between the high-dose and low-dose groups (−1.8% difference; 95% CI, −9.3 to 6.3).
- Both hs-cTnI and NT-proBNP levels increased in both low- and high-dose groups over time, with hs-cTnI increasing by 5.2% and 7.0%, respectively, and NT-proBNP increasing by 11.3% and 9.3%, respectively.
- The findings suggest that higher doses of vitamin D3 supplementation do not reduce markers of subclinical cardiovascular disease in older adults with low serum vitamin D levels.
IN PRACTICE:
“We can speculate that the systemic effects of vitamin D deficiency are more profound among the very old, and there may be an inverse relationship between supplementation and inflammation. It is also possible that serum vitamin D level is a risk marker but not a risk factor for CVD risk and related underlying mechanisms,” wrote the authors of the study.
SOURCE:
The study was led by Katharine W. Rainer, MD, Beth Israel Deaconess Medical Center in Boston. It was published online in the Journal of the American College of Cardiology.
LIMITATIONS:
The study’s community-based population may limit the generalizability of the findings to populations at higher risk for cardiovascular disease. Additionally, the baseline cardiac biomarkers were lower than those in some high-risk populations, which may affect the precision of the assay performance. The study may not have had adequate power for cross-sectional and subgroup analyses. Both groups received some vitamin D3 supplementation, making it difficult to determine the impact of lower-dose supplementation vs no supplementation.
DISCLOSURES:
The study was supported by grants from the National Institute on Aging, the Office of Dietary Supplements, the Mid-Atlantic Nutrition Obesity Research Center, and the Johns Hopkins Institute for Clinical and Translational Research. Rainer disclosed receiving grants from these organizations.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Genetic Risk for Gout Raises Risk for Cardiovascular Disease Independent of Urate Level
TOPLINE:
Genetic predisposition to gout, unfavorable lifestyle habits, and poor metabolic health are associated with an increased risk for cardiovascular disease (CVD); however, adherence to a healthy lifestyle can reduce this risk by up to 62%, even in individuals with high genetic risk.
METHODOLOGY:
- Researchers investigated the association between genetic predisposition to gout, combined with lifestyle habits, and the risk for CVD in two diverse prospective cohorts from different ancestral backgrounds.
- They analyzed the data of 224,689 participants of European descent from the UK Biobank (mean age, 57.0 years; 56.1% women) and 50,364 participants of East Asian descent from the Korean Genome and Epidemiology Study (KoGES; mean age, 53.7 years; 66.0% women).
- The genetic predisposition to gout was evaluated using a polygenic risk score (PRS) derived from a meta-analysis of genome-wide association studies, and the participants were categorized into low, intermediate, and high genetic risk groups based on their PRS for gout.
- A favorable lifestyle was defined as having ≥ 3 healthy lifestyle factors, and ideal metabolic health status as having 0-1 metabolic syndrome factors.
- The incident CVD risk was evaluated according to genetic risk, lifestyle habits, and metabolic syndrome.
TAKEAWAY:
- Individuals in the high genetic risk group had a higher risk for CVD than those in the low genetic risk group in both the UK Biobank (adjusted hazard ratio [aHR], 1.10; P < .001) and KoGES (aHR, 1.31; P = .024) cohorts.
- In the UK Biobank cohort, individuals with a high genetic risk for gout and unfavorable lifestyle choices had a 1.99 times higher risk for incident CVD than those with low genetic risk (aHR, 1.99; P < .001); similar outcomes were observed in the KoGES cohort.
- Similarly, individuals with a high genetic risk for gout and poor metabolic health in the UK Biobank cohort had a 2.16 times higher risk for CVD than those with low genetic risk (aHR, 2.16; P < .001); similar outcomes were observed in the KoGES cohort.
- Improving metabolic health and adhering to a healthy lifestyle reduced the risk for CVD by 62% in individuals with high genetic risk and by 46% in those with low genetic risk (P < .001 for both).
IN PRACTICE:
“PRS for gout can be used for preventing not only gout but also CVD. It is possible to identify individuals with high genetic risk for gout and strongly recommend modifying lifestyle habits. Weight reduction, smoking cessation, regular exercise, and eating healthy food are effective strategies to prevent gout and CVD,” the authors wrote.
SOURCE:
This study was led by Ki Won Moon, MD, PhD, Department of Internal Medicine, Kangwon National University School of Medicine, Chuncheon, Republic of Korea, and SangHyuk Jung, PhD, University of Pennsylvania, Philadelphia, and was published online on October 8, 2024, in RMD Open.
LIMITATIONS:
The definitions of lifestyle and metabolic syndrome were different in each cohort, which may have affected the findings. Data on lifestyle behaviors and metabolic health statuses were collected at enrollment, but these variables may have changed during the follow-up period, which potentially introduced bias into the results. This study was not able to establish causality between genetic predisposition to gout and the incident risk for CVD.
DISCLOSURES:
This study was supported by the National Institute of General Medical Sciences and the National Research Foundation of Korea. The authors declared no competing interests.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
TOPLINE:
Genetic predisposition to gout, unfavorable lifestyle habits, and poor metabolic health are associated with an increased risk for cardiovascular disease (CVD); however, adherence to a healthy lifestyle can reduce this risk by up to 62%, even in individuals with high genetic risk.
METHODOLOGY:
- Researchers investigated the association between genetic predisposition to gout, combined with lifestyle habits, and the risk for CVD in two diverse prospective cohorts from different ancestral backgrounds.
TOPLINE:
Genetic predisposition to gout, unfavorable lifestyle habits, and poor metabolic health are associated with an increased risk for cardiovascular disease (CVD); however, adherence to a healthy lifestyle can reduce this risk by up to 62%, even in individuals with high genetic risk.
METHODOLOGY:
- Researchers investigated the association between genetic predisposition to gout, combined with lifestyle habits, and the risk for CVD in two diverse prospective cohorts from different ancestral backgrounds.
- They analyzed the data of 224,689 participants of European descent from the UK Biobank (mean age, 57.0 years; 56.1% women) and 50,364 participants of East Asian descent from the Korean Genome and Epidemiology Study (KoGES; mean age, 53.7 years; 66.0% women).
- The genetic predisposition to gout was evaluated using a polygenic risk score (PRS) derived from a meta-analysis of genome-wide association studies, and the participants were categorized into low, intermediate, and high genetic risk groups based on their PRS for gout.
- A favorable lifestyle was defined as having ≥ 3 healthy lifestyle factors, and ideal metabolic health status was defined as having 0-1 metabolic syndrome factors.
- The incident CVD risk was evaluated according to genetic risk, lifestyle habits, and metabolic syndrome.
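The PRS construction described in the methodology can be sketched in a few lines. This is a minimal illustration only: the SNP names, effect sizes, dosages, and cut points below are invented for the example and are not taken from the study.

```python
# Hypothetical sketch of polygenic risk scoring: PRS = sum over SNPs of
# (effect-allele dosage x GWAS effect size), then binned into risk groups.
# All SNP IDs, weights, and thresholds here are illustrative, not real.

def polygenic_risk_score(dosages, weights):
    """Weighted sum of effect-allele dosages (0, 1, or 2 copies per SNP)."""
    return sum(dosages[snp] * w for snp, w in weights.items())

def risk_group(score, low_cut, high_cut):
    """Categorize into low/intermediate/high, e.g. by cohort PRS quantiles."""
    if score < low_cut:
        return "low"
    if score < high_cut:
        return "intermediate"
    return "high"

# Illustrative effect sizes (log-odds per effect allele) for made-up SNPs.
weights = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.30}

# One participant's effect-allele dosages.
participant = {"rs0001": 2, "rs0002": 1, "rs0003": 0}

score = polygenic_risk_score(participant, weights)
print(round(score, 2), risk_group(score, 0.1, 0.3))
```

In practice the weights come from an independent discovery cohort and the group boundaries are usually quantiles of the study population's score distribution, as in the low/intermediate/high split used here.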
TAKEAWAY:
- Individuals in the high genetic risk group had a higher risk for CVD than those in the low genetic risk group in both the UK Biobank (adjusted hazard ratio [aHR], 1.10; P < .001) and KoGES (aHR, 1.31; P = .024) cohorts.
- In the UK Biobank cohort, individuals with a high genetic risk for gout and unfavorable lifestyle choices had a 1.99 times higher risk for incident CVD than those with low genetic risk (aHR, 1.99; P < .001); similar outcomes were observed in the KoGES cohort.
- Similarly, individuals with a high genetic risk for gout and poor metabolic health in the UK Biobank cohort had a 2.16 times higher risk for CVD than those with low genetic risk (aHR, 2.16; P < .001); outcomes were no different in the KoGES cohort.
- Improving metabolic health and adhering to a healthy lifestyle reduced the risk for CVD by 62% in individuals with high genetic risk and by 46% in those with low genetic risk (P < .001 for both).
IN PRACTICE:
“PRS for gout can be used for preventing not only gout but also CVD. It is possible to identify individuals with high genetic risk for gout and strongly recommend modifying lifestyle habits. Weight reduction, smoking cessation, regular exercise, and eating healthy food are effective strategies to prevent gout and CVD,” the authors wrote.
SOURCE:
This study was led by Ki Won Moon, MD, PhD, Department of Internal Medicine, Kangwon National University School of Medicine, Chuncheon, Republic of Korea, and SangHyuk Jung, PhD, University of Pennsylvania, Philadelphia, and was published online on October 8, 2024, in RMD Open.
LIMITATIONS:
The definitions of lifestyle and metabolic syndrome were different in each cohort, which may have affected the findings. Data on lifestyle behaviors and metabolic health statuses were collected at enrollment, but these variables may have changed during the follow-up period, which potentially introduced bias into the results. This study was not able to establish causality between genetic predisposition to gout and the incident risk for CVD.
DISCLOSURES:
This study was supported by the National Institute of General Medical Sciences and the National Research Foundation of Korea. The authors declared no competing interests.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
New Evidence That Plaque Buildup Shouldn’t Be Ignored
Subclinical disease detected on imaging predicts death, report investigators who show that plaque burden found on 3D vascular ultrasound and coronary artery calcium on CT were better predictors of death than traditional risk factors.
The work not only highlights the importance of early detection, but it also has clinical implications, said Valentin Fuster, MD, president of the Mount Sinai Fuster Heart Hospital in New York. “It’s going to change things,” he said. “What I believe is going to happen is that we will begin to evaluate people with risk factors at age 30 using imaging. Today, we evaluate people at age 50 using clinical practice guidelines.”
Fuster’s team developed 3D vascular ultrasound to assess plaque burden and applied it in a prospective cohort study known as BioImage. The researchers assessed 6102 patients in Chicago, Illinois, and Fort Lauderdale, Florida, using 3D vascular ultrasound of the carotid artery and another well-established modality — coronary artery calcium, determined by CT.
Participants had no cardiovascular symptoms, yet their plaque burden and calcium scores at the beginning of the study were significantly associated with death during the 15 years of follow-up, even after taking risk factors and medication into account. The results are published in the Journal of the American College of Cardiology.
“Now, there is no question that subclinical disease on imaging predicts mortality,” said Fuster.
David J. Maron, MD, a preventive cardiologist at the Stanford University School of Medicine in California, calls the finding “very important.”
“The presence of atherosclerosis is powerful knowledge to guide the intensity of therapy and to motivate patients and clinicians to treat it,” said Maron, who is the co-author of an accompanying editorial and was not involved in the study.
Predicting Risk Early
The research also showed that the risk for death increases if the burden of plaque in the carotid artery increases over time. Both plaque burden shown on 3D vascular ultrasound and coronary artery calcium on CT were better predictors of death than traditional risk factors.
Maron says recent studies of younger populations, such as Progression of Early Subclinical Atherosclerosis (PESA) and Coronary Artery Risk Development in Young Adults (CARDIA), show that “risk factors at a young age have much more impact on arterial disease than when we measure risk factors at older age.” The CARDIA study showed signs of atherosclerosis in patients as young as their twenties. This paradigm shift to early detection will now be possible thanks to technological advances like 3D vascular ultrasound.
Maron said he agrees with screening earlier in life. “The risk of having an event is related to the plaque burden and the number of years that a patient has been exposed to that burden. The earlier in life we can identify the burden to slow, arrest, or even reverse the plaque, the better.”
Maron points out that the study looked at an older population and did not include information on cause of death. While a study of younger people and data on cardiac causes of death would be useful, he says the study’s conclusions remain significant.
3D Vascular Ultrasound vs Coronary Artery Calcium
While both imaging methods in the study predicted death better than cardiovascular risk factors alone, each option has advantages.
For coronary artery calcium, “there’s a huge amount of literature demonstrating the association with cardiovascular events, there’s a standardized scoring system, there are widespread facilities for computed tomography, and there is not a lot of variability in the measurement — it’s not dependent on the operator,” said Maron.
But there is one drawback. The scoring system — the Agatston score — can paradoxically go up following aggressive lowering of low-density lipoprotein cholesterol. “Once coronary calcium is present, it is challenging to interpret a repeat scan because we don’t know if the increase in score is due to progression or increasing density of the calcium, which is a sign of healing,” said Maron.
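A minimal sketch of the standard Agatston rules (density-weighted lesion area, summed over lesions) shows why denser, healing calcium can push the score up even without plaque progression: the density weight increases stepwise with peak attenuation. The lesion values below are illustrative.

```python
# Sketch of Agatston scoring, assuming the standard rules: each calcified
# lesion contributes (area in mm^2) x (density weight), where the weight
# depends on the lesion's peak attenuation in Hounsfield units (HU).

def density_weight(peak_hu):
    """Stepwise weight: denser calcium gets a higher multiplier."""
    if peak_hu < 130:
        return 0  # below the calcium detection threshold; not scored
    if peak_hu < 200:
        return 1
    if peak_hu < 300:
        return 2
    if peak_hu < 400:
        return 3
    return 4

def agatston_score(lesions):
    """lesions: list of (area_mm2, peak_hu) tuples; tiny lesions excluded."""
    return sum(area * density_weight(hu) for area, hu in lesions if area >= 1)

# Two illustrative lesions: 5 mm^2 at 250 HU and 3 mm^2 at 450 HU.
print(agatston_score([(5, 250), (3, 450)]))  # 5*2 + 3*4 = 22
```

Note that if the 250-HU lesion densified to 450 HU with no change in area, its contribution would double, which is the interpretive ambiguity Maron describes.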
Vascular ultrasound avoids this problem and can also identify early noncalcified plaques and monitor their progression before they would appear on CT. Furthermore, the imaging does not add to lifetime radiation dose, as CT does, Fuster said.
3D ultrasound technology will soon be available in an inexpensive, automated, and easy-to-use format, he explains. Fuster envisions a scenario in which a nurse in a low-income country, using a cell phone app, will be able to assess atherosclerosis in a patient’s femoral artery. “In less than 1 hour, we can predict disease much more rigorously than with risk factors alone,” he said. “I think this is very exciting.”
Progression Increases Risk
Finding any atherosclerosis means an increased risk for death, but a greater burden or amount of atherosclerosis increases that risk, said Fuster. Progression of atherosclerosis increases risk even further.
The study looked at changes in atherosclerosis burden on vascular ultrasound in a subset of 732 patients a median of 8.9 years after their first test. Those with progression had a higher risk for death than those with regression or no atherosclerosis. “Progression is much more significant in predicting mortality than atherosclerosis findings alone,” Fuster said.
Maron said this finding points to “two great values from noninvasive imaging of atherosclerosis.” Not only does imaging detect atherosclerosis, but it can also characterize the burden and any calcification. Further, it allows doctors to monitor the response to interventions such as lifestyle changes and medical therapy. “Serial imaging of plaque burden will really enhance the management of atherosclerosis,” said Maron. “If we discover that someone is progressing rapidly, we can intensify therapy.”
He says imaging results also provide needed motivation for both clinicians and patients to take action that would prevent the deaths that result from atherosclerosis.
A version of this article appeared on Medscape.com.
Is Wildfire Smoke More Toxic Than General Air Pollution?
Wildfire-related air pollution in Europe kills more people, exposure for exposure, than non-wildfire air pollution. As climate change exacerbates the frequency and intensity of wildfires, researchers are studying the health implications of mitigation methods such as prescribed fires.
Presenting at the annual congress of the European Respiratory Society (ERS), Cathryn Tonne, PhD, an environmental epidemiologist at the Instituto de Salud Global de Barcelona, Spain, said wildfire-related PM2.5 is more toxic than general PM2.5, leading to significantly higher mortality rates.
Prescribed, controlled fires have been employed worldwide to reduce the chance of uncontrolled, catastrophic fires. However, researchers wonder whether these techniques reduce overall fire-related PM2.5 or add to it. “Prescribed fire increases ecosystem resilience and can reduce the risk of catastrophic wildfire,” said Jason Sacks, MPH, an epidemiologist in the Center for Public Health and Environmental Assessment in the Office of Research and Development at the Environmental Protection Agency (EPA), at the congress. “But it also leads to poorer air quality and health impacts, and we still don’t know what this means at a regional scale.”
Wildfire Pollution Kills More Than Other Air Pollution
Researchers at the Instituto de Salud Global de Barcelona used a large dataset of daily mortality data from 32 European countries collected through the EARLY-ADAPT project. They utilized the SILAM model to derive daily average concentrations of wildfire-related PM2.5, non-fire PM2.5, and total PM2.5 levels. They also employed GEOSTAT population grids at a 1-km resolution to calculate the attributable number of deaths across different regions, specifically focusing on data from 2006, 2011, and 2018.
The data analysis indicated that the relative risk per unit of PM2.5 is substantially larger for wildfire-related PM2.5, compared with non-fire PM2.5. “We essentially assume that wildfire smoke PM2.5 has the same toxicity as total PM2.5, but it’s increasingly clear that’s likely not the case,” Dr. Tonne said, presenting the study.
When employing exposure-response functions (ERFs) specific to wildfire smoke, researchers found that the attributable deaths from all causes of wildfire PM2.5 were approximately 10 times larger than those calculated using total PM2.5 exposure estimates. Dr. Tonne explained that this stark difference highlights the critical need for tailored ERFs that accurately reflect the unique health risks posed by wildfire smoke.
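The attributable-deaths arithmetic behind such estimates can be sketched with a log-linear exposure-response function: the relative risk at a given concentration implies an attributable fraction, which is applied to observed deaths. The slope values below are invented for illustration and are not the study's coefficients.

```python
import math

# Hedged sketch of an attributable-mortality calculation. Assumes a
# log-linear ERF: RR = exp(beta * concentration). The attributable
# fraction is AF = (RR - 1) / RR, applied to observed deaths.
# Beta values here are illustrative, not from the EARLY-ADAPT analysis.

def attributable_deaths(deaths, beta, pm25):
    """Deaths attributable to PM2.5 at the given concentration (ug/m3)."""
    rr = math.exp(beta * pm25)
    af = (rr - 1) / rr
    return deaths * af

deaths, pm25 = 1000, 5.0  # observed deaths; wildfire PM2.5 in ug/m3

# Using a total-PM2.5 slope vs a steeper wildfire-specific slope changes
# the estimate several-fold, which is the study's central point.
total_based = attributable_deaths(deaths, 0.001, pm25)
fire_based = attributable_deaths(deaths, 0.010, pm25)
print(round(total_based, 1), round(fire_based, 1))
```

With a tenfold steeper slope, the attributable count is roughly ten times larger at low concentrations, mirroring the roughly tenfold gap the researchers report between wildfire-specific and total-PM2.5 estimates.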
“Respiratory mortality usually has the strongest relative risks, and we’re seeing that in this study as well,” Dr. Tonne said. “Wildfire smoke seems to operate through quite immediate mechanisms, likely through inflammation and oxidative stress.”
One significant challenge of the study was the lack of uniform spatial resolution across all countries involved in the analysis. This inconsistency may affect how accurately mortality estimates can be attributed to specific PM2.5 sources. Additionally, the study had limited statistical power for generating age- and sex-specific mortality estimates, which could obscure important demographic differences in vulnerability to wildfire smoke exposure. The analysis was also constrained to data available only up to 2020, thereby excluding critical wildfire events from subsequent years, such as those in 2022 and 2023, which may have further elucidated the health impacts of wildfire smoke in Europe.
Prescribed Fires
Prescribed fires or controlled burns are intentional fires set by land managers under carefully managed conditions.
Historically, many forested areas have been subjected to fire suppression practices, which allow combustible materials like dry leaves, twigs, and shrubs to accumulate over time. This buildup leads to a higher likelihood of severe, uncontrollable wildfires. Prescribed fires can reduce these fuel loads and improve the health and resilience of ecosystems.
Because they burn at lower temperatures, they release fewer pollutants and emissions than the large-scale, unmanageable wildfires they help prevent. But they still introduce pollutants into the air that can negatively affect the health of nearby communities.
People with preexisting respiratory conditions, such as asthma or chronic obstructive pulmonary disease (COPD), are particularly vulnerable to smoke, which can trigger health issues like breathing difficulties, coughing, and eye irritation. The cumulative impact of increased burns raises concerns about long-term air quality, especially in densely populated areas. “We need to understand if we’re actually tipping the scale to having less wildfire smoke or just increasing the total amount of smoke,” Mr. Sacks said.
Mitigation strategies include accurately picking the right timing and weather conditions to determine when and where to conduct controlled burns and effective and timely communication to inform local communities about upcoming burns, the potential for smoke exposure, and how to protect themselves.
There is a growing need to improve public messaging around prescribed fires, Mr. Sacks said, because the message communicated is often oversimplified to “there will be smoke, but don’t worry.” “But that’s not the message we want to convey, especially for people with asthma or COPD,” he said.
Instead, he said public health agencies should provide clearer, science-based guidance on the risks for smoke exposure and practical steps people can take to reduce their risk.
What Can Doctors Do?
Chris Carlsten, MD, director of the Centre for Lung Health and professor and head of the Respiratory Medicine Division at the University of British Columbia, Vancouver, Canada, told this news organization that determining whether an exacerbation of a respiratory condition is caused by fire exposure or other factors, such as viral infections, is complex because both can trigger similar responses and may complement each other. “It’s very difficult for any individual to know whether, when they’re having an exacerbation of asthma or COPD, that’s due to the fire,” he said. Fire smoke also increases infection risks, further complicating diagnosis.
Dr. Carlsten suggested that physicians could recommend preventative use of inhalers for at-risk patients when wildfires occur rather than waiting for symptoms to worsen. “That is a really interesting idea that could be practical.” Still, he advises caution, stressing that patients should consult their providers because not all may react well to increased inhaler use.
He also highlighted a significant shift in the healthcare landscape, noting that traditionally, the focus has been on the cardiovascular impacts of pollution, particularly traffic-related pollution. However, as wildfire smoke becomes a growing issue, the focus is shifting back to respiratory problems, with profound implications for healthcare resources, budgets, and drug approvals based on the burden of respiratory disease. “Fire smoke is becoming more of a problem. This swing back to respiratory has huge implications for healthcare systems and respiratory disease burden.”
Mr. Sacks and Dr. Carlsten reported no relevant financial relationships. The study presented by Dr. Tonne received funding from the European Union’s Horizon Europe research and innovation programme under Grant Agreement No. 101057131.
A version of this article first appeared on Medscape.com.
Wildfire-related air pollution in Europe kills more than non-wildfire air pollution. As climate change exacerbates the frequency and violence of wildfires, researchers are studying the health implications of mitigation methods such as prescribed fires.
Presenting at the annual congress of the European Respiratory Society (ERS), Cathryn Tonne, PhD, an environmental epidemiologist at the Instituto de Salud Global de Barcelona, Spain, said wildfire-related PM2.5 is more toxic than general PM2.5, leading to significantly higher mortality rates.
Prescribed, controlled fires have been employed worldwide to reduce the chance of uncontrolled, catastrophic fires. However, researchers wonder whether the techniques reduce the overall fire-related PM2.5 or add up to it. “Prescribed fire increases ecosystem resilience and can reduce the risk of catastrophic wildfire,” said Jason Sacks, MPH, an epidemiologist in the Center for Public Health and Environmental Assessment in the Office of Research and Development at the Environmental Protection Agency (EPA), at the congress. “But it also leads to poorer air quality and health impacts, and we still don’t know what this means at a regional scale.”
Wildfire Pollution Kills More Than Other Air Pollution
Researchers at the Instituto de Salud Global de Barcelona used a large dataset of daily mortality data from 32 European countries collected through the EARLY-ADAPT project. They utilized the SILAM model to derive daily average concentrations of wildfire-related PM2.5, non-fire PM2.5, and total PM2.5 levels. They also employed GEOSTAT population grids at a 1-km resolution to calculate the attributable number of deaths across different regions, specifically focusing on data from 2006, 2011, and 2018.
The data analysis indicated that the relative risk per unit of PM2.5 is substantially larger for wildfire-related PM2.5, compared with non-fire PM2.5. “We essentially assume that wildfire smoke PM2.5 has the same toxicity as total PM2.5, but it’s increasingly clear that’s likely not the case,” Dr. Tonne said, presenting the study.
When employing exposure-response functions (ERFs) specific to wildfire smoke, researchers found that the attributable deaths from all causes of wildfire PM2.5 were approximately 10 times larger than those calculated using total PM2.5 exposure estimates. Dr. Tonne explained that this stark difference highlights the critical need for tailored ERFs that accurately reflect the unique health risks posed by wildfire smoke.
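The attributable-death comparison described above can be illustrated with the standard log-linear health-impact formula used in air pollution epidemiology (relative risk RR = exp(β·PM2.5), attributable fraction = 1 − 1/RR). The coefficients below are made-up placeholders chosen only to show how a steeper wildfire-specific exposure-response function inflates the estimate roughly tenfold; they are not the study’s actual values.

```python
import math

def attributable_deaths(baseline_deaths, pm25, beta):
    """Deaths attributable to PM2.5 under a log-linear exposure-response function.

    RR = exp(beta * pm25); attributable fraction AF = 1 - 1/RR.
    """
    rr = math.exp(beta * pm25)
    af = 1.0 - 1.0 / rr
    return baseline_deaths * af

# Illustrative (hypothetical) inputs: 1000 baseline deaths, 5 µg/m³ wildfire PM2.5
total_beta = 0.001      # placeholder coefficient for total PM2.5
wildfire_beta = 0.010   # placeholder ~10x steeper ERF for wildfire smoke

d_total = attributable_deaths(1000, 5, total_beta)   # ~5 deaths
d_fire = attributable_deaths(1000, 5, wildfire_beta)  # ~49 deaths
```

With a steeper wildfire-specific coefficient, the same exposure and baseline mortality yield an attributable-death estimate roughly an order of magnitude larger, which is the pattern the researchers report.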
“Respiratory mortality usually has the strongest relative risks, and we’re seeing that in this study as well,” Dr. Tonne said. “Wildfire smoke seems to operate through quite immediate mechanisms, likely through inflammation and oxidative stress.”
One significant challenge of the study was the lack of uniform spatial resolution across all countries involved in the analysis. This inconsistency may affect how accurately mortality estimates can be attributed to specific PM2.5 sources. Additionally, the study had limited statistical power for generating age- and sex-specific mortality estimates, which could obscure important demographic differences in vulnerability to wildfire smoke exposure. The analysis was also constrained to data available only up to 2020, thereby excluding critical wildfire events from subsequent years, such as those in 2022 and 2023, which may have further elucidated the health impacts of wildfire smoke in Europe.
Prescribing Fires
Prescribed fires or controlled burns are intentional fires set by land managers under carefully managed conditions.
Historically, many forested areas have been subjected to fire suppression practices, which allow combustible materials like dry leaves, twigs, and shrubs to accumulate over time. This buildup leads to a higher likelihood of severe, uncontrollable wildfires. Prescribed fires can reduce these fuel loads and improve the health and resilience of ecosystems.
They release fewer pollutants and emissions than the large-scale, unmanageable wildfires they help prevent because they happen at lower temperatures. But they still introduce pollutants in the air that can negatively affect nearby communities’ health.
People with preexisting respiratory conditions, such as asthma or chronic obstructive pulmonary disease (COPD), are particularly vulnerable to smoke, which can trigger health issues like breathing difficulties, coughing, and eye irritation. The cumulative impact of increased burns raises concerns about long-term air quality, especially in densely populated areas. “We need to understand if we’re actually tipping the scale to having less wildfire smoke or just increasing the total amount of smoke.”
Mitigation strategies include accurately picking the right timing and weather conditions to determine when and where to conduct controlled burns and effective and timely communication to inform local communities about upcoming burns, the potential for smoke exposure, and how to protect themselves.
There is a growing need to improve public messaging around prescribed fires, Mr. Sacks said, because the message communicated is often oversimplified, along the lines of “there will be smoke, but don’t worry.” “But that’s not the message we want to convey, especially for people with asthma or COPD,” he said.
Instead, he said public health agencies should provide clearer, science-based guidance on the risks for smoke exposure and practical steps people can take to reduce their risk.
What Can Doctors Do?
Chris Carlsten, MD, director of the Centre for Lung Health and professor and head of the Respiratory Medicine Division at the University of British Columbia, Vancouver, Canada, told this news organization that it is difficult to determine whether an exacerbation of a respiratory condition is caused by fire exposure or by other factors, such as viral infections, because both can trigger similar responses and may compound each other. “It’s very difficult for any individual to know whether, when they’re having an exacerbation of asthma or COPD, that’s due to the fire,” he said. Fire smoke also increases infection risk, further complicating diagnosis.
Dr. Carlsten suggested that physicians could recommend preventative use of inhalers for at-risk patients when wildfires occur rather than waiting for symptoms to worsen. “That is a really interesting idea that could be practical.” Still, he advises caution, stressing that patients should consult their providers because not all may react well to increased inhaler use.
He also highlighted a significant shift in the healthcare landscape, noting that traditionally, the focus has been on the cardiovascular impacts of pollution, particularly traffic-related pollution. However, as wildfire smoke becomes a growing issue, the focus is shifting back to respiratory problems, with profound implications for healthcare resources, budgets, and drug approvals based on the burden of respiratory disease. “Fire smoke is becoming more of a problem. This swing back to respiratory has huge implications for healthcare systems and respiratory disease burden.”
Mr. Sacks and Dr. Carlsten reported no relevant financial relationships. The study presented by Dr. Tonne received funding from the European Union’s Horizon Europe research and innovation programme under Grant Agreement No. 101057131.
A version of this article first appeared on Medscape.com.
FROM ERS 2024
Hot Flashes: Do They Predict CVD and Dementia?
This transcript has been edited for clarity.
I’d like to talk about a recent report in the journal Menopause linking menopausal symptoms to increased risk for cognitive impairment. I’d also like to discuss some of the recent studies that have addressed whether hot flashes are linked to increased risk for heart disease and other forms of cardiovascular disease (CVD).
Given that 75%-80% of perimenopausal and postmenopausal women have hot flashes and vasomotor symptoms, the relationship between hot flashes and these outcomes is undoubtedly more complex than a simple one-size-fits-all, yes-or-no question.
Increasing evidence shows that several additional factors are important, including the age at which the symptoms are occurring, the time since menopause, the severity of the symptoms, whether they co-occur with night sweats and sleep disruption, and the cardiovascular status of the woman.
Several studies suggest that women who have more severe hot flashes and vasomotor symptoms are more likely to have prevalent cardiovascular risk factors — hypertension, dyslipidemia, high body mass index, endothelial dysfunction — as measured by flow-mediated vasodilation and other measures.
It is quite plausible that hot flashes could be a marker for increased risk for cognitive impairment. But the question remains, are hot flashes associated with cognitive impairment independent of these other risk factors? It appears that the associations between hot flashes, vasomotor symptoms, and CVD, and other adverse outcomes, may be more likely when hot flashes persist after age 60 or are newly occurring in later menopause. In the Women’s Health Initiative observational study, the presence of hot flashes and vasomotor symptoms in early menopause was not linked to any increased risk for heart attack, stroke, total CVD, or all-cause mortality.
However, the onset of these symptoms, especially new onset of these symptoms after age 60 or in later menopause, was in fact linked to increased risk for CVD and all-cause mortality. With respect to cognitive impairment, if a woman is having hot flashes and night sweats with regular sleep disruption, performance on cognitive testing would not be as favorable as it would be in the absence of these symptoms.
This brings us to the new study in Menopause that included approximately 1300 Latina women in nine Latin American countries, with an average age of 55 years. Looking at the association between severe menopausal symptoms and cognitive impairment, researchers found that women with severe symptoms were more likely to have cognitive impairment.
Conversely, they found that the women who had a favorable CVD risk factor status (physically active, lower BMI, healthier) and were ever users of estrogen were less likely to have cognitive impairment.
Clearly, for estrogen therapy, we need randomized clinical trials that examine cognitive and CVD outcomes in women with and without vasomotor symptoms. Such analyses are ongoing, and new randomized trials focused specifically on women in early menopause would be very beneficial.
At the present time, it’s important that we not alarm women about the associations seen in some of these studies, because these associations are often not independent of other risk factors commonly linked to hot flashes and night sweats. There are many other complexities in the relationship between hot flashes and cognitive impairment.
We need to appreciate that women who have moderate to severe hot flashes (especially when associated with disrupted sleep) do have impaired quality of life. It’s important to treat these symptoms, especially in early menopause, and very effective hormonal and nonhormonal treatments are available.
For women with symptoms that persist into later menopause or who have new onset of symptoms in later menopause, it’s important to prioritize cardiovascular health. For example, be more vigilant about behavioral lifestyle counseling to lower risk, and be even more aggressive in treating dyslipidemia and diabetes.
JoAnn E. Manson, Professor of Medicine and the Michael and Lee Bell Professor of Women’s Health, Harvard Medical School; Chief, Division of Preventive Medicine, Brigham and Women’s Hospital, Boston, Massachusetts; and Past President, North American Menopause Society, 2011-2012, has disclosed the following relevant financial relationships: Received study pill donation and infrastructure support from Mars Symbioscience (for the COSMOS trial).
A version of this article first appeared on Medscape.com.
Beyond Weight Loss, Limited Bariatric Surgery Benefits in Older Adults
TOPLINE:
For older adults with obesity, bariatric surgery does not appear to significantly reduce the risk for obesity-related cancer and cardiovascular disease (CVD), as it does in younger adults.
METHODOLOGY:
- Bariatric surgery has been shown to decrease the risk for obesity-related cancer and CVD but is typically reserved for patients aged < 60 years. Whether the same holds for patients who undergo surgery at older ages is unclear.
- Researchers analyzed nationwide data from three countries (Denmark, Finland, and Sweden) to compare patients with no history of cancer or CVD and age ≥ 60 years who underwent bariatric surgery against matched controls who received nonoperative treatment for obesity.
- The main outcome was obesity-related cancer, defined as a composite outcome of breast, endometrial, esophageal, colorectal, and kidney cancer. The secondary outcome was CVD, defined as a composite of myocardial infarction, ischemic stroke, and cerebral hemorrhage.
- Analyses were adjusted for diabetes, hypertension, peripheral vascular disease, chronic obstructive pulmonary disease, kidney disease, and frailty.
TAKEAWAY:
- Of the 15,300 patients (66.4% women) included, 2550 underwent bariatric surgery (gastric bypass in 1930 of them) and 12,750 matched controls received nonoperative treatment for obesity.
- During a median 5.8 years of follow-up, 658 (4.3%) people developed obesity-related cancer and 1436 (9.4%) developed CVD.
- Bariatric surgery in adults aged ≥ 60 years was not associated with a reduced risk for obesity-related cancer (hazard ratio [HR], 0.81) or CVD (HR, 0.86) compared with matched nonoperative controls.
- Bariatric surgery appeared to be associated with a decreased risk for obesity-related cancer in women (HR, 0.76).
- There was a decreased risk for both obesity-related cancer (HR, 0.74) and CVD (HR, 0.82) in patients who underwent gastric bypass.
IN PRACTICE:
“The findings from this study suggest a limited role of bariatric surgery in older patients for the prevention of obesity-related cancer or cardiovascular disease,” the authors wrote, noting that this “may be explained by the poorer weight loss and resolution of comorbidities observed in patients who underwent surgery at an older age.”
SOURCE:
The study, with first author Peter Gerber, MD, PhD, Department of Surgery, Capio St Göran’s Hospital, Stockholm, Sweden, was published online in JAMA Network Open.
LIMITATIONS:
Data on smoking status and body mass index were not available. The observational design limited the ability to draw causal inferences. The null association between bariatric surgery and outcomes may be due to limited power.
DISCLOSURES:
The study was funded by the Swedish Society of Medicine. The authors reported no conflicts of interest.
A version of this article first appeared on Medscape.com.
TOPLINE:
For older adults with obesity, bariatric surgery does not appear to significantly reduce the risk for obesity-related cancer and cardiovascular disease (CVD), as it does in younger adults.
METHODOLOGY:
- Bariatric surgery has been shown to decrease the risk for obesity-related cancer and CVD but is typically reserved for patients aged < 60 years. Whether the same holds for patients who undergo surgery at older ages is unclear.
- Researchers analyzed nationwide data from three countries (Denmark, Finland, and Sweden) to compare patients with no history of cancer or CVD and age ≥ 60 years who underwent bariatric surgery against matched controls who received nonoperative treatment for obesity.
- The main outcome was obesity-related cancer, defined as a composite outcome of breast, endometrial, esophageal, colorectal, and kidney cancer. The secondary outcome was CVD, defined as a composite of myocardial infarction, ischemic stroke, and cerebral hemorrhage.
- Analyses were adjusted for diabetes, hypertension, peripheral vascular disease, chronic obstructive pulmonary disease, kidney disease, and frailty.
TAKEAWAY:
- Of the 15,300 patients (66.4% women) included, 2550 underwent bariatric surgery (including gastric bypass in 1930 patients) and 12,750 matched controls received nonoperative treatment for obesity.
- During a median 5.8 years of follow-up, 658 (4.3%) people developed obesity-related cancer and 1436 (9.4%) developed CVD.
- Bariatric surgery in adults aged ≥ 60 years was not associated with a reduced risk for obesity-related cancer (hazard ratio [HR], 0.81) or CVD (HR, 0.86) compared with matched nonoperative controls.
- Bariatric surgery appeared to be associated with a decreased risk for obesity-related cancer in women (HR, 0.76).
- There was a decreased risk for both obesity-related cancer (HR, 0.74) and CVD (HR, 0.82) in patients who underwent gastric bypass.
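As a quick sanity check, the event percentages reported above follow directly from the cohort counts (a minimal arithmetic sketch in Python; the 1:5 ratio of controls to surgical patients is inferred from the group sizes, not stated in the summary):

```python
# Consistency check on the cohort counts reported in the TAKEAWAY bullets.
surgery, controls = 2550, 12_750
total = surgery + controls                     # 15,300 patients overall
cancer_events, cvd_events = 658, 1436

assert total == 15_300
print(controls / surgery)                      # 5.0 -> consistent with 1:5 matching
print(round(100 * cancer_events / total, 1))   # 4.3 (% with obesity-related cancer)
print(round(100 * cvd_events / total, 1))      # 9.4 (% with CVD)
```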
IN PRACTICE:
“The findings from this study suggest a limited role of bariatric surgery in older patients for the prevention of obesity-related cancer or cardiovascular disease,” the authors wrote, noting that this “may be explained by the poorer weight loss and resolution of comorbidities observed in patients who underwent surgery at an older age.”
SOURCE:
The study, with first author Peter Gerber, MD, PhD, Department of Surgery, Capio St Göran’s Hospital, Stockholm, Sweden, was published online in JAMA Network Open.
LIMITATIONS:
Data on smoking status and body mass index were not available. The observational design limited the ability to draw causal inferences. The null association between bariatric surgery and outcomes may be due to limited power.
DISCLOSURES:
The study was funded by the Swedish Society of Medicine. The authors reported no conflicts of interest.
A version of this article first appeared on Medscape.com.
Analysis of Colchicine’s Drug-Drug Interactions Finds Little Risk
TOPLINE:
The presence of an operational classification of drug interactions (ORCA) class 3 or 4 drug-drug interactions (DDIs) did not increase the risk for colchicine-related gastrointestinal adverse events or modify the effect of colchicine on death or hospitalization caused by COVID-19 infection in ambulatory patients.
METHODOLOGY:
- This secondary analysis of the COLCORONA trial aimed to evaluate if a potential DDI of colchicine was associated with changes in its pharmacokinetics or modified its clinical safety and efficacy in patients with COVID-19.
- Overall, 4432 ambulatory patients with COVID-19 (median age, 54 years; 54% women) were randomly assigned to receive colchicine 0.5 mg twice daily for 3 days and then 0.5 mg once daily for 27 days (n = 2205) or a placebo (n = 2227).
- All the participants had at least one high-risk criterion such as age ≥ 70 years, diabetes, heart failure, systolic blood pressure ≥ 150 mm Hg, respiratory disease, coronary disease, body temperature ≥ 38.4 °C within the last 48 hours, dyspnea, bicytopenia, pancytopenia, or high neutrophil count with low lymphocyte count.
- The medications that could interact with colchicine were determined and categorized under ORCA classes 1 (contraindicated), 2 (provisionally contraindicated), 3 (conditional use), or 4 (minimal risk).
- The primary outcome was any gastrointestinal adverse event assessed over a 30-day follow-up period.
TAKEAWAY:
- Among all the participants, 1% received medications with an ORCA class 2 interaction, 14% with a class 3 interaction, and 13% with a class 4 interaction; rosuvastatin (12%) and atorvastatin (10%) were the most common interacting medications.
- The odds of any gastrointestinal adverse event were 1.80 times and 1.68 times higher in the colchicine arm than in the placebo arm among those without and with a DDI, respectively, with the effect of colchicine being consistent regardless of the presence of drug interactions (P = .69 for interaction).
- Similarly, DDIs did not influence the effect of colchicine on combined risk for COVID-19 hospitalization or mortality (P = .80 for interaction).
IN PRACTICE:
“Once potential DDIs have been identified through screening, they must be tested,” Hemalkumar B. Mehta, PhD, and G. Caleb Alexander, MD, of the Johns Hopkins Bloomberg School of Public Health, Baltimore, wrote in an invited commentary published online in JAMA Network Open. “Theoretical DDIs may not translate into real-world harms,” they added.
SOURCE:
The study was led by Lama S. Alfehaid, PharmD, of Brigham and Women’s Hospital, Boston. It was published online in JAMA Network Open.
LIMITATIONS:
This study focused on the medications used by participants at baseline, which may not have captured all potential DDIs. The findings did not provide information on rare adverse events, such as rhabdomyolysis, which usually occur months after initiating drug therapy. Furthermore, all the study participants had confirmed SARS-CoV-2 infection, which may have increased their susceptibility to adverse reactions associated with the use of colchicine.
DISCLOSURES:
Some authors were supported by grants from the National Institutes of Health/National Heart, Lung, and Blood Institute, American Heart Association, and other sources. The authors also declared serving on advisory boards or on the board of directors; receiving personal fees, grants, research support, or speaking fees; or having other ties with many pharmaceutical companies.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
A New Focus for Cushing Syndrome Screening in Obesity
TOPLINE:
Routine screening for Cushing syndrome in all patients with obesity appears unnecessary; selective screening of those with a metabolically unhealthy phenotype may be more practical.
METHODOLOGY:
- Obesity is a key clinical feature of Cushing syndrome, and the two conditions share many overlapping characteristics. Whether patients with obesity should be screened for the rare endocrine disease remains debated, but the phenotypes known as metabolically healthy and metabolically unhealthy obesity may help define an at-risk population.
- To assess the prevalence of Cushing syndrome by metabolic health status, researchers conducted a retrospective study of 1008 patients with obesity (mean age, 40 years; 83% women; body mass index ≥ 30) seen at an endocrinology outpatient clinic in Turkey between December 2020 and June 2022.
- They screened patients for Cushing syndrome with an overnight dexamethasone suppression test (1 mg DST), an oral dexamethasone dose given at 11 PM followed by a fasting blood sample for cortisol measurement the next morning. A serum cortisol level < 1.8 mcg/dL indicated normal suppression.
- Patients were categorized into those with metabolically healthy obesity (n = 229) or metabolically unhealthy obesity (n = 779) based on the absence or presence of comorbidities such as diabetes, prediabetes, coronary artery disease, hypertension, or dyslipidemia.
TAKEAWAY:
- The overall prevalence of Cushing syndrome in the study cohort was 0.2%, with only two patients definitively diagnosed after more tests and the remaining 10 classified as having subclinical hypercortisolism.
- Cortisol levels following the 1 mg DST were higher in the metabolically unhealthy obesity group than in the metabolically healthy obesity group (P = .001).
- Among the 12 patients with unsuppressed cortisol levels, 11 belonged to the metabolically unhealthy obesity group, indicating a strong association between metabolic health status and post-DST cortisol levels.
- The test demonstrated a specificity of 99% and sensitivity of 100% for screening Cushing syndrome in patients with obesity.
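The reported 100% sensitivity and 99% specificity are consistent with the counts given above, under the assumption (made here for illustration, not stated in the summary) that the two definitively diagnosed patients are the true positives and the remaining 10 unsuppressed results are false positives:

```python
# Reconstructing the screening metrics from the counts reported above.
# Assumption (illustrative): 2 confirmed cases = true positives; the other
# 10 unsuppressed results = false positives; no false negatives reported.
n_patients = 1008
unsuppressed = 12                         # cortisol >= 1.8 mcg/dL after 1 mg DST
true_pos = 2                              # definitively diagnosed Cushing syndrome
false_pos = unsuppressed - true_pos
true_neg = n_patients - unsuppressed      # suppressed, no Cushing syndrome

sensitivity = true_pos / true_pos                  # 2/2, no false negatives
specificity = true_neg / (true_neg + false_pos)    # 996/1006
print(round(100 * sensitivity), round(100 * specificity))  # 100 99
```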
IN PRACTICE:
“Screening all patients with obesity for CS [Cushing syndrome] without considering any associated metabolic conditions appears impractical and unnecessary in everyday clinical practice,” the authors wrote. “However, it may be more reasonable and applicable to selectively screen the patients with obesity having comorbidities such as DM [diabetes mellitus], hypertension, dyslipidemia, or coronary artery disease, which lead to a metabolically unhealthy phenotype, rather than all individuals with obesity,” they added.
SOURCE:
The study, led by Sema Hepsen, Ankara Etlik City Hospital, Department of Endocrinology and Metabolism, Ankara, Turkey, was published online in the International Journal of Obesity.
LIMITATIONS:
The single-center design of the study and inclusion of patients from a single racial group may limit the generalizability of the findings. The retrospective design prevented the retrieval of all relevant data on clinical features and fat distribution.
DISCLOSURES:
The study was supported by open access funding provided by the Scientific and Technological Research Council of Türkiye. The authors declared no conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Can Endurance Exercise Be Harmful?
In 490 BC, Pheidippides (or possibly Philippides) ran from Athens to Sparta to ask for military aid against the invading Persian army, then back to Athens, then off to the battlefield of Marathon, then back to Athens to announce the army’s victory, after which he promptly died. The story, if it is to be believed (there is some doubt among historians), raises an interesting question: Are some forms of exercise dangerous?
Running a marathon is a lot of work. The “worst parade ever,” as one spectator described it, is not without its risks. As a runner myself, I know that it doesn’t take much to generate a bloody sock at the end of a long run.
But when most people think about the risks of exercise, they mean the cardiovascular risks, such as sudden deaths during marathons, probably because of the aforementioned ancient Greek’s demise. The reality is more reassuring. An analysis of 10 years’ worth of data from US marathons and half-marathons found that out of 10.9 million runners, there were 59 cardiac arrests, an incidence rate of 0.54 per 100,000 participants. Others have found incidence rates in the same range. An analysis of the annual Marine Corps and Twin Cities marathons found a sudden death rate of 0.002%.
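The quoted incidence is simple arithmetic on the raw counts from that analysis, which a one-liner confirms:

```python
# Cardiac arrest incidence from the 10-year US marathon/half-marathon analysis.
arrests = 59
runners = 10_900_000
rate_per_100k = arrests / runners * 100_000
print(round(rate_per_100k, 2))  # 0.54 per 100,000 participants
```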
Marathon runners do sometimes require medical attention. In the Twin Cities cohort, 25 out of every 1000 finishers required medical attention, but 90% of their problems were mild. The majority included issues such as dehydration, vasovagal syncope, hyperthermia, and exhaustion. Musculoskeletal problems and skin abrasions made up the rest. Objectively, long-distance running is fairly safe.
Running and Coronary Calcium
Then a study comes around suggesting that marathon runners have more coronary artery calcium (CAC). In 2008, German researchers compared 108 healthy male marathon runners over 50 years of age with Framingham risk–matched controls. The marathoners had a higher median CAC score (36 vs 12; P =.02), but scores across the board were quite low and not all studies were in agreement. The MESA study and another from Korea found an inverse relationship between physical activity and coronary calcium, but they compared sedentary people with vigorous exercisers, not specifically marathoners.
Two later studies, published in 2017, generally corroborated that endurance exercise was associated with higher calcium — with some caveats. A group from the Netherlands looked at lifelong exercise volume and compared men who accumulated > 2000 MET-min/week with those who exercised < 1000 MET-min/week. Again, the analysis was limited to men, and CAC scores, though statistically different, were still very low (9.4 vs 0; P =.02). Importantly, in men with coronary plaques, the more active group had less mixed plaque and more calcified plaque.
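To make those exercise-volume thresholds concrete, here is a rough conversion into weekly minutes. The MET values are illustrative compendium-style figures chosen for this sketch, not numbers from the Dutch study:

```python
# What MET-min/week thresholds mean in weekly minutes of a given activity.
# MET values below are illustrative, not taken from the study.
MET_RUNNING = 10.0      # running at roughly 6 mph
MET_BRISK_WALK = 3.5    # brisk walking

def weekly_minutes(met_min_per_week: float, met_value: float) -> float:
    """Minutes per week of one activity needed to reach a MET-min/week volume."""
    return met_min_per_week / met_value

print(weekly_minutes(2000, MET_RUNNING))            # 200.0 min/week of running
print(round(weekly_minutes(1000, MET_BRISK_WALK)))  # ~286 min/week of walking
```

So the "> 2000 MET-min/week" group corresponds to something like 3+ hours of running weekly, while the "< 1000 MET-min/week" group falls below roughly 5 hours of brisk walking.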
A UK study of middle-aged masters-level athletes at low cardiovascular risk had similar findings. Most of the study population (70%) were men, and 77% were runners (not all were marathoners). Overall, the male athletes had not only more plaque but more calcified plaque than their sedentary peers, even though most male athletes (60%) had a CAC score of zero.
The findings from these two studies were interpreted as reassuring. They confirmed that athletes are a generally low-risk group with low calcium scores, and although they might have more plaque and coronary calcium on average, it tends to be the more benign calcified type.
Masters at Heart
But the 2023 Master@Heart study challenged that assertion. It analyzed lifelong endurance athletes, late-onset endurance athletes (those who came to the game later in life), and healthy nonathletic controls. The study also found more coronary stenoses in lifelong athletes, but the breakdown of calcified vs noncalcified vs mixed plaques was the same across groups, thus contradicting the idea that exercise exerted its protective effect by calcifying and therefore stabilizing said plaques. The silver lining was fewer vulnerable plaques in the lifelong athletes (defined via high-risk features) but these were generally rare across the entire population.
Whether Master@Heart is groundbreaking or an outlier depends on your point of view. In 2024, a study from Portugal suggested that the relationship between exercise and coronary calcification is more complicated. Among 105 male veteran athletes, a high volume of exercise was associated with more coronary atherosclerosis in those at higher cardiovascular risk, but it tended to be protective in those deemed lower risk. In fact, the high-volume exercise group had fewer individuals with a CAC score > 100 (16% vs 4%; P =.029), though again, the vast majority had low CAC scores.
A limitation of all these studies is that they had cross-sectional designs, measuring coronary calcium at a single point in time and relying on questionnaires and patient recall to determine lifelong exposure to exercise. Recall bias could have been a problem, and exercise patterns vary over time. It’s not unreasonable to wonder whether people at higher cardiovascular risk should start exercising to mitigate that risk. Granted, they might not start running marathons, but many of these studies looked only at physical activity levels. A study that measured the increase (or stability) of coronary calcium over time would be more helpful.
Prior research (in men again) showed that high levels of physical activity were associated with more coronary calcium, but not with all-cause or cardiovascular mortality. But it too looked only at a single time point. The most recent study added to the body of evidence included data on nearly 9000 men and women and found that higher exercise volume did not correlate with CAC progression over the mean follow-up of 7.8 years. The study measured physical activity of any variety and included physically taxing sports like golf (without a cart). So it was not an assessment of the dangers of endurance exercise.
Outstanding Questions and Bananas
Ultimately, many questions remain. Is the lack of risk seen in women a spurious finding because they are underrepresented in most studies, or might exercise affect men and women differently? Is it valid to combine studies on endurance exercise with those looking at physical activity more generally? How accurate are self-reports of exercise? Could endurance exercisers be using performance-enhancing drugs that are confounding the associations? Are people who engage in more physical activity healthier or just trying to mitigate a higher baseline cardiovascular risk? Why do they give out bananas at the end of marathons given that there are better sources of potassium?
We have no randomized trials on the benefits and risks of endurance exercise. Even if you could get ethics approval, one imagines there would be few volunteers. In the end, we must make do with observational data and remember that coronary calcifications are a surrogate endpoint.
When it comes to hard endpoints, an analysis of French Tour de France participants found a lower risk for both cardiovascular and cancer deaths compared with the general male population. So perhaps the most important take-home message is one that has been said many times: Beware of surrogate endpoints. And for those contemplating running a marathon, I am forced to agree with the person who wrote the sign I saw during my first race. It does seem like a lot of work for a free banana.
Dr. Labos is a cardiologist at Hôpital Notre-Dame, Montreal, Quebec, Canada. He reported no relevant conflicts of interest.
A version of this article first appeared on Medscape.com.
In 490 BC, Pheidippides (or possibly Philippides) ran from Athens to Sparta to ask for military aid against the invading Persian army, then back to Athens, then off to the battlefield of Marathon, then back to Athens to announce the army’s victory, after which he promptly died. The story, if it is to be believed (there is some doubt among historians), raises an interesting question: Are some forms of exercise dangerous?
Running a marathon is a lot of work. The “worst parade ever,” as one spectator described it, is not without its risks. As a runner myself, I know that it doesn’t take much to generate a bloody sock at the end of a long run.
But when most people think about the risks of exercise, they mean the cardiovascular risks, such as sudden deaths during marathons, probably because of the aforementioned ancient Greek’s demise. The reality is more reassuring. An analysis of 10 years’ worth of data from US marathons and half-marathons found that out of 10.9 million runners, there were 59 cardiac arrests, an incidence rate of 0.54 per 100,000 participants. Others have found incidence rates in the same range. An analysis of the annual Marine Corps and Twin Cities marathons found a sudden death rate of 0.002%.
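Those incidence figures can be sanity-checked with simple arithmetic from the counts reported above (a quick sketch, using only the numbers in the text):

```python
# Reported counts from the 10-year US marathon/half-marathon analysis cited above
cardiac_arrests = 59
participants = 10_900_000  # 10.9 million runners

# Incidence per 100,000 participants
rate_per_100k = cardiac_arrests / participants * 100_000
print(round(rate_per_100k, 2))  # 0.54

# The Marine Corps / Twin Cities sudden death rate of 0.002%, expressed the same way
print(0.002 / 100 * 100_000)  # 2.0 per 100,000
```

Both figures land in the low single digits per 100,000 participants, which is why they are described as being in the same range.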
Marathon runners do sometimes need medical care. In the Twin Cities cohort, 25 of every 1000 finishers required medical attention, but 90% of their problems were mild. Most were issues such as dehydration, vasovagal syncope, hyperthermia, and exhaustion; musculoskeletal problems and skin abrasions made up the rest. Objectively, long-distance running is fairly safe.
Running and Coronary Calcium
Then a study came along suggesting that marathon runners have more coronary artery calcium (CAC). In 2008, German researchers compared 108 healthy male marathon runners over 50 years of age with Framingham risk–matched controls. The marathoners had a higher median CAC score (36 vs 12; P = .02), but scores across the board were quite low, and not all studies were in agreement. The MESA study and another from Korea found an inverse relationship between physical activity and coronary calcium, but they compared sedentary people with vigorous exercisers, not specifically marathoners.
Two later studies, published in 2017, generally corroborated the association between endurance exercise and higher coronary calcium, with some caveats. A group from the Netherlands looked at lifelong exercise volume and compared men who accumulated > 2000 MET-min/week with those who exercised < 1000 MET-min/week. Again, the analysis was limited to men, and the CAC scores, though statistically different, were still very low (9.4 vs 0; P = .02). Importantly, among men with coronary plaques, the more active group had less mixed plaque and more calcified plaque.
A UK study of middle-aged masters-level athletes at low cardiovascular risk had similar findings. Most of the study population (70%) were men, and 77% were runners (not all were marathoners). Overall, the male athletes had not only more plaque but more calcified plaque than their sedentary peers, even though most male athletes (60%) had a CAC score of zero.
The findings from these two studies were interpreted as reassuring. They confirmed that athletes are a generally low-risk group with low calcium scores, and although they might have more plaque and coronary calcium on average, it tends to be the more benign calcified type.
Masters at Heart
But the 2023 Master@Heart study challenged that assertion. It analyzed lifelong endurance athletes, late-onset endurance athletes (those who came to the game later in life), and healthy nonathletic controls. The study found more coronary stenoses in lifelong athletes, but the breakdown of calcified vs noncalcified vs mixed plaques was the same across groups, contradicting the idea that exercise exerts its protective effect by calcifying, and thereby stabilizing, plaques. The silver lining was fewer vulnerable plaques (defined by high-risk features) in the lifelong athletes, but these were generally rare across the entire population.
Whether Master@Heart is groundbreaking or an outlier depends on your point of view. In 2024, a study from Portugal suggested that the relationship between exercise and coronary calcification is more complicated. Among 105 male veteran athletes, a high volume of exercise was associated with more coronary atherosclerosis in those at higher cardiovascular risk, but it tended to be protective in those deemed lower risk. In fact, the high-volume exercise group had fewer individuals with a CAC score > 100 (16% vs 4%; P = .029), though again, the vast majority had low CAC scores.
A limitation of all these studies is that they had cross-sectional designs, measuring coronary calcium at a single point in time and relying on questionnaires and patient recall to determine lifelong exposure to exercise. Recall bias could have been a problem, and exercise patterns vary over time. It’s not unreasonable to wonder whether people at higher cardiovascular risk should start exercising to mitigate that risk. Granted, they might not start running marathons, but many of these studies looked only at physical activity levels. A study that measured the increase (or stability) of coronary calcium over time would be more helpful.
Prior research (again in men) showed that high levels of physical activity were associated with more coronary calcium but not with all-cause or cardiovascular mortality. But it, too, looked only at a single time point. The most recent addition to the body of evidence included data on nearly 9000 men and women and found that higher exercise volume did not correlate with CAC progression over a mean follow-up of 7.8 years. The study measured physical activity of any variety and included physically taxing sports like golf (without a cart), so it was not an assessment of the dangers of endurance exercise.
Outstanding Questions and Bananas
Ultimately, many questions remain. Is the lack of risk seen in women a spurious finding because they are underrepresented in most studies, or might exercise affect men and women differently? Is it valid to combine studies on endurance exercise with those looking at physical activity more generally? How accurate are self-reports of exercise? Could endurance exercisers be using performance-enhancing drugs that are confounding the associations? Are people who engage in more physical activity healthier or just trying to mitigate a higher baseline cardiovascular risk? Why do they give out bananas at the end of marathons given that there are better sources of potassium?
We have no randomized trials on the benefits and risks of endurance exercise. Even if you could get ethics approval, one imagines there would be few volunteers. In the end, we must make do with observational data and remember that coronary calcifications are a surrogate endpoint.
When it comes to hard endpoints, an analysis of French Tour de France participants found a lower risk for both cardiovascular and cancer deaths compared with the general male population. So perhaps the most important take-home message is one that has been said many times: Beware of surrogate endpoints. And for those contemplating running a marathon, I am forced to agree with the person who wrote the sign I saw during my first race. It does seem like a lot of work for a free banana.
Dr. Labos is a cardiologist at Hôpital Notre-Dame, Montreal, Quebec, Canada. He reported no relevant conflicts of interest.
A version of this article first appeared on Medscape.com.