A New Focus for Cushing Syndrome Screening in Obesity
TOPLINE:
METHODOLOGY:
- Obesity is a key clinical feature of Cushing syndrome, and the two conditions share many overlapping characteristics. Debate continues about whether patients with obesity should be screened for this rare endocrine disease, but phenotypes known as metabolically healthy and metabolically unhealthy obesity may help better define an at-risk population.
- To assess the prevalence of Cushing syndrome by metabolic health status, researchers conducted a retrospective study of 1008 patients with obesity (mean age, 40 years; 83% women; body mass index ≥ 30) seen at an endocrinology outpatient clinic in Turkey between December 2020 and June 2022.
- They screened patients for Cushing syndrome with an overnight dexamethasone suppression test (1 mg DST), an oral dexamethasone dose given at 11 PM followed by a fasting blood sample for cortisol measurement the next morning. A serum cortisol level < 1.8 mcg/dL indicated normal suppression.
- Patients were categorized into those with metabolically healthy obesity (n = 229) or metabolically unhealthy obesity (n = 779) based on the absence or presence of comorbidities such as diabetes, prediabetes, coronary artery disease, hypertension, or dyslipidemia.
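The two rules described above, the DST suppression cutoff and the metabolic-phenotype categorization, can be sketched in a few lines. This is a minimal illustration; the field names and the set-based patient representation are assumptions for the sketch, not the study's actual data model.

```python
# Illustrative sketch of the screening cutoff and phenotype rule.
SUPPRESSION_CUTOFF_MCG_DL = 1.8  # serum cortisol below this = normal suppression

COMORBIDITIES = {
    "diabetes", "prediabetes", "coronary_artery_disease",
    "hypertension", "dyslipidemia",
}

def is_suppressed(morning_cortisol_mcg_dl: float) -> bool:
    """Normal cortisol suppression after the overnight 1 mg DST."""
    return morning_cortisol_mcg_dl < SUPPRESSION_CUTOFF_MCG_DL

def metabolic_phenotype(conditions: set) -> str:
    """'healthy' only if none of the listed comorbidities is present."""
    return "unhealthy" if conditions & COMORBIDITIES else "healthy"

print(is_suppressed(1.2))                     # True
print(metabolic_phenotype({"hypertension"}))  # unhealthy
print(metabolic_phenotype(set()))             # healthy
```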
TAKEAWAY:
- The overall prevalence of Cushing syndrome in the study cohort was 0.2%: only two patients were definitively diagnosed after further testing, and the remaining 10 patients with unsuppressed cortisol were classified as having subclinical hypercortisolism.
- Cortisol levels following the 1 mg DST were higher in the metabolically unhealthy obesity group than in the metabolically healthy obesity group (P = .001).
- Among the 12 patients with unsuppressed cortisol levels, 11 were in the metabolically unhealthy obesity group, indicating a strong association between metabolic health status and cortisol suppression.
- The test demonstrated a specificity of 99% and a sensitivity of 100% for screening for Cushing syndrome in patients with obesity.
IN PRACTICE:
“Screening all patients with obesity for CS [Cushing syndrome] without considering any associated metabolic conditions appears impractical and unnecessary in everyday clinical practice,” the authors wrote. “However, it may be more reasonable and applicable to selectively screen the patients with obesity having comorbidities such as DM [diabetes mellitus], hypertension, dyslipidemia, or coronary artery disease, which lead to a metabolically unhealthy phenotype, rather than all individuals with obesity,” they added.
SOURCE:
The study, led by Sema Hepsen, Ankara Etlik City Hospital, Department of Endocrinology and Metabolism, Ankara, Turkey, was published online in the International Journal of Obesity.
LIMITATIONS:
The single-center design of the study and inclusion of patients from a single racial group may limit the generalizability of the findings. The retrospective design prevented the retrieval of all relevant data on clinical features and fat distribution.
DISCLOSURES:
The study was supported by open access funding provided by the Scientific and Technological Research Council of Türkiye. The authors declared no conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Cell Phone Use Linked to Higher Heart Disease Risk
“We found that a poor sleep pattern, psychological distress, and neuroticism significantly mediated the positive association between weekly mobile phone usage time and the risk for incident CVD, with a mediating proportion of 5.11%, 11.50%, and 2.25%, respectively,” said principal investigator Xianhui Qin, MD, professor of nephrology at Southern Medical University, Guangzhou, China.
Poor sleep patterns and poor mental health could disrupt circadian rhythms and endocrine and metabolic functions, as well as increase inflammation, he explained.
In addition, chronic exposure to radiofrequency electromagnetic fields (RF-EMF) emitted from cell phones could lead to oxidative stress and an inflammatory response. Combined with smoking and diabetes, this exposure “may have a synergistic effect in increasing CVD risk,” Dr. Qin suggested.
The study was published online in the Canadian Journal of Cardiology.
Risk Underestimated?
The researchers aimed to examine the association of regular cell phone use with incident CVD and explore the mediating effects of sleep and mental health using linked hospital and mortality records.
Their analysis included 444,027 participants (mean age, 56 years; 44% men) without a history of CVD from the UK Biobank. A total of 378,161 participants were regular cell phone users.
Regular cell phone use was defined as at least one call per week. Weekly use was self-reported as the average time of calls per week during the previous 3 months.
The primary outcome was incident CVD. Secondary outcomes were each component of CVD (ie, coronary heart disease, stroke, atrial fibrillation, and heart failure) and increased carotid intima media thickness (CIMT).
Compared with nonregular cell phone users, regular users were younger, had higher proportions of current smokers and urban residents, and had lower proportions of history of hypertension and diabetes. They also had higher income, Townsend deprivation index, and body mass index, and lower education levels.
During a median follow-up of 12.3 years, 56,181 participants developed incident CVD. Compared with nonregular cell phone users, regular users had a significantly higher risk for incident CVD (hazard ratio, 1.04) and increased CIMT (odds ratio, 1.11).
Among regular cell phone users, the duration of cell phone use and hands-free device/speakerphone use during calls were not significantly associated with incident CVD. Yet a significant, positive dose-response relationship was seen between weekly cell phone usage time and the risk for CVD. The positive association was stronger in current vs noncurrent smokers and in people with vs without diabetes.
To different extents, sleep patterns (5.11%), psychological distress (11.50%), and neuroticism (2.25%) mediated the relationship between weekly cell phone usage time and the risk for incident CVD.
“Our study suggests that despite the advantages of mobile phone use, we should also pay attention to the potential harm of mobile phone use to cardiovascular health,” Dr. Qin said. “Future studies to assess the risk-benefit balance will help promote mobile phone use patterns that are conducive to cardiovascular health.”
Meanwhile, he added, “We encourage measures to reduce time spent on mobile phones to promote the primary prevention of CVD. On the other hand, improving sleep and mental health status may help reduce the higher risk of CVD associated with mobile phone use.”
There are several limitations to the study in addition to its observational nature, which cannot show cause and effect. The questionnaires on cell phone use were restricted to phone calls; other use patterns of cell phones (eg, messaging, watching videos, and browsing the web) were not considered. Although the researchers adjusted for many potential confounders, unmeasured confounding bias (eg, the type of cell phone used and other sources of RF-EMF) cannot be eliminated.
Weak Link?
In a comment, Nicholas Grubic, MSc, a PhD student in epidemiology at the University of Toronto, Ontario, Canada, and coauthor of a related editorial, said, “I found it interesting that there was a connection observed between mobile phone use and CVD. However, it is crucial to understand that this link appeared to be much weaker compared with other well-known cardiovascular risk factors, such as smoking, diabetes, and high blood pressure. For now, mobile phone use should not be a major concern for most people.”
Nevertheless, clinicians should encourage patients to practice healthy habits around their screen time, he advised. “This could include limiting mobile phone use before bedtime and taking regular breaks to engage in activities that promote heart health, such as exercising or spending time outdoors.
“For the time being, we probably won’t see mobile phone use included in standard assessments for cardiovascular risk or as a focal point of cardiovascular health promotion initiatives,” he added. Instead, clinicians should “focus on established risk factors that have a stronger impact on patients’ cardiovascular health.”
Nieca Goldberg, MD, a clinical associate professor of medicine at NYU Grossman School of Medicine in New York City and American Heart Association volunteer expert, had a similar message. “You don’t have to go back to using a landline,” she said. “Instead, patients should be more mindful of how much phone use is taking away from their physical activity, keeping them from sleeping, and causing them stress.” Clinicians should also remember to counsel smokers on smoking cessation.
“It would be important for future studies to look at time spent on the phone and the type of activities patients are doing on their phones, such as social media, calls, texts, movies, or streaming TV shows,” she said. “It would be important to see how phone use is leading to a sedentary lifestyle” and what that means for a larger, more diverse population.
The study was supported by the National Key R&D Program, the National Natural Science Foundation of China, and the Outstanding Youth Development Scheme of Nanfang Hospital, Southern Medical University. Dr. Qin, Dr. Grubic, and Dr. Goldberg reported having no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM THE CANADIAN JOURNAL OF CARDIOLOGY
How ‘Oatzempic’ Stacks up to Ozempic
A so-called “oatzempic” diet has been bouncing around the internet posing as a cheap — and available — weight loss alternative to Ozempic.
Fans of the diet, made trendy by TikTok postings and a clever name, claim that an oat-based smoothie helps people quickly shed lots of weight. The smoothie is made by blending 1/2 cup of oats, 1 cup of water, a squeeze of lime, and maybe a dash of cinnamon or other flavoring agents, typically as the first meal of the day, often after fasting, followed by normal meals.
Despite the hype, the oatzempic drink is a far cry from Ozempic (semaglutide), the glucagon-like peptide 1 (GLP-1) receptor agonist medication that the Food and Drug Administration has approved only for type 2 diabetes management but that is used off label for weight loss.
Nutritionists Answer Questions on Oatzempic
Caroline West Passerrello, EdD, RDN, LDN, an instructor and community coordinator in the School of Health and Rehabilitation Sciences at the University of Pittsburgh, Pennsylvania, and Emma Laing, PhD, RDN, LD, a clinical professor and director of dietetics in the College of Family and Consumer Sciences at the University of Georgia, Athens, talked about this fad in emails.
Can the ‘oatzempic’ diet help people lose weight?
Dr. Passerrello: Oats are particularly high in soluble fiber, and high-fiber foods can increase the natural production of GLPs. But studies are mixed on whether this happens when eating oats.
The high content of soluble beta-glucan fiber in oats and the appetite-suppressing citric acid in lime can potentially promote decreased appetite and increased satiety. But a bowl of oatmeal, though not as trendy, will probably produce the same results.
Is the oatzempic diet safe for people with type 2 diabetes?
Dr. Laing: This diet has the potential to cause harm. The diet and the drug are not similar in mechanism of action or strength of scientific evidence to support their role in diabetes and weight management. There is no evidence that this concoction provides the same outcomes as GLP-1 agonists. Rapid weight loss is unsustainable and can be harmful, and frequent spikes in blood sugar can harm adults and children with diabetes. So the oatzempic diet’s safety depends on the rate of weight loss and the effect on blood sugar. While it provides beta-glucan from oats and citric acid from lime juice, it is missing protein, healthy fats, and other vitamins and minerals that enhance the nutrient content and stabilize blood sugar.
Maintaining relatively consistent, normal-range blood glucose concentrations is key for managing diabetes and lowering the risks for other health complications. Carbohydrate sources consumed on their own can produce greater blood sugar fluctuations than when combined with proteins and fats, which slow carbohydrate digestion speed. So pairing oats with fruits, vegetables, healthy fats, and protein sources enhances the flavor, texture, and nutrient composition of the dish and can help slow the postprandial rise in blood glucose.
In the long term, any restrictive fad diet is unlikely to be sustained and increases the risk for malnutrition, a slowed metabolic rate as the body conserves energy, depression, social isolation, and eating disorders.
Additional considerations apply to children, with or without diabetes. Restrictive, extreme diets that promise quick results typically “work” by promoting body water and muscle mass losses. Such diets are not only contraindicated in children, who are undergoing rapid growth and development, but also unsustainable and can lead to physical and psychological problems that carry into adulthood.
What strategies and tactics can physicians use to effectively communicate with their patients about safe and effective diets?
Dr. Laing: Encourage patients to be skeptical of social media trends that seem too good to be true. Many [social media] creators lack the education or professional credentials to offer sound nutrition advice, and their posts could do harm. Explain that individual nutrition needs differ considerably based on age, activity patterns, health conditions, and medications, and one person’s way of eating or success is often not realistic for someone else.
Encourage open dialogue and provide nonjudgmental advice. If the taste of oatzempic intrigues patients, there is likely no harm in experimenting. Work on ensuring their meals are adequate in calories and contain sources of protein and healthy fats to prevent spikes in blood glucose. It’s crucial to communicate that weight loss doesn’t always equate with improved health.
Sharing information from the Academy of Nutrition and Dietetics and the American Diabetes Association can equip patients with tools they can implement under their clinician’s guidance. A provider’s greatest ally in diabetes care is a registered dietitian nutritionist (RDN) who is a Certified Diabetes Care and Education Specialist. RDNs will determine specific energy and nutrient needs and provide medical nutrition therapy such as carbohydrate counting, simplified meal plans, healthy food choices, exchange lists, and behavior strategies to help patients manage their diabetes. Many insurance plans cover these services.
What additional comments would you like to share with clinicians whose patients may ask them about the oatzempic diet?
Dr. Passerrello: What we do consistently matters. If your patient likes the taste of oatzempic in one meal a day, it’s a way to get more oats into their diet, provided they focus their other meals on vegetables, fruits, whole grains, lean protein, and unsaturated fats.
Diets are out, and sustainable dietary patterns are in. Diets are one-size-fits-all, whereas a sustainable dietary pattern is individualized based on a person’s goals, medical history, taste preferences, budget, and lifestyle. Visit MyPlate.gov or work with an RDN [visit https://www.eatright.org/find-a-nutrition-expert to find nutritionists near your patients] to determine what a sustainable dietary pattern looks like.
What do clinicians need to know about claims on social media that a related drink — ‘ricezempic’ — aids weight loss?
Dr. Laing: Ricezempic promoters claim that drinking the beverage — typically made by soaking 1/2 cup of uncooked white rice in 1 cup of water and the juice from half a lime, then discarding the rice and drinking the liquid before breakfast — will lead to weight loss because the strained water provides a small dose of resistant starch, which is a source of prebiotics. Studies have shown that ingesting prebiotics may help lower blood cholesterol, improve blood glucose and insulin sensitivity, and benefit digestive function; however, more research is needed to determine specifics and whether prebiotics actually aid weight loss.
Does ricezempic work?
Dr. Laing: There is no evidence that this concoction provides the same outcomes as GLP-1 agonists. The diet and the drug are not similar in mechanism of action or strength of scientific evidence to support their role in diabetes and weight management. Even if ricezempic provides a small amount of resistant starch and hydration from the rice water and citric acid from the lime juice, it is missing fiber, protein, healthy fats, and other vitamins and minerals that enhance the nutrient content of a meal or snack and stabilize blood sugar.
What advice do you have for clinicians whose patients with diabetes ask them about ricezempic?
Dr. Laing: I would not suggest that patients rely on ricezempic to support their health. There is no scientific evidence to show that people will lose weight in the short or long term by drinking ricezempic before a meal (or as a meal replacement).
If your patients are aiming to increase their intake of prebiotics, they are naturally found in various vegetables, fruits, whole grains, and seeds and in yogurt and high-fiber breads and cereals. A nutritious eating pattern that includes these foods is most beneficial for health.
A version of this article first appeared on Medscape.com.
Skip Potassium After Cardiac Surgery
LONDON — “The widespread practice of giving patients potassium after bypass heart surgery even though their blood levels are within the normal range can be abandoned,” said Benjamin O’Brien, MD, PhD, director of the Clinic for Cardioanesthesiology and Intensive Care Medicine at Charité Hospital in Berlin, Germany.
Results from the randomized TIGHT-K trial that assessed two levels of potassium supplementation were presented at the annual congress of the European Society of Cardiology.
In the tight-control group, supplementation was provided to maintain high-normal levels of potassium (> 4.5 mEq/L). In the relaxed-control group, supplementation was provided only when potassium levels fell below the low-normal threshold (< 3.6 mEq/L).
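The two supplementation triggers can be sketched as a simple decision rule. This is an illustrative reconstruction, not the trial protocol itself: only the 4.5 and 3.6 mEq/L thresholds come from the article; the function name and structure are assumptions.

```python
def needs_potassium(serum_k_meq_l: float, protocol: str) -> bool:
    """Illustrative sketch of the two TIGHT-K supplementation triggers.

    Thresholds are from the trial description; everything else is assumed.
    """
    if protocol == "tight":
        # Tight control: supplement to maintain high-normal potassium,
        # i.e., whenever the level is not above 4.5 mEq/L.
        return serum_k_meq_l <= 4.5
    if protocol == "relaxed":
        # Relaxed control: supplement only when the level falls below
        # the low-normal threshold of 3.6 mEq/L.
        return serum_k_meq_l < 3.6
    raise ValueError("protocol must be 'tight' or 'relaxed'")
```

Under this sketch, a patient at 4.0 mEq/L would receive potassium under tight control but not under relaxed control, which is why the relaxed arm required far fewer administrations.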
Trial Upending Popular Practice
The multinational trial involved 23 centers in Germany and the United Kingdom. All 1690 participants enrolled were scheduled to undergo a coronary artery bypass graft procedure, but Dr. O’Brien said he considers the results of TIGHT-K to be broadly applicable.
“There is no physiological basis to expect a different result in patients undergoing different types of cardiac surgery,” he said.
The primary endpoint was clinically and electrocardiographically confirmed new-onset atrial fibrillation occurring in the 5 days after the bypass procedure.
For the primary atrial fibrillation endpoint, event rates were similar in the tight-control and the relaxed-control groups (26.2% vs 27.8%); the 1.7% difference did not approach statistical significance (P = .44). The difference in dysrhythmias other than atrial fibrillation, although numerically lower in the tight-control group, was also not significant (19.1% vs 21.1%; P = .26).
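A back-of-the-envelope two-proportion z-test lands in the same non-significant range as the reported P = .44. This sketch assumes the 1690 participants split roughly evenly between arms (the exact group sizes are not given here), so the result is approximate.

```python
import math

def two_prop_z(p1: float, p2: float, n1: int, n2: int) -> tuple[float, float]:
    """Two-sided two-proportion z-test using a pooled standard error."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = abs(p2 - p1) / se
    # Two-sided p-value from the standard normal CDF (via the error function)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))
    return z, p_value

# Event rates: 26.2% (tight control) vs 27.8% (relaxed control),
# with an assumed ~845 participants per arm.
z, p = two_prop_z(0.262, 0.278, 845, 845)
print(round(z, 2), round(p, 2))  # p is well above .05 — not significant
```

The approximate p-value differs slightly from the published .44 because the trial's analysis used the exact counts and prespecified model, but the conclusion is the same: the 1.7% difference is far from significant.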
There were no significant differences in several secondary endpoints, including length of hospital stay and in-patient mortality, but cost, a prespecified secondary endpoint, was approximately $120 lower per patient in the relaxed-control group than in the tight-control group (P < .001).
Lowering Cost Across Cardiac Surgeries
During the 5-day follow-up, potassium levels in both groups fell gradually but essentially in parallel, so median levels remained higher in the tight-control group throughout the study period. At the end of the observation period, mean potassium levels were 4.34 mEq/L in the tight-control group and 4.08 mEq/L in the relaxed-control group.
Prior to the development of atrial fibrillation, participants in the tight-control group received a median of seven potassium administrations (range, 4-12), whereas those in the relaxed-control group received a median of zero.
There were no significant differences in episodes in any subgroup evaluated, including those divided by age, sex, baseline left ventricular ejection fraction, and the absence or presence of beta blockers or loop diuretics. A per-protocol analysis also failed to show any advantage for tight potassium control.
Atrial fibrillation occurs in about one third of patients after bypass surgery, as it does after many types of cardiac surgery. Institutions often have strategies in place to reduce the risk after cardiac surgery, and potassium supplementation is one of the most common, despite the lack of supportive evidence, Dr. O’Brien said.
Narrow Window for Optimal Potassium Levels
The difference in potassium levels between the tight-control group and the relaxed-control group was modest in this study, said Subodh Verma, MD, a cardiac surgeon at St Michael’s Hospital and professor at the University of Toronto, Ontario, Canada.
However, this is unavoidable and central to the question being posed, Dr. O’Brien pointed out. Because of the risks for both hypokalemia and hyperkalemia, the window for safe supplementation is short. Current practice is to achieve high-normal levels to reduce atrial fibrillation, but TIGHT-K demonstrates this has no benefit.
The conclusion of TIGHT-K is appropriate, said Faiez Zannad, MD, PhD, professor of therapeutics in the Division of Cardiology at the University of Lorraine in Nancy, France, who praised the design and conduct of the study.
He acknowledged an unmet need for effective methods to reduce the risk for atrial fibrillation after cardiac surgery, but the ESC-invited discussant said it is now necessary to look at other strategies. Several are under evaluation, such as supplemental magnesium and sodium-glucose cotransporter 2 (SGLT2) inhibitors.
Although Dr. Zannad encouraged more studies of methods to reduce atrial fibrillation risk after cardiac surgery, he said that TIGHT-K has answered the question of whether potassium supplementation is beneficial.
Potassium supplementation should no longer be offered, he said, which will “reduce healthcare costs and decrease patient risk from an unnecessary intervention.”
A version of this article first appeared on Medscape.com.
LONDON —
“The widespread practice of giving patients potassium after bypass heart surgery even though their blood levels are within the normal range can be abandoned,” said Benjamin O’Brien, MD, PhD, director of the Clinic for Cardioanesthesiology and Intensive Care Medicine at Charité Hospital in Berlin, Germany.
Results from the randomized TIGHT-K trial that assessed two levels of potassium supplementation were presented at the annual congress of the European Society of Cardiology.
In the tight-control group, supplementation was provided to maintain high-normal levels of potassium (> 4.5 mEq/L). In the relaxed-control group, supplementation was provided only when potassium levels fell below the low-normal threshold (< 3.6 mEq/L).
Trial Upending Popular Practice
The multinational trial involved 23 centers in Germany and the United Kingdom. All 1690 participants enrolled were scheduled to undergo a coronary artery bypass graft procedure, but Dr. O’Brien said he considers the results of TIGHT-K to be broadly applicable.
“There is no physiological basis to expect a different result in patients undergoing different types of cardiac surgery,” he said.
The primary endpoint was clinically and electrocardiography confirmed new-onset atrial fibrillation that occurred in the 5 days after the bypass procedure.
For the primary atrial fibrillation endpoint, event rates were similar in the tight-control and the relaxed-control groups (26.2% vs 27.8%); the 1.7% difference did not approach statistical significance (P = .44). The difference in dysrhythmias other than atrial fibrillation, although numerically lower in the tight-control group, was also not significant (19.1% vs 21.1%; P = .26).
There were no significant differences in several secondary endpoints, including length of hospital stay and in-patient mortality, but cost, a prespecified secondary endpoint, was approximately $120 lower per patient in the relaxed-control group than in the tight-control group (P < .001).
Lowering Cost Across Cardiac Surgeries
During the 5-day follow-up, median potassium levels were higher in the tight-control group. Levels in both groups fell gradually, but essentially in parallel, over the study period, so median potassium levels were always higher in the tight-control group than in the relaxed-control group. At the end of the observation period, mean potassium levels were 4.34 mEq/L in the tight-control group and 4.08 mEq/L in the relaxed-control group.
Prior to the development of atrial fibrillation, participants in the tight-control group received a medium of seven potassium administrations (range, 4-12), whereas those in the relaxed-control group received a medium of zero.
There were no significant differences in episodes in any subgroup evaluated, including those divided by age, sex, baseline left ventricular ejection fraction, and the absence or presence of beta blockers or loop diuretics. A per-protocol analysis also failed to show any advantage for tight potassium control.
Atrial fibrillation occurs in about one third of patients after bypass surgery, as it does after many types of cardiac surgery. Institutions often have strategies in place to reduce the risk after cardiac surgery, and potassium supplementation is one of the most common, despite the lack of supportive evidence, Dr. O’Brien said.
Narrow Window for Optimal Potassium Levels
The difference in potassium levels between the tight-control group and the relaxed-control group were modest in this study, said Subodh Verma, MD, a cardiac surgeon at St Michael’s Hospital and professor at the University of Toronto, Ontario, Canada.
However, this is unavoidable and central to the question being posed, Dr. O’Brien pointed out. Because of the risks for both hypokalemia and hyperkalemia, the window for safe supplementation is short. Current practice is to achieve high-normal levels to reduce atrial fibrillation, but TIGHT-K demonstrates this has no benefit.
The conclusion of TIGHT-K is appropriate, said Faiez Zannad, MD, PhD, professor of therapeutics in the Division of Cardiology at the University of Lorraine in Nancy, France, who praised the design and conduct of the study.
He acknowledged an unmet need for effective methods to reduce the risk for atrial fibrillation after cardiac surgery, but the ESC invited discussant said it is now necessary to look at other strategies. Several are under current evaluation, such as supplementary magnesium and the use of sodium-glucose transporter-2 inhibitors.
Although Dr. Zannad encouraged more studies of methods to reduce atrial fibrillation risk after cardiac surgery, he said that TIGHT-K has answered the question of whether potassium supplementation is beneficial.
Potassium supplementation should no longer be offered, he said, which will “reduce healthcare costs and decrease patient risk from an unnecessary intervention.”
A version of this article first appeared on Medscape.com.
LONDON —
“The widespread practice of giving patients potassium after bypass heart surgery even though their blood levels are within the normal range can be abandoned,” said Benjamin O’Brien, MD, PhD, director of the Clinic for Cardioanesthesiology and Intensive Care Medicine at Charité Hospital in Berlin, Germany.
Results from the randomized TIGHT-K trial that assessed two levels of potassium supplementation were presented at the annual congress of the European Society of Cardiology.
In the tight-control group, supplementation was provided to maintain high-normal levels of potassium (> 4.5 mEq/L). In the relaxed-control group, supplementation was provided only when potassium levels fell below the low-normal threshold (< 3.6 mEq/L).
Trial Upending Popular Practice
The multinational trial involved 23 centers in Germany and the United Kingdom. All 1690 participants enrolled were scheduled to undergo a coronary artery bypass graft procedure, but Dr. O’Brien said he considers the results of TIGHT-K to be broadly applicable.
“There is no physiological basis to expect a different result in patients undergoing different types of cardiac surgery,” he said.
The primary endpoint was clinically and electrocardiography confirmed new-onset atrial fibrillation that occurred in the 5 days after the bypass procedure.
For the primary atrial fibrillation endpoint, event rates were similar in the tight-control and the relaxed-control groups (26.2% vs 27.8%); the 1.7% difference did not approach statistical significance (P = .44). The difference in dysrhythmias other than atrial fibrillation, although numerically lower in the tight-control group, was also not significant (19.1% vs 21.1%; P = .26).
There were no significant differences in several secondary endpoints, including length of hospital stay and in-patient mortality, but cost, a prespecified secondary endpoint, was approximately $120 lower per patient in the relaxed-control group than in the tight-control group (P < .001).
Lowering Cost Across Cardiac Surgeries
During the 5-day follow-up, median potassium levels were consistently higher in the tight-control group than in the relaxed-control group; levels in both groups fell gradually, but essentially in parallel, over the study period. At the end of the observation period, mean potassium levels were 4.34 mEq/L in the tight-control group and 4.08 mEq/L in the relaxed-control group.
Prior to the development of atrial fibrillation, participants in the tight-control group received a median of seven potassium administrations (range, 4-12), whereas those in the relaxed-control group received a median of zero.
There were no significant differences in atrial fibrillation episodes in any subgroup evaluated, including subgroups defined by age, sex, baseline left ventricular ejection fraction, and the absence or presence of beta-blockers or loop diuretics. A per-protocol analysis also failed to show any advantage for tight potassium control.
Atrial fibrillation occurs in about one third of patients after bypass surgery, as it does after many types of cardiac surgery. Institutions often have strategies in place to reduce the risk after cardiac surgery, and potassium supplementation is one of the most common, despite the lack of supportive evidence, Dr. O’Brien said.
Narrow Window for Optimal Potassium Levels
The difference in potassium levels between the tight-control group and the relaxed-control group was modest in this study, said Subodh Verma, MD, a cardiac surgeon at St Michael’s Hospital and professor at the University of Toronto, Ontario, Canada.
However, this is unavoidable and central to the question being posed, Dr. O’Brien pointed out. Because of the risks for both hypokalemia and hyperkalemia, the window for safe supplementation is narrow. Current practice is to target high-normal levels to reduce atrial fibrillation, but TIGHT-K demonstrates this has no benefit.
The conclusion of TIGHT-K is appropriate, said Faiez Zannad, MD, PhD, professor of therapeutics in the Division of Cardiology at the University of Lorraine in Nancy, France, who praised the design and conduct of the study.
He acknowledged an unmet need for effective methods to reduce the risk for atrial fibrillation after cardiac surgery, but the invited ESC discussant said it is now necessary to look at other strategies. Several are under evaluation, including supplementary magnesium and sodium-glucose cotransporter 2 inhibitors.
Although Dr. Zannad encouraged more studies of methods to reduce atrial fibrillation risk after cardiac surgery, he said that TIGHT-K has answered the question of whether potassium supplementation is beneficial.
Potassium supplementation should no longer be offered, he said, which will “reduce healthcare costs and decrease patient risk from an unnecessary intervention.”
A version of this article first appeared on Medscape.com.
FROM ESC CONGRESS 2024
Brain Network Significantly Larger in People With Depression, Even in Childhood
Using a novel brain-mapping technique, researchers found that the frontostriatal salience network was expanded nearly twofold in most of the individuals with depression studied, compared with controls.
“This expansion in cortex was trait-like, meaning it was stable over time and did not change as symptoms changed over time,” said lead author Charles Lynch, PhD, assistant professor of neuroscience, Department of Psychiatry, Weill Cornell Medicine in New York.
It could also be detected in children who later developed depression, suggesting it may serve as a biomarker of depression risk. Investigators said the findings could aid in prevention and early detection of depression, as well as the development of more personalized treatment.
The study was published online in Nature.
Prewired for Depression?
Precision functional mapping is a relatively new approach to brain mapping that uses large amounts of fMRI data from hours of scans per person. The technique has been used to show differences in brain networks between and within healthy individuals but had not been used to study brain networks in people with depression.
“We leveraged our large longitudinal datasets — with many hours of functional MRI scanning per subject — to construct individual-specific maps of functional brain networks in each patient using precision functional mapping, instead of relying on group average,” Dr. Lynch said.
In the primary analysis of 141 adults with major depression and 37 healthy controls, the frontostriatal salience network — which is involved in reward processing and attention to internal and external stimuli — was markedly larger in the individuals with depression.
“This is one of the first times these kinds of personalized maps have been created in individuals with depression, and this is how we first observed the salience network being larger in individuals with depression,” Dr. Lynch said.
In four of the six deeply sampled individuals, the salience network was expanded more than twofold, outside the range observed in all 37 healthy controls. On average, the salience network occupied 73% more of the cortical surface relative to the average in healthy controls.
The findings were replicated using independent samples of repeatedly sampled individuals with depression and in large-scale group average data.
The expansion of the salience network did not change over time and was unaffected by changes in mood state.
“These observations led us to propose that instead of driving changes in depressive symptoms over time, salience network expansion may be a stable marker of risk for developing depression,” the study team wrote.
An analysis of brain scans from 57 children who went on to develop depressive symptoms during adolescence and an equal number of children who did not develop depressive symptoms supports this theory.
On average, the salience network occupied roughly 36% more of the cortex in children who had no current or previous depressive symptoms at the time of their fMRI scans but who subsequently developed clinically significant symptoms of depression, relative to children with no depressive symptoms at any study time point, the researchers found.
Immediate Clinical Impact?
Reached for comment, Shaheen Lakhan, MD, PhD, a neurologist and researcher based in Miami, said this research “exemplifies the promising intersection of neurology and digital health, where advanced neuroimaging and data-driven approaches can transform mental health care into a more precise and individualized practice.” He added, “By identifying this brain network expansion, we’re unlocking new possibilities for precision medicine in mental health.”
Dr. Lakhan, who wasn’t involved in this research, said identifying the expansion of the frontostriatal salience network in individuals with depression opens new avenues for developing novel therapeutics.
“By targeting this network through neuromodulation techniques like deep brain stimulation, transcranial magnetic stimulation, and prescription digital therapeutics, treatments can be more precisely tailored to individual neurobiological profiles,” Dr. Lakhan said. “Additionally, this network expansion could serve as a biomarker for early detection, allowing for preventive strategies or personalized treatment plans, particularly for those at risk of developing depression.”
In addition, a greater understanding of the mechanisms driving salience network expansion offers potential for discovering new pharmacological targets, Dr. Lakhan noted.
“Drugs that modulate synaptic plasticity or network connectivity might be developed to reverse or mitigate these neural changes. The findings also support the use of longitudinal monitoring to predict and preempt symptom emergence, improving outcomes through timely intervention. This research paves the way for more personalized, precise, and proactive approaches in treating depression,” Dr. Lakhan concluded.
Also weighing in, Teddy Akiki, MD, with the Department of Psychiatry and Behavioral Sciences at Stanford Medicine in California, noted that the effect size of the frontostriatal salience network difference in depression is “remarkably larger than typically seen in neuroimaging studies of depression, which often describe subtle differences. The consistency across multiple datasets and across time at the individual level adds significant weight to these findings, suggesting that it is a trait marker rather than a state-dependent marker.”
“The observation that this expansion is present even before the onset of depressive symptoms in adolescence suggests its potential as a biomarker for depression risk,” Dr. Akiki said. “This approach could lead to earlier identification of at-risk individuals and potentially inform the development of targeted preventive interventions.”
He cautioned that it remains to be seen whether interventions targeting the salience network can effectively prevent or treat depression.
This research was supported in part by the National Institute of Mental Health, the National Institute on Drug Abuse, the Hope for Depression Research Foundation, and the Foundation for OCD Research. Dr. Lynch and a coauthor are listed as inventors on Cornell University patent applications for neuroimaging biomarkers for depression, which are pending or in preparation. Dr. Liston has served as a scientific advisor or consultant to Compass Pathways PLC, Delix Therapeutics, and Brainify.AI. Dr. Lakhan and Dr. Akiki had no relevant disclosures.
A version of this article first appeared on Medscape.com.
FROM NATURE
Should All Patients With Early Breast Cancer Receive Adjuvant Radiotherapy?
Adjuvant radiotherapy may offer no long-term recurrence or overall survival benefit for some patients with early breast cancer, based on a 30-year follow-up of the Scottish Breast Conservation Trial.
These findings suggest that patients with biology predicting late relapse may receive little benefit from adjuvant radiotherapy, lead author Linda J. Williams, PhD, of the University of Edinburgh in Scotland, and colleagues, reported.
“During the past 30 years, several randomized controlled trials have investigated the role of postoperative radiotherapy after breast-conserving surgery for early breast cancer,” the investigators wrote in The Lancet Oncology. “These trials showed that radiotherapy reduces the risk of local recurrence but were underpowered individually to detect a difference in overall survival.”
How Did the Present Study Increase Our Understanding of the Benefits of Adjuvant Radiotherapy in Early Breast Cancer?
The present analysis included data from a trial that began in 1985, when 589 patients with early breast cancer (tumors ≤ 4 cm [T1 or T2 and N0 or N1]) were randomized to receive either high-dose radiotherapy or no radiotherapy, with final cohorts of 291 and 294 patients, respectively. Radiotherapy was given as 50 Gy in 20-25 fractions, either locally or locoregionally.
Estrogen receptor (ER)–positive patients (≥ 20 fmol/mg protein) received 5 years of daily oral tamoxifen. ER-poor patients (< 20 fmol/mg protein) received a chemotherapy combination of cyclophosphamide, methotrexate, and fluorouracil on a 21-day cycle for eight cycles.
Considering all data across a median follow-up of 17.5 years, adjuvant radiotherapy appeared to offer benefit, as it was associated with significantly lower ipsilateral breast tumor recurrence (16% vs 36%; hazard ratio [HR], 0.39; P < .0001).
But that tells only part of the story.
The positive impact of radiotherapy persisted for 1 decade (HR, 0.24; P < .0001), but risk beyond this point was no different between groups (HR, 0.98; P = .95).
“[The] benefit of radiotherapy was time dependent,” the investigators noted.
What’s more, median overall survival was no different between those who received radiotherapy and those who did not (18.7 vs 19.2 years; HR, 1.08; log-rank P = .43), and “reassuringly,” omitting radiotherapy did not increase the rate of distant metastasis.
How Might These Findings Influence Treatment Planning for Patients With Early Breast Cancer?
“The results can help clinicians to advise patients better about their choice to have radiotherapy or not if they better understand what benefits it does and does not bring,” the investigators wrote. “These results might provide clues perhaps to the biology of radiotherapy benefit, given that it does not prevent late recurrences, suggesting that patients whose biology predicts a late relapse only might not gain a benefit from radiotherapy.”
Gary M. Freedman, MD, chief of Women’s Health Service, Radiation Oncology, at Penn Medicine, Philadelphia, offered a different perspective.
“The study lumps together a local recurrence of breast cancer — that is relapse of the cancer years after treatment with lumpectomy and radiation — with the development of an entirely new breast cancer in the same breast,” Dr. Freedman said in a written comment. “When something comes back between years 0-5 and 0-8, we usually think of it as a true local recurrence arbitrarily, but beyond that they are new cancers.”
He went on to emphasize the clinical importance of reducing local recurrence within the first decade, noting that “this leads to much less morbidity and better quality of life for the patients.”
Dr. Freedman also shared his perspective on the survival data.
“Radiation did reduce breast cancer mortality very significantly — death from breast cancers went down from 46% to 37%,” he wrote (P = .054). “This is on the same level as chemo or hormone therapy. The study was not powered to detect significant differences in survival by radiation, but that has been shown with other meta-analyses.”
Are Findings From a Trial Started 30 Years Ago Still Relevant Today?
“Clearly the treatment of early breast cancer has advanced since the 1980s when the Scottish Conservation trial was launched,” study coauthor Ian Kunkler, MB, FRCR, of the University of Edinburgh, said in a written comment. “There is more breast screening, attention to clearing surgical margins of residual disease, more effective and longer periods of adjuvant hormonal therapy, reduced radiotherapy toxicity from more precise delivery. However, most anticancer treatments lose their effectiveness over time.”
He suggested that more trials are needed to confirm the present findings and reiterated that the lack of long-term recurrence benefit is most relevant for patients with disease features that predict late relapse, who “seem to gain little from adjuvant radiotherapy given as part of primary treatment.”
Dr. Kunkler noted that the observed benefit in the first decade supports the continued use of radiotherapy alongside anticancer drug treatment.
When asked the same question, Freedman emphasized the differences in treatment today vs the 1980s.
“The results of modern multidisciplinary cancer care are much, much better than these 30-year results,” Dr. Freedman said. “The risk for local recurrence in the breast after radiation is now about 2%-3% at 10 years in most studies.”
He also noted that modern radiotherapy techniques have “significantly lowered dose and risks to heart and lung,” compared with techniques used 30 years ago.
“A take-home point for the study is after breast conservation, whether or not you have radiation, you have to continue long-term screening mammograms for new breast cancers that may occur even decades later,” Dr. Freedman concluded.
How Might These Findings Impact Future Research Design and Funding?
“The findings should encourage trial funders to consider funding long-term follow-up beyond 10 years to assess benefits and risks of anticancer therapies,” Dr. Kunkler said. “The importance of long-term follow-up cannot be understated.”
This study was funded by Breast Cancer Institute (part of Edinburgh and Lothians Health Foundation), PFS Genomics (now part of Exact Sciences), the University of Edinburgh, and NHS Lothian. The investigators reported no conflicts of interest.
A version of this article first appeared on Medscape.com.
When asked the same question, Freedman emphasized the differences in treatment today vs the 1980s.
“The results of modern multidisciplinary cancer care are much, much better than these 30-year results,” Dr. Freedman said. “The risk for local recurrence in the breast after radiation is now about 2%-3% at 10 years in most studies.”
He also noted that modern radiotherapy techniques have “significantly lowered dose and risks to heart and lung,” compared with techniques used 30 years ago.
“A take-home point for the study is after breast conservation, whether or not you have radiation, you have to continue long-term screening mammograms for new breast cancers that may occur even decades later,” Dr. Freedman concluded.
How Might These Findings Impact Future Research Design and Funding?
“The findings should encourage trial funders to consider funding long-term follow-up beyond 10 years to assess benefits and risks of anticancer therapies,” Dr. Kunkler said. “The importance of long-term follow-up cannot be understated.”
This study was funded by Breast Cancer Institute (part of Edinburgh and Lothians Health Foundation), PFS Genomics (now part of Exact Sciences), the University of Edinburgh, and NHS Lothian. The investigators reported no conflicts of interest.
A version of this article first appeared on Medscape.com.
based on a 30-year follow-up of the Scottish Breast Conservation Trial.
These findings suggest that patients with biology predicting late relapse may receive little benefit from adjuvant radiotherapy, lead author Linda J. Williams, PhD, of the University of Edinburgh in Scotland, and colleagues reported.
“During the past 30 years, several randomized controlled trials have investigated the role of postoperative radiotherapy after breast-conserving surgery for early breast cancer,” the investigators wrote in The Lancet Oncology. “These trials showed that radiotherapy reduces the risk of local recurrence but were underpowered individually to detect a difference in overall survival.”
How Did the Present Study Increase Our Understanding of the Benefits of Adjuvant Radiotherapy in Early Breast Cancer?
The present analysis included data from a trial that began in 1985, when 589 patients with early breast cancer (tumors ≤ 4 cm [T1 or T2 and N0 or N1]) were randomized to receive either high-dose or no radiotherapy, with final cohorts including 291 patients and 294 patients, respectively. Radiotherapy was delivered at 50 Gy in 20-25 fractions, either locally or locoregionally.
Estrogen receptor (ER)–positive patients (≥ 20 fmol/mg protein) received 5 years of daily oral tamoxifen. ER-poor patients (< 20 fmol/mg protein) received a chemotherapy combination of cyclophosphamide, methotrexate, and fluorouracil on a 21-day cycle for eight cycles.
Considering all data across a median follow-up of 17.5 years, adjuvant radiotherapy appeared to offer benefit, as it was associated with significantly lower ipsilateral breast tumor recurrence (16% vs 36%; hazard ratio [HR], 0.39; P < .0001).
But that tells only part of the story.
The positive impact of radiotherapy persisted for 1 decade (HR, 0.24; P < .0001), but risk beyond this point was no different between groups (HR, 0.98; P = .95).
“[The] benefit of radiotherapy was time dependent,” the investigators noted.
What’s more, median overall survival was no different between those who received radiotherapy and those who did not (18.7 vs 19.2 years; HR, 1.08; log-rank P = .43), and “reassuringly,” omitting radiotherapy did not increase the rate of distant metastasis.
How Might These Findings Influence Treatment Planning for Patients With Early Breast Cancer?
“The results can help clinicians to advise patients better about their choice to have radiotherapy or not if they better understand what benefits it does and does not bring,” the investigators wrote. “These results might provide clues perhaps to the biology of radiotherapy benefit, given that it does not prevent late recurrences, suggesting that patients whose biology predicts a late relapse only might not gain a benefit from radiotherapy.”
Gary M. Freedman, MD, chief of Women’s Health Service, Radiation Oncology, at Penn Medicine, Philadelphia, offered a different perspective.
“The study lumps together a local recurrence of breast cancer — that is relapse of the cancer years after treatment with lumpectomy and radiation — with the development of an entirely new breast cancer in the same breast,” Dr. Freedman said in a written comment. “When something comes back between years 0-5 and 0-8, we usually think of it as a true local recurrence arbitrarily, but beyond that they are new cancers.”
He went on to emphasize the clinical importance of reducing local recurrence within the first decade, noting that “this leads to much less morbidity and better quality of life for the patients.”
Dr. Freedman also shared his perspective on the survival data.
“Radiation did reduce breast cancer mortality very significantly — death from breast cancers went down from 46% to 37%,” he wrote (P = .054). “This is on the same level as chemo or hormone therapy. The study was not powered to detect significant differences in survival by radiation, but that has been shown with other meta-analyses.”
Are Findings From a Trial Started 30 Years Ago Still Relevant Today?
“Clearly the treatment of early breast cancer has advanced since the 1980s when the Scottish Conservation trial was launched,” study coauthor Ian Kunkler, MB, FRCR, of the University of Edinburgh, said in a written comment. “There is more breast screening, attention to clearing surgical margins of residual disease, more effective and longer periods of adjuvant hormonal therapy, reduced radiotherapy toxicity from more precise delivery. However, most anticancer treatments lose their effectiveness over time.”
He suggested that more trials are needed to confirm the present findings and reiterated that the lack of long-term recurrence benefit is most relevant for patients with disease features that predict late relapse, who “seem to gain little from adjuvant radiotherapy given as part of primary treatment.”
Dr. Kunkler noted that the observed benefit in the first decade supports the continued use of radiotherapy alongside anticancer drug treatment.
When asked the same question, Dr. Freedman emphasized the differences in treatment today vs the 1980s.
“The results of modern multidisciplinary cancer care are much, much better than these 30-year results,” Dr. Freedman said. “The risk for local recurrence in the breast after radiation is now about 2%-3% at 10 years in most studies.”
He also noted that modern radiotherapy techniques have “significantly lowered dose and risks to heart and lung,” compared with techniques used 30 years ago.
“A take-home point for the study is after breast conservation, whether or not you have radiation, you have to continue long-term screening mammograms for new breast cancers that may occur even decades later,” Dr. Freedman concluded.
How Might These Findings Impact Future Research Design and Funding?
“The findings should encourage trial funders to consider funding long-term follow-up beyond 10 years to assess benefits and risks of anticancer therapies,” Dr. Kunkler said. “The importance of long-term follow-up cannot be understated.”
This study was funded by Breast Cancer Institute (part of Edinburgh and Lothians Health Foundation), PFS Genomics (now part of Exact Sciences), the University of Edinburgh, and NHS Lothian. The investigators reported no conflicts of interest.
A version of this article first appeared on Medscape.com.
FROM THE LANCET ONCOLOGY
Promising Results With CBT App in Young Adults With Anxiety
TOPLINE:
after 3 weeks, with continued improvement through week 12, a new randomized clinical trial shows.
METHODOLOGY:
- The study included 59 adults aged 18-25 years (mean age, 23 years; 78% women) with anxiety disorders (56% with generalized anxiety disorder; 41% with social anxiety disorder).
- Participants received a 6-week CBT program with a self-guided mobile application called Maya and were assigned to one of three incentive strategies to encourage engagement: loss-framed (lose points for incomplete sessions), gain-framed (earn points for completed sessions), or gain-social support (gain points with added social support from a designated person).
- The primary end point was change in anxiety at week 6, measured with the Hamilton Anxiety Rating Scale.
- The researchers also evaluated change in anxiety at 3 and 12 weeks, change in anxiety sensitivity, social anxiety symptoms, and engagement and satisfaction with the app.
TAKEAWAY:
- Anxiety decreased significantly from baseline at weeks 3, 6, and 12 (mean differences, −3.20, −5.64, and −5.67, respectively; all P < .001), with similar reductions in anxiety among the three incentive conditions.
- Use of the CBT app was also associated with significant reductions in anxiety sensitivity and social anxiety symptoms over time, with moderate to large effect sizes.
- A total of 98% of participants completed the 6-week assessment and 93% the 12-week follow-up. On average, the participants completed 10.8 of 12 sessions and 64% completed all sessions.
- The participants reported high satisfaction with the app across all time points, with no significant differences based on time or incentive condition.
IN PRACTICE:
“We hear a lot about the negative impact of technology use on mental health in this age group,” senior study author Faith M. Gunning, PhD, said in a press release. “But the ubiquitous use of cell phones for information may provide a way of addressing anxiety for some people who, even if they have access to mental health providers, may not go. If the app helps reduce symptoms, they may then be able to take the next step of seeing a mental health professional when needed.”
SOURCE:
The study was led by Jennifer N. Bress, PhD, Department of Psychiatry, Weill Cornell Medicine, New York City. It was published online in JAMA Network Open.
LIMITATIONS:
This study lacked a control group, and the unbalanced allocation of participants to the three incentive groups due to the COVID-19 pandemic may have influenced the results. The study sample, which predominantly consisted of female and college-educated participants, may not have accurately represented the broader population of young adults with anxiety.
DISCLOSURES:
This study was funded by the NewYork-Presbyterian Center for Youth Mental Health, the Khoury Foundation, the Paul and Jenna Segal Family Foundation, the Saks Fifth Avenue Foundation, Mary and Jonathan Rather, Weill Cornell Medicine, the Pritzker Neuropsychiatric Disorders Research Consortium, and the National Institutes of Health. Some authors reported obtaining grants, receiving personal fees, serving on speaker’s bureaus, and having other ties with multiple pharmaceutical companies and institutions. Full disclosures are available in the original article.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication.
A version of this article first appeared on Medscape.com.
Nighttime Outdoor Light Pollution Linked to Alzheimer’s Risk
a new national study suggested.
Analyses of state and county light pollution data and Medicare claims showed that areas with higher average nighttime light intensity had a greater prevalence of Alzheimer’s disease.
Among people aged 65 years or older, Alzheimer’s disease prevalence was more strongly associated with nightly light pollution exposure than with alcohol misuse, chronic kidney disease, depression, or obesity.
In those younger than 65 years, greater nighttime light intensity had a stronger association with Alzheimer’s disease prevalence than any other risk factor included in the study.
“The results are pretty striking when you do these comparisons and it’s true for people of all ages,” said Robin Voigt-Zuwala, PhD, lead author and director, Circadian Rhythm Research Laboratory, Rush University, Chicago, Illinois.
The study was published online in Frontiers in Neuroscience.
Shining a Light
Exposure to artificial outdoor light at night has been associated with adverse health effects such as sleep disruption, obesity, atherosclerosis, and cancer, but this is the first study to look specifically at Alzheimer’s disease, investigators noted.
Two recent studies reported higher risks for mild cognitive impairment among Chinese veterans and late-onset dementia among Italian residents living in areas with brighter outdoor light at night.
For this study, Dr. Voigt-Zuwala and colleagues examined the relationship between Alzheimer’s disease prevalence and average nighttime light intensity in the lower 48 states using data from Medicare Part A and B, the Centers for Disease Control and Prevention, and NASA satellite–acquired radiance data.
The data were averaged for the years 2012-2018, and states were divided into five groups based on average nighttime light intensity.
The darkest states were Montana, Wyoming, South Dakota, Idaho, Maine, New Mexico, Vermont, Oregon, Utah, and Nevada. The brightest states were Indiana, Illinois, Florida, Ohio, Massachusetts, Connecticut, Maryland, Delaware, Rhode Island, and New Jersey.
Analysis of variance revealed a significant difference in Alzheimer’s disease prevalence between state groups (P < .0001). Multiple comparisons testing also showed that Alzheimer’s disease prevalence in states with the lowest average nighttime light differed significantly from that in states with higher intensity.
The same positive relationship was observed when each year was assessed individually and at the county level, using data from 45 counties and the District of Columbia.
Strong Association
The investigators also found that state average nighttime light intensity was significantly associated with Alzheimer’s disease prevalence (P = .006). This effect was seen across all ages, sexes, and races except Asian Pacific Islander individuals, a gap the authors said may reflect limited statistical power.
When known or proposed risk factors for Alzheimer’s disease were added to the model, atrial fibrillation, diabetes, hyperlipidemia, hypertension, and stroke had a stronger association with Alzheimer’s disease than average nighttime light intensity.
Nighttime light intensity, however, was more strongly associated with Alzheimer’s disease prevalence than alcohol abuse, chronic kidney disease, depression, heart failure, and obesity.
Moreover, in people younger than 65 years, nighttime light pollution had a stronger association with Alzheimer’s disease prevalence than all other risk factors (P = .007).
The mechanism behind this increased vulnerability is unclear, but there may be an interplay between genetic susceptibility of an individual and how they respond to light, Dr. Voigt-Zuwala suggested.
“APOE4 is the genotype most highly associated with Alzheimer’s disease risk, and maybe the people who have that genotype are just more sensitive to the effects of light exposure at night, more sensitive to circadian rhythm disruption,” she said.
The authors noted that additional research is needed but suggested that light pollution may also influence Alzheimer’s disease by decreasing levels of brain-derived neurotrophic factor and through sleep disruption, which can promote inflammation, activate microglia and astrocytes, and impair the clearance of amyloid beta.
Are We Measuring the Right Light?
“It’s a good article and it’s got a good message, but I have some caveats to that,” said George C. Brainard, PhD, director, Light Research Program, Thomas Jefferson University in Philadelphia, Pennsylvania, and a pioneer in the study of how light affects biology including breast cancer in night-shift workers.
The biggest caveat, and one acknowledged by the authors, is that the study didn’t measure indoor light exposure and relied instead on satellite imaging.
“They’re very striking images, but they may not be particularly relevant. And here’s why: People don’t live outdoors all night,” Dr. Brainard said.
Instead, people spend much of their time at night indoors where they’re exposed to lighting in the home and from smartphones, laptops, and television screens.
“It doesn’t invalidate their work. It’s an important advancement, an important observation,” Dr. Brainard said. “But the important thing really is to find out what is the population exposed to that triggers this response, and it’s probably indoor lighting related to the amount and physical characteristics of indoor lighting. It doesn’t mean outdoor lighting can’t play a role. It certainly can.”
Reached for comment, Erik Musiek, MD, PhD, a professor of neurology whose lab at Washington University School of Medicine in St. Louis, Missouri, has extensively studied circadian clock disruption and Alzheimer’s disease pathology in the brain, said the study provides a 10,000-foot view of the issue.
For example, the study was not designed to detect whether people living in high light pollution areas are actually experiencing more outdoor light at night, or whether risk factors such as air pollution and low socioeconomic status correlate with these areas.
“Most of what we worry about is do people have lights on in the house, do they have their TV on, their screens up to their face late at night? This can’t tell us about that,” Dr. Musiek said. “But on the other hand, this kind of light exposure is something that public policy can affect.”
“It’s hard to control people’s personal habits nor should we probably, but we can control what types of bulbs you put into streetlights, how bright they are, and where you put lighting in a public place,” he added. “So I do think there’s value there.”
At least 19 states, the District of Columbia, and Puerto Rico have laws in place to reduce light pollution, with the majority doing so to promote energy conservation, public safety, aesthetic interests, or astronomical research, according to the National Conference of State Legislatures.
To respond to some of the limitations in this study, Dr. Voigt-Zuwala is writing a grant application for a new project to look at both indoor and outdoor light exposure on an individual level.
“This is what I’ve been wanting to study for a long time, and this study is just sort of the stepping stone, the proof of concept that this is something we need to be investigating,” she said.
Dr. Voigt-Zuwala reported RO1 and R24 grants from the National Institutes of Health (NIH), one coauthor reported an NIH R24 grant; another reported having no conflicts of interest. Dr. Brainard reported having no relevant conflicts of interest. Dr. Musiek reported research funding from Eisai Pharmaceuticals.
A version of this article first appeared on Medscape.com.
“It’s hard to control people’s personal habits nor should we probably, but we can control what types of bulbs you put into streetlights, how bright they are, and where you put lighting in a public place,” he added. “So I do think there’s value there.”
At least 19 states, the District of Columbia, and Puerto Rico have laws in place to reduce light pollution, with the majority doing so to promote energy conservation, public safety, aesthetic interests, or astronomical research, according to the National Conference of State Legislatures.
To respond to some of the limitations in this study, Dr. Voigt-Zuwala is writing a grant application for a new project to look at both indoor and outdoor light exposure on an individual level.
“This is what I’ve been wanting to study for a long time, and this study is just sort of the stepping stone, the proof of concept that this is something we need to be investigating,” she said.
Dr. Voigt-Zuwala reported RO1 and R24 grants from the National Institutes of Health (NIH), one coauthor reported an NIH R24 grant; another reported having no conflicts of interest. Dr. Brainard reported having no relevant conflicts of interest. Dr. Musiek reported research funding from Eisai Pharmaceuticals.
A version of this article first appeared on Medscape.com.
Higher levels of outdoor light at night may be associated with a greater prevalence of Alzheimer’s disease, a new national study suggested.
Analyses of state and county light pollution data and Medicare claims showed that areas with higher average nighttime light intensity had a greater prevalence of Alzheimer’s disease.
Among people aged 65 years or older, Alzheimer’s disease prevalence was more strongly associated with nightly light pollution exposure than with alcohol misuse, chronic kidney disease, depression, or obesity.
In those younger than 65 years, greater nighttime light intensity had a stronger association with Alzheimer’s disease prevalence than any other risk factor included in the study.
“The results are pretty striking when you do these comparisons and it’s true for people of all ages,” said Robin Voigt-Zuwala, PhD, lead author and director, Circadian Rhythm Research Laboratory, Rush University, Chicago, Illinois.
The study was published online in Frontiers in Neuroscience.
Shining a Light
Exposure to artificial outdoor light at night has been associated with adverse health effects such as sleep disruption, obesity, atherosclerosis, and cancer, but this is the first study to look specifically at Alzheimer’s disease, investigators noted.
Two recent studies reported higher risks for mild cognitive impairment among Chinese veterans and late-onset dementia among Italian residents living in areas with brighter outdoor light at night.
For this study, Dr. Voigt-Zuwala and colleagues examined the relationship between Alzheimer’s disease prevalence and average nighttime light intensity in the lower 48 states using data from Medicare Part A and B, the Centers for Disease Control and Prevention, and NASA satellite–acquired radiance data.
The data were averaged for the years 2012-2018, and the states were divided into five groups based on average nighttime light intensity.
The darkest states were Montana, Wyoming, South Dakota, Idaho, Maine, New Mexico, Vermont, Oregon, Utah, and Nevada. The brightest states were Indiana, Illinois, Florida, Ohio, Massachusetts, Connecticut, Maryland, Delaware, Rhode Island, and New Jersey.
Analysis of variance revealed a significant difference in Alzheimer’s disease prevalence between state groups (P < .0001). Multiple comparisons testing also showed that states with the lowest average nighttime light had significantly different Alzheimer’s disease prevalence than those with higher intensity.
The same positive relationship was observed when each year was assessed individually and at the county level, using data from 45 counties and the District of Columbia.
Strong Association
The investigators also found that state average nighttime light intensity was significantly associated with Alzheimer’s disease prevalence (P = .006). This effect was seen across all ages, sexes, and races except Asian and Pacific Islander individuals, a finding possibly related to limited statistical power, the authors said.
When known or proposed risk factors for Alzheimer’s disease were added to the model, atrial fibrillation, diabetes, hyperlipidemia, hypertension, and stroke had a stronger association with Alzheimer’s disease than average nighttime light intensity.
Nighttime light intensity, however, was more strongly associated with Alzheimer’s disease prevalence than alcohol abuse, chronic kidney disease, depression, heart failure, and obesity.
Moreover, in people younger than 65 years, nighttime light pollution had a stronger association with Alzheimer’s disease prevalence than all other risk factors (P = .007).
The mechanism behind this increased vulnerability is unclear, but there may be an interplay between genetic susceptibility of an individual and how they respond to light, Dr. Voigt-Zuwala suggested.
“APOE4 is the genotype most highly associated with Alzheimer’s disease risk, and maybe the people who have that genotype are just more sensitive to the effects of light exposure at night, more sensitive to circadian rhythm disruption,” she said.
The authors noted that additional research is needed but suggested light pollution may also influence Alzheimer’s disease through sleep disruption, which can promote inflammation, activate microglia and astrocytes, and impair the clearance of amyloid beta, as well as by decreasing levels of brain-derived neurotrophic factor.
Are We Measuring the Right Light?
“It’s a good article and it’s got a good message, but I have some caveats to that,” said George C. Brainard, PhD, director, Light Research Program, Thomas Jefferson University in Philadelphia, Pennsylvania, and a pioneer in the study of how light affects biology including breast cancer in night-shift workers.
The biggest caveat, and one acknowledged by the authors, is that the study didn’t measure indoor light exposure and relied instead on satellite imaging.
“They’re very striking images, but they may not be particularly relevant. And here’s why: People don’t live outdoors all night,” Dr. Brainard said.
Instead, people spend much of their time at night indoors where they’re exposed to lighting in the home and from smartphones, laptops, and television screens.
“It doesn’t invalidate their work. It’s an important advancement, an important observation,” Dr. Brainard said. “But the important thing really is to find out what is the population exposed to that triggers this response, and it’s probably indoor lighting related to the amount and physical characteristics of indoor lighting. It doesn’t mean outdoor lighting can’t play a role. It certainly can.”
Reached for comment, Erik Musiek, MD, PhD, a professor of neurology whose lab at Washington University School of Medicine in St. Louis, Missouri, has extensively studied circadian clock disruption and Alzheimer’s disease pathology in the brain, said the study provides a 10,000-foot view of the issue.
For example, the study was not designed to detect whether people living in high light pollution areas are actually experiencing more outdoor light at night and if risk factors such as air pollution and low socioeconomic status may correlate with these areas.
“Most of what we worry about is do people have lights on in the house, do they have their TV on, their screens up to their face late at night? This can’t tell us about that,” Dr. Musiek said. “But on the other hand, this kind of light exposure is something that public policy can affect.”
“It’s hard to control people’s personal habits nor should we probably, but we can control what types of bulbs you put into streetlights, how bright they are, and where you put lighting in a public place,” he added. “So I do think there’s value there.”
At least 19 states, the District of Columbia, and Puerto Rico have laws in place to reduce light pollution, with the majority doing so to promote energy conservation, public safety, aesthetic interests, or astronomical research, according to the National Conference of State Legislatures.
To respond to some of the limitations in this study, Dr. Voigt-Zuwala is writing a grant application for a new project to look at both indoor and outdoor light exposure on an individual level.
“This is what I’ve been wanting to study for a long time, and this study is just sort of the stepping stone, the proof of concept that this is something we need to be investigating,” she said.
Dr. Voigt-Zuwala reported R01 and R24 grants from the National Institutes of Health (NIH); one coauthor reported an NIH R24 grant; another reported having no conflicts of interest. Dr. Brainard reported having no relevant conflicts of interest. Dr. Musiek reported research funding from Eisai Pharmaceuticals.
A version of this article first appeared on Medscape.com.
FROM FRONTIERS IN NEUROSCIENCE
High Breast Cancer Risk With Menopausal Hormone Therapy & Strong Family History
TOPLINE:
Women with a strong family history of breast cancer who use menopausal hormone therapy (MHT) have a striking cumulative risk of developing breast cancer (age, 50-80 years) of 22.4%, according to a new modeling study of UK women.
METHODOLOGY:
This was a modeling study integrating two datasets of UK women: the BOADICEA dataset of age-specific breast cancer risk by family history and data from the Collaborative Group on Hormonal Factors in Breast Cancer on the relative risk for breast cancer with different types and durations of MHT.
Four breast cancer family history profiles were modeled:
- “Average” family history comprises no known affected family members;
- “Modest” family history comprises a single first-degree relative who developed breast cancer at the age of 60 years;
- “Intermediate” family history comprises a single first-degree relative who developed breast cancer at the age of 40 years;
- “Strong” family history comprises two first-degree relatives who developed breast cancer at the age of 50 years.
TAKEAWAY:
- The lowest risk category: “Average” family history with no MHT use has a cumulative breast cancer risk (age, 50-80 years) of 9.8% and a risk of dying from breast cancer of 1.7%. These risks rise with 5 years’ exposure to MHT (age, 50-55 years) to 11.0% and 1.8%, respectively.
- The highest risk category: “Strong” family history with no MHT use has a cumulative breast cancer risk (age, 50-80 years) of 19.6% and a risk of dying from breast cancer of 3.2%. These risks rise with 5 years’ exposure to MHT (age, 50-55 years) to 22.4% and 3.5%, respectively.
IN PRACTICE:
The authors concluded that, “These integrated data will enable more accurate estimates of absolute and attributable risk associated with MHT exposure for women with a family history of breast cancer, informing shared decision-making.”
SOURCE:
The lead author is Catherine Huntley of the Institute of Cancer Research, London, England. The study appeared in the British Journal of General Practice.
LIMITATIONS:
Limitations included the modeling approach itself, which did not directly measure outcomes in individuals with these combined risk profiles.
DISCLOSURES:
The study was funded by several sources including Cancer Research UK. The authors reported no conflicts of interest.
A version of this article first appeared on Medscape.com.
Breast Cancer Hormone Therapy May Protect Against Dementia
TOPLINE:
Hormone-modulating therapy for breast cancer was associated with a reduced risk for Alzheimer’s disease and related dementias in women aged 65 years or older, with the greatest benefit seen in younger Black women.
METHODOLOGY:
- Hormone-modulating therapy is widely used to treat hormone receptor–positive breast cancer, but the cognitive effects of the treatment, including a potential link to dementia, remain unclear.
- To investigate, researchers used the SEER-Medicare linked database to identify women aged 65 years or older with breast cancer who did and did not receive hormone-modulating therapy within 3 years following their diagnosis.
- The researchers excluded women with preexisting Alzheimer’s disease/dementia diagnoses or those who had received hormone-modulating therapy before their breast cancer diagnosis.
- Analyses were adjusted for demographic, sociocultural, and clinical variables, and subgroup analyses evaluated the impact of age, race, and type of hormone-modulating therapy on Alzheimer’s disease/dementia risk.
TAKEAWAY:
- Among the 18,808 women included in the analysis, 66% received hormone-modulating therapy and 34% did not. During the mean follow-up of 12 years, 24% of hormone-modulating therapy users and 28% of nonusers developed Alzheimer’s disease/dementia.
- Overall, hormone-modulating therapy use (vs nonuse) was associated with a significant 7% lower risk for Alzheimer’s disease/dementia (hazard ratio [HR], 0.93; P = .005), with notable age and racial differences.
- Hormone-modulating therapy use was associated with a 24% lower risk for Alzheimer’s disease/dementia in Black women aged 65-74 years (HR, 0.76), but that protective effect decreased to 19% in Black women aged 75 years or older (HR, 0.81). White women aged 65-74 years who received hormone-modulating therapy (vs those who did not) had an 11% lower risk for Alzheimer’s disease/dementia (HR, 0.89), but the association disappeared among those aged 75 years or older (HR, 0.96; 95% CI, 0.90-1.02). Other races demonstrated no significant association between hormone-modulating therapy use and Alzheimer’s disease/dementia.
- Overall, the use of an aromatase inhibitor or a selective estrogen receptor modulator was associated with a significantly lower risk for Alzheimer’s disease/dementia (HR, 0.93 and HR, 0.89, respectively).
IN PRACTICE:
Overall, the retrospective study found that “hormone therapy was associated with protection against [Alzheimer’s/dementia] in women aged 65 years or older with newly diagnosed breast cancer,” with the decrease in risk relatively greater for Black women and women younger than 75 years, the authors concluded.
“The results highlight the critical need for personalized breast cancer treatment plans that are tailored to the individual characteristics of each patient, particularly given the significantly higher likelihood (two to three times more) of Black women developing [Alzheimer’s/dementia], compared with their White counterparts,” the researchers added.
SOURCE:
The study, with first author Chao Cai, PhD, Department of Clinical Pharmacy and Outcomes Sciences, University of South Carolina, Columbia, was published online on July 16 in JAMA Network Open.
LIMITATIONS:
The study included only women aged 65 years or older, limiting generalizability to younger women. The dataset lacked genetic information and laboratory data related to dementia. The duration of hormone-modulating therapy use beyond 3 years and specific formulations were not assessed. Potential confounders such as variations in chemotherapy, radiation, and surgery were not fully addressed.
DISCLOSURES:
Support for the study was provided by the National Institutes of Health; Carolina Center on Alzheimer’s Disease and Minority Research pilot project; and the Dean’s Faculty Advancement Fund, University of Pittsburgh, Pennsylvania. The authors reported no relevant disclosures.
A version of this article first appeared on Medscape.com.