Should All Patients With Early Breast Cancer Receive Adjuvant Radiotherapy?
Adjuvant radiotherapy after breast-conserving surgery reduces local recurrence for about a decade but does not improve long-term overall survival in early breast cancer, based on a 30-year follow-up of the Scottish Breast Conservation Trial.
These findings suggest that patients with biology predicting late relapse may receive little benefit from adjuvant radiotherapy, lead author Linda J. Williams, PhD, of the University of Edinburgh in Scotland, and colleagues reported.
“During the past 30 years, several randomized controlled trials have investigated the role of postoperative radiotherapy after breast-conserving surgery for early breast cancer,” the investigators wrote in The Lancet Oncology. “These trials showed that radiotherapy reduces the risk of local recurrence but were underpowered individually to detect a difference in overall survival.”
How Did the Present Study Increase Our Understanding of the Benefits of Adjuvant Radiotherapy in Early Breast Cancer?
The present analysis included data from a trial that began in 1985, when 589 patients with early breast cancer (tumors ≤ 4 cm [T1 or T2 and N0 or N1]) were randomized to receive either high-dose or no radiotherapy, with final cohorts of 291 and 294 patients, respectively. Radiotherapy was given at 50 Gy in 20-25 fractions, either locally or locoregionally.
Estrogen receptor (ER)–positive patients (≥ 20 fmol/mg protein) received 5 years of daily oral tamoxifen. ER-poor patients (< 20 fmol/mg protein) received a chemotherapy combination of cyclophosphamide, methotrexate, and fluorouracil on a 21-day cycle for eight cycles.
Considering all data across a median follow-up of 17.5 years, adjuvant radiotherapy appeared to offer benefit, as it was associated with significantly lower ipsilateral breast tumor recurrence (16% vs 36%; hazard ratio [HR], 0.39; P < .0001).
But that tells only part of the story.
The positive impact of radiotherapy persisted for 1 decade (HR, 0.24; P < .0001), but risk beyond this point was no different between groups (HR, 0.98; P = .95).
“[The] benefit of radiotherapy was time dependent,” the investigators noted.
What’s more, median overall survival was no different between those who received radiotherapy and those who did not (18.7 vs 19.2 years; HR, 1.08; log-rank P = .43), and “reassuringly,” omitting radiotherapy did not increase the rate of distant metastasis.
How Might These Findings Influence Treatment Planning for Patients With Early Breast Cancer?
“The results can help clinicians to advise patients better about their choice to have radiotherapy or not if they better understand what benefits it does and does not bring,” the investigators wrote. “These results might provide clues perhaps to the biology of radiotherapy benefit, given that it does not prevent late recurrences, suggesting that patients whose biology predicts a late relapse only might not gain a benefit from radiotherapy.”
Gary M. Freedman, MD, chief of Women’s Health Service, Radiation Oncology, at Penn Medicine, Philadelphia, offered a different perspective.
“The study lumps together a local recurrence of breast cancer — that is relapse of the cancer years after treatment with lumpectomy and radiation — with the development of an entirely new breast cancer in the same breast,” Dr. Freedman said in a written comment. “When something comes back between years 0-5 and 0-8, we usually think of it as a true local recurrence arbitrarily, but beyond that they are new cancers.”
He went on to emphasize the clinical importance of reducing local recurrence within the first decade, noting that “this leads to much less morbidity and better quality of life for the patients.”
Dr. Freedman also shared his perspective on the survival data.
“Radiation did reduce breast cancer mortality very significantly — death from breast cancers went down from 46% to 37%,” he wrote (P = .054). “This is on the same level as chemo or hormone therapy. The study was not powered to detect significant differences in survival by radiation, but that has been shown with other meta-analyses.”
Are Findings From a Trial Started 30 Years Ago Still Relevant Today?
“Clearly the treatment of early breast cancer has advanced since the 1980s when the Scottish Conservation trial was launched,” study coauthor Ian Kunkler, MB, FRCR, of the University of Edinburgh, said in a written comment. “There is more breast screening, attention to clearing surgical margins of residual disease, more effective and longer periods of adjuvant hormonal therapy, reduced radiotherapy toxicity from more precise delivery. However, most anticancer treatments lose their effectiveness over time.”
He suggested that more trials are needed to confirm the present findings and reiterated that the lack of long-term recurrence benefit is most relevant for patients with disease features that predict late relapse, who “seem to gain little from adjuvant radiotherapy given as part of primary treatment.”
Dr. Kunkler noted that the observed benefit in the first decade supports the continued use of radiotherapy alongside anticancer drug treatment.
When asked the same question, Dr. Freedman emphasized the differences in treatment today vs the 1980s.
“The results of modern multidisciplinary cancer care are much, much better than these 30-year results,” Dr. Freedman said. “The risk for local recurrence in the breast after radiation is now about 2%-3% at 10 years in most studies.”
He also noted that modern radiotherapy techniques have “significantly lowered dose and risks to heart and lung,” compared with techniques used 30 years ago.
“A take-home point for the study is after breast conservation, whether or not you have radiation, you have to continue long-term screening mammograms for new breast cancers that may occur even decades later,” Dr. Freedman concluded.
How Might These Findings Impact Future Research Design and Funding?
“The findings should encourage trial funders to consider funding long-term follow-up beyond 10 years to assess benefits and risks of anticancer therapies,” Dr. Kunkler said. “The importance of long-term follow-up cannot be understated.”
This study was funded by Breast Cancer Institute (part of Edinburgh and Lothians Health Foundation), PFS Genomics (now part of Exact Sciences), the University of Edinburgh, and NHS Lothian. The investigators reported no conflicts of interest.
A version of this article first appeared on Medscape.com.
FROM THE LANCET ONCOLOGY
Promising Results With CBT App in Young Adults With Anxiety
TOPLINE:
A self-guided cognitive behavioral therapy (CBT) app significantly reduced anxiety in young adults with anxiety disorders after 3 weeks, with continued improvement through week 12, a new randomized clinical trial shows.
METHODOLOGY:
- The study included 59 adults aged 18-25 years (mean age, 23 years; 78% women) with anxiety disorders (56% with generalized anxiety disorder; 41% with social anxiety disorder).
- Participants received a 6-week CBT program with a self-guided mobile application called Maya and were assigned to one of three incentive strategies to encourage engagement: loss-framed (lose points for incomplete sessions), gain-framed (earn points for completed sessions), or gain-social support (gain points with added social support from a designated person).
- The primary end point was change in anxiety at week 6, measured with the Hamilton Anxiety Rating Scale.
- The researchers also evaluated change in anxiety at 3 and 12 weeks, change in anxiety sensitivity, social anxiety symptoms, and engagement and satisfaction with the app.
TAKEAWAY:
- Anxiety decreased significantly from baseline at weeks 3, 6, and 12 (mean differences, −3.20, −5.64, and −5.67, respectively; all P < .001), with similar reductions in anxiety across the three incentive conditions.
- Use of the CBT app was also associated with significant reductions in anxiety sensitivity and social anxiety symptoms over time, with moderate to large effect sizes.
- A total of 98% of participants completed the 6-week assessment and 93% the 12-week follow-up. On average, the participants completed 10.8 of 12 sessions and 64% completed all sessions.
- The participants reported high satisfaction with the app across all time points, with no significant differences based on time or incentive condition.
IN PRACTICE:
“We hear a lot about the negative impact of technology use on mental health in this age group,” senior study author Faith M. Gunning, PhD, said in a press release. “But the ubiquitous use of cell phones for information may provide a way of addressing anxiety for some people who, even if they have access to mental health providers, may not go. If the app helps reduce symptoms, they may then be able to take the next step of seeing a mental health professional when needed.”
SOURCE:
The study was led by Jennifer N. Bress, PhD, Department of Psychiatry, Weill Cornell Medicine, New York City. It was published online in JAMA Network Open.
LIMITATIONS:
This study lacked a control group, and the unbalanced allocation of participants to the three incentive groups due to the COVID-19 pandemic may have influenced the results. The study sample, which predominantly consisted of female and college-educated participants, may not have accurately represented the broader population of young adults with anxiety.
DISCLOSURES:
This study was funded by the NewYork-Presbyterian Center for Youth Mental Health, the Khoury Foundation, the Paul and Jenna Segal Family Foundation, the Saks Fifth Avenue Foundation, Mary and Jonathan Rather, Weill Cornell Medicine, the Pritzker Neuropsychiatric Disorders Research Consortium, and the National Institutes of Health. Some authors reported obtaining grants, receiving personal fees, serving on speaker’s bureaus, and having other ties with multiple pharmaceutical companies and institutions. Full disclosures are available in the original article.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication.
A version of this article first appeared on Medscape.com.
Nighttime Outdoor Light Pollution Linked to Alzheimer’s Risk
Greater exposure to outdoor light at night may be associated with a higher prevalence of Alzheimer’s disease, a new national study suggested.
Analyses of state and county light pollution data and Medicare claims showed that areas with higher average nighttime light intensity had a greater prevalence of Alzheimer’s disease.
Among people aged 65 years or older, Alzheimer’s disease prevalence was more strongly associated with nightly light pollution exposure than with alcohol misuse, chronic kidney disease, depression, or obesity.
In those younger than 65 years, greater nighttime light intensity had a stronger association with Alzheimer’s disease prevalence than any other risk factor included in the study.
“The results are pretty striking when you do these comparisons and it’s true for people of all ages,” said Robin Voigt-Zuwala, PhD, lead author and director, Circadian Rhythm Research Laboratory, Rush University, Chicago, Illinois.
The study was published online in Frontiers in Neuroscience.
Shining a Light
Exposure to artificial outdoor light at night has been associated with adverse health effects such as sleep disruption, obesity, atherosclerosis, and cancer, but this is the first study to look specifically at Alzheimer’s disease, investigators noted.
Two recent studies reported higher risks for mild cognitive impairment among Chinese veterans and late-onset dementia among Italian residents living in areas with brighter outdoor light at night.
For this study, Dr. Voigt-Zuwala and colleagues examined the relationship between Alzheimer’s disease prevalence and average nighttime light intensity in the lower 48 states using data from Medicare Part A and B, the Centers for Disease Control and Prevention, and NASA satellite–acquired radiance data.
The data were averaged for the years 2012-2018, and states were divided into five groups based on average nighttime light intensity.
The darkest states were Montana, Wyoming, South Dakota, Idaho, Maine, New Mexico, Vermont, Oregon, Utah, and Nevada. The brightest states were Indiana, Illinois, Florida, Ohio, Massachusetts, Connecticut, Maryland, Delaware, Rhode Island, and New Jersey.
Analysis of variance revealed a significant difference in Alzheimer’s disease prevalence between state groups (P < .0001). Multiple comparisons testing also showed that states with the lowest average nighttime light had significantly different Alzheimer’s disease prevalence from those with higher intensity.
The same positive relationship was observed when each year was assessed individually and at the county level, using data from 45 counties and the District of Columbia.
Strong Association
The investigators also found that state average nighttime light intensity was significantly associated with Alzheimer’s disease prevalence (P = .006). This effect was seen across all ages, sexes, and races except Asian/Pacific Islander individuals, possibly because of limited statistical power, the authors said.
When known or proposed risk factors for Alzheimer’s disease were added to the model, atrial fibrillation, diabetes, hyperlipidemia, hypertension, and stroke had a stronger association with Alzheimer’s disease than average nighttime light intensity.
Nighttime light intensity, however, was more strongly associated with Alzheimer’s disease prevalence than alcohol abuse, chronic kidney disease, depression, heart failure, and obesity.
Moreover, in people younger than 65 years, nighttime light pollution had a stronger association with Alzheimer’s disease prevalence than all other risk factors (P = .007).
The mechanism behind this increased vulnerability is unclear, but there may be an interplay between genetic susceptibility of an individual and how they respond to light, Dr. Voigt-Zuwala suggested.
“APOE4 is the genotype most highly associated with Alzheimer’s disease risk, and maybe the people who have that genotype are just more sensitive to the effects of light exposure at night, more sensitive to circadian rhythm disruption,” she said.
The authors noted that additional research is needed but suggested that light pollution may also influence Alzheimer’s disease through sleep disruption, which can promote inflammation, activate microglia and astrocytes, and impair the clearance of amyloid beta, as well as by decreasing levels of brain-derived neurotrophic factor.
Are We Measuring the Right Light?
“It’s a good article and it’s got a good message, but I have some caveats to that,” said George C. Brainard, PhD, director, Light Research Program, Thomas Jefferson University in Philadelphia, Pennsylvania, and a pioneer in the study of how light affects biology including breast cancer in night-shift workers.
The biggest caveat, and one acknowledged by the authors, is that the study didn’t measure indoor light exposure and relied instead on satellite imaging.
“They’re very striking images, but they may not be particularly relevant. And here’s why: People don’t live outdoors all night,” Dr. Brainard said.
Instead, people spend much of their time at night indoors where they’re exposed to lighting in the home and from smartphones, laptops, and television screens.
“It doesn’t invalidate their work. It’s an important advancement, an important observation,” Dr. Brainard said. “But the important thing really is to find out what is the population exposed to that triggers this response, and it’s probably indoor lighting related to the amount and physical characteristics of indoor lighting. It doesn’t mean outdoor lighting can’t play a role. It certainly can.”
Reached for comment, Erik Musiek, MD, PhD, a professor of neurology whose lab at Washington University School of Medicine in St. Louis, Missouri, has extensively studied circadian clock disruption and Alzheimer’s disease pathology in the brain, said the study provides a 10,000-foot view of the issue.
For example, the study was not designed to detect whether people living in high light pollution areas are actually experiencing more outdoor light at night and if risk factors such as air pollution and low socioeconomic status may correlate with these areas.
“Most of what we worry about is do people have lights on in the house, do they have their TV on, their screens up to their face late at night? This can’t tell us about that,” Dr. Musiek said. “But on the other hand, this kind of light exposure is something that public policy can affect.”
“It’s hard to control people’s personal habits nor should we probably, but we can control what types of bulbs you put into streetlights, how bright they are, and where you put lighting in a public place,” he added. “So I do think there’s value there.”
At least 19 states, the District of Columbia, and Puerto Rico have laws in place to reduce light pollution, with the majority doing so to promote energy conservation, public safety, aesthetic interests, or astronomical research, according to the National Conference of State Legislatures.
To respond to some of the limitations in this study, Dr. Voigt-Zuwala is writing a grant application for a new project to look at both indoor and outdoor light exposure on an individual level.
“This is what I’ve been wanting to study for a long time, and this study is just sort of the stepping stone, the proof of concept that this is something we need to be investigating,” she said.
Dr. Voigt-Zuwala reported R01 and R24 grants from the National Institutes of Health (NIH); one coauthor reported an NIH R24 grant, and another reported having no conflicts of interest. Dr. Brainard reported having no relevant conflicts of interest. Dr. Musiek reported research funding from Eisai Pharmaceuticals.
A version of this article first appeared on Medscape.com.
FROM FRONTIERS IN NEUROSCIENCE
High Breast Cancer Risk With Menopausal Hormone Therapy & Strong Family History
TOPLINE:
Women with a strong family history of breast cancer who use menopausal hormone therapy (MHT) have a striking cumulative risk of developing breast cancer (age, 50-80 years) of 22.4%, according to a new modelling study of UK women.
METHODOLOGY:
This was a modeling study integrating two datasets of UK women: the BOADICEA dataset of age-specific breast cancer risk by family history and data from the Collaborative Group on Hormonal Factors in Breast Cancer covering the relative risk for breast cancer with different types and durations of MHT.
Four breast cancer family history profiles were modeled:
- “Average” family history of breast cancer comprises unknown affected family members.
- “Modest” family history comprises a single first-degree relative who developed breast cancer at the age of 60 years.
- “Intermediate” family history comprises a single first-degree relative who developed breast cancer at the age of 40 years.
- “Strong” family history comprises two first-degree relatives who developed breast cancer at the age of 50 years.
TAKEAWAY:
- The lowest risk category: “Average” family history with no MHT use has a cumulative breast cancer risk (age, 50-80 years) of 9.8% and a risk of dying from breast cancer of 1.7%. These risks rise with 5 years’ exposure to MHT (age, 50-55 years) to 11.0% and 1.8%, respectively.
- The highest risk category: “Strong” family history with no MHT use has a cumulative breast cancer risk (age, 50-80 years) of 19.6% and a risk of dying from breast cancer of 3.2%. These risks rise with 5 years’ exposure to MHT (age, 50-55 years) to 22.4% and 3.5%, respectively (the absolute differences implied by these figures are worked through in the brief sketch after this list).
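For context, the following is a minimal back-of-the-envelope sketch (not part of the study) that converts the cumulative risks quoted in the bullets above into absolute risk differences; it assumes nothing beyond the reported percentages, and the “one extra case per N women” framing is an illustrative simplification.

```python
# Hypothetical illustration using only the cumulative risks reported above
# (ages 50-80 years, with vs without 5 years of MHT from age 50-55).

reported = {
    # family history profile: (risk without MHT %, risk with MHT %)
    "average": (9.8, 11.0),
    "strong": (19.6, 22.4),
}

for profile, (no_mht, with_mht) in reported.items():
    abs_increase = with_mht - no_mht  # absolute difference in percentage points
    # Approximate number of women exposed to MHT for one additional
    # breast cancer between ages 50 and 80, under these modeled figures.
    women_per_extra_case = round(100 / abs_increase)
    print(f"{profile}: +{abs_increase:.1f} percentage points "
          f"(~1 extra case per {women_per_extra_case} women)")
```

Under these assumptions, the modeled absolute increase is roughly 1.2 percentage points for an average family history and 2.8 percentage points for a strong family history.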
IN PRACTICE:
The authors concluded that, “These integrated data will enable more accurate estimates of absolute and attributable risk associated with MHT exposure for women with a family history of breast cancer, informing shared decision-making.”
SOURCE:
The lead author is Catherine Huntley of the Institute of Cancer Research, London, England. The study appeared in the British Journal of General Practice.
LIMITATIONS:
Limitations included the modeling design, which did not directly measure breast cancer outcomes in individuals with both a family history and MHT exposure.
DISCLOSURES:
The study was funded by several sources including Cancer Research UK. The authors reported no conflicts of interest.
A version of this article first appeared on Medscape.com.
Breast Cancer Hormone Therapy May Protect Against Dementia
TOPLINE:
Hormone-modulating therapy for breast cancer was associated with a lower risk for Alzheimer’s disease/dementia in women aged 65 years or older, with the greatest benefit seen in younger Black women.
METHODOLOGY:
- Hormone-modulating therapy is widely used to treat hormone receptor–positive breast cancer, but the cognitive effects of the treatment, including a potential link to dementia, remain unclear.
- To investigate, researchers used the SEER-Medicare linked database to identify women aged 65 years or older with breast cancer who did and did not receive hormone-modulating therapy within 3 years following their diagnosis.
- The researchers excluded women with preexisting Alzheimer’s disease/dementia diagnoses or those who had received hormone-modulating therapy before their breast cancer diagnosis.
- Analyses were adjusted for demographic, sociocultural, and clinical variables, and subgroup analyses evaluated the impact of age, race, and type of hormone-modulating therapy on Alzheimer’s disease/dementia risk.
TAKEAWAY:
- Among the 18,808 women included in the analysis, 66% received hormone-modulating therapy and 34% did not. During the mean follow-up of 12 years, 24% of hormone-modulating therapy users and 28% of nonusers developed Alzheimer’s disease/dementia.
- Overall, hormone-modulating therapy use (vs nonuse) was associated with a significant 7% lower risk for Alzheimer’s disease/dementia (hazard ratio [HR], 0.93; P = .005), with notable age and racial differences.
- Hormone-modulating therapy use was associated with a 24% lower risk for Alzheimer’s disease/dementia in Black women aged 65-74 years (HR, 0.76), but that protective effect decreased to 19% in Black women aged 75 years or older (HR, 0.81). White women aged 65-74 years who received hormone-modulating therapy (vs those who did not) had an 11% lower risk for Alzheimer’s disease/dementia (HR, 0.89), but the association disappeared among those aged 75 years or older (HR, 0.96; 95% CI, 0.90-1.02). Other races demonstrated no significant association between hormone-modulating therapy use and Alzheimer’s disease/dementia.
- Overall, the use of an aromatase inhibitor or a selective estrogen receptor modulator was associated with a significantly lower risk for Alzheimer’s disease/dementia (HR, 0.93 and HR, 0.89, respectively).
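The percentage reductions quoted above follow directly from the hazard ratios: an HR below 1 translates to a (1 - HR) × 100% lower hazard relative to nonusers. A minimal sketch using only the point estimates reported above (confidence intervals omitted where the article does not quote them):

```python
# Hazard ratios (point estimates) for Alzheimer's disease/dementia reported above.
hazard_ratios = {
    "hormone-modulating therapy overall": 0.93,
    "Black women aged 65-74 years": 0.76,
    "Black women aged 75 years or older": 0.81,
    "White women aged 65-74 years": 0.89,
    "aromatase inhibitors": 0.93,
    "selective estrogen receptor modulators": 0.89,
}

for group, hr in hazard_ratios.items():
    reduction = (1 - hr) * 100  # percent lower hazard vs nonusers
    print(f"{group}: HR {hr:.2f} -> {reduction:.0f}% lower risk")
```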
IN PRACTICE:
Overall, the retrospective study found that “hormone therapy was associated with protection against [Alzheimer’s/dementia] in women aged 65 years or older with newly diagnosed breast cancer,” with the decrease in risk relatively greater for Black women and women younger than 75 years, the authors concluded.
“The results highlight the critical need for personalized breast cancer treatment plans that are tailored to the individual characteristics of each patient, particularly given the significantly higher likelihood (two to three times more) of Black women developing [Alzheimer’s/dementia], compared with their White counterparts,” the researchers added.
SOURCE:
The study, with first author Chao Cai, PhD, Department of Clinical Pharmacy and Outcomes Sciences, University of South Carolina, Columbia, was published online on July 16 in JAMA Network Open.
LIMITATIONS:
The study included only women aged 65 years or older, limiting generalizability to younger women. The dataset lacked genetic information and laboratory data related to dementia. The duration of hormone-modulating therapy use beyond 3 years and specific formulations were not assessed. Potential confounders such as variations in chemotherapy, radiation, and surgery were not fully addressed.
DISCLOSURES:
Support for the study was provided by the National Institutes of Health; Carolina Center on Alzheimer’s Disease and Minority Research pilot project; and the Dean’s Faculty Advancement Fund, University of Pittsburgh, Pennsylvania. The authors reported no relevant disclosures.
A version of this article first appeared on Medscape.com.
False-Positive Mammography Results Linked to Reduced Rates of Future Screenings
TOPLINE:
Women who receive a false-positive screening mammography result are less likely to return for routine screening than those who receive a true-negative result.
METHODOLOGY:
- Researchers analyzed more than three million screening mammograms from more than one million women aged between 40 and 73 years at nearly 200 facilities in the Breast Cancer Surveillance Consortium between 2005 and 2017.
- Mammography results were classified as true negative or false positive; false-positive results were further categorized by the recommendation that followed: immediate additional imaging, short-interval follow-up, or biopsy.
- The primary outcome was the probability of returning for routine screening within 9-30 months after a false-positive or true-negative result, adjusted for race, ethnicity, age, and time since the last mammogram.
- Women with two screening mammograms within 5 years were also analyzed to evaluate the probability of returning for a third screening based on combinations of true-negative and false-positive results.
TAKEAWAY:
- Nearly 10% (95% CI, 9.1%-10.5%) of women who received screening mammograms got a false-positive result; across all women screened, 5.8% (95% CI, 5.5%-6.2%) were recommended immediate additional imaging, 2.7% (95% CI, 2.3%-3.2%) short-interval follow-up, and 1.3% (95% CI, 1.1%-1.4%) biopsy.
- Women were more likely to return for screening after a true-negative result (76.9%) than after a false-positive result that led to additional imaging (72.4%), short-interval follow-up (54.7%), or biopsy (61.0%).
- Asian and Hispanic/Latinx women who received a false-positive result were much less likely to return for screening than women in the same groups who received a true-negative result, particularly when the false-positive result led to a recommendation for short-interval follow-up (decreases of 20-25 percentage points) or biopsy (decreases of 13-14 percentage points).
- For women who had two screening mammograms within 5 years, receiving a false-positive result on the second was linked to a lower likelihood of returning for a third screening, regardless of results for the first.
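To put the return rates above on a common footing, the gap after each type of false-positive recommendation can be expressed in percentage points below the 76.9% return rate seen after a true-negative result. A minimal sketch using the reported rates:

```python
# Probability of returning for routine screening, as reported above (percent).
return_after_true_negative = 76.9

return_after_false_positive = {
    "additional imaging": 72.4,
    "short-interval follow-up": 54.7,
    "biopsy": 61.0,
}

for recommendation, rate in return_after_false_positive.items():
    gap = return_after_true_negative - rate  # percentage points below true negatives
    print(f"{recommendation}: {rate:.1f}% return rate "
          f"({gap:.1f} percentage points lower)")
```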
IN PRACTICE:
“Physicians should educate their patients about the importance of continued screening after false-positive results, especially given the associated increased future risk for breast cancer,” study authors wrote.
SOURCE:
The study was led by Diana L. Miglioretti, PhD, of the Department of Public Health Sciences at the University of California, Davis, and published online on September 3 in Annals of Internal Medicine.
LIMITATIONS:
Women could have received care at facilities outside the consortium, which may have affected the accuracy of return rates. The study did not track a complete history of false-positive results, lacked information about how often physicians recommended screening, and did not account for other health conditions.
DISCLOSURES:
One coauthor reported receiving grants from the National Institutes of Health and the American Cancer Society, as well as consulting fees from the University of Florida, Gainesville.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication.
A version of this article first appeared on Medscape.com.
To Choose the Best First-line Drug for CML, Consider Efficacy and Cost
When it comes to selecting a cost-effective, first-line tyrosine kinase inhibitor (TKI) for the treatment of chronic myeloid leukemia (CML), consider the treatment goal.
For survival, generic imatinib remains the gold standard, Elias Jabbour, MD, said during a session at the annual meeting of the Society of Hematologic Oncology in Houston.
For treatment-free remission, generic dasatinib or another generic second-generation TKI is needed, but not yet available in the United States, so generic imatinib is the best current choice, said Dr. Jabbour, a professor of medicine in the Department of Leukemia at the University of Texas MD Anderson Cancer Center, Houston.
Prior to the availability of generic imatinib, that wasn’t the case, he noted, explaining that second-generation TKIs met the cost-efficacy criteria, but now — at about $35 per month or about $400 per year — imatinib is far less expensive than the approximately $250,000 per year that brand-name second- and third-generation TKIs can currently cost.
To have treatment value, any new TKI should cost no more than $40,000-$50,000 per quality-adjusted life-year gained, a measure defined by the quality and duration of life with a novel TKI compared with the existing standard of care, Dr. Jabbour said.
And to qualify as a frontline therapy for CML, any new TKI should show efficacy superior to second-generation TKIs, in addition to meeting the cost-effectiveness criteria.
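To illustrate how the $40,000-$50,000 per QALY benchmark works in practice, the sketch below computes a simple incremental cost-effectiveness ratio. The drug costs are the approximate annual figures quoted above; the treatment horizon and QALY gain are hypothetical placeholders, not figures from the presentation:

```python
# Incremental cost-effectiveness ratio (ICER) sketch.
# Drug costs are the approximate figures quoted above; the treatment horizon and
# QALY gain are hypothetical assumptions used only for illustration.
generic_imatinib_cost_per_year = 400     # about $35/month
brand_name_tki_cost_per_year = 250_000   # approximate brand-name 2G/3G TKI cost
treatment_years = 5                      # hypothetical horizon
incremental_qalys = 0.5                  # hypothetical QALY gain vs imatinib

incremental_cost = (brand_name_tki_cost_per_year
                    - generic_imatinib_cost_per_year) * treatment_years
icer = incremental_cost / incremental_qalys  # dollars per QALY gained

threshold = 50_000  # upper end of the $40,000-$50,000 per QALY benchmark
verdict = "meets" if icer <= threshold else "exceeds"
print(f"ICER: ${icer:,.0f} per QALY; {verdict} the ${threshold:,} benchmark")
```

With these placeholder inputs the ratio lands far above the benchmark, which illustrates why pricing dominates the value calculation at current brand-name costs.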
“It is hard to show survival benefit anymore, but we need to improve on the rate of durable deep molecular remission,” he said.
An equivalent or better long-term safety profile over at least 7-8 years is also needed.
Based on the current literature, none of the TKIs currently being evaluated has met that standard, although some trials are ongoing.
In a recent editorial, Dr. Jabbour and colleagues outlined treatment recommendations based on the currently available data. They suggested using lower-than-approved doses of TKIs in both frontline and later therapies to reduce toxicity, improve treatment compliance, and reduce costs.
They also suggested that the absence of an early molecular response might not warrant changing the TKI, especially when a second-generation TKI was used first line.
When treatment-free remission is not a therapeutic goal or is unlikely, changing the TKI to improve the depth of molecular response, which has been shown to improve the likelihood of treatment-free remission, could do more harm than good, they argued.
Instead, consider reducing the dose to manage reversible side effects, they suggested, noting that generic imatinib, and eventually generic dasatinib and possibly other generic second-generation TKIs, will likely offer 90% of patients with CML an effective, safe, and affordable treatment that normalizes life expectancy and leads to treatment-free remission in 30%-50% of patients over time.
Dr. Jabbour disclosed ties with AbbVie, Almoosa Specialist Hospital, Amgen, Ascentage Pharma, Biologix FZ, Hikma Pharmaceuticals, Kite, Takeda, and Terns.
A version of this article first appeared on Medscape.com.
FROM SOHO 2024
Do Clonal Hematopoiesis and Mosaic Chromosomal Alterations Increase Solid Tumor Risk?
Clonal hematopoiesis of indeterminate potential (CHIP) and mosaic chromosomal alterations (mCAs) are associated with an increased risk for breast cancer, and CHIP is associated with increased mortality in patients with colon cancer, according to the authors of new research.
These findings, drawn from almost 11,000 patients in the Women’s Health Initiative (WHI) study, add further evidence that CHIP and mCA drive solid tumor risk, alongside known associations with hematologic malignancies, reported lead author Pinkal Desai, MD, associate professor of medicine and clinical director of molecular aging at Englander Institute for Precision Medicine, Weill Cornell Medical College, New York City, and colleagues.
How This Study Differs From Others of Breast Cancer Risk Factors
“The independent effect of CHIP and mCA on risk and mortality from solid tumors has not been elucidated due to lack of detailed data on mortality outcomes and risk factors,” the investigators wrote in Cancer, although some previous studies have suggested a link.
In particular, the investigators highlighted a 2022 UK Biobank study, which reported an association between CHIP and lung cancer and a borderline association with breast cancer that did not quite reach statistical significance.
But the UK Biobank study was confined to a UK population, Dr. Desai noted in an interview, and the data were less detailed than those in the present investigation.
“In terms of risk, the part that was lacking in previous studies was a comprehensive assessment of risk factors that increase risk for all these cancers,” Dr. Desai said. “For example, for breast cancer, we had very detailed data on [participants’] Gail risk score, which is known to impact breast cancer risk. We also had mammogram data and colonoscopy data.”
In an accompanying editorial, Koichi Takahashi, MD, PhD, and Nehali Shah, BS, of The University of Texas MD Anderson Cancer Center, Houston, Texas, pointed out the same UK Biobank findings, then noted that CHIP has also been linked with worse overall survival in unselected cancer patients. Still, they wrote, “the impact of CH on cancer risk and mortality remains controversial due to conflicting data and context‐dependent effects,” necessitating studies like this one by Dr. Desai and colleagues.
How Was the Relationship Between CHIP, MCA, and Solid Tumor Risk Assessed?
To explore possible associations between CHIP, mCA, and solid tumors, the investigators analyzed whole genome sequencing data from 10,866 women in the WHI, a multi-study program that began in 1992 and involved 161,808 women in both observational and clinical trial cohorts.
In 2002, the first big data release from the WHI suggested that hormone replacement therapy (HRT) increased breast cancer risk, leading to widespread reduction in HRT use.
More recent reports continue to shape our understanding of these risks, suggesting differences across cancer types. For breast cancer, the WHI data suggested that HRT-associated risk was largely driven by formulations involving progesterone and estrogen, whereas estrogen-only formulations, now more common, are generally considered to present an acceptable risk profile for suitable patients.
The new study accounted for this potential HRT-associated risk, including by adjusting for patients who received HRT, type of HRT received, and duration of HRT received. According to Dr. Desai, this approach is commonly used when analyzing data from the WHI, nullifying concerns about the potentially deleterious effects of the hormones used in the study.
“Our question was not ‘does HRT cause cancer?’ ” Dr. Desai said in an interview. “But HRT can be linked to breast cancer risk and has a potential to be a confounder, and hence the above methodology.
“So I can say that the confounding/effect modification that HRT would have contributed to in the relationship between exposure (CH and mCA) and outcome (cancer) is well adjusted for as described above. This is standard in WHI analyses,” she continued.
“Every Women’s Health Initiative analysis that comes out — not just for our study — uses a standard method ... where you account for hormonal therapy,” Dr. Desai added, again noting that many other potential risk factors were considered, enabling a “detailed, robust” analysis.
Dr. Takahashi and Ms. Shah agreed. “A notable strength of this study is its adjustment for many confounding factors,” they wrote. “The cohort’s well‐annotated data on other known cancer risk factors allowed for a robust assessment of CH’s independent risk.”
How Do Findings Compare With Those of the UK Biobank Study?
CHIP was associated with a 30% increased risk for breast cancer (hazard ratio [HR], 1.30; 95% CI, 1.03-1.64; P = .02), strengthening the borderline association reported by the UK Biobank study.
In contrast with the UK Biobank study, CHIP was not associated with lung cancer risk, although this may have been caused by fewer cases of lung cancer and a lack of male patients, Dr. Desai suggested.
“The discrepancy between the studies lies in the risk of lung cancer, although the point estimate in the current study suggested a positive association,” wrote Dr. Takahashi and Ms. Shah.
As in the UK Biobank study, CHIP was not associated with increased risk of developing colorectal cancer.
Mortality analysis, however, which was not conducted in the UK Biobank study, offered a new insight: Patients with existing colorectal cancer and CHIP had a significantly higher mortality risk than those without CHIP. Before stage adjustment, risk for mortality among those with colorectal cancer and CHIP was fourfold higher than those without CHIP (HR, 3.99; 95% CI, 2.41-6.62; P < .001). After stage adjustment, CHIP was still associated with a twofold higher mortality risk (HR, 2.50; 95% CI, 1.32-4.72; P = .004).
The investigators’ first mCA analyses, which employed a cell fraction cutoff greater than 3%, were unfruitful. But raising the cell fraction threshold to 5% in an exploratory analysis showed that autosomal mCA was associated with a 39% increased risk for breast cancer (HR, 1.39; 95% CI, 1.06-1.83; P = .01). No such associations were found between mCA and colorectal or lung cancer, regardless of cell fraction threshold.
The original 3% cell fraction threshold was selected on the basis of previous studies reporting a link between mCA and hematologic malignancies at this cutoff, Dr. Desai said.
She and her colleagues said a higher 5% cutoff might be needed, as they suspected that the link between mCA and solid tumors may not be causal, requiring a higher mutation rate.
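In practice, the cell fraction cutoff is just a filter applied to mCA calls before association testing. A minimal sketch, assuming a hypothetical list of calls with cell fraction estimates (illustrative values, not study data):

```python
# Hypothetical mCA calls: (participant_id, estimated cell fraction).
# Illustrative values only, not data from the WHI analysis.
mca_calls = [
    ("P001", 0.021),
    ("P002", 0.034),
    ("P003", 0.052),
    ("P004", 0.110),
]

def carriers(calls, cell_fraction_cutoff):
    """Participants counted as mCA carriers at a given cell fraction cutoff."""
    return sorted({pid for pid, cf in calls if cf > cell_fraction_cutoff})

print("cutoff > 3%:", carriers(mca_calls, 0.03))  # primary analysis threshold
print("cutoff > 5%:", carriers(mca_calls, 0.05))  # exploratory threshold
```

Raising the cutoff retains only larger clones, which is why the exploratory 5% analysis tests a somewhat different, higher-burden carrier group than the primary 3% analysis.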
Why Do Results Differ Between These Types of Studies?
Dr. Takahashi and Ms. Shah suggested that one possible limitation of the new study, and an obstacle to comparing results with the UK Biobank study and others like it, goes beyond population heterogeneity; incongruent findings could also be explained by differences in whole genome sequencing (WGS) technique.
“Although WGS allows sensitive detection of mCA through broad genomic coverage, it is less effective at detecting CHIP with low variant allele frequency (VAF) due to its relatively shallow depth (30x),” they wrote. “Consequently, the prevalence of mCA (18.8%) was much higher than that of CHIP (8.3%) in this cohort, contrasting with other studies using deeper sequencing.” As a result, the present study may have underestimated CHIP prevalence because of shallow sequencing depth.
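The depth argument can be made concrete with a simple binomial model: at a given sequencing depth, the number of reads supporting a variant at a given VAF is roughly binomial, and callers typically require several supporting reads. The sketch below is an illustrative model, not the pipeline used in the study; the three-read threshold and the 300x comparison depth are assumptions:

```python
from math import comb

def detection_probability(depth, vaf, min_alt_reads=3):
    """P(at least min_alt_reads variant-supporting reads) under a binomial model."""
    p_below = sum(
        comb(depth, k) * vaf**k * (1 - vaf) ** (depth - k)
        for k in range(min_alt_reads)
    )
    return 1 - p_below

# Chance of detecting a CHIP variant at low vs moderate VAF,
# at 30x (the WGS depth here) vs 300x (typical targeted deep sequencing).
for depth in (30, 300):
    for vaf in (0.02, 0.10):
        p = detection_probability(depth, vaf)
        print(f"depth {depth}x, VAF {vaf:.0%}: detection probability {p:.2f}")
```

Under this toy model, a 2% VAF clone is detected only a few percent of the time at 30x coverage, consistent with the editorialists’ point that shallow WGS undercounts CHIP.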
“This inconsistency is a common challenge in CH population studies due to the lack of standardized methodologies and the frequent reliance on preexisting data not originally intended for CH detection,” Dr. Takahashi and Ms. Shah said.
Even so, despite the “heavily context-dependent” nature of these reported risks, the body of evidence to date now offers a convincing biological rationale linking CH with cancer development and outcomes, they added.
How Do the CHIP- and mCA-associated Risks Differ Between Solid Tumors and Blood Cancers?
“[These solid tumor risks are] not causal in the way CHIP mutations are causal for blood cancers,” Dr. Desai said. “Here we are talking about solid tumor risk, and it’s kind of scattered. It’s not just breast cancer ... there’s also increased colon cancer mortality. So I feel these mutations are doing something different ... they are sort of an added factor.”
Specific mechanisms remain unclear, Dr. Desai said, although she speculated about possible impacts on the inflammatory state or alterations to the tumor microenvironment.
“These are blood cells, right?” Dr. Desai asked. “They’re everywhere, and they’re changing something inherently in these tumors.”
Future Research and Therapeutic Development
Siddhartha Jaiswal, MD, PhD, assistant professor in the Department of Pathology at Stanford University in California, whose lab focuses on clonal hematopoiesis, said the causality question is central to future research.
“The key question is, are these mutations acting because they alter the function of blood cells in some way to promote cancer risk, or is it reflective of some sort of shared etiology that’s not causal?” Dr. Jaiswal said in an interview.
Available data support both possibilities.
On one side, “reasonable evidence” supports the noncausal view, Dr. Jaiswal noted, because telomere length is one of the most common genetic risk factors for clonal hematopoiesis and also for solid tumors, suggesting a shared genetic factor. On the other hand, CHIP and mCA could be directly protumorigenic via conferred disturbances of immune cell function.
When asked if both causal and noncausal factors could be at play, Dr. Jaiswal said, “yeah, absolutely.”
The presence of a causal association could be promising from a therapeutic standpoint.
“If it turns out that this association is driven by a direct causal effect of the mutations, perhaps related to immune cell function or dysfunction, then targeting that dysfunction could be a therapeutic path to improve outcomes in people, and there’s a lot of interest in this,” Dr. Jaiswal said. He went on to explain how a trial exploring this approach via interleukin-8 inhibition in lung cancer fell short.
Yet earlier intervention may still hold promise, according to experts.
“[This study] provokes the hypothesis that CH‐targeted interventions could potentially reduce cancer risk in the future,” Dr. Takahashi and Ms. Shah said in their editorial.
The WHI program is funded by the National Heart, Lung, and Blood Institute; National Institutes of Health; and the Department of Health & Human Services. The investigators disclosed relationships with Eli Lilly, AbbVie, Celgene, and others. Dr. Jaiswal reported stock equity in a company that has an interest in clonal hematopoiesis.
A version of this article first appeared on Medscape.com.
Clonal hematopoiesis of indeterminate potential (CHIP) and mosaic chromosomal alterations (mCAs) are associated with an increased risk for breast cancer, and CHIP is associated with increased mortality in patients with colon cancer, according to the authors of new research.
These findings, drawn from almost 11,000 patients in the Women’s Health Initiative (WHI) study, add further evidence that CHIP and mCA drive solid tumor risk, alongside known associations with hematologic malignancies, reported lead author Pinkal Desai, MD, associate professor of medicine and clinical director of molecular aging at Englander Institute for Precision Medicine, Weill Cornell Medical College, New York City, and colleagues.
How This Study Differs From Others of Breast Cancer Risk Factors
“The independent effect of CHIP and mCA on risk and mortality from solid tumors has not been elucidated due to lack of detailed data on mortality outcomes and risk factors,” the investigators wrote in Cancer, although some previous studies have suggested a link.
In particular, the investigators highlighted a 2022 UK Biobank study, which reported an association between CHIP and lung cancer and a borderline association with breast cancer that did not quite reach statistical significance.
But the UK Biobank study was confined to a UK population, Dr. Desai noted in an interview, and the data were less detailed than those in the present investigation.
“In terms of risk, the part that was lacking in previous studies was a comprehensive assessment of risk factors that increase risk for all these cancers,” Dr. Desai said. “For example, for breast cancer, we had very detailed data on [participants’] Gail risk score, which is known to impact breast cancer risk. We also had mammogram data and colonoscopy data.”
In an accompanying editorial, Koichi Takahashi, MD, PhD , and Nehali Shah, BS, of The University of Texas MD Anderson Cancer Center, Houston, Texas, pointed out the same UK Biobank findings, then noted that CHIP has also been linked with worse overall survival in unselected cancer patients. Still, they wrote, “the impact of CH on cancer risk and mortality remains controversial due to conflicting data and context‐dependent effects,” necessitating studies like this one by Dr. Desai and colleagues.
How Was the Relationship Between CHIP, MCA, and Solid Tumor Risk Assessed?
To explore possible associations between CHIP, mCA, and solid tumors, the investigators analyzed whole genome sequencing data from 10,866 women in the WHI, a multi-study program that began in 1992 and involved 161,808 women in both observational and clinical trial cohorts.
In 2002, the first big data release from the WHI suggested that hormone replacement therapy (HRT) increased breast cancer risk, leading to widespread reduction in HRT use.
More recent reports continue to shape our understanding of these risks, suggesting differences across cancer types. For breast cancer, the WHI data suggested that HRT-associated risk was largely driven by formulations involving progesterone and estrogen, whereas estrogen-only formulations, now more common, are generally considered to present an acceptable risk profile for suitable patients.
The new study accounted for this potential HRT-associated risk, including by adjusting for patients who received HRT, type of HRT received, and duration of HRT received. According to Desai, this approach is commonly used when analyzing data from the WHI, nullifying concerns about the potentially deleterious effects of the hormones used in the study.
“Our question was not ‘does HRT cause cancer?’ ” Dr. Desai said in an interview. “But HRT can be linked to breast cancer risk and has a potential to be a confounder, and hence the above methodology.
“So I can say that the confounding/effect modification that HRT would have contributed to in the relationship between exposure (CH and mCA) and outcome (cancer) is well adjusted for as described above. This is standard in WHI analyses,” she continued.
“Every Women’s Health Initiative analysis that comes out — not just for our study — uses a standard method ... where you account for hormonal therapy,” Dr. Desai added, again noting that many other potential risk factors were considered, enabling a “detailed, robust” analysis.
Dr. Takahashi and Ms. Shah agreed. “A notable strength of this study is its adjustment for many confounding factors,” they wrote. “The cohort’s well‐annotated data on other known cancer risk factors allowed for a robust assessment of CH’s independent risk.”
How Do Findings Compare With Those of the UK Biobank Study?
CHIP was associated with a 30% increased risk for breast cancer (hazard ratio [HR], 1.30; 95% CI, 1.03-1.64; P = .02), strengthening the borderline association reported by the UK Biobank study.
In contrast with the UK Biobank study, CHIP was not associated with lung cancer risk, although this may have been caused by fewer cases of lung cancer and a lack of male patients, Dr. Desai suggested.
“The discrepancy between the studies lies in the risk of lung cancer, although the point estimate in the current study suggested a positive association,” wrote Dr. Takahashi and Ms. Shah.
As in the UK Biobank study, CHIP was not associated with increased risk of developing colorectal cancer.
Mortality analysis, however, which was not conducted in the UK Biobank study, offered a new insight: Patients with existing colorectal cancer and CHIP had a significantly higher mortality risk than those without CHIP. Before stage adjustment, risk for mortality among those with colorectal cancer and CHIP was fourfold higher than those without CHIP (HR, 3.99; 95% CI, 2.41-6.62; P < .001). After stage adjustment, CHIP was still associated with a twofold higher mortality risk (HR, 2.50; 95% CI, 1.32-4.72; P = .004).
The investigators’ first mCA analyses, which employed a cell fraction cutoff greater than 3%, were unfruitful. But raising the cell fraction threshold to 5% in an exploratory analysis showed that autosomal mCA was associated with a 39% increased risk for breast cancer (HR, 1.39; 95% CI, 1.06-1.83; P = .01). No such associations were found between mCA and colorectal or lung cancer, regardless of cell fraction threshold.
The original 3% cell fraction threshold was selected on the basis of previous studies reporting a link between mCA and hematologic malignancies at this cutoff, Dr. Desai said.
She and her colleagues said a higher 5% cutoff might be needed, as they suspected that the link between mCA and solid tumors may not be causal, requiring a higher mutation rate.
Why Do Results Differ Between These Types of Studies?
Dr. Takahashi and Ms. Shah suggested that one possible limitation of the new study, and an obstacle to comparing results with the UK Biobank study and others like it, goes beyond population heterogeneity; incongruent findings could also be explained by differences in whole genome sequencing (WGS) technique.
“Although WGS allows sensitive detection of mCA through broad genomic coverage, it is less effective at detecting CHIP with low variant allele frequency (VAF) due to its relatively shallow depth (30x),” they wrote. “Consequently, the prevalence of mCA (18.8%) was much higher than that of CHIP (8.3%) in this cohort, contrasting with other studies using deeper sequencing.” As a result, the present study may have underestimated CHIP prevalence because of shallow sequencing depth.
“This inconsistency is a common challenge in CH population studies due to the lack of standardized methodologies and the frequent reliance on preexisting data not originally intended for CH detection,” Dr. Takahashi and Ms. Shah said.
Even so, despite the “heavily context-dependent” nature of these reported risks, the body of evidence to date now offers a convincing biological rationale linking CH with cancer development and outcomes, they added.
How Do the CHIP- and mCA-associated Risks Differ Between Solid Tumors and Blood Cancers?
“[These solid tumor risks are] not causal in the way CHIP mutations are causal for blood cancers,” Dr. Desai said. “Here we are talking about solid tumor risk, and it’s kind of scattered. It’s not just breast cancer ... there’s also increased colon cancer mortality. So I feel these mutations are doing something different ... they are sort of an added factor.”
Specific mechanisms remain unclear, Dr. Desai said, although she speculated about possible impacts on the inflammatory state or alterations to the tumor microenvironment.
“These are blood cells, right?” Dr. Desai asked. “They’re everywhere, and they’re changing something inherently in these tumors.”
Future research and therapeutic development
Siddhartha Jaiswal, MD, PhD, assistant professor in the Department of Pathology at Stanford University in California, whose lab focuses on clonal hematopoiesis, said the causality question is central to future research.
“The key question is, are these mutations acting because they alter the function of blood cells in some way to promote cancer risk, or is it reflective of some sort of shared etiology that’s not causal?” Dr. Jaiswal said in an interview.
Available data support both possibilities.
On one side, “reasonable evidence” supports the noncausal view, Dr. Jaiswal noted, because telomere length is one of the most common genetic risk factors for clonal hematopoiesis and also for solid tumors, suggesting a shared genetic factor. On the other hand, CHIP and mCA could be directly protumorigenic via conferred disturbances of immune cell function.
When asked if both causal and noncausal factors could be at play, Dr. Jaiswal said, “yeah, absolutely.”
The presence of a causal association could be promising from a therapeutic standpoint.
“If it turns out that this association is driven by a direct causal effect of the mutations, perhaps related to immune cell function or dysfunction, then targeting that dysfunction could be a therapeutic path to improve outcomes in people, and there’s a lot of interest in this,” Dr. Jaiswal said. He went on to explain how a trial exploring this approach via interleukin-8 inhibition in lung cancer fell short.
Yet earlier intervention may still hold promise, according to experts.
“[This study] provokes the hypothesis that CH‐targeted interventions could potentially reduce cancer risk in the future,” Dr. Takahashi and Ms. Shah said in their editorial.
The WHI program is funded by the National Heart, Lung, and Blood Institute; National Institutes of Health; and the Department of Health & Human Services. The investigators disclosed relationships with Eli Lilly, AbbVie, Celgene, and others. Dr. Jaiswal reported stock equity in a company that has an interest in clonal hematopoiesis.
A version of this article first appeared on Medscape.com.
Clonal hematopoiesis of indeterminate potential (CHIP) and mosaic chromosomal alterations (mCAs) are associated with an increased risk for breast cancer, and CHIP is associated with increased mortality in patients with colon cancer, according to the authors of new research.
These findings, drawn from almost 11,000 patients in the Women’s Health Initiative (WHI) study, add further evidence that CHIP and mCA drive solid tumor risk, alongside known associations with hematologic malignancies, reported lead author Pinkal Desai, MD, associate professor of medicine and clinical director of molecular aging at Englander Institute for Precision Medicine, Weill Cornell Medical College, New York City, and colleagues.
How This Study Differs From Others of Breast Cancer Risk Factors
“The independent effect of CHIP and mCA on risk and mortality from solid tumors has not been elucidated due to lack of detailed data on mortality outcomes and risk factors,” the investigators wrote in Cancer, although some previous studies have suggested a link.
In particular, the investigators highlighted a 2022 UK Biobank study, which reported an association between CHIP and lung cancer and a borderline association with breast cancer that did not quite reach statistical significance.
But the UK Biobank study was confined to a UK population, Dr. Desai noted in an interview, and the data were less detailed than those in the present investigation.
“In terms of risk, the part that was lacking in previous studies was a comprehensive assessment of risk factors that increase risk for all these cancers,” Dr. Desai said. “For example, for breast cancer, we had very detailed data on [participants’] Gail risk score, which is known to impact breast cancer risk. We also had mammogram data and colonoscopy data.”
In an accompanying editorial, Koichi Takahashi, MD, PhD , and Nehali Shah, BS, of The University of Texas MD Anderson Cancer Center, Houston, Texas, pointed out the same UK Biobank findings, then noted that CHIP has also been linked with worse overall survival in unselected cancer patients. Still, they wrote, “the impact of CH on cancer risk and mortality remains controversial due to conflicting data and context‐dependent effects,” necessitating studies like this one by Dr. Desai and colleagues.
How Was the Relationship Between CHIP, MCA, and Solid Tumor Risk Assessed?
To explore possible associations between CHIP, mCA, and solid tumors, the investigators analyzed whole genome sequencing data from 10,866 women in the WHI, a multi-study program that began in 1992 and involved 161,808 women in both observational and clinical trial cohorts.
In 2002, the first big data release from the WHI suggested that hormone replacement therapy (HRT) increased breast cancer risk, leading to widespread reduction in HRT use.
More recent reports continue to shape our understanding of these risks, suggesting differences across cancer types. For breast cancer, the WHI data suggested that HRT-associated risk was largely driven by formulations involving progesterone and estrogen, whereas estrogen-only formulations, now more common, are generally considered to present an acceptable risk profile for suitable patients.
The new study accounted for this potential HRT-associated risk by adjusting for whether patients received HRT, the type of HRT received, and the duration of HRT use. According to Dr. Desai, this approach is commonly used when analyzing data from the WHI and addresses concerns about the potentially deleterious effects of the hormones used in the study.
“Our question was not ‘does HRT cause cancer?’ ” Dr. Desai said in an interview. “But HRT can be linked to breast cancer risk and has a potential to be a confounder, and hence the above methodology.
“So I can say that the confounding/effect modification that HRT would have contributed to in the relationship between exposure (CH and mCA) and outcome (cancer) is well adjusted for as described above. This is standard in WHI analyses,” she continued.
“Every Women’s Health Initiative analysis that comes out — not just for our study — uses a standard method ... where you account for hormonal therapy,” Dr. Desai added, again noting that many other potential risk factors were considered, enabling a “detailed, robust” analysis.
Dr. Takahashi and Ms. Shah agreed. “A notable strength of this study is its adjustment for many confounding factors,” they wrote. “The cohort’s well‐annotated data on other known cancer risk factors allowed for a robust assessment of CH’s independent risk.”
How Do Findings Compare With Those of the UK Biobank Study?
CHIP was associated with a 30% increased risk for breast cancer (hazard ratio [HR], 1.30; 95% CI, 1.03-1.64; P = .02), strengthening the borderline association reported by the UK Biobank study.
In contrast with the UK Biobank study, CHIP was not associated with lung cancer risk, although this may have been due to the smaller number of lung cancer cases and the absence of male patients in the WHI cohort, Dr. Desai suggested.
“The discrepancy between the studies lies in the risk of lung cancer, although the point estimate in the current study suggested a positive association,” wrote Dr. Takahashi and Ms. Shah.
As in the UK Biobank study, CHIP was not associated with increased risk of developing colorectal cancer.
Mortality analysis, however, which was not conducted in the UK Biobank study, offered a new insight: Patients with existing colorectal cancer and CHIP had a significantly higher mortality risk than those without CHIP. Before stage adjustment, risk for mortality among those with colorectal cancer and CHIP was fourfold higher than those without CHIP (HR, 3.99; 95% CI, 2.41-6.62; P < .001). After stage adjustment, CHIP was still associated with a twofold higher mortality risk (HR, 2.50; 95% CI, 1.32-4.72; P = .004).
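For readers who want to see what “before and after stage adjustment” means in practice, the sketch below fits an unadjusted and a stage-adjusted Cox proportional hazards model using the open-source lifelines package. The data are entirely synthetic and the setup (a chip indicator, a stage covariate) is an illustrative assumption; this is not the investigators’ code or the WHI dataset.

```python
# Minimal sketch of unadjusted vs stage-adjusted Cox regression on synthetic
# data. Illustrates how a crude hazard ratio for CHIP can attenuate after
# adjustment; it does not reproduce the WHI analysis or its data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(42)
n = 2000
chip = rng.binomial(1, 0.08, n)                        # ~8% CHIP carriers (assumption)
# Toy assumption: CHIP carriers skew toward higher stage at diagnosis.
stage = np.clip(rng.integers(1, 5, n) + rng.binomial(1, 0.6, n) * chip, 1, 4)

# Survival times depend on both CHIP and stage, so the crude CHIP effect
# partly reflects stage.
hazard = np.exp(0.5 * chip + 0.6 * (stage - 1))
time = rng.exponential(12, n) / hazard
death = (time < 10).astype(int)                        # administrative censoring at 10 years
time = np.minimum(time, 10)

df = pd.DataFrame({"time": time, "death": death, "chip": chip, "stage": stage})

crude = CoxPHFitter().fit(df[["time", "death", "chip"]],
                          duration_col="time", event_col="death")
adjusted = CoxPHFitter().fit(df, duration_col="time", event_col="death")

print("Unadjusted HR for CHIP:    ", round(crude.hazard_ratios_["chip"], 2))
print("Stage-adjusted HR for CHIP:", round(adjusted.hazard_ratios_["chip"], 2))
```

In a setup like this, the CHIP hazard ratio shrinks once stage enters the model whenever CHIP carriers tend to present at higher stage, which is the general pattern a stage-adjusted analysis is designed to separate out.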
The investigators’ initial mCA analyses, which used a cell fraction cutoff greater than 3%, yielded no significant associations. But raising the cell fraction threshold to 5% in an exploratory analysis showed that autosomal mCA was associated with a 39% increased risk for breast cancer (HR, 1.39; 95% CI, 1.06-1.83; P = .01). No such associations were found between mCA and colorectal or lung cancer, regardless of cell fraction threshold.
The original 3% cell fraction threshold was selected on the basis of previous studies reporting a link between mCA and hematologic malignancies at this cutoff, Dr. Desai said.
She and her colleagues said a higher 5% cutoff might be needed, as they suspected that the link between mCA and solid tumors may not be causal, requiring a higher mutation rate.
Why Do Results Differ Between These Types of Studies?
Dr. Takahashi and Ms. Shah suggested that one possible limitation of the new study, and an obstacle to comparing results with the UK Biobank study and others like it, goes beyond population heterogeneity; incongruent findings could also be explained by differences in whole genome sequencing (WGS) technique.
“Although WGS allows sensitive detection of mCA through broad genomic coverage, it is less effective at detecting CHIP with low variant allele frequency (VAF) due to its relatively shallow depth (30x),” they wrote. “Consequently, the prevalence of mCA (18.8%) was much higher than that of CHIP (8.3%) in this cohort, contrasting with other studies using deeper sequencing.” As a result, the present study may have underestimated CHIP prevalence because of shallow sequencing depth.
“This inconsistency is a common challenge in CH population studies due to the lack of standardized methodologies and the frequent reliance on preexisting data not originally intended for CH detection,” Dr. Takahashi and Ms. Shah said.
Even so, despite the “heavily context-dependent” nature of these reported risks, the body of evidence to date now offers a convincing biological rationale linking CH with cancer development and outcomes, they added.
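To put the sequencing-depth point in concrete terms, the toy calculation below uses a simple binomial model of read sampling to estimate how often a clone at a given variant allele fraction would be supported by at least 3 reads at roughly 30x coverage. The binomial model and the 3-read threshold are illustrative assumptions, not the study’s actual variant-calling pipeline.

```python
# Toy model: at a locus sequenced to `depth`x, the number of reads carrying a
# variant present at allele fraction `vaf` is roughly Binomial(depth, vaf).
# The >=3 supporting-read threshold is an illustrative assumption.
from scipy.stats import binom

def detection_probability(vaf: float, depth: int = 30, min_alt_reads: int = 3) -> float:
    """Probability of observing at least `min_alt_reads` variant-supporting reads."""
    return 1.0 - binom.cdf(min_alt_reads - 1, depth, vaf)

for vaf in (0.02, 0.05, 0.10, 0.20):
    p = detection_probability(vaf)
    print(f"VAF {vaf:.0%}: ~{p:.0%} chance of >=3 supporting reads at 30x")
```

Under this simplified model, clones at 2%-5% VAF are usually missed at 30x coverage while clones at 20% are almost always detectable, which is in line with the editorialists’ point that relatively shallow whole genome sequencing undercounts low-VAF CHIP.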
How Do the CHIP- and mCA-associated Risks Differ Between Solid Tumors and Blood Cancers?
“[These solid tumor risks are] not causal in the way CHIP mutations are causal for blood cancers,” Dr. Desai said. “Here we are talking about solid tumor risk, and it’s kind of scattered. It’s not just breast cancer ... there’s also increased colon cancer mortality. So I feel these mutations are doing something different ... they are sort of an added factor.”
Specific mechanisms remain unclear, Dr. Desai said, although she speculated about possible impacts on the inflammatory state or alterations to the tumor microenvironment.
“These are blood cells, right?” Dr. Desai asked. “They’re everywhere, and they’re changing something inherently in these tumors.”
Future Research and Therapeutic Development
Siddhartha Jaiswal, MD, PhD, assistant professor in the Department of Pathology at Stanford University in California, whose lab focuses on clonal hematopoiesis, said the causality question is central to future research.
“The key question is, are these mutations acting because they alter the function of blood cells in some way to promote cancer risk, or is it reflective of some sort of shared etiology that’s not causal?” Dr. Jaiswal said in an interview.
Available data support both possibilities.
On one side, “reasonable evidence” supports the noncausal view, Dr. Jaiswal noted, because telomere length is one of the most common genetic risk factors for clonal hematopoiesis and also for solid tumors, suggesting a shared genetic factor. On the other hand, CHIP and mCA could be directly protumorigenic via conferred disturbances of immune cell function.
When asked if both causal and noncausal factors could be at play, Dr. Jaiswal said, “yeah, absolutely.”
The presence of a causal association could be promising from a therapeutic standpoint.
“If it turns out that this association is driven by a direct causal effect of the mutations, perhaps related to immune cell function or dysfunction, then targeting that dysfunction could be a therapeutic path to improve outcomes in people, and there’s a lot of interest in this,” Dr. Jaiswal said. He went on to explain how a trial exploring this approach via interleukin-8 inhibition in lung cancer fell short.
Yet earlier intervention may still hold promise, according to experts.
“[This study] provokes the hypothesis that CH‐targeted interventions could potentially reduce cancer risk in the future,” Dr. Takahashi and Ms. Shah said in their editorial.
The WHI program is funded by the National Heart, Lung, and Blood Institute; National Institutes of Health; and the Department of Health & Human Services. The investigators disclosed relationships with Eli Lilly, AbbVie, Celgene, and others. Dr. Jaiswal reported stock equity in a company that has an interest in clonal hematopoiesis.
A version of this article first appeared on Medscape.com.
FROM CANCER
Depiction of Cancer in Movies: Not an Accurate Portrayal
This transcript has been edited for clarity.
I’d like to talk about a very different topic from what I normally discuss, which is probably relatively rarely addressed in clinical conversations among clinicians. There was a very provocative commentary that appeared in JCO Oncology Practice, titled “Hollywood’s Take on Oncology: Portrayal of Cancer in Movies, 2010-2020.”
All of us, as we grow up — as kids, adolescents, young adults, adults, and older individuals — watch television and movies. The older among us know that the doctor we all wanted in everybody’s home was Marcus Welby. Of course, there was Dr. Kildare, ER, Grey’s Anatomy, and St. Elsewhere. There was Love Story and Brian’s Song. We all know about these.
This particular review was fascinating. The authors looked at 100 English-language movies that had cancer included in the storyline over the past decade. They asked some relatively simple questions: How did they discuss it? What were the tumor types they discussed? What were the outcomes?
The question is, what is the public seeing? If you watch these movies and you don’t have family experience or personal experience with cancer, what do you think about cancer? Maybe this is what you know about it. Despite what the National Cancer Institute or the American Society of Clinical Oncology tells you, this may be what you know.
What they showed was really quite interesting.
There is another very interesting phenomenon. What do you think was the most common cancer type when they did define the cancer? It was brain tumors, even though we know that brain tumors are certainly not even within the top 10. They’re obviously very serious cancers, but if you’re talking about common cancers, brain cancer doesn’t rank in the top 10, and it was the most common cancer on these shows.
The authors of this paper raised the question of whether this might be an opportunity for filmmakers. Granted, with the storyline, they’re trying to sell a product, but wouldn’t this be an opportunity to provide some information about the reality of cancer? They could emphasize the fact that smokers get lung cancer. In my opinion, they could discuss cervical cancer and comment that if HPV vaccination had been done, maybe this would not have happened.
They noted that the majority of cancers in these movies were incurable, and they commented that that’s not the reality today. Today, obviously, many of our cancers that weren’t curable have become quite curable for a percentage of patients, in addition to which, obviously, with early detection, we have a very high cure rate. How about trying to get that message out, too, that we’ve actually had increasing success?
They commented that there was very rarely, if ever, a conversation about multidisciplinary care, that somehow there are multiple doctors with multiple specialties involved. They noted that this is potentially a very important message to give out. They commented that in 12 of these movies, the patient refused cancer care. Again, that happens, but it’s clearly a rare event today. Maybe this is not really a very accurate depiction of what’s going on.
They commented on the fact that, obviously, we’re going back through the past 10 years, so there were no patients who received immunotherapy or targeted therapy. Again, the goal here is not to sell oncology care but to be accurate, or more accurate, about the state of treatment to the extent you can.
They noted that, in fact, there was essentially very little, if any, comment on palliative care or hospice care. The final point they made is that there was very little conversation in these movies about what we now recognize as financial distress in many of our patients. That’s an unfortunate reality and perhaps that might come in the future.
Again, the point of this was not to tell Hollywood how to make their movies but to have the oncology community recognize that if their patients or the families of their patients are seeing these movies, they are not getting a very accurate picture of what is happening in the oncology world today and that some education may very well be required.
Maurie Markman is Professor, Department of Medical Oncology and Therapeutics Research, City of Hope, Duarte, California, and President of Medicine & Science, City of Hope Atlanta, Chicago, and Phoenix. He disclosed the following relevant financial relationships: income in an amount equal to or greater than $250 from: GlaxoSmithKline; AstraZeneca.
A version of this article first appeared on Medscape.com.
Timing of Blood Pressure Dosing Doesn’t Matter (Again): BedMed and BedMed-Frail
This transcript has been edited for clarity.
Tricia Ward: I’m joined today by Dr. Scott R. Garrison, MD, PhD. He is a professor in the Department of Family Medicine at the University of Alberta in Edmonton, Alberta, Canada, and director of the Pragmatic Trials Collaborative.
You presented two studies at ESC. One is the BedMed study, comparing day vs nighttime dosing of blood pressure therapy. Can you tell us the top-line findings?
BedMed and BedMed-Frail
Dr. Garrison: We were looking to validate an earlier study that suggested a large benefit of taking blood pressure medication at bedtime, as far as reducing major adverse cardiovascular events (MACEs). That was the MAPEC study. They suggested a 60% reduction. The BedMed trial was in hypertensive primary care patients in five Canadian provinces. We randomized well over 3000 patients to bedtime or morning medications. We looked at MACEs — so all-cause death or hospitalizations for acute coronary syndrome, stroke, or heart failure, and a bunch of safety outcomes.
Essentially, we found no difference between bedtime and morning dosing, either in MACEs or in any of the safety outcomes.
Ms. Ward: And then you did a second study, called BedMed-Frail. Do you want to tell us the reason you did that?
Dr. Garrison: BedMed-Frail took place in a nursing home population. We believed that frail, older adults might have very different risks and benefits, and that they would probably be underrepresented in the main trial, as they usually are in trials.
We thought that because bedtime blood pressure medications theoretically lower nighttime pressure preferentially, and nighttime pressure is already the lowest of the day, bedtime dosing might make things worse for anyone at risk for hypotensive or ischemic adverse events. We looked at falls and fractures; worsening cognition in case they had vascular dementia; and whether they developed decubitus ulcers (pressure sores), because you need a certain amount of pressure to get past any obstruction — in this case, the weight of your body if you’re lying in bed all the time.
We also looked at problem behaviors. People who have dementia have what’s called “sundowning,” where agitation and confusion are worse as the evening is going on. We looked at that on the off chance that it had anything to do with blood pressures being lower. And the BedMed-Frail results mirror those of BedMed exactly. So there was no cardiovascular benefit, and in this population, that was largely driven by mortality; one third of these people died every year.
Ms. Ward: The median age was about 88?
Dr. Garrison: Yes, the median age was 88. There was no cardiovascular mortality advantage to bedtime dosing, but neither was there any signal of safety concerns.
Other Complementary and Conflicting Studies
Ms. Ward: These two studies mirror the TIME study from the United Kingdom.
Dr. Garrison: Yes. We found exactly what TIME found. Our point estimate was pretty much the same. The hazard ratio in the main trial was 0.96. Theirs, I believe, was 0.95. Our findings agree completely with those of TIME and differ substantially from the previous trials that suggested a large benefit.
Ms. Ward: Those previous trials were MAPEC and the Hygia Chronotherapy Trial.
Dr. Garrison: MAPEC was the first one. While we were doing our trial, and while the TIME investigators were doing their trial, both of us trying to validate MAPEC, the same group published another study called Hygia, which also reported a large reduction: a 45% reduction in MACE with bedtime dosing.
Ms. Ward: You didn’t present it, but there was also a meta-analysis presented here by somebody independent.
Dr. Garrison: Yes, Ricky Turgeon. I know Ricky. We gave him patient-level data for his meta-analysis, but I was not otherwise involved.
Ms. Ward: And the conclusion is the same.
Dr. Garrison: It’s the same. He only found the same five trials: MAPEC, Hygia, TIME, BedMed, and BedMed-Frail. Combining them all together, the CIs still span 1.0, so it didn’t end up being significant. But he also analyzed TIME and the BedMed trials separately — again suggesting that those trials showed no benefit.
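For readers curious how five trials are “combined together” into one estimate, the sketch below shows a generic inverse-variance, random-effects pooling of log hazard ratios. It is not Dr. Turgeon’s meta-analysis: the standard errors are placeholders invented only so the example runs, and BedMed-Frail is omitted because its point estimate is not reported in this interview.

```python
# Generic DerSimonian-Laird random-effects pooling of log hazard ratios.
# Point estimates echo figures mentioned in this interview (MAPEC ~60%
# reduction, Hygia ~45% reduction, TIME 0.95, BedMed 0.96); the standard
# errors are placeholders, so the output is illustrative only.
import numpy as np

hr = np.array([0.40, 0.55, 0.95, 0.96])   # MAPEC, Hygia, TIME, BedMed (approximate)
se = np.array([0.15, 0.10, 0.05, 0.08])   # placeholder standard errors of log(HR)

y, w = np.log(hr), 1.0 / se**2            # log hazard ratios and fixed-effect weights

# Between-trial variance (tau^2) from Cochran's Q (DerSimonian-Laird)
mu_fixed = np.sum(w * y) / np.sum(w)
q = np.sum(w * (y - mu_fixed) ** 2)
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - (len(y) - 1)) / c)

w_re = 1.0 / (se**2 + tau2)               # random-effects weights
mu = np.sum(w_re * y) / np.sum(w_re)
se_mu = np.sqrt(1.0 / np.sum(w_re))
low, high = np.exp(mu - 1.96 * se_mu), np.exp(mu + 1.96 * se_mu)
print(f"Pooled HR {np.exp(mu):.2f} (95% CI {low:.2f}-{high:.2f})")
```

With trials this heterogeneous, the between-trial variance inflates the pooled confidence interval, which is one way a combined estimate can still cross 1.0 even when individual trials look precise.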
Ms. Ward: There was a TIME substudy of night owls vs early risers or morning people, and there was a hint (or whatever you should say for a subanalysis of a neutral trial) that timing might make a difference there.
Dr. Garrison: They recently published, I guess it is a substudy, where they looked at people’s chronotype according to whether they considered themselves an early bird or a night owl. Their assessment was more detailed. They reported that people who tended toward being early birds and took their blood pressure medicine in the morning, or who were night owls and took it in the evening, had statistically significantly better outcomes than those with the opposite timing. In that analysis, they were only looking at nonfatal myocardial infarction and nonfatal stroke.
We did ask something that was related. We asked people: “Do you consider yourself more of an early bird or a night owl?” So we do have those data. For what I presented at ESC, we just looked at the primary outcome; we did subgroups according to early bird, night owl, and neither, and that was not statistically significant. It didn’t rule it out. There were some trends in the direction that the TIME group were suggesting. We do intend to take a closer look at that.
But, you know, they call these “late-breaking trials,” and it really was in our case. We didn’t get the last of our data from the last province until the end of June, so we still are finishing up the analysis of the chronotype portion — so more to come in another month or so.
Do What You Like, or Stick to Morning Dosing?
Ms. Ward: For the purposes of people’s take-home message, does this mostly apply to once-daily–dosed antihypertensives?
Dr. Garrison: It was essentially once-daily medicines that were changed. The docs did have the opportunity to consolidate twice-daily meds into once-daily or switch to a different medication. That’s probably the area where adherence was the biggest issue, because it’s largely beta-blockers that were given twice daily at baseline, and they were less likely to want to change.
At 6 months, 83% of once-daily medications were taken per allocation in the bedtime group and 95% per allocation in the morning group, which was actually pretty good. For angiotensin-converting enzyme inhibitors, angiotensin receptor blockers, and calcium-channel blockers, the adherence was excellent. Again, it was beta-blockers taken twice a day where it fell down, and then also diuretics. But if you combine all diuretic medications (ie, pure diuretics and combo agents), still, 75% of them were successful at taking them at bedtime. Only 15% of people switching a diuretic to bedtime dosing actually had problems with nocturia. Most physicians think that they can’t get their patients to take those meds at bedtime, but you can. There’s probably no reason to take it at bedtime, but most people do tolerate it.
Ms. Ward: Is your advice to take it whenever you feel like? I know when TIME came out, Professor George Stergiou, who’s the incoming president of the International Society of Hypertension, said, well, maybe we should stick with the morning, because that’s what most of the trials did.
Dr. Garrison: I think that’s a perfectly valid point of view, and maybe for a lot of people, that could be the default. There are some people, though, who will have a particular reason why one time is better. For instance, most people have no problems with calcium-channel blockers, but some get ankle swelling and you’re more likely to have that happen if you take them in the morning. Or lots of people want to take all their pills at the same time; blood pressure pills are easy ones to switch the timing of if you’re trying to accomplish that, and if that will help adherence. Basically, whatever time of day you can remember to take it the best is probably the right time.
Ms. Ward: Given where we are today, with your trials and TIME, do you think this is now settled science that it doesn’t make a difference?
Dr. Garrison: I’m probably the wrong person to ask, because I clearly have a bias. I think the methods in the TIME trial are really transparent and solid. I hope that when our papers come out, people will feel the same. You just have to look at the different trials. You need people like Dr. Stergiou to wade through the trials to help you with that.
Ms. Ward: Thank you very much for joining me today and discussing this trial.
Scott R. Garrison, MD, PhD, is Professor, Department of Family Medicine, University of Alberta in Edmonton, Alberta, Canada, and Staff Physician, Department of Family Medicine, Kaye Edmonton Clinic, and he has disclosed receiving research grants from Alberta Innovates (the Alberta Provincial Government) and the Canadian Institutes of Health Research (the Canadian Federal Government).
A version of this article first appeared on Medscape.com.
FROM ESC 2024