High-dose vitamin D for bone health may do more harm than good
In fact, rather than a hypothesized increase in volumetric bone mineral density (BMD) with doses well above the recommended dietary allowance, a negative dose-response relationship was observed, Lauren A. Burt, PhD, of the McCaig Institute for Bone and Joint Health at the University of Calgary (Alta.), and colleagues found.
The total volumetric radial BMD was significantly lower in 101 and 97 study participants randomized to receive daily vitamin D3 doses of 10,000 IU or 4,000 IU for 3 years, respectively (–7.5 and –3.9 mg of calcium hydroxyapatite [HA] per cm3), compared with 105 participants randomized to a reference group that received 400 IU (mean percent changes, –3.5%, –2.4%, and –1.2%, respectively). Total volumetric tibial BMD was also significantly lower in the 10,000 IU arm, compared with the reference arm (–4.1 mg HA per cm3; mean percent change –1.7% vs. –0.4%), the investigators reported Aug. 27 in JAMA.
There also were no significant differences seen between the three groups for the coprimary endpoint of bone strength at either the radius or tibia.
Participants in the double-blind trial were community-dwelling healthy men and women aged 55-70 years (mean age, 62.2 years) without osteoporosis and with baseline levels of 25-hydroxyvitamin D (25[OH]D) of 30-125 nmol/L. They were enrolled from a single center between August 2013 and December 2017 and treated with daily oral vitamin D3 drops at the assigned dosage for 3 years and with calcium supplementation if dietary calcium intake was less than 1,200 mg daily.
Mean supplementation adherence was 99% among the 303 participants who completed the trial (out of 311 enrolled), and adherence was similar across the groups.
25(OH)D levels in the 400 IU group were 76.3 nmol/L at baseline, 76.7 nmol/L at 3 months, and 77.4 nmol/L at 3 years. The corresponding values for the 4,000 IU group were 81.3, 115.3, and 132.2 nmol/L, and for the 10,000 IU group they were 78.4, 188.0, and 144.4 nmol/L, the investigators said, adding that significant group-by-time interactions were observed for volumetric BMD.
Bone strength decreased over time, but group-by-time interactions for that measure were not statistically significant, they said.
A total of 44 serious adverse events occurred in 38 participants (12.2%), and one death from presumed myocardial infarction occurred in the 400 IU group. Of eight prespecified adverse events, only hypercalcemia and hypercalciuria had significant dose-response effects; all episodes of hypercalciuria were mild and had resolved at follow-up, and the two hypercalcemia events, which occurred in one participant in the 10,000 IU group, were also transient. No significant difference in fall rates was seen among the three groups, they noted.
Vitamin D is considered beneficial for preventing and treating osteoporosis, and data support supplementation in individuals with 25(OH)D levels less than 30 nmol/L, but recent meta-analyses did not find a major treatment benefit for osteoporosis or for preventing falls and fractures, the investigators said.
Further, most supplementation recommendations call for 400-2,000 IU daily, with a tolerable upper intake level of 4,000-10,000 IU, yet 3% of U.S. adults reported taking at least 4,000 IU per day in 2013-2014, and few studies have assessed the effects of doses at or above the upper intake level for 12 months or longer, they noted. The study, they added, was “motivated by the prevalence of high-dose vitamin D supplementation among healthy adults.”
“It was hypothesized that a higher dose of vitamin D has a positive effect on high-resolution peripheral quantitative CT measures of volumetric density and strength, perhaps via suppression of parathyroid hormone (PTH)–mediated bone turnover,” they wrote.
However, based on the significantly lower radial BMD seen with both 4,000 and 10,000 IU, compared with 400 IU; the lower tibial BMD with 10,000 IU, compared with 400 IU; and the lack of a difference in bone strength at the radius and tibia, the findings do not support a benefit of high-dose vitamin D supplementation for bone health, they said, noting that additional study is needed to determine whether such doses are harmful.
“Because these results are in the opposite direction of the research hypothesis, this evidence of high-dose vitamin D having a negative effect on bone should be regarded as hypothesis generating, requiring confirmation with further research,” they concluded.
SOURCE: Burt L et al. JAMA. 2019 Aug 27;322(8):736-45.
FROM JAMA
Predictive model estimates likelihood of failing induction of labor in obese patients
A predictive model based on commonly available prenatal factors can estimate an obese woman's likelihood of failing induction of labor, reported researchers from the University of Cincinnati and Cincinnati Children’s Hospital Medical Center.
The 10 variables included in the model were prior vaginal delivery; prior cesarean delivery; maternal height, age, and weight at delivery; parity; gestational weight gain; Medicaid insurance; pregestational diabetes; and chronic hypertension, said Robert M. Rossi, MD, of the university, and associates, who developed the model.
“Our hope is that this model may be useful as a tool to estimate an individualized risk based on commonly available prenatal factors that may assist in delivery planning and allocation of appropriate resources,” the investigators said in a study summarizing their findings, published in Obstetrics & Gynecology.
The researchers conducted a population-based, retrospective cohort study of delivery records from 1,098,981 obese women in a National Center for Health Statistics birth-death cohort database who underwent induction of labor between 2012 and 2016. Of these women, 825,797 (75%) delivered successfully after induction, while 273,184 (25%) failed to deliver after induction of labor and instead underwent cesarean section. The women included in the study had a body mass index of 30 or higher and underwent induction between 37 weeks and 44 weeks of gestation.
The class of obesity prior to pregnancy affected the rate of induction failure: women with class I obesity had a cesarean section rate of 21.6% (95% confidence interval, 21.4%-21.7%), women with class II obesity had a rate of 25% (95% CI, 24.8%-25.2%), and women with class III obesity had a rate of 31% (95% CI, 30.8%-31.3%). Women also were more likely to fail induction if they had received fertility treatment, were older than 35 years, were of non-Hispanic black race, had greater gestational weight gain or maternal weight, had pregestational diabetes or gestational diabetes, or had gestational hypertension or preeclampsia (all P less than .001). Factors associated with a lower likelihood of cesarean delivery were Medicaid insurance status and receipt of Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) support.
For the predictive model, the receiver operating characteristic (ROC) curve had an area under the curve (AUC) of 0.79 (95% CI, 0.78-0.79), and subsequent validation of the model in a separate external U.S. birth cohort dataset showed an AUC of 0.77 (95% CI, 0.76-0.77). In both datasets, the model was well calibrated for predicted probabilities of failed induction up to 75%, beyond which it overestimated the risk, Dr. Rossi and associates said.
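For readers unfamiliar with how such a risk calculator works, the sketch below shows a generic logistic-regression model of the same general shape: each of the 10 reported predictors contributes to a weighted sum that is converted to a probability. The variable names and coefficients are purely illustrative assumptions, not the published model of Rossi et al. An AUC of 0.79 means that, for a randomly chosen pair of one failed and one successful induction, a model of this kind would assign the higher predicted risk to the failed induction about 79% of the time.

```python
import math

# Illustrative sketch only -- these coefficients are NOT the published model
# from Rossi et al.; they simply show how a 10-variable logistic model turns
# prenatal factors into an individualized predicted probability.
ILLUSTRATIVE_COEFS = {
    "intercept": -2.0,
    "prior_vaginal_delivery": -1.1,    # protective
    "prior_cesarean_delivery": 1.3,
    "maternal_height_cm": -0.02,       # per cm
    "maternal_age_years": 0.03,        # per year
    "weight_at_delivery_kg": 0.015,    # per kg
    "parity": -0.2,
    "gestational_weight_gain_kg": 0.02,
    "medicaid_insurance": -0.1,
    "pregestational_diabetes": 0.5,
    "chronic_hypertension": 0.4,
}

def predicted_failure_risk(patient: dict) -> float:
    """Return an illustrative predicted probability of a failed induction."""
    linear = ILLUSTRATIVE_COEFS["intercept"]
    for name, beta in ILLUSTRATIVE_COEFS.items():
        if name != "intercept":
            linear += beta * patient.get(name, 0.0)
    return 1.0 / (1.0 + math.exp(-linear))   # logistic link

# Hypothetical patient, for illustration only.
example_patient = {
    "prior_vaginal_delivery": 0, "prior_cesarean_delivery": 0,
    "maternal_height_cm": 160, "maternal_age_years": 36,
    "weight_at_delivery_kg": 110, "parity": 0,
    "gestational_weight_gain_kg": 18, "medicaid_insurance": 0,
    "pregestational_diabetes": 1, "chronic_hypertension": 0,
}
print(f"Illustrative predicted risk of failed induction: "
      f"{predicted_failure_risk(example_patient):.0%}")
```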
“Although we do not stipulate that an elective cesarean delivery should be offered for ‘high risk’ obese women, this tool may allow the provider to have a heightened awareness and prepare accordingly with timing of delivery, increased staffing, and anesthesia presence, particularly given the higher rates of maternal and neonatal adverse outcomes after a failed induction of labor,” said Dr. Rossi and colleagues.
Martina Louise Badell, MD, commented in an interview, “This is a well-designed, large, population-based cohort study of more than 1 million obese women with a singleton pregnancy who underwent induction of labor. To determine the chance of successful induction of labor, a 10-variable model was created. This model achieved an AUC of 0.79, which is fairly good accuracy.
“They created an easy-to-use risk calculator as a tool to help identify chance of successful induction of labor in obese women. Similar to the VBAC [vaginal birth after cesarean] calculator, this calculator may help clinicians with patient-specific counseling, risk stratifying, and delivery planning,” said Dr. Badell, a maternal-fetal medicine specialist who is director of the Emory Perinatal Center at Emory University, Atlanta. Dr. Badell, who was not a coauthor of this study, was asked to comment on the study’s merit.
The authors reported no relevant financial disclosures. Dr. Badell had no relevant financial disclosures. There was no external funding.
SOURCE: Rossi R et al. Obstet Gynecol. 2019. doi: 10.1097/AOG.0000000000003377.
FROM OBSTETRICS & GYNECOLOGY
Endometriosis is linked to adverse pregnancy outcomes
Endometriosis is associated with an increased risk of several adverse pregnancy outcomes, a large study has found.
Leslie V. Farland, ScD, of the University of Arizona, Tucson, and coauthors reported in Obstetrics & Gynecology their analysis of data from 196,722 pregnancies in 116,429 women aged 25-42 years enrolled in the Nurses’ Health Study II cohort.
Among the women with eligible pregnancies, 4.5% had laparoscopically confirmed endometriosis. These women were found to have a 40% higher risk of spontaneous abortion than were women without endometriosis (19.3% vs. 12.3%) and a 46% higher risk of ectopic pregnancy (1.8% vs. 0.8%). The risk of ectopic pregnancy was even more pronounced in women without a history of infertility.
Researchers also saw a 16% higher risk of preterm birth in women with endometriosis (12% in women with endometriosis vs. 8.1% in women without endometriosis), and a 16% greater risk of low-birth-weight babies (5.6% in women with endometriosis vs. 3.6% in women without endometriosis).
There also was the suggestion of an increased risk of stillbirth, although the researchers said this finding should be interpreted with caution because of the small sample size.
Women with endometriosis also had a 35% greater risk of gestational diabetes than did women without endometriosis. This association was stronger in women younger than age 35 years, in women without a history of infertility, and in women undergoing their second or later pregnancy. Endometriosis also was associated with a 30% greater risk of hypertensive disorders of pregnancy, particularly in second or later pregnancies.
Dr. Farland and associates wrote that recent research on the relationship between endometriosis and pregnancy outcomes had yielded “mixed results.”
“For example, much of the research to date has been conducted among women attending infertility clinics, which may conflate the influence of advanced maternal age, fertility treatment, and infertility itself with endometriosis, given the known elevated risk of adverse pregnancy outcomes in this population,” they wrote.
They suggested that one possible mechanism for the association between endometriosis and adverse pregnancy outcomes was progesterone resistance, which was hypothesized to affect genes important for embryo implantation and therefore contribute to pregnancy loss. Another mechanism could be increased inflammation, which may increase the risk of preterm birth and abnormal placentation.
“Elucidating mechanisms of association and possible pathways for intervention or screening procedures will be critical to improve the health of women with endometriosis and their children,” they wrote.
Katrina Mark, MD, commented in an interview, “This study, which identifies an increased risk of adverse pregnancy outcomes in women with endometriosis, is an important step in improving reproductive success.
“Although some explanations for these findings were postulated by the researchers, the next step will be to study the underlying physiology that leads to these complications so that interventions can be offered to improve outcomes,” said Dr. Mark, who is an associate professor of obstetrics, gynecology & reproductive sciences at the University of Maryland School of Medicine. Dr. Mark, who is not a coauthor of the study, was asked to comment on the study’s merit.
The study was supported by grants from the National Institutes of Health. Daniela A. Carusi, MD, received funding from UpToDate; Andrew W. Horne, MB, ChB, PhD, declared grant funding from European government bodies and consultancies with the pharmaceutical sector unrelated to the present study; Jorge E. Chavarro, MD, and Stacey A. Missmer, ScD, declared institutional funding from the NIH, and Dr. Missmer also received institutional funding from other funding bodies, as well as consulting fees. Dr. Farland and the remaining coauthors had no relevant financial disclosures. Dr. Mark had no relevant financial disclosures.
SOURCE: Farland LV et al. Obstet Gynecol. 2019. doi: 10.1097/AOG.0000000000003410.
FROM OBSTETRICS & GYNECOLOGY
Ovarian cancer and perineal talc exposure: An epidemiologic dilemma
Many readers may be aware of large payments made by such companies as Johnson & Johnson to compensate women with a history of ovarian cancer who have claimed that perineal application of talc played a causative role in their cancer development. This column serves to review the purported role of perineal talc use in the development of ovarian cancer, and explore some of the pitfalls of observational science.
Talc, a hydrated magnesium silicate, is the softest mineral on earth and has been sold as a personal hygiene product for many decades. Perineal application of talc to sanitary pads, perineal skin, undergarments, and diapers has been a common practice to decrease friction and moisture build-up and to act as a deodorant. Talc is chemically similar, although not identical, to asbestos and is geologically located in close proximity to that known carcinogen. In the 1970s, concerns were raised regarding possible contamination of cosmetic-grade talc with asbestos, which led to the development of asbestos-free forms of the substance. Given that a strong causal relationship had been established between asbestos exposure and lung and pleural cancers, there was concern that exposure to perineal talc might also increase cancer risk.
In the 1980s, an association between perineal talc exposure and ovarian cancer was observed in a case-control study.1 Since that time, multiple other observational studies, predominantly case-control studies, have observed an increased ovarian cancer risk among users of perineal talc, including a meta-analysis that estimated a 24%-39% increased risk for ovarian cancer among users.2 Does this establish a causal relationship? For the purposes of legal cases, these associations are adequate. However, science demands a different standard when determining cause and effect.
It is not unusual to rely on observational studies to establish a causal relationship between exposure and disease when it is unethical to randomize subjects in a clinical trial to exposure to the potentially harmful agent. This was the necessary methodology behind establishing that smoking causes lung cancer. Several factors must be present when relying on observational studies to establish plausible causation, including a plausible biologic mechanism, a dose-response relationship, a temporal relationship, a consistent effect observed in multiple study populations, and statistical strength of the association. These elements should be present in a consistent and powerful enough way to balance the pitfalls of observational studies, namely biases.
A particularly problematic bias is recall bias, which plagues case-control studies. Case-control studies are a popular tool for measuring the relationship between an exposure and a rare disease because they are more feasible than prospective, observational cohort studies, which require very large study populations observed over very long periods to capture enough events of interest (in this case, cases of ovarian cancer). In case-control studies, researchers identify a cohort of patients with the outcome of interest (ovarian cancer) and compare this population with a control group of similar demographic features. They then survey directly or indirectly (through medical records) for the exposure of interest (perineal talc use).
Recall bias occurs when subjects who have the disease are more likely to remember an exposure than are control subjects, because of the natural instinct individuals have toward attribution. This tendency is amplified when there is public commentary, justified or not, about the potential risks of that exposure. Given the significant publicity surrounding lawsuits against companies that produce cosmetic talc, it is plausible that ovarian cancer survivors are more likely to remember, and negatively attribute their cancer to, their talc exposure than are subjects without cancer. Additionally, their memory of the volume and duration of exposure generally is enhanced by the same pressures. The potential for this bias is eliminated in prospective, observational cohort studies such as the Women’s Health Initiative Observational Study, which, among 61,576 women, half of whom reported perineal talc exposure, did not detect a difference in the development of ovarian cancers during a mean follow-up of 12 years.3
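To make the mechanism concrete, the short simulation below shows how a truly null exposure can yield an elevated odds ratio in a case-control study when cases recall their exposure more completely than controls do. The exposure prevalence and recall rates are illustrative assumptions, not data from any of the studies cited here.

```python
import random

random.seed(1)

# Illustrative assumptions only -- not data from any cited study.
TRUE_EXPOSURE_RATE = 0.40   # assumed true prevalence of perineal talc use
CASE_RECALL = 0.95          # cases report 95% of their true exposure
CONTROL_RECALL = 0.75       # controls report only 75% of theirs
N_CASES = N_CONTROLS = 2000

def reported_exposure_count(n_subjects: int, recall: float) -> int:
    """Count subjects who were truly exposed AND report that exposure."""
    return sum(
        1 for _ in range(n_subjects)
        if random.random() < TRUE_EXPOSURE_RATE and random.random() < recall
    )

exposed_cases = reported_exposure_count(N_CASES, CASE_RECALL)
exposed_controls = reported_exposure_count(N_CONTROLS, CONTROL_RECALL)

# Odds ratio from the 2x2 table of reported exposure vs. case status.
odds_cases = exposed_cases / (N_CASES - exposed_cases)
odds_controls = exposed_controls / (N_CONTROLS - exposed_controls)
print(f"Apparent odds ratio despite no true effect: {odds_cases / odds_controls:.2f}")
```

Under these assumed recall rates, differential reporting alone produces an apparent odds ratio of roughly 1.4 even though the true effect is null, a figure in the same range as the associations reported in the talc case-control literature.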
The biologic mechanism of talc carcinogenesis is largely theoretical. As mentioned earlier, prior to the 1970s there was some observed contamination of talc with asbestos, likely caused by the geologic proximity of these minerals. Asbestos is a known carcinogen and therefore possibly could be harmful as a contaminant of talc; however, it is not known whether this level of contamination was enough to achieve ovarian carcinogenesis. Most theories of talc carcinogenesis are based on a foreign body inflammatory reaction to talc particles ascending through the genital tract. This reaction is proposed to induce an inflammatory release of prostaglandins and cytokines, which could have a mutagenic effect promoting carcinogenesis. The foreign body inflammatory mechanism is further supported by the observation of a decreased incidence of ovarian cancer after hysterectomy or tubal ligation.4 However, inconsistent with this mechanism, a protective effect of NSAIDs has not been observed in ovarian cancer.5
A recent meta-analysis, which reviewed 27 of the largest, best-quality observational studies, identified a dose-response relationship, with an increased risk for ovarian cancer with more than 3,600 lifetime applications compared with fewer than 3,600 applications.2 The observed association between perineal talc exposure and increased risk of ovarian cancer appears to be consistent across a number of observational studies, including both case-control studies and prospective cohort studies (although somewhat attenuated in the latter). Additionally, there appears to be consistency in the finding that the risk is present for the epithelial subtypes of serous and endometrioid cancer but not mucinous or clear cell cancer. However, the magnitude of effect remains somewhat small (odds ratio, 1.31; 95% confidence interval, 1.24-1.39) when compared with better-established carcinogenic relationships such as smoking and lung cancer, where the hazard ratio is 12.12 (95% CI, 6.94-21.17).2,6
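To put an odds ratio of 1.31 into perspective in absolute terms, the back-of-the-envelope calculation below converts it to an approximate lifetime risk, assuming a baseline lifetime ovarian cancer risk of roughly 1.3%; that baseline figure is an approximation used here only for illustration.

```python
# Convert the reported odds ratio into an approximate absolute lifetime risk.
# The ~1.3% baseline lifetime ovarian cancer risk is an assumed, illustrative figure.
baseline_risk = 0.013
odds_ratio = 1.31

baseline_odds = baseline_risk / (1 - baseline_risk)
exposed_odds = odds_ratio * baseline_odds
exposed_risk = exposed_odds / (1 + exposed_odds)

print(f"Assumed baseline lifetime risk: {baseline_risk:.1%}")
print(f"Lifetime risk implied by OR 1.31: {exposed_risk:.1%}")
print(f"Absolute increase: {exposed_risk - baseline_risk:.2%}")
```

Under that assumption, an odds ratio of 1.31 moves the lifetime risk from about 1.3% to about 1.7%, an absolute increase of less than half a percentage point.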
If talc does not cause ovarian cancer, why would this association be observed at all? One explanation could be that talc use is a confounder for the true causative mechanism. A theoretical example of this would be if the genital microbiome (a subject we have reviewed previously in this column) was the true culprit. If a particular microbiome profile promotes both oncogenic change in the ovary while also causing vaginal discharge and odor, it might increase the likelihood that perineal talc use is reported in the history of these cancer patients. This is purely speculative, but it always is important to consider the potential for confounding variables when utilizing observational studies to attribute cause and effect.
In summary, there is a consistently observed association between perineal talc application and ovarian cancer; however, the relationship does not appear to be strong enough, supported by a proven carcinogenic mechanism, or sufficiently free from recall bias to state definitively that perineal talc exposure causes ovarian cancer. Given these findings, it is reasonable to recommend that patients avoid perineal talc application until further definitive safety evidence is available. In the meantime, it should be noted that, even though talc-containing products are not commercially labeled as carcinogens, many pharmaceutical and cosmetic companies have replaced the mineral talc with corn starch in their powders.
Dr. Rossi is assistant professor in the division of gynecologic oncology at the University of North Carolina at Chapel Hill. She had no relevant financial disclosures. Email her at [email protected].
References
1. Cancer. 1982 Jul 15;50(2):372-6.
2. Epidemiology. 2018 Jan;29(1):41-9.
3. J Natl Cancer Inst. 2014 Sep 10;106(9). pii: dju208.
4. Am J Epidemiol. 1991 Aug 15;134(4):362-9.
5. Int J Cancer. 2008 Jan 1;122(1):170-6.
6. J Natl Cancer Inst. 2018 Nov 1;110(11):1201-7.
Many readers may be aware of large payments made by such companies as Johnson & Johnson to compensate women with a history of ovarian cancer who have claimed that perineal application of talc played a causative role in their cancer development. This column serves to review the purported role of perineal talc use in the development of ovarian cancer, and explore some of the pitfalls of observational science.
Talc, a hydrated magnesium silicate, is the softest mineral on earth, and has been sold as a personal hygiene product for many decades. Perineal application of talc to sanitary pads, perineal skin, undergarments, and diapers has been a common practice to decrease friction, moisture build-up, and as a deodorant. Talc is chemically similar, although not identical, to asbestos and is geologically located in close proximity to the known carcinogen. In the 1970s, there were concerns raised regarding the possible contamination of cosmetic-grade talc with asbestos, which led to the development of asbestos-free forms of the substance. Given that a strong causal relationship had been established between asbestos exposure and lung and pleural cancers, there was concern that exposure to perineal talc might increase cancer risk.
In the 1980s, an association between perineal talc exposure and ovarian cancer was observed in a case-control study.1 Since that time, multiple other observational studies, predominately case-control studies, have observed an increased ovarian cancer risk among users of perineal talc including the findings of a meta-analysis which estimated a 24%-39% increased risk for ovarian cancer among users.2 Does this establish a causal relationship? For the purposes of legal cases, these associations are adequate. However, science demands a different standard when determining cause and effect.
It is not unusual to rely on observational studies to establish a causal relationship between exposure and disease when it is unethical to randomize subjects in a clinical trial to exposure of the potential harmful agent. This was the necessary methodology behind establishing that smoking causes lung cancer. Several factors must be present when relying on observational studies to establish plausible causation including an observable biologic mechanism, dose-effect response, temporal relationship, consistent effect observed in multiple study populations, and statistical strength of response. These elements should be present in a consistent and powerful enough way to balance the pitfalls of observational studies, namely biases.
A particularly problematic bias is one of recall bias, which plagues case-control studies. Case-control studies are a popular tool to measure a relationship between an exposure and a rare disease, because they are more feasible than the prospective, observational cohort studies that require very large study populations observed over very long periods of time to capture enough events of interest (in this case, cases of ovarian cancer). In case-control studies, researchers identify a cohort of patients with the outcome of interest (ovarian cancer) and compare this population to a control group of similar demographic features. They then survey directly or indirectly (through medical records) for the exposure of interest (perineal talc use).
Recall bias occurs when subjects who have the disease are more likely to have memory of exposure than do control subjects because of the natural instincts individuals have toward attribution. This is emphasized when there is public commentary, justified or not, about the potential risks of that exposure. Given the significant publicity that these lawsuits have had with companies that produced cosmetic talc, it is plausible that ovarian cancer survivors are more likely to remember and negatively attribute their talc exposure to their cancer than are subjects without cancer. Additionally, their memory of volume and duration of exposure generally is enhanced by the same pressures. The potential for this bias is eliminated in prospective, cohort observational studies such as the Women’s Health Initiative Observational Study which, among 61,576 women, half of whom reported perineal talc exposure, did not measure a difference in the development of ovarian cancers during their 12 years of mean follow-up.3
Given these inherent biases, The biologic mechanism of talc carcinogenesis is largely theoretical. As mentioned earlier, prior to the 1970s, there was some observed contamination of talc with asbestos likely caused by the geologic proximity of these minerals. Asbestos is a known carcinogen, and therefore possibly could be harmful if a contaminant of talc. However, it is not known if this level of contamination was enough to be achieve ovarian carcinogenesis. Most theories of talc carcinogenesis are based on foreign body inflammatory reaction via talc particle ascent through the genital tract. This is proposed to induce an inflammatory release of prostaglandins and cytokines, which could cause a mutagenic effect promoting carcinogenesis. The foreign body inflammatory mechanism is further supported by the observation of a decreased incidence of ovarian cancer after hysterectomy or tubal ligation.4 However, inconsistently, a protective effect of NSAIDs has not been observed in ovarian cancer.5
A recent meta-analysis, which reviewed 27 of the largest, best-quality observational studies, identified a dose-effect response with an increased risk for ovarian cancer with greater than 3,600 lifetime applications, compared with less than 3,600 applications.2 The observed association between perineal talc exposure and increased risk of ovarian cancer appears to be consistent across a number of observational studies, including both case-control studies and prospective cohort studies (although somewhat mitigated in the latter). Additionally, there appears to be consistency in the finding that the risk is present for the epithelial subtypes of serous and endometrioid, but not mucinous or clear cell cancer. However, when considering the magnitude of effect, this remains somewhat small (odds ratio, 1.31; 95% confidence interval, 1.24-1.39) when compared with other better established carcinogenic relationships such as smoking and lung cancer where the hazard ratio is 12.12 (95% CI, 6.94-21.17).2,6
If talc does not cause ovarian cancer, why would this association be observed at all? One explanation could be that talc use is a confounder for the true causative mechanism. A theoretical example of this would be if the genital microbiome (a subject we have reviewed previously in this column) was the true culprit. If a particular microbiome profile promotes both oncogenic change in the ovary while also causing vaginal discharge and odor, it might increase the likelihood that perineal talc use is reported in the history of these cancer patients. This is purely speculative, but it always is important to consider the potential for confounding variables when utilizing observational studies to attribute cause and effect.
Therefore, there is a consistently observed association between perineal talc application and ovarian cancer, however, the relationship does not appear to be strong enough, associated with a proven carcinogenic mechanism, or free from interfering recall bias such to definitively state that perineal talc exposure causes ovarian cancer. Given these findings, it is reasonable to recommend patients avoid the use of perineal talc application until further definitive safety evidence is provided. In the meantime, it should be noted that even though talc-containing products are not commercially labeled as carcinogens, many pharmaceutical and cosmetic companies have replaced the mineral talc with corn starch in their powders.
Dr. Rossi is assistant professor in the division of gynecologic oncology at the University of North Carolina at Chapel Hill. She had no relevant financial disclosures. Email her at [email protected].
References
1. Cancer. 1982 Jul 15;50(2):372-6.
2. Epidemiology. 2018 Jan;29(1):41-9.
3. J Natl Cancer Inst. 2014 Sep 10;106(9). pii: dju208.
4. Am J Epidemiol. 1991 Aug 15;134(4):362-9.
5. Int J Cancer. 2008 Jan 1;122(1):170-6.
6. J Natl Cancer Inst. 2018 Nov 1;110(11):1201-7.
Many readers may be aware of large payments made by such companies as Johnson & Johnson to compensate women with a history of ovarian cancer who have claimed that perineal application of talc played a causative role in their cancer development. This column serves to review the purported role of perineal talc use in the development of ovarian cancer, and explore some of the pitfalls of observational science.
Talc, a hydrated magnesium silicate, is the softest mineral on earth, and has been sold as a personal hygiene product for many decades. Perineal application of talc to sanitary pads, perineal skin, undergarments, and diapers has been a common practice to decrease friction, moisture build-up, and as a deodorant. Talc is chemically similar, although not identical, to asbestos and is geologically located in close proximity to the known carcinogen. In the 1970s, there were concerns raised regarding the possible contamination of cosmetic-grade talc with asbestos, which led to the development of asbestos-free forms of the substance. Given that a strong causal relationship had been established between asbestos exposure and lung and pleural cancers, there was concern that exposure to perineal talc might increase cancer risk.
In the 1980s, an association between perineal talc exposure and ovarian cancer was observed in a case-control study.1 Since that time, multiple other observational studies, predominately case-control studies, have observed an increased ovarian cancer risk among users of perineal talc including the findings of a meta-analysis which estimated a 24%-39% increased risk for ovarian cancer among users.2 Does this establish a causal relationship? For the purposes of legal cases, these associations are adequate. However, science demands a different standard when determining cause and effect.
It is not unusual to rely on observational studies to establish a causal relationship between exposure and disease when it is unethical to randomize subjects in a clinical trial to exposure of the potential harmful agent. This was the necessary methodology behind establishing that smoking causes lung cancer. Several factors must be present when relying on observational studies to establish plausible causation including an observable biologic mechanism, dose-effect response, temporal relationship, consistent effect observed in multiple study populations, and statistical strength of response. These elements should be present in a consistent and powerful enough way to balance the pitfalls of observational studies, namely biases.
A particularly problematic bias is recall bias, which plagues case-control studies. Case-control studies are a popular tool for measuring the relationship between an exposure and a rare disease because they are more feasible than prospective, observational cohort studies, which require very large study populations observed over very long periods of time to capture enough events of interest (in this case, cases of ovarian cancer). In case-control studies, researchers identify a cohort of patients with the outcome of interest (ovarian cancer) and compare this population with a control group of similar demographic features. They then survey, directly or indirectly (through medical records), for the exposure of interest (perineal talc use).
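For readers who want to see the arithmetic behind such studies, here is a minimal sketch in Python, using invented counts that are not drawn from any of the studies cited in this column, showing how exposure frequencies among cases and controls are converted into an odds ratio with a Wald 95% confidence interval:

    import math

    # Hypothetical 2x2 table from a case-control study (illustrative numbers only)
    exposed_cases, unexposed_cases = 120, 180        # women with ovarian cancer
    exposed_controls, unexposed_controls = 200, 400  # cancer-free controls

    # Odds of exposure among cases and among controls
    odds_cases = exposed_cases / unexposed_cases
    odds_controls = exposed_controls / unexposed_controls

    # Odds ratio and Wald 95% confidence interval computed on the log scale
    odds_ratio = odds_cases / odds_controls
    se_log_or = math.sqrt(1/exposed_cases + 1/unexposed_cases +
                          1/exposed_controls + 1/unexposed_controls)
    ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
    ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

    print(f"OR = {odds_ratio:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}")

Nothing in this calculation can detect whether the reported exposures themselves are distorted, which is exactly why the bias discussed next matters.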
Recall bias occurs when subjects who have the disease are more likely to remember an exposure than are control subjects, because of the natural instinct individuals have toward attribution. This tendency is amplified when there is public commentary, justified or not, about the potential risks of that exposure. Given the significant publicity surrounding the lawsuits against companies that produced cosmetic talc, it is plausible that ovarian cancer survivors are more likely to remember and negatively attribute their talc exposure to their cancer than are subjects without cancer. Additionally, their memory of the volume and duration of exposure generally is enhanced by the same pressures. The potential for this bias is eliminated in prospective, cohort observational studies such as the Women’s Health Initiative Observational Study, which, among 61,576 women, half of whom reported perineal talc exposure, did not measure a difference in the development of ovarian cancers during 12 years of mean follow-up.3
The biologic mechanism of talc carcinogenesis is largely theoretical. As mentioned earlier, prior to the 1970s, there was some observed contamination of talc with asbestos, likely caused by the geologic proximity of these minerals. Asbestos is a known carcinogen and therefore could plausibly be harmful as a contaminant of talc. However, it is not known whether this level of contamination was sufficient to achieve ovarian carcinogenesis. Most theories of talc carcinogenesis are based on a foreign body inflammatory reaction via talc particle ascent through the genital tract. This is proposed to induce an inflammatory release of prostaglandins and cytokines, which could cause a mutagenic effect promoting carcinogenesis. The foreign body inflammatory mechanism is further supported by the observation of a decreased incidence of ovarian cancer after hysterectomy or tubal ligation.4 However, inconsistent with this mechanism, a protective effect of NSAIDs has not been observed in ovarian cancer.5
A recent meta-analysis, which reviewed 27 of the largest, best-quality observational studies, identified a dose-effect response, with an increased risk for ovarian cancer with greater than 3,600 lifetime applications, compared with fewer than 3,600 applications.2 The observed association between perineal talc exposure and increased risk of ovarian cancer appears to be consistent across a number of observational studies, including both case-control studies and prospective cohort studies (although somewhat attenuated in the latter). Additionally, there appears to be consistency in the finding that the risk is present for the epithelial subtypes of serous and endometrioid cancer, but not mucinous or clear cell cancer. However, the magnitude of effect remains small (odds ratio, 1.31; 95% confidence interval, 1.24-1.39) when compared with better established carcinogenic relationships such as smoking and lung cancer, for which the hazard ratio is 12.12 (95% CI, 6.94-21.17).2,6
If talc does not cause ovarian cancer, why would this association be observed at all? One explanation could be that talc use is a confounder for the true causative mechanism. A theoretical example of this would be if the genital microbiome (a subject we have reviewed previously in this column) was the true culprit. If a particular microbiome profile promotes both oncogenic change in the ovary while also causing vaginal discharge and odor, it might increase the likelihood that perineal talc use is reported in the history of these cancer patients. This is purely speculative, but it always is important to consider the potential for confounding variables when utilizing observational studies to attribute cause and effect.
Therefore, there is a consistently observed association between perineal talc application and ovarian cancer; however, the relationship does not appear to be strong enough, supported by a proven carcinogenic mechanism, or free from interfering recall bias to definitively state that perineal talc exposure causes ovarian cancer. Given these findings, it is reasonable to recommend that patients avoid perineal talc application until further definitive safety evidence is provided. In the meantime, it should be noted that even though talc-containing products are not commercially labeled as carcinogens, many pharmaceutical and cosmetic companies have replaced the mineral talc with corn starch in their powders.
Dr. Rossi is assistant professor in the division of gynecologic oncology at the University of North Carolina at Chapel Hill. She had no relevant financial disclosures. Email her at [email protected].
References
1. Cancer. 1982 Jul 15;50(2):372-6.
2. Epidemiology. 2018 Jan;29(1):41-9.
3. J Natl Cancer Inst. 2014 Sep 10;106(9). pii: dju208.
4. Am J Epidemiol. 1991 Aug 15;134(4):362-9.
5. Int J Cancer. 2008 Jan 1;122(1):170-6.
6. J Natl Cancer Inst. 2018 Nov 1;110(11):1201-7.
Pretreatment CT data may help predict immunotherapy benefit in ovarian cancer
Pretreatment CT data may help identify responders to immunotherapy in ovarian cancer, according to a new study.
Specifically, fewer sites of disease and lower intratumor heterogeneity on contrast-enhanced CT may indicate a higher likelihood of durable response to immune checkpoint inhibitors, according to results of the retrospective study, recently published in JCO Precision Oncology.
“Our results suggest that quantitative analysis of baseline contrast-enhanced CT may facilitate the delivery of precision medicine to patients with ovarian cancer by identifying patients who may benefit from immunotherapy,” wrote Yuki Himoto, MD, PhD, of Memorial Sloan Kettering Cancer Center in New York, and colleagues.
The study leverages findings from the emerging field of radiomics, which the investigators note allows for “virtual sampling” of tumor heterogeneity within a single lesion and between lesions.
“This information may complement molecular profiling in personalizing medical decisions,” Dr. Himoto and coauthors explained.
The study cohort included 75 patients with recurrent ovarian cancer who were enrolled in ongoing, prospective trials of immunotherapy, according to the researchers. Of that group, just under one in five derived a durable clinical benefit, defined as progression-free survival lasting at least 24 weeks.
In univariable analysis, they found a number of contrast-enhanced CT variables were linked to durable clinical benefit, including fewer disease sites, lower cluster-site entropy and dissimilarity, which they wrote were an indicator of lower intertumor heterogeneity, and higher energy in the largest-volume lesion, which they described as an indicator of lower intratumor heterogeneity.
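The texture terms used here come from radiomics; although the investigators’ exact pipeline is not detailed in this summary, “energy” and “dissimilarity” are most commonly defined from a gray-level co-occurrence matrix (GLCM), in which a more uniform, less heterogeneous region yields higher energy. The following hypothetical sketch, which assumes the open-source scikit-image package in Python rather than whatever software the study actually used, illustrates the idea on two synthetic image patches:

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    rng = np.random.default_rng(0)

    # Two synthetic 8-bit "lesion" patches: one nearly uniform, one noisy
    uniform_patch = np.full((32, 32), 120, dtype=np.uint8)
    noisy_patch = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)

    def texture_features(patch):
        # Co-occurrence matrix of gray levels for horizontally adjacent pixels
        glcm = graycomatrix(patch, distances=[1], angles=[0],
                            levels=256, symmetric=True, normed=True)
        return {prop: float(graycoprops(glcm, prop)[0, 0])
                for prop in ("energy", "dissimilarity")}

    print("uniform:", texture_features(uniform_patch))  # high energy, low dissimilarity
    print("noisy:  ", texture_features(noisy_patch))    # low energy, high dissimilarity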
However, in multivariable analysis, the only variables that were still associated with durable clinical benefit were fewer disease sites (odds ratio, 1.64; 95% confidence interval, 1.19-2.27; P = .012) and higher energy in the largest lesion (odds ratio, 1.41; 95% CI, 1.11-1.81; P = .006), according to the report.
Those two factors combined were a composite indicator of durable clinical benefit (C-index, 0.821).
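To show what a two-variable composite indicator and a C-index of this kind look like in practice, here is a hedged sketch on simulated data, assuming Python with scikit-learn; the variable effects and the resulting discrimination are invented and should not be read as a reanalysis of the study:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(42)
    n = 300

    # Simulated predictors: number of disease sites and energy of the largest lesion
    n_sites = rng.integers(1, 10, size=n)
    energy = rng.uniform(0.0, 1.0, size=n)

    # Simulated outcome: durable benefit more likely with fewer sites and higher energy
    logit = -0.5 - 0.4 * n_sites + 2.0 * energy
    benefit = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    X = np.column_stack([n_sites, energy])
    model = LogisticRegression().fit(X, benefit)

    # For a binary outcome, the C-index equals the ROC AUC of the predicted probabilities
    c_index = roc_auc_score(benefit, model.predict_proba(X)[:, 1])
    print(f"C-index on the simulated data: {c_index:.3f}")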
These findings could represent a step forward in the provision of immunotherapy in ovarian cancer, which exhibits poor response to immune checkpoint inhibitors, compared with some other cancer types, the investigators wrote.
More insights are needed, however, to help personalize the selection of immunotherapy in ovarian cancer, including a better understanding of cancer immune reactions and retooling of immune response criteria, they added.
“Composite multimodal multifaceted biomarkers that noninvasively capture spatiotemporal tumor heterogeneity will likely be necessary to comprehensively assess the immune tumor microenvironment and serve as clinical decision support for prognosis inference and prediction of response,” Dr. Himoto and associates wrote.
The study was supported by the National Cancer Institute, among other sources. Study authors reported disclosures related to Merck, Bristol-Myers Squibb, Genentech, Celgene, AstraZeneca, Y-mAbs Therapeutics, and others.
SOURCE: Himoto Y et al. JCO Precis Oncol. 2019 Aug 13. doi: 10.1200/PO.19.00038.
FROM JCO PRECISION ONCOLOGY
A Call to Address Sexual Harassment and Gender Discrimination in Medicine
PART I
Reports of sexual harassment and gender discrimination have dominated news headlines, and the #MeToo movement has brought the scope and severity of discriminatory behavior to the forefront of public consciousness. The #MeToo movement has raised national and global awareness of gender discrimination and sexual harassment in all industries and has given rise to the Time’s Up initiative within health care.
Academic medicine has not been immune to workplace gender discrimination and sexual harassment, as has been widely reported in the literature and clearly documented in the 2018 National Academies of Sciences, Engineering, and Medicine report, which points out that “… the cumulative effect of sexual harassment is a significant and costly loss of talent in academic science, engineering, and medicine, which has consequences for advancing the nation’s economic and social well-being and its overall public health.”1
With the increasing recognition that health care is an environment especially prone to inequality, gender discrimination, and sexual harassment, the Time’s Up national organization, supported by the Time’s Up Legal Defense Fund, launched the Time’s Up initiative for health care workers on March 1, 2019.2,3 The overarching goal of this initiative is to expose workplace inequalities; drive policy and legislative changes focused on equal pay, equal opportunity, and equal work environments; and support safe, fair, and dignified work for women in health care.2,3
This article, presented over the next three issues of Vascular Specialist, will present data on the ongoing problem of sexual harassment in medicine, discuss why the problem is prevalent in academic medicine, and provide recommendations for mitigating the problem in our workplace.
Defining & Measuring Sexual Harassment
Although commonly referred to as “sex discrimination,” sexual harassment differs from sex discrimination. Sex discrimination refers to the denial of an employee’s civil rights, raises, job opportunities, or employment, or to demotion or other mistreatment, based on sex. On the other hand, sexual harassment relates to behavior that is inappropriate or offensive. A 2018 report from the National Academies Press defined sexual harassment (a form of discrimination) as comprising three categories of behavior: gender harassment – verbal and nonverbal behaviors that convey hostility, objectification, exclusion, or second-class status about members of one sex; unwanted sexual attention – verbal or physical unwelcome sexual advances, which can include assault; and sexual coercion – when favorable professional or educational treatment is conditioned on sexual activity.1
During 1995-2016, more than 7,000 health care service employees filed claims of sexual harassment with the Equal Employment Opportunity Commission. While this number may seem large, the number of official reports severely understates the prevalence of sexual harassment in U.S. health care.1 Prevalence is best determined using representative, validated surveys that rely on firsthand experience or observation of the behavior(s) without requiring the respondent to label those behaviors.
Environments at Risk for Sexual Harassment
Research reveals that academic settings in the fields of science exhibit characteristics that create high levels of risk for sexual harassment to occur. These environments historically are male dominated, tolerate sexually harassing behavior, and create a hierarchy in which men hold most of the positions of power and authority. Moreover, dependent relationships often exist between these gatekeepers and those subordinate to them, with gatekeepers directly influencing the career advancement of those subordinates.1
The greatest predictor of sexual harassment in the workplace is the organizational climate, which refers to the tolerance for sexual harassment and is measured on three elements: a lack of sanctions against offenders; a perceived risk to those who report sexually harassing behavior; and the perception that one’s report of sexually harassing behavior will not be taken seriously.1 Women are less likely to be directly harassed in environments that do not tolerate harassing behaviors or have a strong, clear, transparent consequence for these behaviors.
Sexual Harassment in Academic Medicine
Academic medicine has the highest rate of gender and sexual harassment in the health care industry, with about 50% of female academic physicians reporting incidents of sexual harassment.1 A recent survey suggests that more than half (58%) of women surgeons experienced sexual harassment within just the previous year.4 The conditions that increase the risk of sexual harassment against women – male-dominated hierarchical environments and organizational tolerance of sexual harassment – still prevail in academic medicine.
Higher-education environments are perceived as permissive in part because, when targets report sexual harassment, they are retaliated against or there are few consequences for the perpetrator. Academic institutions are replete with cases in which the conduct of offenders is regarded as an open secret, but there are no sanctions for that bad behavior. These offenders often are perceived as superstars in their particular substantive area. Because they hold valued grants or national status within their specialty area, they often receive preferential treatment and are not held accountable for gender-biased and sexually harassing behavior. Interview data regarding sexual harassment in academic medicine reveal that interview respondents and other colleagues often know which individuals have a history of sexually harassing behavior. Both men and women warn colleagues about these perpetrators – knowing that calling out or reporting these behaviors is fruitless – and advise that the best way to deal with the behavior is to avoid or ignore it. This normalization of sexual harassment and gender bias was noted, unfortunately, to fuel similar behavior in new cohorts of medicine faculty.1
Sexual harassment of women in academic medicine starts in medical school. Female medical students are significantly more likely to experience sexual harassment by faculty and staff than are graduate or undergraduate students. Sexual harassment continues into residency training with residency described as “breeding grounds for abusive behavior by superiors.”1 Interview studies report that both men and women trainees widely accept harassing behavior at this stage of their training. The expectation of abusive and grueling conditions during residency caused several respondents to view sexual harassment as part of a continuum that they were expected to endure. Female residents in surgery and emergency medicine are more likely to be harassed than those in other specialties because of the high value placed on a hierarchical and authoritative workplace. Once out of residency, the sexual harassment of women in the workplace continues. A recent meta-analysis reveals that 58% of women faculty experience sexual harassment at work. Academic medicine has the second-highest rate of sexual harassment, behind the military (69%), as compared with all other workplaces. Women physicians of color experience more harassment (as a combination of sexual and racial harassment) than do white women physicians.1
Why Women Are Not Likely to Report Sexual Harassment
Only 25% of targets file formal reports with their employer, and even fewer take claims to court. These numbers are even lower for women in the military and academic medicine, where formal reporting is a last resort for victims. The reluctance to use formal reporting mechanisms is rooted in the “fear of blame, disbelief, inaction, retaliation, humiliation, ostracism, and the damage to one’s career and reputation.”1 Targets often perceive few benefits and high costs to reporting. Women and nonwhites often resist calling bad behavior “discrimination” because doing so increases their sense of lost control and victimhood.1 Women frequently perceive that grievance procedures favor the institution over the individual, and research has shown that women face retaliation, both professional and social, for speaking out. Furthermore, stark power differentials between the target and the perpetrator exacerbate the reluctance to report and the fear of retaliation. The overall effects can be long lasting.
References:
1. National Academies of Sciences, Engineering, and Medicine. Sexual Harassment of Women: Climate, Culture, and Consequences in Academic Sciences, Engineering, and Medicine. Washington, DC: The National Academies Press; 2018. doi: 10.17226/24994.
2. Choo EK et al. From #MeToo to #TimesUp in Health Care: Can a Culture of Accountability End Inequity and Harassment? Lancet. 2019 Feb 9;393(10171):499-502.
3. Choo EK et al. Time’s Up for Medicine? Only Time Will Tell. N Engl J Med. 2018 Oct 25;379(17):1592-3.
4. Medicine Has Its Own #MeToo Problems. Can Time’s Up Healthcare Fix It?
Dr. Mitchell is a vascular surgeon at Salem (Ore.) Hospital; Dr. Drudi is a vascular surgery resident at McGill University, Montreal; Dr. Brown is a professor of surgery at the Medical College of Wisconsin, Milwaukee; and Dr. Sachdev-Ost is an associate professor of surgery at the University of Pittsburgh Medical Center.
FRAX with BMD may not be accurate for women with diabetes
FRAX scores calculated with bone mineral density may not accurately predict fracture risk in women with diabetes, according to data from 566 women aged 40-90 years.
In a study published in Bone Reports, Lelia L.F. de Abreu, MD, of Deakin University, Geelong, Australia, and colleagues investigated the accuracy of FRAX scores and the role of impaired fasting glucose (IFG) and bone mineral density (BMD) on fracture risk by comparing FRAX scores for 252 normoglycemic women, 247 women with IFG, and 67 women with diabetes.
When BMD was not included, women with diabetes had a higher median FRAX score for major osteoporotic fractures of the hip, clinical spine, forearm, and wrist than women without diabetes or women with IFG (7.1, 4.3, and 5.1, respectively). In the diabetes group, 11 major osteoporotic fractures were observed versus 5 predicted by FRAX. In the normoglycemic group, 28 fractures were observed versus 15 predicted, and in the IFG group 31 fractures were observed versus 16 predicted.
When BMD was included, major osteoporotic fractures and hip fractures also were underestimated in the diabetes group (11 observed vs. 4 predicted, and 6 observed vs. 1 predicted, respectively), but the differences between observed and predicted fractures did not reach statistical significance (P = .055 and P = .52, respectively). FRAX with BMD increased the underestimation of major osteoporotic fractures in the normoglycemic and IFG groups (28 observed vs. 13 predicted and 31 observed vs. 13 predicted, respectively).
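One common way to express this kind of underestimation is an observed-to-predicted ratio with an exact Poisson confidence interval for the observed count; the sketch below applies that calculation in Python to the diabetes-group counts quoted above. It is only an illustration – it treats the FRAX prediction as a fixed quantity and is not the method the investigators used, so it will not reproduce the P values reported in the paper.

    from scipy.stats import chi2

    observed, predicted = 11, 4   # diabetes group, FRAX with BMD (counts from the report)

    # Exact (Garwood) 95% confidence limits for a Poisson-distributed observed count
    lower = chi2.ppf(0.025, 2 * observed) / 2
    upper = chi2.ppf(0.975, 2 * (observed + 1)) / 2

    ratio = observed / predicted
    print(f"Observed/predicted ratio = {ratio:.2f} "
          f"(95% CI {lower / predicted:.2f}-{upper / predicted:.2f})")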
The study findings were limited by several factors including the inability to determine the impact of specific types of diabetes on fracture risk, lack of data on the duration of diabetes in study participants, the use of self-reports, and a relatively small and homogeneous sample size, the researchers noted.
However, the results support data from previous studies showing an increased fracture risk in diabetes patients regardless of BMD, and suggest that FRAX may be unreliable as a predictor of fractures in the diabetes population, they concluded.
The study was supported in part by the Victorian Health Promotion Foundation, National Health and Medical Research Council Australia, and the Geelong Region Medical Research Foundation. Two researchers were supported by university postgraduate awards, and one researcher was supported by a university postdoctoral research fellowship. The remaining coauthors reported no relevant financial conflicts.
SOURCE: de Abreu LLF et al. Bone Reports. 2019 Aug 13. doi: 10.1016/j.bonr.2019.100223.
FROM BONE REPORTS
Cancer survivors face more age-related deficits
Long-term survivors of cancer have more age-related functional deficits than do those who have not experienced cancer, and these deficits – as well as their cancer history – are both associated with a higher risk of all-cause mortality, a study has found.
A paper published in Cancer reported the outcomes of a population-based cohort study involving 1,723 female cancer survivors and 11,145 cancer-free women enrolled in the Iowa Women’s Health Study, who were followed for 10 years.
The analysis revealed that women with a history of cancer had significantly more deficits on a geriatric assessment compared with their age-matched controls without a history of cancer. While 66% of women without a cancer history had one or more deficits, 70% of those with a history had at least one age-related deficit, and they were significantly more likely to have two or more deficits.
Cancer survivors were significantly more likely to have two or more physical function limitations than were those without a history of cancer (42.4% vs. 36.9%, P less than .0001), to have two or more comorbidities (41.3% vs. 38.2%, P = .02) and to have poor general health (23.3% vs. 17.4%, P less than .0001). They were also significantly less likely to be underweight.
The study found that both cancer history and age-related functional deficits were predictors of mortality, even after adjustment for confounders such as chronological age, smoking, and physical activity levels. The highest mortality risk was seen in cancer survivors with two or more age-related health deficits, who had a twofold greater mortality risk compared with the noncancer controls with fewer than two health deficits.
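Adjusted mortality comparisons of this kind are typically made with a proportional hazards model. As a rough illustration of the general form – not a reproduction of the study’s analysis – the sketch below fits a Cox model on simulated data, assuming Python with the open-source lifelines package; every variable and coefficient is invented.

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(1)
    n = 1000

    # Simulated cohort: cancer history, age-related deficits, and two confounders
    df = pd.DataFrame({
        "cancer_history": rng.binomial(1, 0.15, n),
        "deficits_ge2": rng.binomial(1, 0.4, n),
        "age": rng.normal(70, 5, n),
        "smoker": rng.binomial(1, 0.2, n),
    })

    # Simulated survival: higher hazard with cancer history and with >=2 deficits
    hazard = 0.02 * np.exp(0.3 * df["cancer_history"] + 0.6 * df["deficits_ge2"]
                           + 0.03 * (df["age"] - 70) + 0.4 * df["smoker"])
    time_to_death = rng.exponential(1 / hazard.to_numpy())
    df["time"] = np.minimum(time_to_death, 10.0)          # censor at 10 years of follow-up
    df["death"] = (time_to_death <= 10.0).astype(int)

    # Cox proportional hazards model adjusting for age and smoking
    cph = CoxPHFitter()
    cph.fit(df, duration_col="time", event_col="death")
    cph.print_summary()   # adjusted hazard ratios for cancer history and deficits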
Even individuals with a history of cancer but without any health deficits still had a 1.3-1.4-fold increased risk of mortality compared with individuals without a history of cancer and without health deficits.
“These results confirm the increased risk of mortality associated with GA domain deficits and extend the research by demonstrating that a cancer history is associated with an older functional age compared with age-matched cancer-free individuals,” wrote Cindy K. Blair, PhD, of the department of internal medicine at the University of New Mexico, Albuquerque, and coauthors.
They noted that the study included very long-term cancer survivors who had survived for an average of 11 years before they underwent the geriatric assessment and were then followed for 10 years after that point.
“Further research is needed to identify older cancer survivors who are at risk of accelerated aging,” the authors wrote. “Interventions that target physical function, comorbidity, nutritional status, and general health are greatly needed to improve or maintain the quality of survivorship in older cancer survivors.”
The National Cancer Institute, the University of Minnesota Cancer Center, and the University of New Mexico Comprehensive Cancer Center supported the study. Two authors declared grants from the National Institutes of Health related to the study.
SOURCE: Blair C et al. Cancer 2019, Aug 16. doi: 10.1002/cncr.32449.
FROM CANCER
USPSTF expands BRCA1/2 testing recommendations
The U.S. Preventive Services Task Force (USPSTF) has updated its recommendations on assessment of breast cancer susceptibility gene (BRCA)-related cancer, substantially expanding the pool of individuals for whom risk assessment, testing, and counseling would be warranted.
In its 2013 recommendation, the USPSTF said referral for genetic counseling and evaluation for BRCA1/2 testing was warranted for women who had a family history linked to increased risk of potentially harmful BRCA1/2 mutations.
The updated recommendations, just published in JAMA, expand the screening-eligible population to include those with personal cancer history, and more specifically call out ancestry linked to BRCA1/2 mutations as a risk factor (JAMA. 2019;322[7]:652-65. doi: 10.1001/jama.2019.10987).
“The USPSTF recommends that primary care clinicians assess women with a personal or family history of breast, ovarian, tubal, or peritoneal cancer or who have an ancestry associated with BRCA1/2 gene mutations with an appropriate brief familial risk assessment tool,” wrote Douglas K. Owens, MD, of Stanford (Calif.) University, and coauthors of the task force report.
Positive results on the risk assessment tool should prompt genetic counseling, and genetic testing if indicated after counseling, the USPSTF added in its statement.
By contrast, the task force recommends against routine assessment, counseling, and testing in women with no family history, personal history, or ancestry linked to possibly harmful BRCA1/2 gene mutations, consistent with their previous recommendation.
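In practical terms, the recommendation describes a short triage sequence: screen for personal history, family history, or high-risk ancestry; apply a brief familial risk assessment tool; and refer positive screens for genetic counseling, with testing only if counseling supports it. The sketch below is a minimal illustration of that flow, assuming a simple boolean tool result; the function names are hypothetical and do not correspond to any validated instrument.

```python
# Illustrative only: the triage sequence described in the USPSTF statement,
# expressed as a minimal sketch. The scoring of the brief familial risk
# assessment tool is abstracted to a boolean; nothing here is a validated tool.

def eligible_for_risk_assessment(personal_history: bool,
                                 family_history: bool,
                                 high_risk_ancestry: bool) -> bool:
    """Step 1: personal history, family history, or associated ancestry
    qualifies a woman for a brief familial risk assessment."""
    return personal_history or family_history or high_risk_ancestry

def next_step(risk_tool_positive: bool) -> str:
    """Step 2: positive screens go to genetic counseling, with BRCA1/2 testing
    only if indicated after counseling; negative screens get routine care."""
    if risk_tool_positive:
        return "refer for genetic counseling (testing if indicated after counseling)"
    return "routine care; no BRCA1/2 risk assessment or testing recommended"

if eligible_for_risk_assessment(personal_history=False,
                                family_history=True,
                                high_risk_ancestry=False):
    print(next_step(risk_tool_positive=True))
```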
Mutations of BRCA1/2 genes occur in an estimated 1 in 300-500 women in the general population, and account for 15% of ovarian cancer and up to 10% of breast cancer cases, according to the USPSTF.
Breast cancer risk is increased to as much as 65% by age 70 years in women with clinically significant BRCA1/2 mutations, while the risk of ovarian, fallopian tube, or peritoneal cancer is increased to as much as 39%, according to studies cited by the USPSTF.
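A back-of-the-envelope calculation makes those figures concrete. The snippet below is an illustration only: it combines the round numbers quoted above with a hypothetical panel of 100,000 women and upper-bound cumulative risks, and is not a substitute for formal risk modeling.

```python
# Illustrative arithmetic only: combines the round numbers quoted above with a
# hypothetical panel of 100,000 women; not a substitute for formal modeling.

panel = 100_000
carrier_rate_low, carrier_rate_high = 1 / 500, 1 / 300   # "1 in 300-500 women"
breast_risk_by_70 = 0.65                                  # upper-end cumulative risk cited
ovarian_tubal_peritoneal_risk = 0.39                      # upper-end cumulative risk cited

carriers_low = panel * carrier_rate_low
carriers_high = panel * carrier_rate_high
print(f"Expected carriers: {carriers_low:.0f}-{carriers_high:.0f} per {panel:,} women")
print(f"Breast cancers by age 70 among carriers (upper bound): "
      f"{carriers_low * breast_risk_by_70:.0f}-{carriers_high * breast_risk_by_70:.0f}")
print(f"Ovarian/tubal/peritoneal cancers among carriers (upper bound): "
      f"{carriers_low * ovarian_tubal_peritoneal_risk:.0f}-"
      f"{carriers_high * ovarian_tubal_peritoneal_risk:.0f}")
```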
Important step forward
Including women with prior breast and ovarian cancer in the screening-eligible population is an “important step forward,” Susan Domchek, MD, and Mark Robson, MD, said in a related editorial.
“While further expansion of the USPSTF recommendation should be considered, the importance is clear: Identification of individuals at risk of carrying a BRCA1/2 mutation can be lifesaving and should be a part of routine medical care,” Dr. Domchek and Dr. Robson said in their editorial, which appears in JAMA.
While the updated recommendations explicitly call out ancestry as a risk factor, they stop short of endorsing testing for unaffected Ashkenazi Jewish women with no family history, the authors said.
“However, the statement may be interpreted as a step toward supporting unselected testing in this group,” they added.
Among unselected individuals of Ashkenazi Jewish descent, 1 in 40 carries 1 of 3 specific BRCA1 or BRCA2 founder mutations, according to one study cited by Dr. Domchek and Dr. Robson.
More research needed
Current research is still “limited or lacking” to address many key questions about the benefits and harms of risk assessment, genetic counseling, and genetic testing in women without BRCA1/2-related cancer, according to authors of a literature review used by the USPSTF.
Notably, the ability of risk assessment, testing, and counseling to reduce cancer incidence and mortality among such women has not been directly evaluated by studies to date, said the review authors, led by Heidi D. Nelson, MD, MPH, of Oregon Health & Science University, Portland.
“Without effectiveness trials of intensive screening, practice standards have preceded supporting evidence,” Dr. Nelson and coauthors noted in a report on the review findings.
In observational studies, mastectomy and oophorectomy have been associated with substantial reductions in subsequent cancer incidence and mortality; however, they are invasive procedures with potential complications, the authors noted.
“To determine the appropriateness of risk assessment and genetic testing for BRCA1/2 mutations as a preventive service in primary care, more information is needed about mutation prevalence and the effect of testing in the general population,” they added.
Researchers studying BRCA1/2 assessment as a preventive service in primary care have generally looked at highly selected patient populations in referral centers and have reported relatively short-term outcomes, they said.
Research is additionally needed on access to genetic testing and follow-up, effectiveness of risk stratification and multigene panels, and the impact of direct-to-consumer genetic testing, among other key questions, the authors of the review added.
Treatment implications
While the USPSTF recommendations do not mention systemic therapy, finding a BRCA mutation in a cancer patient today has important implications for treatment, said Rachel L. Yung, MD, and Larissa A. Korde, MD, MPH.
Specifically, poly (ADP-ribose) polymerase (PARP) inhibitors have proved effective in certain BRCA-related cancers, Dr. Yung and Dr. Korde said in an editorial on the updated recommendations appearing in JAMA Oncology.
The Food and Drug Administration has already approved several PARP inhibitors for treatment of BRCA-linked metastatic breast or ovarian cancers, and studies are underway for other tumor types, including prostate and pancreatic cancers that harbor a BRCA mutation.
“Increasing awareness of BRCA mutation as a target for treatment will likely lead to an increase in the identification of patients with cancer harboring germline BRCA mutations, which in turn will increase the need for cascade testing for relatives of affected probands,” wrote Dr. Yung and Dr. Korde.
Addressing disparities in care
The USPSTF recommendations for BRCA risk assessment do not address disparities in testing referral and variation in breast cancer phenotypes among women of African ancestry, owing to lack of evidence, according to Lisa Newman, MD, MPH, of the Interdisciplinary Breast Program at New York–Presbyterian/Weill Cornell Medical Center, New York.
“Paradoxically, the data-driven basis for the USPSTF recommendation statement may magnify existing genetic testing disparities,” Dr. Newman wrote in an editorial that appears in JAMA Surgery.
Non-Hispanic black women in the United States have a twofold higher incidence of triple-negative breast cancer, which is a well-documented risk factor for BRCA1 mutation carrier status, according to Dr. Newman.
Despite this, she added, genetic counseling and testing referrals remain “disproportionately low” among U.S. patients of African ancestry.
“It remains imperative for clinicians to exercise clinical judgment and to be mindful of patient subsets that do not necessarily fit into recommendations designed for the majority or general populations,” Dr. Newman concluded in her editorial.
The USPSTF is funded by the Agency for Healthcare Research and Quality. Members of the task force receive travel reimbursement and honoraria for participating in USPSTF meetings.
FROM JAMA
Self-reported falls can predict osteoporotic fracture risk
A single, simple question about a patient’s experience of falls in the previous year can help predict their risk of fractures, a study suggests.
In Osteoporosis International, researchers reported the outcomes of a cohort study using Manitoba clinical registry data from 24,943 men and women aged 40 years and older within the province who had undergone a fracture-probability assessment, and had data on self-reported falls for the previous year and fracture outcomes.
William D. Leslie, MD, of the University of Manitoba in Winnipeg, and coauthors wrote that a frequent criticism of the FRAX fracture risk assessment tool is that it does not include falls or fall risk in predicting fractures.
“Recent evidence derived from carefully conducted research cohort studies in men found that falls increase fracture risk independent of FRAX probability,” they wrote. “However, data are inconsistent with a paucity of evidence demonstrating usefulness of self-reported fall data as collected in routine clinical practice.”
During follow-up, 0.8% of participants experienced a hip fracture and 4.9% experienced any incident fracture.
The analysis showed an increasing risk of fracture with an increasing number of self-reported falls in the previous year. Compared with individuals who reported no falls, the risk of major osteoporotic fracture was 49% higher in those who reported one fall, 74% higher in those who reported two falls, and 2.6-fold higher in those who reported three or more falls.
A similar pattern was seen for any incident fracture and hip fracture, with a 3.4-fold higher risk of hip fracture seen in those who reported three or more falls. The study also showed an increase in mortality risk with increasing number of falls.
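Taken together, those estimates describe a dose-response gradient from the number of self-reported falls to relative fracture risk. The snippet below only illustrates how such a gradient might be applied: the relative risks are the point estimates reported for major osteoporotic fracture, while the baseline probability is a made-up placeholder rather than a value from the study.

```python
# Illustrative only: applying the dose-response gradient reported above as a
# simple lookup. The relative risks are the article's point estimates for major
# osteoporotic fracture; the baseline probability is a hypothetical placeholder.

MOF_RELATIVE_RISK = {0: 1.0, 1: 1.49, 2: 1.74, 3: 2.6}  # key 3 means "3 or more falls"

def relative_risk(falls_last_year: int) -> float:
    return MOF_RELATIVE_RISK[min(falls_last_year, 3)]

baseline_probability = 0.05  # hypothetical baseline risk, not from the study
for falls in range(5):
    rr = relative_risk(falls)
    print(f"{falls} fall(s): RR {rr:.2f}, illustrative absolute risk "
          f"{baseline_probability * rr:.3f}")
```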
“We documented that a simple question regarding self-reported falls in the previous year could be easily collected during routine clinical practice and that this information was strongly predictive of short-term fracture risk independent of multiple clinical risk factors including fracture probability using the FRAX tool with BMD [bone mineral density],” the authors wrote.
The analysis did not find an interaction between the number of falls and either age or sex.
John A. Kanis, MD, reported grants from Amgen, Lilly, and Radius Health. Three other coauthors reported nothing to declare in the context of this article but reported research grants, speaking honoraria, and consultancies from a variety of pharmaceutical companies and organizations. The remaining five coauthors declared no conflicts of interest.
SOURCE: Leslie WD et al. Osteoporos Int. 2019 Aug. 2. doi: 10.1007/s00198-019-05106-3.
Fragility fractures remain a major contributor to morbidity and even mortality in aging populations. Concerted efforts of clinicians, epidemiologists, and researchers have yielded an assortment of diagnostic strategies and prognostic algorithms aimed at identifying individuals at risk of fracture. A variety of demographic (age, sex), biological (family history, specific disorders and medications), anatomical (bone mineral density, body mass index), and behavioral (smoking, alcohol consumption) parameters are recognized predictors of fracture risk and are often incorporated into predictive algorithms for fracture predisposition. FRAX (Fracture Risk Assessment Tool) is a widely used screening tool validated for quantifying fracture risk across populations (Arch Osteoporos. 2016 Dec;11[1]:25; World Health Organization Assessment of Osteoporosis at the Primary Health Care Level).
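In practice, parameters like these are collected into a single patient record before any algorithm is applied. The sketch below shows one hypothetical way to organize such inputs; the field names are assumptions for illustration and do not reflect FRAX’s actual input schema or scoring.

```python
# Illustrative only: the categories of fracture-risk inputs listed above,
# organized as a record a FRAX-style calculator might consume. Field names are
# hypothetical; this is not FRAX's actual input schema or scoring.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FractureRiskInputs:
    # demographic
    age: int
    sex: str
    # biological
    parental_hip_fracture: bool
    glucocorticoid_use: bool
    # anatomical
    femoral_neck_bmd_tscore: Optional[float]
    body_mass_index: float
    # behavioral
    current_smoker: bool
    high_alcohol_intake: bool

patient = FractureRiskInputs(age=67, sex="F", parental_hip_fracture=True,
                             glucocorticoid_use=False, femoral_neck_bmd_tscore=-1.8,
                             body_mass_index=24.5, current_smoker=False,
                             high_alcohol_intake=False)
print(patient)
```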
Aging and the accompanying neurocognitive deterioration and visual impairment, as well as iatrogenic factors, are recognized contributors to a predisposition to falls in aging populations. A propensity for falls has long been regarded as a fracture risk factor (Curr Osteoporos Rep. 2008;6[4]:149-54). However, the evidence supporting this logical assumption has been mixed, and as a result a tendency to fall is excluded from commonly used fracture risk prediction models and tools. A predisposition to, and frequency of, falls is considered neither a risk modulator nor a mediator in commonly used FRAX-based fracture risk assessments, and it is believed that FRAX may underestimate fracture probability in those predisposed to frequent falls (J Clin Densitom. 2011 Jul-Sep;14[3]:194-204).
Against this backdrop, the landscape of fracture risk assessment and quantification has been refreshingly enhanced by a recent contribution by Leslie et al., in which the authors provide real-life evidence relating self-reported falls to fracture risk. In a robust population sample of nearly 25,000 adults, an increasing number of falls within the past year was associated with increasing fracture risk, and this relationship persisted after adjustment for covariates recognized to predispose to fragility fractures, including age, body mass index, and bone mineral density. Women’s health providers are encouraged to familiarize themselves with the work of Leslie et al.; the authors’ message, that fall history be incorporated into risk quantification measures, is striking in its simplicity and profound in its preventive potential, given that fall risk in and of itself can be mitigated in many patients through targeted interventions.
Lubna Pal, MBBS, MS, is professor and fellowship director of the division of reproductive endocrinology & infertility at Yale University, New Haven, Conn. She also is the director of the Yale reproductive endocrinology & infertility menopause program. She said she had no relevant financial disclosures. Email her at [email protected].
FROM OSTEOPOROSIS INTERNATIONAL