Childhood cancer survivors face several long-term risks
Chicago – Survivors of childhood cancers face several long-term risks from treatment, and investigators presented studies evaluating risks in three specific areas – secondary neoplasms, premature menopause, and neurocognitive function – at the annual meeting of the American Society of Clinical Oncology.
Discussant Paul Nathan, M.D., of The Hospital for Sick Children in Toronto, said “the whole purpose” of research in this area “is to start to understand the predictors and modifiers of late effects” and then to design risk assessment tools and interventions to reduce long-term toxicity. These interventions include modification of chemotherapy and radiation doses, protective strategies, and disease risk stratification to adjust intensity of therapies.
Other strategies include behavioral interventions aimed at improving compliance with follow-up, so that problems are detected earlier, and real-time monitoring with smartphones or fitness trackers. He said one limitation of this sort of research, and of implementing interventions to reduce late toxicities, is that “you need time to document long-term outcomes.” So tracking newer therapies, such as proton beam radiation, small molecule drugs, and immunotherapy, is “going to take time, perhaps decades, before you understand their impact on patients.”
Risk of secondary neoplasms reduced
Risk stratification of disease “has allowed us to make attempts to minimize late effects by modifying therapy over time in certain subgroups of lower-risk patients,” said Dr. Lucie Turcotte of the University of Minnesota in Minneapolis.
To study the effects of these changes, she determined the risk of certain subsequent malignant or benign neoplasms over three periods of therapeutic exposure among 23,603 5-year survivors of childhood cancers diagnosed at less than 21 years of age from 1970 to 1999, drawing from the cohort of the Childhood Cancer Survivor Study (CCSS). The CCSS represents about 20% of childhood cancer survivors in the United States for the study period.
Over the decades of 1970-1979, 1980-1989, and 1990-1999, the use of any radiation went from 77% to 58% to 41%, respectively. Cranial radiation for acute lymphoblastic leukemia (ALL) decreased from 85% to 19%, abdominal radiation for Wilms tumor from 78% to 43%, and chest radiotherapy for Hodgkin lymphoma from 87% to 61%. The proportion of children receiving alkylating agents, anthracyclines, and epipodophyllotoxins went up, but the cumulative doses went down (N Engl J Med. 2016 Mar 3;374(9):833-42).
Dr. Nathan said that today almost no child receives cranial radiation for ALL. “So we’ve slowly learned that our treatments are toxic, and we’ve certainly done what we can to change them.”
But have these changes made a difference? Dr. Turcotte found that survivors remain at increased risk of a secondary neoplasm, but the risk was lower for children treated in later time periods.
Dr. Nathan pointed to Dr. Turcotte’s data showing that the incidence of subsequent malignant neoplasms decreased from 1970 to 1999 by 7% for each 5-year era (15-year risk: 2.3% to 1.6%; P = .001; number needed to treat, NNT = 143). Similarly, the 15-year risk of non-melanoma skin cancer decreased from 0.7% to 0.1% (P less than .001; NNT = 167). The NNTs are “certainly important, but these are not major differences over time,” Dr. Nathan said. Knowing the impact of newer, targeted therapeutic approaches will take some time.
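For readers who want to check the arithmetic, the quoted NNTs follow from the reported 15-year risk differences via the standard relation NNT = 1/absolute risk reduction; a minimal Python sketch:

```python
# Quick arithmetic check of the reported NNTs, using the standard relation
# NNT = 1 / absolute risk reduction (risks expressed as proportions).

def nnt(risk_earlier_era, risk_later_era):
    """Number needed to treat for one fewer event, given 15-year risks."""
    return 1.0 / (risk_earlier_era - risk_later_era)

# Subsequent malignant neoplasms: 15-year risk fell from 2.3% to 1.6%
print(round(nnt(0.023, 0.016)))  # 143

# Non-melanoma skin cancer: 15-year risk fell from 0.7% to 0.1%
print(round(nnt(0.007, 0.001)))  # 167
```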
Predicting risk of premature menopause
Also using the CCSS data, Dr. Jennifer Levine of Columbia University Medical Center, New York, N.Y., studied the prevalence of and risk factors for nonsurgical premature menopause (NSPM), defined as cessation of menses prior to age 40 years, as well as the effect on reproductive outcomes for survivors of childhood cancers.
Dr. Nathan said when a child is first diagnosed with cancer, seldom does the issue of fertility come up early in the discussion, “but when you treat young adults who are survivors, the number one thing they talk about often is fertility. And so doing a better job in predicting who is at risk for infertility is clearly a priority for survivorship research.”
He said the development of the cyclophosphamide equivalent dose (CED) by D.M. Green et al. (Pediatr Blood Cancer. 2014 Jan;61(1):53-67) has been very helpful for quantifying alkylating agent exposure to make comparisons between studies. The goal is to develop a risk assessment tool to be able to tell patients and families their fertility risk based on demographics, therapy, and biomarkers.
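Conceptually, the CED is a weighted sum of cumulative alkylating-agent doses, each converted to its cyclophosphamide-equivalent amount. The sketch below is illustrative only: the agent names and conversion factors are placeholders, not the published values, which should be taken from the Green et al. paper.

```python
# Illustrative-only sketch of the cyclophosphamide equivalent dose (CED) idea:
# a weighted sum of cumulative alkylating-agent doses. The agents and conversion
# factors below are hypothetical placeholders -- the published factors are in
# Green et al. (Pediatr Blood Cancer. 2014;61[1]:53-67).

HYPOTHETICAL_CONVERSION_FACTORS = {
    "cyclophosphamide": 1.0,  # reference agent, by definition
    "agent_x": 0.25,          # placeholder value
    "agent_y": 0.9,           # placeholder value
}

def ced_mg_per_m2(cumulative_doses):
    """Cumulative doses are given in mg/m^2 per agent."""
    return sum(HYPOTHETICAL_CONVERSION_FACTORS[agent] * dose
               for agent, dose in cumulative_doses.items())

# Example survivor exposed to two alkylating agents
print(ced_mg_per_m2({"cyclophosphamide": 4000, "agent_x": 9000}))  # 6250.0
```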
Being able to evaluate risk is critically important because for girls, oocyte or ovarian harvesting or even transvaginal ultrasound is highly invasive, and these procedures should be recommended only if their risk for infertility is very high.
Dr. Levine studied 2,930 female cancer survivors diagnosed at a median age of 6 years between 1979 and 1986 and a median age at follow-up of 34 years, who were compared with 1,399 healthy siblings. Of the survivor cohort, 110 developed NSPM at a median age of 32 years, and the prevalence of NSPM at age 40 years for the entire cohort was 9.1%, giving a relative risk of NSPM of 10.5 compared with siblings, who had a 0.9% NSPM prevalence at age 40.
She found that exposure to alkylating agents and older age at diagnosis put childhood cancer survivors at increased risk of NSPM, which was associated with lower rates of pregnancy and live births after age 31 years. The greatest risk of NSPM occurred if the cyclophosphamide equivalent dose was greater than 6000 mg/m2 (odds ratio = 3.6 compared with no CED); if there had been any radiation to the ovaries (less than 5 Gy: OR = 4.0; 5 Gy or more: OR = 20.4); or if the age at diagnosis was greater than 14 years (OR = 2.3).
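The odds ratios above are the raw ingredients of the kind of counseling aid Dr. Nathan described. The sketch below is not a validated risk calculator; it simply organizes Dr. Levine’s reported odds ratios into a lookup, with a hypothetical function name and structure.

```python
# Not a validated risk calculator -- simply the odds ratios reported by
# Dr. Levine, organized as a lookup to illustrate the kind of counseling
# aid described above. Function name and structure are illustrative only.

def nspm_risk_factors(ced_mg_per_m2, ovarian_radiation_gy, age_at_diagnosis):
    """Return the reported odds ratios that apply to an exposure profile."""
    odds_ratios = {}
    if ced_mg_per_m2 > 6000:
        odds_ratios["CED > 6000 mg/m2 (vs. no CED)"] = 3.6
    if 0 < ovarian_radiation_gy < 5:
        odds_ratios["ovarian radiation < 5 Gy"] = 4.0
    elif ovarian_radiation_gy >= 5:
        odds_ratios["ovarian radiation >= 5 Gy"] = 20.4
    if age_at_diagnosis > 14:
        odds_ratios["diagnosis after age 14"] = 2.3
    return odds_ratios

print(nspm_risk_factors(ced_mg_per_m2=7000, ovarian_radiation_gy=6, age_at_diagnosis=16))
# {'CED > 6000 mg/m2 (vs. no CED)': 3.6, 'ovarian radiation >= 5 Gy': 20.4, 'diagnosis after age 14': 2.3}
```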
Women with NSPM, compared with survivors without NSPM, were less likely ever to be pregnant (OR = 0.41) or to have a live birth after age 30 (OR = 0.35). However, these outcomes were no different between the ages of 21 and 30. Dr. Levine said this information can assist clinicians in counseling their patients about the risk for early menopause and planning for alternative reproductive means, such as oocyte or embryo harvesting and preservation.
Neurocognitive functioning after treatment
Dr. Wei Liu of St. Jude Children’s Research Hospital, Memphis, Tenn., studied the neurocognitive function of long-term survivors of ALL.
Dr. Nathan called ALL “the paradigm for how we’ve sort of learned and adjusted how we treat patients based on late effects.” Early on, the disease was treated with craniospinal radiation and intrathecal chemotherapy, and while patients survived, it became obvious that they suffered neurocognitive and endocrine problems, growth abnormalities, and secondary malignancies. These findings forced a reevaluation of treatments, leading to elimination of spinal radiation, reduction of cranial radiation dose, intensification of systemic therapy, including methotrexate, and risk stratification allowing modification of therapies.
Survival was sustained, but long-term outcome data still came largely from children treated with radiation, so the cognitive consequences of the more modern era of therapy were unknown. Only recently have adult cohorts become available who were treated in the chemotherapy-only era.
Dr. Liu studied 159 ALL survivors who had been treated with chemotherapy alone at a mean age of 9.2 years. The follow-up was at a median of 7.6 years off therapy at a mean age of 13.7 years. At the end of the chemotherapy protocol, patients completed tests of sustained attention, and parents rated survivors’ behavior on standard scales.
She found that for these childhood cancer survivors, sustained attention and behavior functioning at the end of chemotherapy predicted long-term attention and processing-speed outcomes. Only chemotherapy exposure, and not end-of-therapy function, predicted poor executive function (fluency and flexibility) at long-term follow-up.
Dr. Nathan praised the investigators for their foresight to collect data on the methotrexate area under the curve, number of triple intrathecal therapies (cytarabine, methotrexate, and hydrocortisone), and neurocognitive functioning at the end of chemotherapy. “What’s clear is that chemotherapy alone can lead to neurocognitive late effects,” he said. “But what’s also important is that not all late effects can be predicted by end of therapy assessments.” These late effects appear to evolve over time, so ongoing assessments are needed.
Dr. Turcotte, Dr. Liu, Dr. Levine, and Dr. Nathan each reported no financial disclosures.
AT THE ANNUAL MEETING OF THE AMERICAN SOCIETY OF CLINICAL ONCOLOGY
Key clinical point: Despite improvements, survivors of childhood cancers still face long-term risks in terms of secondary neoplasms, nonsurgical premature menopause (NSPM), and neurocognitive function.
Major finding: Of the survivor cohort, 110 developed NSPM at a median age of 32 years; the prevalence of NSPM at age 40 years for the entire cohort was 9.1%, while siblings had a 0.9% NSPM prevalence at age 40.
Data source: Retrospective study of 2,930 childhood cancer survivors diagnosed at a median age of 6 years and followed up at a median age of 34 years, compared with 1,399 healthy siblings; a cross-sectional prospective study of neurocognitive function in 159 ALL survivors; and a study of secondary neoplasm risk in 23,603 5-year survivors of childhood cancers.
Disclosures: Dr. Turcotte, Dr. Liu, Dr. Levine, and Dr. Nathan each reported no financial disclosures.
One-time AMH level predicts rapid perimenopausal bone loss
BOSTON – Anti-Müllerian hormone levels strongly predict the rate of perimenopausal loss of bone mineral density and might help identify women who need early intervention to prevent future osteoporotic fractures, according to data from a review of 474 perimenopausal women that was presented at the annual meeting of the Endocrine Society.
The team matched anti-Müllerian hormone (AMH) levels and bone mineral density (BMD) measurements taken 2-4 years before the final menstrual period to BMD measurements taken 3 years later. The women were part of the Study of Women’s Health Across the Nation (SWAN), an ongoing multicenter study of women during their middle years.
When perimenopausal AMH “goes below 250 pg/mL, you are beginning to lose bone, and, when it goes below 200 pg/mL, you are losing bone fast, so that’s when you might want to intervene.” The finding “opens up the possibility of identifying women who are going to lose the most bone mass during the transition and targeting them before they have lost a substantial amount,” said lead investigator Dr. Arun Karlamangla of the department of geriatrics at the University of California, Los Angeles.
BMD loss is normal during menopause but rates of decline vary among women. AMH is a product of ovarian granulosa cells commonly used in fertility clinics to gauge ovarian reserve, but AMH levels also decline during menopause, and in a fairly stable fashion, he explained.
The women in SWAN were 42-52 years old at baseline with an intact uterus, at least one ovary, and no use of exogenous hormones. Blood was drawn during the early follicular phase of the menstrual cycle.
The median rate of BMD decline was 1.26% per year in the lumbar spine and 1.03% per year in the femoral neck. The median AMH was 49 pg/mL but varied widely.
Adjusted for age, body mass index, smoking, race, and study site, the team found that for each 75% (or fourfold) decrement in AMH level, there was a 0.15% per year faster decline in spine BMD and 0.13% per year faster decline in femoral neck BMD. Each fourfold decrement was also associated with an 18% increase in the odds of faster than median decline in spine BMD and 17% increase in the odds of faster than median decline in femoral neck BMD. The fast losers lost more than 2% of their BMD per year in both the lumbar spine and femoral neck.
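For orientation, if the “per fourfold decrement” association is assumed to scale log-linearly with AMH (an assumption of this sketch, not a result stated by the investigators), the implied extra annual BMD loss for a given AMH drop can be estimated as follows:

```python
import math

# Illustrative extrapolation only: assumes the reported "per fourfold decrement"
# associations scale log-linearly with AMH, which is an assumption of this
# sketch, not a result stated by the investigators.

SPINE_PCT_PER_FOURFOLD = 0.15          # % per year faster lumbar-spine BMD decline
FEMORAL_NECK_PCT_PER_FOURFOLD = 0.13   # % per year faster femoral-neck BMD decline

def extra_annual_decline(amh_reference_pg_ml, amh_current_pg_ml, pct_per_fourfold):
    """Additional % per year of BMD decline relative to the reference AMH level."""
    fourfold_decrements = math.log(amh_reference_pg_ml / amh_current_pg_ml, 4)
    return pct_per_fourfold * fourfold_decrements

# One fourfold drop in AMH, e.g., from 200 pg/mL to 50 pg/mL
print(extra_annual_decline(200, 50, SPINE_PCT_PER_FOURFOLD))         # 0.15
print(extra_annual_decline(200, 50, FEMORAL_NECK_PCT_PER_FOURFOLD))  # 0.13
```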
The results were the same after adjustment for follicle-stimulating hormone and estrogen levels, “so AMH provides information that cannot be obtained from estrogen and FSH,” Dr. Karlamangla said.
He cautioned that the technique needs further development and validation before it’s ready for the clinic. The team used the PicoAMH test from Ansh Labs in Webster, Tex.
The investigators had no disclosures. Ansh provided the assays for free. SWAN is funded by the National Institutes of Health.
The current recommendation is to start bone mineral density screening in women at age 65 years. All of us who see patients in the menopause years worry that we are missing someone with faster than normal bone loss. Fast losers are critical to identify because if we wait until they are 65 years old, it’s too late. A clinical test such as this to identify fast losers for earlier BMD measurement would be a tremendous benefit.
Dr. Cynthia Stuenkel is a clinical professor of endocrinology at the University of California, San Diego. She moderated the presentation and was not involved in the research.
AT ENDO 2016
Key clinical point: Anti-Müllerian hormone levels strongly predict the rate of perimenopausal bone mineral density loss and might help identify women who need early intervention to prevent future osteoporotic fractures, according to a review of 474 perimenopausal women that was presented at the Endocrine Society annual meeting.
Major finding: Adjusted for age, body mass index, smoking, race, and study site, the team found that for each 75% (or fourfold) decrement in AMH level, there was a 0.15% per year faster decline in lumbar spine BMD and 0.13% per year faster decline in femoral neck BMD.
Data source: Review of 474 perimenopausal women in the Study of Women’s Health Across the Nation.
Disclosures: The investigators had no disclosures. Ansh Labs provided the assays for free. SWAN is funded by the National Institutes of Health.
Early estrogen likely prevents bone fractures in Turner syndrome
BOSTON – The longer that estrogen therapy is delayed in girls with Turner syndrome, the lower their bone density will be in subsequent years, based on results of a retrospective, cross-sectional study from Monash University, in Melbourne, Australia.
For every year after age 11 that Turner patients went without estrogen – generally due to delayed initiation, but sometimes noncompliance – there was a significant reduction in bone mineral density in both the lumbar spine (beta = -0.582; P less than .001) and femoral neck (beta = -0.383; P = .008).
Estrogen deficiency and subsequent suboptimal bone mass accrual are known to contribute to the increased risk of osteoporosis in women with Turner syndrome, and to an approximate doubling of the risk of fragility fractures, mostly of the forearm. About a third of the 76 women in the study had at least one fracture, explained investigator Dr. Amanda Vincent, head of the Midlife Health and Menopause Program at Monash.
“Avoiding estrogen deficiency is important to optimize bone health in Turner syndrome.” It “depends on early diagnosis, age-appropriate pubertal induction, and optimization of compliance,” Dr. Vincent said at the Endocrine Society annual meeting.
The median age of Turner syndrome diagnosis was 11 years, but estrogen treatments didn’t begin until a median age of 15. The women in the study were a median of about 30 years old, which means that they were adolescents at the time when estrogen treatment was often delayed in the mistaken belief that growth hormone therapy would be more effective before puberty was induced.
It’s now known that estrogen replacement works synergistically with, and even potentiates, the effects of growth hormone. Current guidelines recommend pubertal induction by age 13 (J Clin Endocrinol Metab. 2007 Jan;92(1):10-25).
The women had at least one dual-energy x-ray absorptiometry scan at Monash since 1998. Z-scores below -2, indicating low bone density, were found in the lumbar spines of about a quarter of the subjects, and in the femoral necks of about 8%. Primary amenorrhea and premature menopause, followed by vitamin D deficiency, were the most common risk factors for low bone mass. Almost 40% of the women reported non-continuous use of estrogen. About half had undergone growth hormone therapy.
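For context, the Z-scores reported here follow the standard DXA definition: the number of standard deviations a patient’s bone mineral density lies from the mean of an age- and sex-matched reference population. A minimal sketch with hypothetical numbers:

```python
# Standard DXA Z-score: how many standard deviations the measured BMD lies
# from the mean of an age- and sex-matched reference population. The numbers
# below are hypothetical, for illustration only.

def z_score(measured_bmd_g_cm2, reference_mean_g_cm2, reference_sd_g_cm2):
    return (measured_bmd_g_cm2 - reference_mean_g_cm2) / reference_sd_g_cm2

print(z_score(0.84, 1.05, 0.10))  # about -2.1, i.e., below the -2 cutoff for low bone density
```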
At a median height of 149 cm, the subjects were about 15 cm shorter than age-matched, healthy controls, and also had a slightly higher median body mass index of 25.6 kg/m2. Lumbar spine bone area, bone mineral content, areal bone mineral density, and bone mineral apparent density were significantly lower in Turner syndrome patients. In the femoral neck, areal bone mineral density was significantly lower.
There was no relationship between bone markers and growth hormone use or Turner syndrome karyotype; the predominant karyotype was 45XO, but the study also included mosaic karyotypes.
The investigators had no disclosures.
These are important observations. The bottom line is early recognition and early referral. It’s clear from this study and others that earlier institution of estrogen is beneficial for height, bone density, and fracture risk throughout life. It’s not just an issue of a 20 year old with low bone density; that 20 year old later becomes a 60 year old with low bone density.
[However,] we still have a problem with delayed recognition and referral of young girls with Turner syndrome. Most girls with Turner syndrome have some typical phenotypic features, but some do not, so the diagnosis is often made too late. [To get around that problem,] we recommend that all children below the 5th percentile for height – or who flatten out too early on growth curves – be referred to rule out Turner syndrome and other problems.
Dr. Michael Levine is chief of the Division of Endocrinology at The Children’s Hospital of Philadelphia. He made his comments after the study presentation, and was not involved in the work.
AT ENDO 2016
Key clinical point: Induce puberty by age 13 in Turner syndrome.
Major finding: For every year after age 11 that Turner patients went without estrogen – generally due to delayed initiation, but sometimes noncompliance – there was a significant reduction in bone mineral density in both the lumbar spine (beta = -0.582; P less than .001) and femoral neck (beta = -0.383; P = .008).
Data source: Retrospective, cross-sectional study of 76 Turner syndrome patients.
Disclosures: The investigators had no disclosures.
Only ‘early’ estradiol limits atherosclerosis progression
Hormone therapy – estradiol with or without progesterone – only limits the progression of subclinical atherosclerosis if it is initiated within 6 years of menopause onset, according to a report published online March 30 in the New England Journal of Medicine.
The “hormone-timing hypothesis” posits that hormone therapy’s beneficial effects on atherosclerosis depend on the timing of initiating that therapy relative to menopause. To test this hypothesis, researchers began the ELITE study (Early versus Late Intervention Trial with Estradiol) in 2002, using serial noninvasive measurements of carotid-artery intima-media thickness (CIMT) as a marker of atherosclerosis progression.
Several other studies since 2002 have reported that the timing hypothesis appears to be valid, wrote Dr. Howard N. Hodis of the Atherosclerosis Research Unit, University of Southern California, Los Angeles, and his associates.
Their single-center trial involved 643 healthy postmenopausal women who had no diabetes and no evidence of cardiovascular disease at baseline, and who were randomly assigned to receive either daily oral estradiol or a matching placebo for 5 years. Women who had an intact uterus and took active estradiol also received a 4% micronized progesterone vaginal gel, while those who had an intact uterus and took placebo also received a matching placebo gel.
The participants were stratified according to the number of years they were past menopause: less than 6 years (271 women in the “early” group) or more than 10 years (372 in the “late” group).
A total of 137 women in the early group and 186 women in the late group were assigned to active estradiol, while 134 women in the early group and 186 women in the late group were assigned to placebo. As expected, serum estradiol levels were at least 3 times higher among women assigned to active treatment, compared with those assigned to placebo.
The primary outcome – the effect of hormone therapy on CIMT progression – differed by timing of the initiation of treatment. In the “early” group, the mean CIMT progression rate was decreased by 0.0034 mm per year with estradiol, compared with placebo.
In contrast, in the “late” group, the rates of CIMT progression were not significantly different between estradiol and placebo, the investigators wrote (N Engl J Med. 2016;374:1221-31. doi: 10.1056/NEJMoa1505241).
This beneficial effect remained significant in a sensitivity analysis restricted only to study participants who showed at least 80% adherence to their assigned treatment. The benefit also remained significant in a post-hoc analysis comparing women who took estradiol alone against those who took estradiol plus progestogen, as well as in a separate analysis comparing women who used lipid-lowering and/or hypertensive medications against those who did not.
The findings add further evidence in favor of the hormone timing hypothesis. The effect of estradiol therapy on CIMT progression was significantly modified by time since menopause (P = .007 for the interaction), the researchers wrote.
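For scale, if the reported rate difference is assumed to have held roughly steady over the 5-year treatment period (an assumption of this sketch, not a figure reported by the investigators), the cumulative CIMT difference in the early group works out to about 0.017 mm:

```python
# Illustrative arithmetic only: assumes the reported rate difference was
# roughly constant over the 5-year treatment period described in the trial.

rate_difference_mm_per_year = 0.0034  # early-group CIMT progression, estradiol vs. placebo
treatment_years = 5

print(round(rate_difference_mm_per_year * treatment_years, 4))  # 0.017 mm
```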
Cardiac computed tomography (CT) was used as a different method of assessing coronary atherosclerosis in a subgroup of 167 women in the early group (88 receiving estradiol and 79 receiving placebo) and 214 in the late group (101 receiving estradiol and 113 receiving placebo). The timing of estradiol treatment did not affect coronary artery calcium and other cardiac CT measures. This is consistent with previous reports that hormone therapy has no significant effect on established lesions in the coronary arteries, the researchers wrote.
The ELITE trial was funded by the National Institute on Aging. Dr. Hodis reported having no relevant financial disclosures; two of his associates reported ties to GE and TherapeuticsMD.
Despite the favorable effect of estrogen on atherosclerosis in early postmenopausal women in the ELITE trial, the relevance of these results to clinical coronary heart disease events remains questionable. The trial assessed only surrogate measures of coronary heart disease and was not designed or powered to assess clinical events. The occurrence of myocardial infarction and stroke involves not only atherosclerotic plaque formation but also plaque rupture and thrombosis. Any changes in these latter two phenomena would not be captured by the CIMT measurements in ELITE — a point of particular interest, given that postmenopausal hormone therapy may promote thrombosis and inflammation. A final caution is that the available clinical data in support of the timing hypothesis are suggestive but inconsistent.
Guidelines from various professional organizations currently caution against using postmenopausal hormone therapy for the purpose of preventing cardiovascular events. Although the ELITE trial results support the hypothesis that postmenopausal hormone therapy may have more favorable effects on atherosclerosis when initiated soon after menopause, extrapolation of these results to clinical events would be premature, and the present guidance remains prudent.
Dr. John F. Keaney, Jr., is at the University of Massachusetts, Worcester and is an associate editor at the New England Journal of Medicine, and Dr. Caren G. Solomon is a deputy editor at the New England Journal of Medicine. They reported having no relevant financial disclosures. These remarks are adapted from an accompanying editorial (N Engl J Med. 2016 Mar 30. doi: 10.1056/NEJMe1602846).
FROM THE NEW ENGLAND JOURNAL OF MEDICINE
Key clinical point: Estradiol only limits the progression of subclinical atherosclerosis if it is initiated within 6 years of menopause, not later.
Major finding: The mean CIMT progression rate was decreased by 0.0034 mm per year with estradiol, compared with placebo, but only in women who initiated hormone therapy within 6 years of menopause onset.
Data source: A single-center randomized, double-blind, placebo-controlled trial involving 643 healthy postmenopausal women treated for 5 years.
Disclosures: The ELITE trial was funded by the National Institute on Aging. Dr. Hodis reported having no relevant financial disclosures; two of his associates reported ties to GE and TherapeuticsMD.
Can CA 125 screening reduce mortality from ovarian cancer?
To date, screening has not been found effective in reducing mortality from ovarian cancer. Collaborative trial investigators in the United Kingdom studied postmenopausal women in the general population to assess whether early detection by screening could decrease ovarian cancer mortality.
Details of the study
During 2001 to 2005, more than 200,000 UK postmenopausal women aged 50 to 74 years (mean age at baseline, 60.6 years) were randomly assigned to no screening, annual transvaginal ultrasound screening (TVUS), or annual multimodal screening (MMS) with serum CA 125 using the Risk of Ovarian Cancer Algorithm (ROCA), which takes into account changes in CA 125 levels over time. When ROCA scores indicated normal risk for ovarian cancer, women were advised to undergo repeat CA 125 assessment in 1 year. Women with intermediate risk were advised to repeat CA 125 assessment in 3 months, while high-risk women were advised to undergo TVUS.
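The triage rules described above reduce to a simple decision step. The sketch below is illustrative only and assumes the ROCA risk category has already been computed; the actual ROCA calculation is a longitudinal model of CA 125 change that is not reproduced here, and the function and category names are hypothetical.

```python
from enum import Enum

class RocaRisk(Enum):
    NORMAL = "normal"
    INTERMEDIATE = "intermediate"
    ELEVATED = "elevated"

def next_step(risk: RocaRisk) -> str:
    """Return the follow-up action after an annual MMS screen, following the
    triage rules described above (the ROCA score itself, a longitudinal
    CA 125 algorithm, is not reproduced here)."""
    if risk is RocaRisk.NORMAL:
        return "repeat CA 125 in 1 year"
    if risk is RocaRisk.INTERMEDIATE:
        return "repeat CA 125 in 3 months"
    return "refer for transvaginal ultrasound (TVUS)"

print(next_step(RocaRisk.INTERMEDIATE))  # repeat CA 125 in 3 months
```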
With a median of 11.1 years of follow-up, ovarian cancer (including fallopian tube malignancies) was diagnosed in 1,282 participants (0.6%), with fatal outcomes among the 3 groups as follows: 0.34% in the no-screening group, 0.30% in the TVUS group, and 0.29% in the MMS group. Based on the results of a planned secondary analysis that excluded prevalent cases of ovarian cancer, annual MMS was associated with an overall average mortality reduction of 20% compared with no screening (P = .021). When the mortality reduction was broken down by years of screening, years 0 to 7 were associated with an 8% mortality reduction compared with no screening, and this increased to 28% for years 7 to 14 of annual MMS screening.
The overall average mortality reduction with TVUS compared with no screening was smaller than with MMS. With MMS, the number needed to screen to prevent 1 death from ovarian cancer was 641.
Assessing unnecessary treatment
False-positive screens that resulted in surgical intervention with findings of benign adnexal pathology or normal adnexa occurred in 14 and 50 per 10,000 screens in the MMS and TVUS groups, respectively. For each ovarian cancer detected in the MMS and TVUS groups, an additional 2 and 10 women, respectively, underwent surgery based on false-positive results.
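To put those per-10,000-screen rates in more concrete terms, a rough proportional scaling is shown below; the cohort size of 100,000 screens is a hypothetical figure chosen only for illustration.

```python
# Reported rates of false-positive screens leading to surgery with benign or
# normal findings, per 10,000 screens (as summarized above)
FALSE_POSITIVE_SURGERIES_PER_10K = {"MMS": 14, "TVUS": 50}

def expected_surgeries(n_screens: int, arm: str) -> float:
    """Scale the reported per-10,000-screen rate to an arbitrary number of
    screens; simple proportional scaling, for illustration only."""
    return n_screens * FALSE_POSITIVE_SURGERIES_PER_10K[arm] / 10_000

# Hypothetical example: 100,000 screens in each arm
for arm in ("MMS", "TVUS"):
    print(arm, expected_surgeries(100_000, arm))  # MMS 140.0, TVUS 500.0
```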
WHAT THIS EVIDENCE MEANS FOR PRACTICE
This massive trial’s findings provide optimism that screening for ovarian cancer can indeed reduce mortality from this uncommon but too-often lethal disease. There are unanswered questions, however, which include the cost-effectiveness of MMS screening and how well this strategy can be implemented outside of a highly centralized and controlled clinical trial. While encouraging, these trial results should be viewed as preliminary until additional efficacy and cost-effectiveness data—and guidance from professional organizations—are available.
—ANDREW M. KAUNITZ, MD
Readers weigh in on vaginal cleansing prior to cesarean delivery
“SHOULD YOU ADOPT THE PRACTICE OF VAGINAL CLEANSING WITH POVIDONE-IODINE PRIOR TO CESAREAN DELIVERY?”
ROBERT L. BARBIERI, MD (EDITORIAL; JANUARY 2016)
In his January 2016 Editorial, Editor in Chief Robert L. Barbieri, MD, presented evidence supporting the practice of vaginal cleansing with povidone-iodine prior to cesarean delivery (CD) to prevent postoperative endometritis. He then asked readers if they would consider adopting such a practice. More than 250 readers weighed in through the Quick Poll at obgmanagement.com, and many readers sent in letters with follow-up questions and comments on controlling bacterial contamination, vaginal seeding, etc. Here are some of the letters, along with Dr. Barbieri’s response and the Quick Poll results.
A contradiction in definitions?
There seems to be a contradiction in definitions. The second sentence of the article defines endometritis as the presence of fever plus low abdominal tenderness. However, the studies presented state that vaginal cleansing pre-CD decreased endometritis but did not decrease postpartum fever. Is this not a discrepancy?
Nancy Kerr, MD, MPH
Albuquerque, New Mexico
A question about povidone-iodine
Have any studies been done on newborn iodine levels after vaginal cleansing with povidone-iodine prior to CD?
G. Millard Simmons Jr, MD
Hilton Head, Bluffton, South Carolina
Additional tips for controlling bacterial contamination
Dr. Barbieri’s editorial on vaginal cleansing prior to CD is eye opening. I have a few additional suggestions to control bacterial contamination.
First, I examine my patients in labor as few times as possible, and I ask the nurses (RNs) not to place their fingers in the patient’s vagina while she is pushing. I remove the Foley catheter when I feel progress (descent of the fetal head) is being achieved. In addition, physicians as well as RNs should consider changing their scrubs between deliveries, as I believe that bacterial contamination is splattered all over the place, especially into the birth canal. These methods have worked for me in my more than 20 years of practice.
I also firmly remind the RN circulator to perform a generous vaginal cleanse with povidone-iodine, in addition to the usual intravenous prophylaxis, before hysterectomy.
Luis Leyva Jr, MD
Miami, Florida
Mixed feelings
My first reaction to this Editorial was: Is this a solution in search of a problem? That is to say, how much of a clinical problem is endometritis after CD? Are we really treating the proposed problem, and does treatment affect long-term outcomes?
Upon reflection, I have concluded that vaginal cleansing pre-CD does intuitively make sense. What sways me in this direction is that the practice is simple, easy, and inexpensive. Since we typically have the patient positioned for Foley catheter insertion, performing vaginal cleansing as we place the Foley would be easy. If vaginal cleansing were adopted, I definitely would be in favor of applying it liberally, for all CDs, so that it becomes part of the “routine.”
Keep in mind that we are still chasing a problem of little clinical significance.
The biggest accomplishment has been to get everyone to give antibiotics preoperatively rather than after cutting the umbilical cord. We knew that this was best practice as early as the late 1980s/early 1990s, and I have been fighting this battle ever since. Believe it or not, there are still a few holdouts.
George H. Davis, DO
Johnson City, Tennessee
Would vaginal cleansing benefit all women in labor?
Vaginal cleansing before CD reminds me of my residency days when all women having hysterectomies were admitted early and given povidone-iodine (Betadine) douches the evening before surgery (unless an iodine allergy was present).
While reading your Editorial, I had several thoughts and questions. 1) Since vaginal cleansing seems to benefit CD patients, might it not benefit all laboring patients? 2) Is the timing of vaginal cleansing critical? 3) Should we do vaginal cleansing on all laboring patients if timing is not critical?
I plan to bring up the topic of vaginal cleansing for CD with my colleagues at our next department meeting, since it seems like such a simple, logical, inexpensive, and beneficial thing to do.
Douglas G. Tolley, MD
Yuba City, California
An early study on using povidone-iodine gel before CD
When I was a chief resident at Kings County Hospital in 1973, we had a very high rate of post-CD endometritis. I conducted a small study on the use of povidone-iodine gel in the last month of pregnancy. Before commencing, we confirmed that the gel did not interfere with diagnosing ruptured membranes.
Obstetric service patients were randomly divided into “A” and “B” groups. The A patients were asked to use povidone-iodine gel at night for the last 2 weeks before their estimated due date. When admitted in labor, they were asked to confirm its use. When a resident diagnosed post-CD endometritis, we kept track of which group the patient was in and whether or not that patient had used povidone-iodine. Approximately 100 infected patients were evaluated from each group.
As it turned out, there were about 3 times as many infections among the patients who did not use povidone-iodine as among those who said they used it. It did not seem to matter how many times povidone-iodine was used. The “As” who did not use povidone-iodine had results similar to the “Bs.”
It was many years ago, and the study design was crude. However, it does seem to support the suggestion for vaginal cleansing.
Steve Ross, MD
Port Jefferson, New York
Two different ideas about the vaginal biome
This Editorial is timely in that Dr. Dominguez-Bello and colleagues recently published an article in Nature Medicine titled, “Partial restoration of the microbiota of cesarean-born infants via vaginal microbial transfer.”1 Dr. Dominguez-Bello is one of the founders of the idea of “vaginal seeding,” in which a newborn delivered by CD is swabbed immediately after birth with bacteria from the mother’s vagina to transfer the natural vaginal biome.
I find it interesting that there are two very different ideas about the biome at this time. Vaginal seeding is a new trend that a few patients have asked about during prenatal care. The jury is still out on seeding, but a larger study is currently underway at New York University. Of course, infection is one of the risks of seeding. I appreciate hearing both sides of the issue.
Deborah Herchelroath, DO
Harrisburg, Pennsylvania
Reference
- Dominguez-Bello MG, De Jesus-Laboy KM, Shen N, et al. Partial restoration of the microbiota of cesarean-born infants via vaginal microbial transfer [published online ahead of print February 1, 2016]. Nat Med. doi:10.1038/nm.4039.
Dr. Barbieri responds
I would like to thank our readers for taking the time from their busy schedules to write about their clinical experiences and current practices for reducing infectious complications following CD.
Dr. Kerr raises the important issue of the apparent contradictory finding of the beneficial impact of vaginal cleansing on endometritis without a beneficial effect on the overall rate of fever. In the trial reported by Starr,1 fever was defined as a temperature above 38°C at any time after CD, and endometritis was defined as a temperature above 38.4°C PLUS uterine tenderness occurring more than 24 hours after CD. Given these 2 definitions, one can understand the differential effect of vaginal cleansing on fever versus endometritis.
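A minimal sketch of the two definitions quoted above makes the distinction explicit; the function names and inputs are illustrative only, and the thresholds are those stated for the Starr trial.

```python
def meets_fever_definition(max_temp_c: float) -> bool:
    """Fever per the trial definition: temperature above 38.0 C at any time
    after cesarean delivery."""
    return max_temp_c > 38.0

def meets_endometritis_definition(temp_after_24h_c: float, uterine_tenderness: bool) -> bool:
    """Endometritis per the trial definition: temperature above 38.4 C PLUS
    uterine tenderness, occurring more than 24 hours after delivery."""
    return temp_after_24h_c > 38.4 and uterine_tenderness

# A patient with a transient 38.2 C temperature and no uterine tenderness
# counts toward the fever rate but not toward endometritis, so the two
# endpoints can move independently.
print(meets_fever_definition(38.2), meets_endometritis_definition(38.2, False))  # True False
```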
Dr. Simmons raises the intriguing question of the impact of an iodine-containing surgical preparation on newborn thyroid function. There are few studies addressing this issue. One study reports a transient increase in thyroid-stimulating hormone (TSH) levels in a small percentage of newborns whose mothers received an iodine preparation.2 Another study reports no effect of an iodine surgical preparation on newborn thyroid function indices.3
I agree with the guidance of Drs. Leyva and Davis that we can help prevent postcesarean endometritis by minimizing the number of cervical examinations, changing scrubs between deliveries, and ensuring that an intravenous antibiotic is given before skin incision.
Dr. Tolley wonders if all women should receive vaginal cleansing, regardless of delivery route. It is possible that such an approach would be effective and it deserves study. Given the lower rate of endometritis following vaginal delivery compared with CD, many more women having a vaginal delivery would need to be treated to prevent one case of endometritis. Dr. Ross mentions his experience with the benefit of outpatient vaginal cleansing in the 2 weeks prior to delivery. Many general surgeons are recommending that their patients shower with chlorhexidine the day before surgery in order to reduce the rate of postoperative infection. Short-term and long-term outpatient vaginal cleansing prior to delivery deserves additional study.
Dr. Herchelroath raises the possibility that vaginal cleansing will decrease the ability of the newborn to develop a normal microbiome because it may not be exposed to sufficient vaginal bacteria. This possibility certainly deserves additional study.
The questions and guidance of our readers were incredibly helpful and stimulating. Thank you for sharing your perspective.
References
- Starr RV, Zurawski J, Ismail M. Preoperative vaginal preparation with povidone-iodine and the risk of postcesarean endometritis. Obstet Gynecol. 2005;105(5 pt 1):1024–1029.
- Nili F, Hantoushzadeh S, Alimohamadi A, et al. Iodine-containing disinfectants in preparation for cesarean section: impact on thyroid profile in cord blood. Postgrad Med J. 2015;91(1082):681–684.
- Ordookhani A, Pearce EN, Mirmiran P, Azizi F, Braverman LE. The effect of type of delivery and povidone-iodine application at delivery on cord dried-blood-specimen thyrotropin level and the rate of hyperthyrotropinemia in mature and normal-birth-weight neonates residing in an iodine-replete area. Thyroid. 2007;17(11):1097–1102.
“CELL-FREE DNA SCREENING FOR WOMEN AT LOW RISK FOR FETAL ANEUPLOIDY” MARY E. NORTON, MD (JANUARY 2016)
The price of cfDNA screening is dropping
I found Dr. Norton’s article on cell-free DNA (cfDNA) screening for women at low risk for fetal abnormalities to be enlightening and educational. The section addressing cost-effectiveness, however, was somewhat obsolete. The referenced study by Cuckle and colleagues,1 which estimated the cost of cfDNA per case of Down syndrome in low-risk patients at $3.6 million, was published in 2013. With 4 major companies in the market, the cost/benefit ratio has been changing rapidly. At least one company has dropped the cost of the cfDNA test nearly 80% from 2015 to 2016, making the above reference irrelevant. Recently, Ariosa dropped the price of their Harmony cfDNA test to just $119 in our area, regardless of a patient’s insurance or poverty level. This is significantly less than the cost of performing an early screen and is being welcomed by my patients even after substantial counseling on the test’s limitations in the low-risk population. Natera, another laboratory with a similar test, offers a low-cost option. However, patients must provide proof that their income is below a specified level.
Guidelines from the American College of Obstetricians and Gynecologists (ACOG) and the Society for Maternal-Fetal Medicine (SMFM) likely will have a hard time keeping up with the cost-effectiveness of noninvasive prenatal testing, as the price continues to be dynamic.
Samuel Wolf, DO
Panama City, Florida
Reference
- Cuckle H, Benn P, Pergament E. Maternal cfDNA screening for Down syndrome—a cost sensitivity analysis. Prenat Diagn. 2013;33(7):636–642.
“DOES THE DISCONTINUATION OF MENOPAUSAL HORMONE THERAPY AFFECT A WOMAN’S CARDIOVASCULAR RISK?”
ANDREW M. KAUNITZ, MD; JOANN E. MANSON, MD, DRPH; AND CYNTHIA A. STUENKEL, MD (EXAMINING THE EVIDENCE; DECEMBER 2015)
Disagrees with conclusion
In their expert commentary, Drs. Kaunitz, Manson, and Stuenkel state that stopping hormone therapy (HT) does not increase mortality risk.
They support this claim by citing Heiss 2008.1 In fact, however, the Women’s Health Initiative (WHI) data show the opposite: In the WHI, all-cause mortality was increased among the women who were assigned to estrogen-progestin therapy (EPT) relative to those who were assigned to placebo within 3 years of EPT cessation (hazard ratio [HR], 1.15; 95% confidence interval [CI], 0.95–1.39). More importantly, mortality was significantly increased among women who were originally assigned to EPT relative to those who were assigned to placebo and were at least 80% adherent with the intervention (HR, 1.53; 95% CI, 1.04–2.24). Thus, the statement by Drs. Kaunitz, Manson, and Stuenkel is incorrect.
In addition to the WHI studies, data are available from at least 2 other randomized controlled trials addressing the issue of HT withdrawal. In the Heart and Estrogen/progestin Replacement Study (HERS) II,2 the unblinded 2.7-year follow-up to the HERS trial, women originally assigned to EPT had a 3.3-fold higher rate of ventricular arrhythmia requiring resuscitation than women assigned to placebo (HR, 3.30; 95% CI, 1.08–10.10). During the first 6 months of posttrial follow-up of the Women’s Estrogen for Stroke Trial (WEST),3 there were 3 fatal strokes and 18 nonfatal strokes among the women originally randomized to estradiol therapy; there were 9 strokes (1 fatal and 8 nonfatal) among the women originally assigned to placebo (HR, 2.3; 95% CI, 1.1–5.0; P = .03).
In our study we found that women who stopped HT, compared with women who continued HT, had a 2.3-fold (95% CI, 2.12–2.50) greater risk of cardiac death within the first post-HT year and a 1.3-fold (95% CI, 1.21–1.31) greater risk of cardiac death more than 1 year after stopping HT.4 In addition, women who stopped HT, compared with women who continued HT, had a 2.5-fold (95% CI, 2.28–2.77) greater risk of dying from stroke within the first post-HT year and a 1.3-fold (95% CI, 1.19–1.31) greater risk of dying from stroke more than 1 year after stopping HT. We believe that these data substantially further our understanding of the posttrial data from the WHI, as well as from HERS and WEST. Thus, the cumulative data support the possibility that HT withdrawal has detrimental implications for women. In total, the data are highly informative when counseling women regarding use or discontinuation of HT.
Tomi Mikkola, MD
Helsinki, Finland
References
- Heiss G, Wallace R, Anderson GL, et al; WHI investigators. Health risks and benefits 3 years after stopping randomized treatment with estrogen and progestin. JAMA. 2008;299(9):1036–1045.
- Grady D, Herrington D, Bittner V, et al; HERS Research Group. Cardiovascular disease outcomes during 6.8 years of hormone therapy: Heart and Estrogen/progestin Replacement Study follow-up (HERS II) [published correction appears in JAMA. 2002;288(9):1064]. JAMA. 2002;288(1):49–57.
- Viscoli CM, Brass LM, Kernan WN, Sarrel PM, Suissa S, Horwitz RI. A clinical trial of estrogen-replacement therapy after ischemic stroke. N Engl J Med. 2001;345(17):1243–1249.
- Mikkola TS, Tuomikoski P, Lyytinen H, et al. Increased cardiovascular mortality risk in women discontinuing postmenopausal hormone therapy. J Clin Endocrinol Metab. 2015;100(12):4588–4594.
Drs. Kaunitz, Manson, and Stuenkel respond
We thank Dr. Mikkola for his response to our commentary, but we do not agree with his interpretation of the WHI reports or of our conclusions. As we originally stated, the WHI trials of estrogen-only therapy (ET) and EPT provide an opportunity to observe outcomes in the largest randomized controlled trial of HT in healthy postmenopausal women. Our commentary was based on the most recent, 13-year follow-up of the WHI trials,1 and we are confident in the accuracy of our presentation of the results.
As the debate apparently focuses on the safety of stopping HT, we wish to reiterate, for those who may not be familiar with the data, that, in the ET trial, all-cause mortality declined (although not significantly) after stopping ET, as summarized here:
| | HR (95% CI) |
| --- | --- |
| Intervention phase | 1.03 (0.88–1.21) |
| Postintervention phase (after stopping study medication) | 0.96 (0.84–1.10) |
| Cumulative 13 years of follow-up | 0.99 (0.90–1.10) |
Similarly, in the EPT trial, as the following findings indicate, stopping HT did not increase all-cause mortality:
| | HR (95% CI) |
| --- | --- |
| Intervention phase | 0.97 (0.81–1.16) |
| Postintervention phase (after stopping study medication) | 1.01 (0.91–1.11) |
| Cumulative 13 years of follow-up | 0.99 (0.91–1.08) |
Again, these findings from the largest randomized trial of HT in healthy postmenopausal women are adequate for us to conclude that stopping HT does not elevate risk of mortality. Among all women participating in the WHI HT trials, HRs for coronary heart disease, pulmonary embolism, stroke, and cardiovascular disease mortality likewise were lower (better) after stopping treatment than during the intervention phase. The results for these outcomes in younger women followed similar patterns but, due to smaller numbers of events, could not be tested formally for differences in time trends.
Moreover, the data Dr. Mikkola cites from analyses conducted 3 years postcessation2 reflected a borderline increased risk of cancer mortality that emerged in the EPT trial after stopping treatment. This clearly was related to the prolonged effects of EPT on breast cancer and other cancers, given the known latency period for cancer, and was not observed in the ET trial postcessation. The risk elevation in the EPT trial became attenuated with longer follow-up and, as of 13 years, the HRs for cancer mortality were 1.07 (0.93–1.23) in the EPT trial and 0.95 (0.81–1.13) in the ET trial.
It is interesting that Dr. Mikkola now combines his interpretation of his findings3 with those from secondary prevention trials such as the Heart and Estrogen/progestin Replacement Study and the Women’s Estrogen for Stroke Trial, neither of which was included as corroborative evidence in the discussion section of his originally published manuscript, and neither of which is considered applicable to healthy postmenopausal women taking HT for treatment of menopausal symptoms. Based on these findings, we do not recommend that clinicians counsel women that stopping HT increases their risk of cardiovascular or overall mortality. Thank you for the opportunity to clarify the evidence and our position.
References
- Manson JE, Chlebowski RT, Stefanick ML, et al. Menopausal hormone therapy and health outcomes during the intervention and extended poststopping phases of the Women’s Health Initiative randomized trials. JAMA. 2013;310(13):1353–1368.
- Heiss G, Wallace R, Anderson GL, et al; WHI investigators. Health risks and benefits 3 years after stopping randomized treatment with estrogen and progestin. JAMA. 2008;299(9):1036–1045.
“SHOULD YOU ADOPT THE PRACTICE OF VAGINAL CLEANSING WITH POVIDONE-IODINE PRIOR TO CESAREAN DELIVERY?”
ROBERT L. BARBIERI, MD (EDITORIAL; JANUARY 2016)
In his January 2016 Editorial, Editor in Chief Robert L. Barbieri, MD, presented evidence supporting the practice of vaginal cleansing with povidone-iodine prior to cesarean delivery (CD) to prevent postoperative endometritis. He then asked readers if they would consider adopting such a practice. More than 250 readers weighed in through the Quick Poll at obgmanagement.com, and many readers sent in letters with follow-up questions and comments on controlling bacterial contamination, vaginal seeding, etc. Here are some of the letters, along with Dr. Barbieri’s response and the Quick Poll results.
A contradiction in definitions?
There seems to be a contradiction in definitions. The second sentence of the article defines endometritis as the presence of fever plus low abdominal tenderness. However, the studies presented state that vaginal cleansing pre-CD decreased endometritis but did not decrease postpartum fever. Is this not a discrepancy?
Nancy Kerr, MD, MPH
Albuquerque, New Mexico
A question about povidone-iodine
Have any studies been done on newborn iodine levels after vaginal cleansing with povidone-iodine prior to CD?
G. Millard Simmons Jr, MD
Hilton Head, Bluffton, South Carolina
Additional tips for controlling bacterial contamination
Dr. Barbieri’s editorial on vaginal cleansing prior to CD is eye-opening. I have a few additional suggestions to control bacterial contamination.
First, I examine my patients in labor as few times as necessary, and I ask the nurses (RNs) not to place their fingers in the patient’s vagina while she is pushing. I remove the Foley catheter when I feel progress (descent of fetal head) is being achieved. In addition, physicians as well as RNs should consider changing their scrubs between deliveries, as I believe that bacterial contamination is splattered all over the place, especially into the birth canal. These methods have worked for me in my more than 20 years of practice.
I also firmly remind the RN circulator to perform a generous vaginal cleanse with povidone-iodine, in addition to the usual intravenous prophylaxis, before hysterectomy.
Luis Leyva Jr, MD
Miami, Florida
Mixed feelings
My first reaction to this Editorial was: Is this a solution in search of a problem? That is to say, how much of a clinical problem is endometritis after CD? Are we really treating the proposed problem, and does treatment affect long-term outcomes?
Upon reflection, I have concluded that vaginal cleansing pre-CD does intuitively make sense. What sways me in this direction is that the practice is simple, easy, and inexpensive. Since we typically have the patient positioned for Foley catheter insertion, performing vaginal cleansing as we put in the Foley would be easy. If vaginal cleansing were to be done, I would be in favor of applying the practice liberally, for all CDs, so that it becomes part of the “routine.”
Keep in mind that we are still chasing a problem of little clinical significance.
The biggest accomplishment has been to get everyone to give antibiotics preoperatively rather than after cutting the umbilical cord. We knew that this was best practice as early as the late 1980s/early 1990s, and I have been fighting this battle ever since. Believe it or not, there are still a few holdouts.
George H. Davis, DO
Johnson City, Tennessee
Would vaginal cleansing benefit all women in labor?
Vaginal cleansing before CD reminds me of my residency days when all women having hysterectomies were admitted early and given povidone-iodine (Betadine) douches the evening before surgery (unless an iodine allergy was present).
While reading your Editorial, I had several thoughts and questions. 1) Since vaginal cleansing seems to benefit CD patients, might it not benefit all laboring patients? 2) Is the timing of vaginal cleansing critical? 3) Should we do vaginal cleansing on all laboring patients if timing is not critical?
I plan to bring up the topic of vaginal cleansing for CD with my colleagues at our next department meeting, since it seems like such a simple, logical, inexpensive, and beneficial thing to do.
Douglas G. Tolley, MD
Yuba City, California
An early study on using povidone-iodine gel before CD
When I was a chief resident at Kings County Hospital in 1973, we had a very high rate of post-CD endometritis. I conducted a small study on the use of povidone-iodine gel in the last month of pregnancy. Before commencing, we confirmed that the gel did not interfere with diagnosing ruptured membranes.
Obstetric service patients were randomly divided into “A” and “B” groups. The A patients were asked to use povidone-iodine gel at night for the last 2 weeks before their estimated due date. When admitted in labor, they were asked to confirm its use. When a resident diagnosed post-CD endometritis, we kept track of which group the patient was in and whether or not that patient had used povidone-iodine. Approximately 100 infected patients were evaluated from each group.
As it turned out, there were about 3 times the number of infections among the patients who did not use povidone-iodine than among those who said they used it. It did not seem to matter how many times povidone-iodine was used. The “As” who did not use povidone-iodine had results similar to the “Bs.”
It was many years ago, and the study design was crude. However, it does seem to support the suggestion for vaginal cleansing.
Steve Ross, MD
Port Jefferson, New York
Two different ideas about the vaginal biome
This Editorial is timely in that Dr. Dominguez-Bello and colleagues recently published an article in Nature Medicine titled, “Partial restoration of the microbiota of cesarean-born infants via vaginal microbial transfer.”1 Dr. Dominguez-Bello is one of the founders of the idea of “vaginal seeding,” or using the natural biome of the vagina on a newborn immediately after CD by swabbing the baby with the bacteria from the vagina.
I find it interesting that there are two very different ideas about the biome at this time. Vaginal seeding is a new trend that a few patients have asked about during prenatal care. The jury is still out on seeding, but a larger study is currently underway at New York University. Of course, infection is one of the risks of seeding. I appreciate hearing both sides of the issue.
Deborah Herchelroath, DO
Harrisburg, Pennsylvania
Reference
- Dominguez-Bello MG, De Jesus-Laboy KM, Shen N, et al. Partial restoration of the microbiota of cesarean-born infants via vaginal microbial transfer [published online ahead of print February 1, 2016]. Nat Med. doi:10.1038/nm.4039.
Dr. Barbieri responds
I would like to thank our readers for taking the time from their busy schedules to write about their clinical experiences and current practices for reducing infectious complications following CD.
Dr. Kerr raises the important issue of the apparent contradictory finding of the beneficial impact of vaginal cleansing on endometritis without a beneficial effect on the overall rate of fever. In the trial reported by Starr,1 fever was defined as a temperature above 38°C at any time after CD, whereas endometritis was defined as a temperature above 38.4°C PLUS uterine tenderness occurring more than 24 hours after CD. Given these 2 definitions, one can understand the differential effect of vaginal cleansing on fever versus endometritis.
Dr. Simmons raises the intriguing question of the impact of an iodine-containing surgical preparation on newborn thyroid function. There are few studies addressing this issue. One study reports a transient increase in thyroid-stimulating hormone (TSH) levels in a small percentage of newborns whose mothers received an iodine preparation.2 Another study reports no effect of an iodine surgical preparation on newborn thyroid function indices.3
I agree with the guidance of Drs. Leyva and Davis that we can help prevent postcesarean endometritis by minimizing the number of cervical examinations, changing scrubs between deliveries, and ensuring that an intravenous antibiotic is given before skin incision.
Dr. Tolley wonders if all women should receive vaginal cleansing, regardless of delivery route. It is possible that such an approach would be effective and it deserves study. Given the lower rate of endometritis following vaginal delivery compared with CD, many more women having a vaginal delivery would need to be treated to prevent one case of endometritis. Dr. Ross mentions his experience with the benefit of outpatient vaginal cleansing in the 2 weeks prior to delivery. Many general surgeons are recommending that their patients shower with chlorhexidine the day before surgery in order to reduce the rate of postoperative infection. Short-term and long-term outpatient vaginal cleansing prior to delivery deserves additional study.
Dr. Herchelroath raises the possibility that vaginal cleansing will decrease the ability of the newborn to develop a normal microbiome because it may not be exposed to sufficient vaginal bacteria. This possibility certainly deserves additional study.
The questions and guidance of our readers were incredibly helpful and stimulating. Thank you for sharing your perspective.
References
- Starr RV, Zurawski J, Ismail M. Preoperative vaginal preparation with povidone-iodine and the risk of postcesarean endometritis. Obstet Gynecol. 2005;105(5 pt 1):1024–1029.
- Nili F, Hantoushzadeh S, Alimohamadi A, et al. Iodine-containing disinfectants in preparation for cesarean section: impact on thyroid profile in cord blood. Postgrad Med J. 2015;91(1082):681–684.
- Ordookhani A, Pearce EN, Mirmiran P, Azizi F, Braverman LE. The effect of type of delivery and povidone-iodine application at delivery on cord dried-blood-specimen thyrotropin level and the rate of hyperthyrotropinemia in mature and normal-birth-weight neonates residing in an iodine-replete area. Thyroid. 2007;17(11):1097–1102.
“CELL-FREE DNA SCREENING FOR WOMEN AT LOW RISK FOR FETAL ANEUPLOIDY” MARY E. NORTON, MD (JANUARY 2016)
The price of cfDNA screening is dropping
I found Dr. Norton’s article on cell-free DNA (cfDNA) screening for women at low risk for fetal abnormalities to be enlightening and educational. The section addressing cost-effectiveness, however, was somewhat obsolete. The referenced study by Cuckle and colleagues,1 which estimated the cost of cfDNA per case of Down syndrome in low-risk patients at $3.6 million, was published in 2013. With 4 major companies in the market, the cost/benefit ratio has been changing rapidly. At least one company has dropped the cost of the cfDNA test nearly 80% from 2015 to 2016, making the above reference irrelevant. Recently, Ariosa dropped the price of their Harmony cfDNA test to just $119 in our area, regardless of a patient’s insurance or poverty level. This is significantly less than the cost of performing an early screen and is being welcomed by my patients even after substantial counseling on the test’s limitations in the low-risk population. Natera, another laboratory with a similar test, offers a low-cost option. However, patients must provide proof that their income is below a specified level.
Guidelines from the American College of Obstetricians and Gynecologists (ACOG) and the Society for Maternal-Fetal Medicine (SMFM) likely will have a hard time keeping up with the cost-effectiveness of noninvasive prenatal testing, as the price continues to be dynamic.
Samuel Wolf, DO
Panama City, Florida
Reference
- Cuckle H, Benn P, Pergament E. Maternal cfDNA screening for Down syndrome—a cost sensitivity analysis. Prenat Diagn. 2013;33(7):636–642.
“DOES THE DISCONTINUATION OF MENOPAUSAL HORMONE THERAPY AFFECT A WOMAN’S CARDIOVASCULAR RISK?”
ANDREW M. KAUNITZ, MD; JOANN E. MANSON, MD, DRPH; AND CYNTHIA A. STUENKEL, MD (EXAMINING THE EVIDENCE; DECEMBER 2015)
Disagrees with conclusion
In their expert commentary, Drs. Kaunitz, Manson, and Stuenkel state that discontinuing HT does not increase a woman’s risk of cardiovascular or overall mortality.
They support this claim by citing Heiss 2008.1 In fact, however, the Women’s Health Initiative (WHI) data show the opposite: In the WHI, all-cause mortality was increased among the women who were assigned to estrogen-progestin therapy (EPT) relative to those who were assigned to placebo within 3 years of EPT cessation (hazard ratio [HR], 1.15; 95% confidence interval [CI], 0.95–1.39). More importantly, in the subgroup of women who were at least 80% adherent to study medication, mortality was significantly increased among those originally assigned to EPT relative to those assigned to placebo (HR, 1.53; 95% CI, 1.04–2.24). Thus, the statement by Drs. Kaunitz, Manson, and Stuenkel is incorrect.
In addition to the WHI studies, data are available from at least 2 other randomized controlled trials addressing the issue of HT withdrawal. In the Heart and Estrogen/progestin Replacement Study (HERS) II,2 the unblinded 2.7-year follow-up to the HERS trial, women originally assigned to EPT had a 3.3-fold higher rate of ventricular arrhythmia requiring resuscitation than women assigned to placebo (HR, 3.30; 95% CI, 1.08–10.10). During the first 6 months of posttrial follow-up of the Women’s Estrogen for Stroke Trial (WEST),3 there were 3 fatal strokes and 18 nonfatal strokes among the women originally randomized to estradiol therapy; there were 9 strokes (1 fatal and 8 nonfatal) among the women originally assigned to placebo (HR, 2.3; 95% CI, 1.1–5.0; P = .03).
In our study, we found that women who stopped HT, compared with women who continued HT, had a 2.3-fold (95% CI, 2.12–2.50) greater risk of cardiac death within the first post-HT year and a 1.3-fold (95% CI, 1.21–1.31) greater risk of cardiac death more than 1 year after stopping HT.4 In addition, women who stopped HT, compared with women who continued HT, had a 2.5-fold (95% CI, 2.28–2.77) greater risk of dying from stroke within the first post-HT year and a 1.3-fold (95% CI, 1.19–1.31) greater risk of dying from stroke more than 1 year after stopping HT. We believe that these data substantially further our understanding of the posttrial data from WHI, as well as HERS and WEST. Thus, the cumulative data support the view that HT withdrawal potentially has detrimental implications for women. In total, the data are highly informative when counseling women regarding use or discontinuation of HT.
Tomi Mikkola, MD
Helsinki, Finland
References
- Heiss G, Wallace R, Anderson GL, et al; WHI investigators. Health risks and benefits 3 years after stopping randomized treatment with estrogen and progestin. JAMA. 2008;299(9):1036–1045.
- Grady D, Herrington D, Bittner V, et al; HERS Research Group. Cardiovascular disease outcomes during 6.8 years of hormone therapy: Heart and Estrogen/progestin Replacement Study follow-up (HERS II) [published correction appears in JAMA. 2002;288(9):1064]. JAMA. 2002;288(1):49–57.
- Viscoli CM, Brass LM, Kernan WN, Sarrel PM, Suissa S, Horwitz RI. A clinical trial of estrogen-replacement therapy after ischemic stroke. N Engl J Med. 2001;345(17):1243–1249.
- Mikkola TS, Tuomikoski P, Lyytinen H, et al. Increased cardiovascular mortality risk in women discontinuing postmenopausal hormone therapy. J Clin Endocrinol Metab. 2015;100(12):4588–4594.
Drs. Kaunitz, Manson, and Stuenkel respond
We thank Dr. Mikkola for his response to our commentary, but we do not agree with his interpretation of the WHI reports or of our conclusions. As we originally stated, the WHI trials of estrogen-only therapy (ET) and EPT provide an opportunity to observe outcomes from the largest randomized controlled trials of HT in healthy postmenopausal women. Our commentary was based on the most recent, 13-year follow-up of the WHI trials,1 and we are confident in the accuracy of our presentation of the results.
As the debate apparently focuses on the safety of stopping HT, we wish to reiterate, for those who may not be familiar with the data, that, in the ET trial, all-cause mortality declined (although not significantly) after stopping ET, as summarized here:
ET trial, all-cause mortality | HR (95% CI)
Intervention phase | 1.03 (0.88–1.21)
Postintervention phase (after stopping study medication) | 0.96 (0.84–1.10)
Cumulative 13 years of follow-up | 0.99 (0.90–1.10)
Similarly, in the EPT trial, as the following findings indicate, stopping HT did not increase all-cause mortality:
EPT trial, all-cause mortality | HR (95% CI)
Intervention phase | 0.97 (0.81–1.16)
Postintervention phase (after stopping study medication) | 1.01 (0.91–1.11)
Cumulative 13 years of follow-up | 0.99 (0.91–1.08)
Again, these findings from the largest randomized trial of HT in healthy postmenopausal women are adequate for us to conclude that stopping HT does not elevate risk of mortality. Among all women participating in the WHI HT trials, HRs for coronary heart disease, pulmonary embolism, stroke, and cardiovascular disease mortality likewise were lower (better) after stopping treatment than during the intervention phase. The results for these outcomes in younger women followed similar patterns but, due to smaller numbers of events, could not be tested formally for differences in time trends.
Moreover, the data Dr. Mikkola cites from analyses conducted 3 years postcessation2 reflected a borderline increased risk of cancer mortality that emerged in the EPT trial after stopping treatment. This clearly was related to the prolonged effects of EPT on breast cancer and other cancers, given the known latency period for cancer, and was not observed in the ET trial postcessation. The risk elevation in the EPT trial became attenuated with longer follow-up and, as of 13 years, the HRs for cancer mortality were 1.07 (0.93–1.23) in the EPT trial and 0.95 (0.81–1.13) in the ET trial.
It is interesting that Dr. Mikkola now conflates his interpretation of his findings3 with those from secondary prevention trials such as the Heart and Estrogen/progestin Replacement Study and the Women’s Estrogen for Stroke Trial, neither of which was included as corroborative evidence in the discussion section of his originally published manuscript, and neither of which is considered applicable to healthy postmenopausal women taking HT for treatment of menopausal symptoms. Based on these findings, we do not recommend that clinicians counsel women that stopping HT increases their risk of cardiovascular or overall mortality. Thank you for the opportunity to clarify the evidence and our position.
References
- Manson JE, Chlebowski RT, Stefanick ML, et al. Menopausal hormone therapy and health outcomes during the intervention and extended poststopping phases of the Women’s Health Initiative randomized trials. JAMA. 2013;310(13):1353–1368.
- Heiss G, Wallace R, Anderson GL, et al; WHI investigators. Health risks and benefits 3 years after stopping randomized treatment with estrogen and progestin. JAMA. 2008;299(9):1036–1045.
- Mikkola TS, Tuomikoski P, Lyytinen H, et al. Increased cardiovascular mortality risk in women discontinuing postmenopausal hormone therapy. J Clin Endocrinol Metab. 2015;100(12):4588–4594.
ACTRIMS: Ovarian decline linked to MS progression in women
NEW ORLEANS – A decline in levels of anti-müllerian hormone as women approach menopause – a phenomenon dubbed ovarian decline – appears associated with clinical disability and brain atrophy in women with multiple sclerosis, according to the findings of a study of more than 400 women with multiple sclerosis followed up for a decade.
This “accumulation of disability” may explain the often rapid transition from a benign disease course to secondary progressive multiple sclerosis (MS) in women as they approach menopause, Dr. Jennifer S. Graves reported at a meeting held by the Americas Committee for Treatment and Research in Multiple Sclerosis.
Earlier in life, females often have a more benign initial course of MS than males. The mean age of onset of both primary progressive MS and secondary progressive MS is approximately 45 years, and the mean age of menopause is 51 years. Ovarian aging involves up to a 10-year period of decline in ovarian function. After age 50, “women catch up in terms of disability with males” with MS, said Dr. Graves of the University of California, San Francisco. One explanation could be that ovarian aging contributes to the development of progressive disease.
The objective was to determine whether ovarian decline, as measured by levels of anti-müllerian hormone (AMH), is associated with clinical disability or brain atrophy in women with MS. The cohort of 412 female patients with MS (mean age, 43 years) was drawn from the UCSF EPIC (Expression, Proteomics, Imaging, Clinical) study, which has followed more than 500 people with MS since 2004 with the aim of identifying factors that drive the disease. Also included were 180 healthy controls with the same mean age. AMH levels were measured using an ultrasensitive enzyme-linked immunosorbent assay at baseline and at years 3, 5, 8, 9, and 10. Brain magnetic resonance imaging data also were acquired.
When the data were adjusted for chronologic age, women with MS and healthy controls displayed similar AMH levels (P = .97), implying a normal follicular reserve and rate of ovarian decline in those with MS. White matter volume was associated with AMH levels at baseline (P = .047), but the association did not persist after adjustment for age, disease duration, and body mass index (P = .24). Ovarian reserve was associated with normalized gray matter volume (P = .049) and MS functional composite z scores (P = .036) at baseline. Over the follow-up period, a twofold decrease in AMH was associated with a 1.85-mm3 decrease in gray matter volume (P = .060) in MS patients. Almost a third of the MS patients had undetectable levels of AMH, which was associated with a 0.60-point higher Expanded Disability Status Scale score (P = .039).
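One way to read the “twofold decrease” estimate is as a per-halving regression coefficient; the sketch below assumes, for illustration only, that AMH entered the model on a log2 scale alongside covariates (the exact model specification is not given in this report):
\[
\text{GMV} = \beta_0 + \beta_1 \log_2(\text{AMH}) + \boldsymbol{\gamma}^{\top}\mathbf{z}, \qquad \hat{\beta}_1 \approx 1.85\ \text{mm}^3,
\]
so that each halving of AMH (a one-unit drop in \(\log_2 \text{AMH}\)) would correspond to an estimated 1.85-mm3 lower gray matter volume, with the covariates \(\mathbf{z}\) held fixed.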
The results support the hypothesized association of ovarian decline with increased severity of MS. Furthermore, AMH may be a useful biomarker of MS progression, said Dr. Graves. “The advantage of this biomarker would be that it captures biological activity in women in their 40s, and so could let you know of imminent change.”
Validation of the findings needs to be done.
The study was funded by the National Institutes of Health, National MS Society, Race to Erase MS, Foundation for the Cedars-Sinai Medical Center, Biogen, and Genentech. Dr. Graves had no relevant financial disclosures.
Key clinical point: Anti-müllerian hormone may be a biologic marker of MS progression in women.
Major finding: Over the follow-up period, a twofold decrease in AMH was associated with a 1.85-mm3 decrease in gray matter volume (P = .060) in MS patients.
Data source: A longitudinal cohort from the UCSF EPIC study.
Disclosures: The study was funded by the National Institutes of Health, National MS Society, Race to Erase MS, Foundation for the Cedars-Sinai Medical Center, Biogen, and Genentech. Dr. Graves had no relevant financial disclosures.
Acupuncture no better than sham for relief of hot flashes
Chinese medicine needle acupuncture was about as effective as a sham blunt needle treatment in the relief of hot flashes, although women reported a 40% drop in symptoms with both treatments.
The findings, published online Jan. 18 in Annals of Internal Medicine, add to a growing, but conflicting body of evidence about the benefits of acupuncture in the treatment of menopause symptoms.
Prior to this study, two trials had demonstrated the effectiveness of acupuncture compared with self-care, and a pilot study had shown the effectiveness of acupuncture compared with a noninsertive sham control. A Cochrane review, however, found that acupuncture was more effective than no treatment, with a moderate effect size, but was not effective when compared with a sham control (Cochrane Database Syst Rev. 2013 Jul 30;7:CD007410. doi:10.1002/14651858.CD007410.pub2).
The current trial, conducted at multiple sites in Australia, sought to add to the evidence with an adequately powered trial involving a sham control. But Carolyn Ee and her associates at the University of Melbourne noted that their study did not control for the nonspecific effects of acupuncture, such as regular interaction with a therapist.
The researchers randomly assigned 327 women older than 40 years who were in the late menopause transition or postmenopause and experiencing at least seven moderate hot flashes daily to receive either a standardized Chinese medicine acupuncture treatment or a noninsertive, blunt-needle sham acupuncture treatment. Patients received 10 treatments over 8 weeks, and they were assessed at 4 weeks, at the end of treatment, and at 3 and 6 months after treatment (Ann Intern Med. 2016 Jan 18. doi:10.7326/M15-1380).
Both groups had about a 40% improvement in their hot flashes at the end of treatment, compared with their mean baseline hot flash score, and the improvement was sustained at 3 and 6 months after the trial. The mean hot flash score at the end of treatment was 15.36 in the acupuncture group and 15.04 in the sham group, a difference that was not statistically significant. The researchers also found no advantage for acupuncture in quality of life, anxiety, or depression.
“Unless further high-quality evidence emerges, we cannot recommend skin-penetrating acupuncture as an efficacious treatment of this indication; the effects, if any, of acupuncture on these symptoms seem to be unrelated to needling,” the researchers wrote.
Some of the researchers reported receiving grant, scholarship, or fellowship support from the National Health and Medical Research Council of Australia, which funded the study.
FROM ANNALS OF INTERNAL MEDICINE
Key clinical point: Chinese medicine acupuncture was no better than a sham treatment for the relief of hot flashes.
Major finding: After 8 weeks of treatment, mean hot flash scores were 15.36 in the acupuncture group and 15.04 in the sham treatment group, which was not a statistically significant difference.
Data source: A stratified, blind, parallel, randomized, sham-controlled trial of 327 women in late menopause transition or postmenopause.
Disclosures: Some of the researchers reported receiving grant, scholarship, or fellowship support from the National Health and Medical Research Council of Australia, which funded the study.
Later menopause lowers risk of later depression
The longer a woman’s reproductive years last, the less she may be prone to postmenopausal depression, a large meta-analysis has determined.
The risk of depression declined by 2% for every 2 premenopausal years after age 40. Women who entered menopause after age 40 experienced a 50% decrease in the risk of depression, compared with women who experienced premature menopause, Dr. Marios K. Georgakis and colleagues reported Jan. 6 in JAMA Psychiatry (2016. doi:10.1001/jamapsychiatry.2015.2653).
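As a rough worked example, and assuming the pooled 2% estimate applies multiplicatively to each successive 2-year increment (the usual way such per-increment relative risks are combined, rather than adding linearly), 10 additional premenopausal years beyond age 40 would correspond to
\[
0.98^{\,10/2} = 0.98^{5} \approx 0.90,
\]
that is, roughly a 10% lower relative risk of postmenopausal depression under that assumption.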
The findings suggest that longer exposure to endogenous estrogens mediates the pathophysiology of late-life depression, wrote Dr. Georgakis of the National and Kapodistrian University of Athens and coauthors.
“If confirmed in prospective and culturally diverse studies … these findings could have a significant clinical effect by allowing for the identification of a group of women at higher risk for depression who may benefit from psychiatric monitoring or estrogen-based therapies.”
The meta-analysis comprised 14 studies that included 67,714 women. The analyses controlled for numerous factors, including age, body mass index, obesity, smoking, and hormone therapy. However, only two studies controlled for past depression – one of the biggest risk factors for recurring depression.
In addition to the 2% decline in risk per 2 premenopausal years after age 40, a subanalysis of three studies examining severe depression found a 5% decreased risk per 2 premenopausal years. Another analysis found a doubling in the risk of depression among women who experienced premature menopause, before age 40.
Estrogen is known to have neuroprotective and antidepressive properties, and the brain is richly endowed with estrogen receptors, the authors said. The exact pathway of protection against depression, however, remains unknown. Potentiation of neurotransmitters and moderation of atherosclerosis might play protective roles.
“Given the results of our study, it remains to be investigated whether women with menopause at younger ages could benefit by preventive use of hormone therapy against late-life depression, provided that adverse effects associated with long-term use are considered,” the authors said. “In this context, the development of estrogen receptor subtype–specific ligands could decrease the proportion of estrogen therapy adverse effects.”
Neither Dr. Georgakis nor any of the coauthors declared any financial conflicts.
The study is a “commendable effort” to examine the role of reproductive hormones in postmenopausal depression, but several important caveats should temper enthusiasm for its conclusions, Dr. Hadine Joffe and Joyce T. Bromberger, Ph.D., wrote in an accompanying editorial.
In most of the studies, women were aged 55-60 years – considerably beyond the average menopausal age of 52. Additionally, most were at least 5 years past their menopause, reflecting a group that might have passed the period of highest risk for hormone-mediated depression.
“This meta-analysis does not address depression associated with the gonadal steroid fluctuations of the perimenopause or recent estradiol withdrawal of the immediate postmenopause,” the colleagues wrote. “Rather, the analysis applies to depression in older women whose brains have not recently been exposed to estradiol or other reproductive hormones and for whom hormonal risk factors have previously been considered less relevant.”
However, the study is one of the few to investigate the psychotropic effects of estrogen on aging women. “In contrast to the acute effects of reproductive hormones on mood in cycling women, the article highlights a potential neuroprotective effect of gonadal steroids on mood that is delayed and extends into the stable hypoestrogenic and hypoprogestinemic environment of the postmenopause.”
Its conclusions are strengthened by studies of nonpsychiatric diseases associated with earlier menopause, including cardiovascular disease, cognitive decline, and dementia. Nevertheless, it’s too early to recommend prophylactic hormone therapy, the authors concluded.
“Given the small effect size and limitations of the studies used in this analysis, more direct evidence supporting a sustained and delayed neuroprotective effect of extended exposure to estradiol, cyclic progestins, and their neurosteroid derivatives is required to support use of hormonal therapy as a therapeutic approach to protecting against postmenopausal depression.”
Dr. Joffe is director of the Women’s Hormone and Aging Research Program at Brigham and Women’s Hospital, Boston. Dr. Bromberger is a professor of epidemiology and psychiatry at the University of Pittsburgh.
FROM JAMA PSYCHIATRY
Key clinical point: Later menopause, with its longer estrogen exposure, appears tied to a lower risk of postmenopausal depression.
Major finding: The risk of depression decreased by 2% for each 2 premenopausal years after age 40.
Data source: The meta-analysis comprised 14 studies with more than 67,700 women.
Disclosures: Neither Dr. Georgakis nor any of the coauthors declared any financial conflicts.
Hormone treatment associated with better kidney function
SAN DIEGO – The use of hormone therapy was associated with a lower urine albumin-to-creatinine ratio and a decreased risk of albuminuria, results from a cross-sectional study suggest.
“This may be useful information for providers taking care of menopausal women who are considering the use of hormone therapy for vasomotor symptoms and are worried about the systemic effects of these medications,” lead study author Dr. Andrea G. Kattah said in an interview after the annual meeting of the American Society of Nephrology. “Though our data only show an association and not cause and effect, hormone therapy is associated with better kidney function in this study.”
Results from many animal studies suggest that estrogen can have beneficial effects on the kidneys, noted Dr. Kattah, who conducted the research with Dr. Vesna D. Garovic and colleagues in the division of hypertension and nephrology at the Mayo Clinic, Rochester, Minn.
“In addition, in human studies of chronic kidney disease, premenopausal women tend to have slower progression of kidney disease than men,” she said. “Studies on the effects of hormone therapy on kidney function in women have had variable results. We wanted to look at the association of hormone therapy and renal function in a large, multiethnic cohort with well-defined health conditions that may confound the relationship between hormone therapy and renal disease.”
Study participants included 2,217 women enrolled in the Family Blood Pressure Program, a multinetwork effort to study the genetics of hypertension. During a study visit between 2000 and 2004, the women completed questionnaires about medical history, menopausal status, and use of hormone therapy (HT) in the past month. Clinicians also took their blood pressure, measured their body mass index, and drew blood to determine levels of serum creatinine and urine albumin-to-creatinine ratio (UACR).
Of the 2,217 women, 673 were on HT and 1,544 were not, and their mean ages were 60 years and 63 years, respectively.
In unadjusted analysis, Dr. Kattah and her associates found that UACR was significantly lower in those on HT, compared with those who were not (3.5 mg/g creatinine vs. 5.2 mg/g creatinine, respectively; P less than .001), as was the proportion of women with an estimated glomerular filtration rate (eGFR) of less than 60 mL/min per 1.73 m2 (7% vs. 10%, P = .003).
After adjusting for renal and cardiovascular risk factors including age, race, smoking, diabetes, hypertension, and family history of hypertension, the use of HT was still significantly associated with a lower UACR and decreased risk of microalbuminuria (odds ratio, 0.61). The association between HT and eGFR of less than 60 mL/min per 1.73 m2 was no longer significant after adjustment, but there was a trend toward higher eGFR and fewer women with an eGFR of less than 60 mL/min per 1.73 m2 among those on HT.
“Not surprisingly, the women taking hormone therapy were different than those who were not, and they generally had fewer health problems, such as diabetes and hyperlipidemia,” Dr. Kattah said. “However, after taking these differences into account in our models, we still found a significant decrease in the risk of having microalbuminuria in those on hormone therapy.”
Dr. Kattah acknowledged certain limitations of the study, including the fact that its design is “cross-sectional and cannot answer the question of whether or not hormone therapy can improve kidney function.”
In addition, “we do not have data on how long women were taking hormone therapy, which other studies have suggested is an important factor.”
The researchers reported having no financial disclosures.
SAN DIEGO – The use of hormone therapy was associated with a lower urine albumin-to-creatinine ratio and a decreased risk of albuminuria, results from a cross-sectional study suggest.
“This may be useful information for providers taking care of menopausal women who are considering the use of hormone therapy for vasomotor symptoms and are worried about the systemic effects of these medications,” lead study author Dr. Andrea G. Kattah said in an interview after the annual meeting of the American Society of Nephrology. “Though our data only show an association and not cause and effect, hormone therapy is associated with better kidney function in this study.”
Results from many animal studies suggest that estrogen can have beneficial effects on the kidneys, noted Dr. Kattah, who conducted the research with Dr. Vesna D. Garovic and colleagues in the division of hypertension and nephrology at the Mayo Clinic, Rochester, Minn.
“In addition, in human studies of chronic kidney disease, premenopausal women tend to have slower progression of kidney disease than men,” she said. “Studies on the effects of hormone therapy on kidney function in women have had variable results. We wanted to look at the association of hormone therapy and renal function in a large, multiethnic cohort with well-defined health conditions that may confound the relationship between hormone therapy and renal disease.”
Study participants included 2,217 women enrolled in the Family Blood Pressure Program, a multinetwork effort to study the genetics of hypertension. During a study visit between 2000 and 2004, the women completed questionnaires about medical history, menopausal status, and use of hormone therapy (HT) in the past month. Clinicians also took their blood pressure, measured their body mass index, and drew blood to determine levels of serum creatinine and urine albumin-to-creatinine ratio (UACR).
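The report gives serum creatinine and UACR but does not say which estimating equation was used to derive eGFR. Purely as an illustration, the sketch below computes a spot UACR and estimates GFR with the 2009 CKD-EPI creatinine equation; the function names, units, and the choice of equation are assumptions, not details from the study.

```python
# Illustrative only: the study does not state which eGFR equation was used.
# This sketch uses the 2009 CKD-EPI creatinine equation as one common example.

def uacr(urine_albumin_mg_per_l: float, urine_creatinine_mg_per_dl: float) -> float:
    """Urine albumin-to-creatinine ratio in mg/g.
    Converts urine creatinine from mg/dL to g/L before dividing."""
    return urine_albumin_mg_per_l / (urine_creatinine_mg_per_dl / 100.0)

def egfr_ckd_epi_2009(scr_mg_per_dl: float, age_years: float,
                      female: bool, black: bool) -> float:
    """Estimated GFR (mL/min per 1.73 m2), CKD-EPI 2009 creatinine equation."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141
            * min(scr_mg_per_dl / kappa, 1.0) ** alpha
            * max(scr_mg_per_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age_years)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

# Example: a 63-year-old woman with serum creatinine 1.1 mg/dL
# prints an eGFR in the low 50s, i.e., below the 60 mL/min per 1.73 m2 threshold.
print(round(egfr_ckd_epi_2009(1.1, 63, female=True, black=False)))
```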
Of the 2,217 women, 673 were on HT and 1,544 were not, and their mean ages were 60 years and 63 years, respectively.
In unadjusted analysis, Dr. Kattah and her associates found that UACR was significantly lower in those on HT, compared with those who were not (3.5 mg/g creatinine vs. 5.2 mg/g creatinine, respectively, P less than .001), as was the number of women with an estimated glomerular filtration rate (eGFR) of less than 60 mL/min per 1.73 m2 (7% vs. 10%, P = .003).
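The article does not name the tests behind these unadjusted comparisons. As a minimal sketch of one common approach, the snippet below compares the skewed UACR values with a rank-based test and the proportion with eGFR below 60 with a chi-square test; the file and column names refer to a hypothetical dataset, not the Family Blood Pressure Program files.

```python
# Hypothetical sketch of an unadjusted comparison; the study's actual tests are not stated.
import pandas as pd
from scipy import stats

df = pd.read_csv("fbpp_visit.csv")   # hypothetical file with columns: ht_use, uacr, egfr

on_ht = df[df.ht_use == 1]
off_ht = df[df.ht_use == 0]

# UACR is typically right-skewed, so compare the groups with a rank-based test.
u_stat, p_uacr = stats.mannwhitneyu(on_ht.uacr, off_ht.uacr, alternative="two-sided")

# Compare the proportion with eGFR < 60 mL/min per 1.73 m2 with a chi-square test.
table = pd.crosstab(df.ht_use, df.egfr < 60)
chi2, p_egfr, dof, expected = stats.chi2_contingency(table)

print(f"UACR difference: P = {p_uacr:.3g}; eGFR < 60 difference: P = {p_egfr:.3g}")
```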
After adjusting for renal and cardiovascular risk factors including age, race, smoking, diabetes, hypertension, and family history of hypertension, the use of HT was still significantly associated with a lower UACR and decreased risk of microalbuminuria (odds ratio, 0.61). The association between HT and eGFR of less than 60 mL/min per 1.73 m2 was no longer significant after adjustment, but there was a trend toward higher eGFR and fewer women with an eGFR of less than 60 mL/min per 1.73 m2 among those on HT.
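The report lists the adjustment covariates but not the modeling details. One standard way to obtain such an adjusted odds ratio is a multivariable logistic regression for microalbuminuria (conventionally UACR of 30 mg/g or higher); the sketch below shows that approach with statsmodels, with column names and the cutoff assumed for illustration rather than taken from the study.

```python
# Hypothetical sketch: adjusted odds ratio for microalbuminuria via logistic regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("fbpp_visit.csv")  # hypothetical file; all column names are assumed
df["microalbuminuria"] = (df.uacr >= 30).astype(int)  # conventional >= 30 mg/g cutoff

# Adjust for the covariates named in the report: age, race, smoking, diabetes,
# hypertension, and family history of hypertension.
model = smf.logit(
    "microalbuminuria ~ ht_use + age + C(race) + smoker + diabetes "
    "+ hypertension + family_hx_htn",
    data=df,
).fit()

adjusted_or = np.exp(model.params["ht_use"])
ci_low, ci_high = np.exp(model.conf_int().loc["ht_use"])
print(f"Adjusted OR for HT use: {adjusted_or:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```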
“Not surprisingly, the women taking hormone therapy were different than those who were not, and they generally had fewer health problems, such as diabetes and hyperlipidemia,” Dr. Kattah said. “However, after taking these differences into account in our models, we still found a significant decrease in the risk of having microalbuminuria in those on hormone therapy.”
Dr. Kattah acknowledged certain limitations of the study, including that its design is “cross-sectional and cannot answer the question of whether or not hormone therapy can improve kidney function.” In addition, “we do not have data on how long women were taking hormone therapy, which other studies have suggested is an important factor.”
The researchers reported having no financial disclosures.
AT KIDNEY WEEK 2015
Key clinical point: Women using hormone therapy had a significantly lower urine albumin-to-creatinine ratio and decreased risk of microalbuminuria, compared with those who did not.
Major finding: After adjusting for renal and cardiovascular risk factors, hormone therapy was significantly associated with a lower urine albumin-to-creatinine ratio and a decreased risk of microalbuminuria (OR, 0.61).
Data source: An analysis of 2,217 women enrolled in the Family Blood Pressure Program, a multinetwork effort to study the genetics of hypertension.
Disclosures: The researchers reported having no financial disclosures.