Starting Mammography at Age 40 May Backfire Due to False Positives
Earlier this year, I wrote a Medscape commentary to explain my disagreement with the updated US Preventive Services Task Force (USPSTF) recommendation that all women at average risk for breast cancer start screening mammography at age 40. The bottom line is that when the evidence doesn’t change, the guidelines shouldn’t change. Since then, other screening experts have criticized the USPSTF guideline on similar grounds, and a national survey reported that nearly 4 out of 10 women in their 40s preferred to delay breast cancer screening after viewing a decision aid and a personalized breast cancer risk estimate.
The decision analysis performed for the USPSTF guideline estimated that compared with having mammography beginning at age 50, 1000 women who begin at age 40 experience 519 more false-positive results and 62 more benign breast biopsies. Another study suggested that anxiety and other psychosocial harms resulting from a false-positive test are similar between patients who require a biopsy vs additional imaging only. Of greater concern, women who have false-positive results are less likely to return for their next scheduled screening exam.
A recent analysis of 2005-2017 data from the US Breast Cancer Surveillance Consortium found that about 1 in 10 mammograms had a false-positive result. Sixty percent of these patients underwent immediate additional imaging, 27% were recalled for diagnostic imaging within the next few days to weeks, and 13% were advised to have a biopsy. While patients who had additional imaging at the same visit were only 1.9% less likely to return for screening mammography within 30 months compared with those with normal mammograms, women who were recalled for short-interval follow-up or recommended for biopsy were 15.9% and 10% less likely to return, respectively. For unclear reasons, women who identified as Asian or Hispanic had even lower rates of return screening after false-positive results.
These differences matter because women in their 40s, with the lowest incidence of breast cancer among those undergoing screening, have a lot of false positives. A patient who follows the USPSTF recommendation and starts screening at age 40 has a 42% chance of having at least one false positive with every-other-year screening, or a 61% chance with annual screening, by the time she turns 50. If some of these patients are so turned off by false positives that they don’t return for regular mammography in their 50s and 60s, when screening is the most likely to catch clinically significant cancers at treatable stages, then moving up the starting age may backfire and cause net harm.
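To illustrate how per-exam false-positive rates compound over a decade of screening, here is a rough back-of-the-envelope sketch in Python. It assumes, purely for illustration, a fixed 10% false-positive rate per mammogram (roughly the BCSC figure cited above) and independence between screening rounds, so it only approximates the 42% and 61% estimates, which come from more detailed modeling.

```python
# Back-of-the-envelope estimate of the chance of at least one false positive
# over a decade of screening, assuming a fixed per-exam false-positive rate
# and independence between screening rounds (illustrative assumptions only).

def cumulative_false_positive_risk(per_exam_rate: float, n_exams: int) -> float:
    """Probability of at least one false positive across n_exams screens."""
    return 1 - (1 - per_exam_rate) ** n_exams

PER_EXAM_RATE = 0.10  # ~1 in 10 screens, per the BCSC analysis cited above

# Screening from age 40 through 49: 5 biennial rounds vs 10 annual rounds
for label, rounds in [("biennial, ages 40-49 (5 exams)", 5),
                      ("annual, ages 40-49 (10 exams)", 10)]:
    risk = cumulative_false_positive_risk(PER_EXAM_RATE, rounds)
    print(f"{label}: ~{risk:.0%} chance of at least one false positive")

# Prints roughly 41% and 65% — in the same ballpark as the modeled 42% and 61%.
```

The point of the arithmetic is simply that even a modest per-exam false-positive rate accumulates quickly when screening starts earlier and is repeated more often.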
The recently implemented FDA rule requiring mammography reports to include breast density could compound this problem. Because younger women are more likely to have dense breasts, more of them will probably decide to have supplemental imaging to look for cancer. I previously pointed out that we don’t know whether supplemental imaging with breast ultrasonography or MRI reduces cancer deaths, but we do know that it increases false-positive results.
I have personally cared for several patients who abandoned screening mammography for long stretches, or permanently, after enduring one or more benign biopsies prompted by a false-positive result. I vividly recall one woman in her 60s who was very reluctant to have screening tests in general, and mammography in particular, for that reason. After she had been my patient for a few years, I finally persuaded her to resume screening. We were both surprised when her first mammogram in more than a decade revealed an early-stage breast cancer. Fortunately, the tumor was successfully treated, but for her, an earlier false-positive result nearly had critical health consequences.
Dr. Lin is associate director, Family Medicine Residency Program, Lancaster General Hospital, Lancaster, Pennsylvania. He blogs at Common Sense Family Doctor. He has no relevant financial relationships.
A version of this article appeared on Medscape.com.
Diabetes Drug Improved Symptoms in Small Study of Women With Central Centrifugal Cicatricial Alopecia
TOPLINE:
Adding oral extended-release metformin to standard therapy improved symptoms in most Black women with treatment-refractory central centrifugal cicatricial alopecia (CCCA) in a retrospective case series.
METHODOLOGY:
- Researchers conducted a case series involving 12 Black women in their 30s, 40s, and 50s from the Johns Hopkins University alopecia clinic with biopsy-confirmed, treatment-refractory CCCA, a chronic inflammatory hair disorder characterized by permanent hair loss.
- Participants received CCCA treatment for at least 6 months and had stagnant or worsening symptoms before oral extended-release metformin (500 mg daily) was added to treatment. (Treatments included topical clobetasol, compounded minoxidil, and platelet-rich plasma injections.)
- Scalp biopsies were collected from four patients before and after metformin treatment to evaluate gene expression changes.
- Changes in clinical symptoms were assessed, including pruritus, inflammation, pain, scalp resistance, and hair regrowth, following initiation of metformin treatment.
TAKEAWAY:
- Metformin led to significant clinical improvement in eight patients, which included reductions in scalp pain, scalp resistance, pruritus, and inflammation. However, two patients experienced worsening symptoms.
- Six patients showed clinical evidence of hair regrowth after at least 6 months of metformin treatment, with one experiencing hair loss again 3 months after discontinuing treatment.
- Transcriptomic analysis revealed 34 up-regulated genes, including 23 hair keratin–associated protein genes, and up-regulated pathways related to keratinization, epidermis development, and the hair cycle. In addition, eight genes were down-regulated, along with pathways associated with extracellular matrix organization, collagen fibril organization, and collagen metabolism.
- Gene set variation analysis showed reduced expression of T helper 17 cell and epithelial-mesenchymal transition pathways and elevated adenosine monophosphate kinase signaling and keratin-associated proteins after treatment with metformin.
IN PRACTICE:
“Metformin’s ability to concomitantly target fibrosis and inflammation provides a plausible mechanism for its therapeutic effects in CCCA and other fibrosing alopecia disorders,” the authors concluded. But, they added, “larger prospective, placebo-controlled randomized clinical trials are needed to rigorously evaluate metformin’s efficacy and optimal dosing for treatment of cicatricial alopecias.”
SOURCE:
The study was led by Aaron Bao, Department of Dermatology, Johns Hopkins University School of Medicine, Baltimore, Maryland, and was published online on September 4 in JAMA Dermatology.
LIMITATIONS:
A small sample size, retrospective design, lack of a placebo control group, and the single-center setting limited the generalizability of the study findings. Additionally, the absence of a validated activity or severity scale for CCCA and the single posttreatment sampling limit the assessment and comparison of clinical symptoms and transcriptomic changes.
DISCLOSURES:
The study was supported by the American Academy of Dermatology. One author reported several ties with pharmaceutical companies, a pending patent, and authorship for the UpToDate section on CCCA.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Hot Flashes: Do They Predict CVD and Dementia?
This transcript has been edited for clarity.
I’d like to talk about a recent report in the journal Menopause linking menopausal symptoms to increased risk for cognitive impairment. I’d also like to discuss some of the recent studies that have addressed whether hot flashes are linked to increased risk for heart disease and other forms of cardiovascular disease (CVD).
Given that 75%-80% of perimenopausal and postmenopausal women have hot flashes and vasomotor symptoms, the relationship between hot flashes and these outcomes is undoubtedly more complex than a simple one-size-fits-all, yes-or-no question.
Increasing evidence shows that several additional factors are important, including the age at which the symptoms are occurring, the time since menopause, the severity of the symptoms, whether they co-occur with night sweats and sleep disruption, and the cardiovascular status of the woman.
Several studies suggest that women who have more severe hot flashes and vasomotor symptoms are more likely to have prevalent cardiovascular risk factors — hypertension, dyslipidemia, high body mass index, endothelial dysfunction — as measured by flow-mediated vasodilation and other measures.
It is quite plausible that hot flashes could be a marker for increased risk for cognitive impairment. But the question remains, are hot flashes associated with cognitive impairment independent of these other risk factors? It appears that the associations between hot flashes, vasomotor symptoms, and CVD, and other adverse outcomes, may be more likely when hot flashes persist after age 60 or are newly occurring in later menopause. In the Women’s Health Initiative observational study, the presence of hot flashes and vasomotor symptoms in early menopause was not linked to any increased risk for heart attack, stroke, total CVD, or all-cause mortality.
However, the onset of these symptoms, especially new onset of these symptoms after age 60 or in later menopause, was in fact linked to increased risk for CVD and all-cause mortality. With respect to cognitive impairment, if a woman is having hot flashes and night sweats with regular sleep disruption, performance on cognitive testing would not be as favorable as it would be in the absence of these symptoms.
This brings us to the new study in Menopause that included approximately 1300 Latino women in nine Latin American countries, with an average age of 55 years. Looking at the association between severe menopausal symptoms and cognitive impairment, researchers found that women with severe symptoms were more likely to have cognitive impairment.
Conversely, they found that the women who had a favorable CVD risk factor status (physically active, lower BMI, healthier) and were ever users of estrogen were less likely to have cognitive impairment.
Clearly, for estrogen therapy, we need randomized clinical trials that examine cognitive and CVD outcomes according to the presence or absence of vasomotor symptoms. Such analyses are ongoing, and new randomized trials focused specifically on women in early menopause would be very beneficial.
At the present time, it’s important that we not alarm women about the associations seen in some of these studies because often they are not independent associations; they aren’t independent of other risk factors that are commonly linked to hot flashes and night sweats. There are many other complexities in the relationship between hot flashes and cognitive impairment.
We need to appreciate that women who have moderate to severe hot flashes (especially when associated with disrupted sleep) do have impaired quality of life. It’s important to treat these symptoms, especially in early menopause, and very effective hormonal and nonhormonal treatments are available.
For women with symptoms that persist into later menopause or who have new onset of symptoms in later menopause, it’s important to prioritize cardiovascular health. For example, be more vigilant about behavioral lifestyle counseling to lower risk, and be even more aggressive in treating dyslipidemia and diabetes.
JoAnn E. Manson, Professor of Medicine and the Michael and Lee Bell Professor of Women’s Health, Harvard Medical School; Chief, Division of Preventive Medicine, Brigham and Women’s Hospital, Boston, Massachusetts; and Past President, North American Menopause Society, 2011-2012, has disclosed the following relevant financial relationships: Received study pill donation and infrastructure support from Mars Symbioscience (for the COSMOS trial).
A version of this article first appeared on Medscape.com.
Should Genetic Testing Be Routine for Breast Cancer?
TOPLINE:
Universal genetic testing identified germline pathogenic variants in 7.3% of women newly diagnosed with invasive breast cancer, many of whom would not have met traditional risk-based criteria for testing, in a cross-sectional study.
METHODOLOGY:
- Traditional risk-based criteria, including family history and ancestry, are used to guide genetic testing decisions in women with breast cancer. However, these criteria may overlook patients with actionable genetic variants, particularly those outside the typical risk profile.
- To assess the efficacy of universal genetic testing, researchers conducted a cross-sectional study that included 729 women (median age at diagnosis, 53 years; 65.4% White women) newly diagnosed with invasive breast cancer between September 2019 and April 2022 at three Canadian institutions.
- All patients received genetic counseling followed by testing for the presence of germline pathogenic variants in 17 breast cancer susceptibility genes. The primary gene panel included screening for BRCA1, BRCA2, and PALB2, and the optional secondary panel included 14 additional breast cancer susceptibility genes.
- Of the participants, 659 (90.4%) were tested for both primary and secondary gene panels, whereas 70 (9.6%) underwent testing for only the primary panel. The majority of the cohort (66.8%) were diagnosed with estrogen receptor–positive breast cancer, while 15.4% had triple-negative breast cancer.
TAKEAWAY:
- The prevalence of germline pathogenic variants was 7.3% (53 patients) — 5.3% for the primary gene panel and 2.1% for the secondary panel.
- Younger age (< 40 years; odds ratio [OR], 6.83), family history of ovarian cancer (OR, 9.75), high-grade disease (OR, 1.68), and triple-negative breast cancer (OR, 3.19) were independently associated with the presence of pathogenic genetic variants in BRCA1, BRCA2, or PALB2.
- Overall, 34.3% of patients with germline pathogenic variants in BRCA1, BRCA2, or PALB2, and 85.7% of carriers of secondary panel variants would not have qualified for traditional genetic testing according to the current risk factors.
- A total of 13 patients with BRCA1, BRCA2, or PALB2 variants had confirmed pathogenic mutations and were eligible for poly(adenosine diphosphate–ribose) polymerase (PARP) inhibitors.
IN PRACTICE:
These findings have “informed our clinical practice, and we now offer mainstream, oncology-led genetic testing to all women diagnosed with incident invasive breast cancer younger than 50 years of age, those with triple-negative breast cancer and/or bilateral breast cancer, those potentially eligible for PARP inhibitors,” as well as to men with breast cancer, the authors wrote.
SOURCE:
The study was led by Zoulikha Rezoug, MSc, Lady Davis Institute of the Jewish General Hospital, McGill University in Montreal, Québec, Canada. It was published online on September 3, 2024, in JAMA Network Open.
LIMITATIONS:
The COVID-19 pandemic resulted in a 6-month recruitment pause. Adjustments in recruitment criteria and a focus on younger patients and those with triple-negative breast cancer could have led to an overestimated prevalence of pathogenic genetic variants among women aged ≥ 70 years.
DISCLOSURES:
The study was supported by grants from the Jewish General Hospital Foundation and the Québec Breast Cancer Foundation, as well as an award from the Fonds de Recherche du Québec - Santé. Two authors reported receiving grants or personal fees from various sources.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Stones, Bones, Groans, and Moans: Could This Be Primary Hyperparathyroidism?
This transcript has been edited for clarity.
Matthew F. Watto, MD: Welcome back to The Curbsiders. I’m Dr. Matthew Frank Watto, here with my great friend and America’s primary care physician, Dr. Paul Nelson Williams.
Paul, we’re going to talk about our primary hyperparathyroidism podcast with Dr. Lindsay Kuo. It’s a topic that I feel much more clear on now.
Now, Paul, in primary care, you see a lot of calcium that is just slightly high. Can we just blame that on thiazide diuretics?
Paul N. Williams, MD: It’s a place to start. As you’re starting to think about the possible etiologies, primary hyperparathyroidism and malignancy are the two that roll right off the tongue, but it is worth going back to the patient’s medication list and making sure you’re not missing something.
Thiazides famously cause hypercalcemia, but in some of the reading I did for this episode, they may just uncover it a little bit early. Patients who are on thiazides who become hypercalcemic seem to go on to develop primary hyperparathyroidism anyway. So I don’t think you can solely blame the thiazide.
Another medication that can be causative is lithium. So a good place to look first after you’ve repeated the labs and confirmed hypercalcemia is the patient’s medication list.
Dr. Watto: We’ve talked before about the basic workup for hypercalcemia, and determining whether it’s PTH dependent or PTH independent. On the podcast, we talk more about the full workup, but I wanted to talk about the classic symptoms. Our expert made the point that we don’t see them as much anymore, although we do see kidney stones. People used to present very late in the disease because they weren’t having labs done routinely.
The classic symptoms include osteoporosis and bone tumors. People can get nephrocalcinosis and kidney stones. I hadn’t really thought of it this way because we’re used to diagnosing it early now. Do you feel the same?
Dr. Williams: As labs have started routinely reporting calcium levels, this is more and more often how it’s picked up. The other aspect is that as we are screening for and finding osteoporosis, part of the workup almost always involves getting a parathyroid hormone and a calcium level. We’re seeing these lab abnormalities before we’re seeing symptoms, which is good.
But it also makes things more diagnostically thorny.
Dr. Watto: Dr. Lindsay Kuo made the point that when she sees patients before and after surgery, she’s aware of these nonclassic symptoms — the stones, bones, groans, and the psychiatric overtones that can be anything from fatigue or irritability to dysphoria.
Some people have a generalized weakness that’s very nonspecific. Dr. Kuo said that sometimes these symptoms will disappear after surgery. The patients may just have gotten used to them, or they thought these symptoms were caused by something else, but after surgery they went away.
There are these nonclassic symptoms that are harder to pin down. I was surprised by that.
Dr. Williams: She mentioned polydipsia and polyuria, which have been reported in other studies. It seems like it can be anything. You have to take a good history, but none of those things in and of themselves is an indication for operating unless the patient has the classic renal or bone manifestations.
Dr. Watto: The other thing we talked about is a normal calcium level in a patient with primary hyperparathyroidism, or the finding of a PTH level in the normal range but with a high calcium level that is inappropriate. Can you talk a little bit about those two situations?
Dr. Williams: They’re hard to say but kind of easy to manage because you treat them the same way as someone who has elevated calcium and PTH levels.
The normocalcemic patient is something we might stumble across with osteoporosis screening. Initially the calcium level is elevated, so you repeat it and it’s normal but with an elevated PTH level. You’re like, shoot. Now what?
It turns out that most endocrine surgeons say that the indications for surgery for the classic form of primary hyperparathyroidism apply to these patients as well, and it probably helps with the bone outcomes, which is one of the things they follow most closely. If you have hypercalcemia, you should have a suppressed PTH level; a PTH that stays in the normal range is the so-called normohormonal hyperparathyroidism, which is not normal at all. Even though the PTH is in the normal range, it’s still relatively elevated compared with what it should be. That situation is treated in the same way as the classic elevated PTH and elevated calcium levels.
Dr. Watto: If the calcium is abnormal and the PTH is not quite what you’d expect it to be, you can always ask your friendly neighborhood endocrinologist to help you figure out whether the patient really has one of these conditions. You have to make sure that they don’t have a simple secondary cause like a low vitamin D level. In that case, you fix the vitamin D and then recheck the numbers to see if they’ve normalized. But I have found a bunch of these edge cases in which it has been helpful to confer with an endocrinologist, especially before you send someone to a surgeon to take out their parathyroid gland.
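For readers who want the interpretation logic just discussed in one place, here is a deliberately simplified sketch in Python. The categories and the idea that PTH should be suppressed when calcium is high come from the discussion above; the function itself is illustrative only and omits the repeat testing, vitamin D, renal function, and urine calcium assessment that a real workup requires.

```python
# Simplified sketch of the hypercalcemia/PTH interpretation logic discussed above.
# Illustrative only; not clinical cutoffs and not a substitute for a full workup.

def classify_hypercalcemia_workup(calcium_high: bool, pth: str) -> str:
    """pth is 'low', 'normal', or 'high' relative to the lab's reference range."""
    if calcium_high and pth in ("high", "normal"):
        # With true hypercalcemia, PTH should be suppressed; a PTH left in the
        # normal range is inappropriately normal ("normohormonal" disease).
        return "PTH-dependent: consistent with primary hyperparathyroidism"
    if calcium_high and pth == "low":
        return "PTH-independent: consider malignancy, vitamin D excess, and other causes"
    if not calcium_high and pth == "high":
        return "Possible normocalcemic primary hyperparathyroidism (exclude secondary causes first)"
    return "No primary hyperparathyroidism pattern on these values"

# Example: high calcium with a 'normal' PTH still points toward parathyroid disease.
print(classify_hypercalcemia_workup(calcium_high=True, pth="normal"))
```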
This was a really fantastic conversation. If you want to hear the full podcast episode, click here.
Dr. Watto, Clinical Assistant Professor, Department of Medicine, Perelman School of Medicine at University of Pennsylvania; Internist, Department of Medicine, Hospital Medicine Section, Pennsylvania Hospital, Philadelphia, Pennsylvania, has disclosed no relevant financial relationships. Dr. Williams, Associate Professor of Clinical Medicine, Department of General Internal Medicine, Lewis Katz School of Medicine; Staff Physician, Department of General Internal Medicine, Temple Internal Medicine Associates, Philadelphia, Pennsylvania, served as a director, officer, partner, employee, adviser, consultant, or trustee for The Curbsiders, and has received income in an amount equal to or greater than $250 from The Curbsiders.
A version of this article first appeared on Medscape.com.
This transcript has been edited for clarity.
Matthew F. Watto, MD: Welcome back to The Curbsiders. I’m Dr Matthew Frank Watto, here with my great friend and America’s primary care physician, Dr. Paul Nelson Williams.
Paul, we’re going to talk about our primary hyperparathyroidism podcast with Dr. Lindsay Kuo. It’s a topic that I feel much more clear on now.
Now, Paul, in primary care, you see a lot of calcium that is just slightly high. Can we just blame that on thiazide diuretics?
Paul N. Williams, MD: It’s a place to start. As you’re starting to think about the possible etiologies, primary hyperparathyroidism and malignancy are the two that roll right off the tongue, but it is worth going back to the patient’s medication list and making sure you’re not missing something.
Thiazides famously cause hypercalcemia, but in some of the reading I did for this episode, they may just uncover it a little bit early. Patients who are on thiazides who become hypercalcemic seem to go on to develop primary hyperthyroidism anyway. So I don’t think you can solely blame the thiazide.
Another medication that can be causative is lithium. So a good place to look first after you’ve repeated the labs and confirmed hypercalcemia is the patient’s medication list.
Dr. Watto: We’ve talked before about the basic workup for hypercalcemia, and determining whether it’s PTH dependent or PTH independent. On the podcast, we talk more about the full workup, but I wanted to talk about the classic symptoms. Our expert made the point that we don’t see them as much anymore, although we do see kidney stones. People used to present very late in the disease because they weren’t having labs done routinely.
The classic symptoms include osteoporosis and bone tumors. People can get nephrocalcinosis and kidney stones. I hadn’t really thought of it this way because we’re used to diagnosing it early now. Do you feel the same?
Dr. Williams: As labs have started routinely reporting calcium levels, this is more and more often how it’s picked up. The other aspect is that as we are screening for and finding osteoporosis, part of the workup almost always involves getting a parathyroid hormone and a calcium level. We’re seeing these lab abnormalities before we’re seeing symptoms, which is good.
But it also makes things more diagnostically thorny.
Dr. Watto: Dr. Lindsay Kuo made the point that when she sees patients before and after surgery, she’s aware of these nonclassic symptoms — the stones, bones, groans, and the psychiatric overtones that can be anything from fatigue or irritability to dysphoria.
Some people have a generalized weakness that’s very nonspecific. Dr. Kuo said that sometimes these symptoms will disappear after surgery. The patients may just have gotten used to them, or they thought these symptoms were caused by something else, but after surgery they went away.
There are these nonclassic symptoms that are harder to pin down. I was surprised by that.
Dr. Williams: She mentioned polydipsia and polyuria, which have been reported in other studies. It seems like it can be anything. You have to take a good history, but none of those things in and of themselves is an indication for operating unless the patient has the classic renal or bone manifestations.
Dr. Watto: The other thing we talked about is a normal calcium level in a patient with primary hyperparathyroidism, or the finding of a PTH level in the normal range but with a high calcium level that is inappropriate. Can you talk a little bit about those two situations?
Dr. Williams: They’re hard to say but kind of easy to manage because you treat them the same way as someone who has elevated calcium and PTH levels.
The normocalcemic patient is something we might stumble across with osteoporosis screening. Initially the calcium level is elevated, so you repeat it and it’s normal but with an elevated PTH level. You’re like, shoot. Now what?
It turns out that most endocrine surgeons say that the indications for surgery for the classic form of primary hyperparathyroidism apply to these patients as well, and it probably helps with the bone outcomes, which is one of the things they follow most closely. If you have hypercalcemia, you should have a suppressed PTH level, the so-called normohormonal hyperparathyroidism, which is not normal at all. So even if the PTH is in the normal range, it’s still relatively elevated compared with what it should be. That situation is treated in the same way as the classic elevated PTH and elevated calcium levels.
Dr. Watto: If the calcium is abnormal and the PTH is not quite what you’d expect it to be, you can always ask your friendly neighborhood endocrinologist to help you figure out whether the patient really has one of these conditions. You have to make sure that they don’t have a simple secondary cause like a low vitamin D level. In that case, you fix the vitamin D and then recheck the numbers to see if they’ve normalized. But I have found a bunch of these edge cases in which it has been helpful to confer with an endocrinologist, especially before you send someone to a surgeon to take out their parathyroid gland.
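For readers who like to see the pattern logic spelled out, here is a minimal sketch of how the calcium-PTH combinations discussed above map onto the categories mentioned in the episode. It is an illustration only, not clinical guidance: the reference ranges and the `classify_pth_calcium` helper are assumptions made for this example, and real-world interpretation depends on the lab’s own ranges, repeat testing, and exclusion of secondary causes such as low vitamin D.

```python
# Illustrative sketch only -- not clinical guidance.
# Reference ranges below are assumptions for the example; real labs vary.
CALCIUM_RANGE = (8.5, 10.5)  # serum calcium, mg/dL
PTH_RANGE = (15, 65)         # parathyroid hormone, pg/mL


def classify_pth_calcium(calcium_mg_dl: float, pth_pg_ml: float) -> str:
    """Map a calcium/PTH pair to the patterns discussed in the episode."""
    ca_low, ca_high = CALCIUM_RANGE
    pth_low, pth_high = PTH_RANGE

    if calcium_mg_dl > ca_high:
        if pth_pg_ml > pth_high:
            return "classic primary hyperparathyroidism (high calcium, high PTH)"
        if pth_pg_ml >= pth_low:
            # With a high calcium, PTH should be suppressed, so a "normal"
            # PTH here is inappropriately elevated (normohormonal pattern).
            return "normohormonal primary hyperparathyroidism (high calcium, non-suppressed PTH)"
        return "PTH-independent hypercalcemia (suppressed PTH): look for other causes"

    if pth_pg_ml > pth_high and calcium_mg_dl >= ca_low:
        # Only after secondary causes (e.g., low vitamin D, kidney disease) are excluded.
        return "possible normocalcemic primary hyperparathyroidism"

    return "no hyperparathyroid pattern from these two values alone"


# Example: mildly high calcium with a mid-normal PTH
print(classify_pth_calcium(11.2, 40))
```

The point of the sketch is simply that a “normal” PTH is only normal in context: with a frankly elevated calcium, it is relatively too high.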
This was a really fantastic conversation. If you want to hear the full podcast episode, click here.
Dr. Watto, Clinical Assistant Professor, Department of Medicine, Perelman School of Medicine at University of Pennsylvania; Internist, Department of Medicine, Hospital Medicine Section, Pennsylvania Hospital, Philadelphia, Pennsylvania, has disclosed no relevant financial relationships. Dr. Williams, Associate Professor of Clinical Medicine, Department of General Internal Medicine, Lewis Katz School of Medicine; Staff Physician, Department of General Internal Medicine, Temple Internal Medicine Associates, Philadelphia, Pennsylvania, served as a director, officer, partner, employee, adviser, consultant, or trustee for The Curbsiders, and has received income in an amount equal to or greater than $250 from The Curbsiders.
A version of this article first appeared on Medscape.com.
NSAIDs Offer No Relief for Pain From IUD Placement
Research on pain management during placement of intrauterine devices (IUDs) is lacking, but most studies so far indicate that nonsteroidal anti-inflammatory drugs (NSAIDs) are not effective, according to a poster presented at Pain Week 2024 in Las Vegas.
Roughly 79% of the 14 studies included in the systematic review found NSAIDs — one of the most common drugs clinicians advise patients to take before placement — did not diminish discomfort.
“We’re challenging the current practice of using just NSAIDs as a first-line of treatment,” said Kevin Rowland, PhD, professor and chair of biomedical sciences at Tilman J. Fertitta Family College of Medicine in Houston, who helped conduct the meta-analysis. “We need additional measures.”
Some studies found the drugs offered virtually no improvement for patients, while the biggest drop in pain shown in one study was about 40%. The range of pain levels women reported while using NSAIDs was between 1.8 and 7.3 on the visual analog scale (VAS), with an average score of 4.25.
The review included 10 types of NSAIDs, at a range of dosages, administered to patients before the procedure. One NSAID was given intramuscularly, while the remaining were oral. All studies were peer-reviewed, used the VAS pain scale, and were not limited to any specific population.
The findings highlight a longstanding but unresolved problem in reproductive health: An overall lack of effective pain management strategies for gynecologic procedures.
“We went into this having a pretty good idea of what we were going to find because [the lack of NSAID efficacy] has been shown before, it’s been talked about before, and we’re just not listening as a medical community,” said Isabella D. Martingano, an MD candidate at Tilman J. Fertitta Family College of Medicine, who led the review.
The research also points to a lack of robust studies on pain during IUD placement, said Emma Lakey, a coauthor and medical student at Tilman J. Fertitta Family College of Medicine.
“We were only able to review 14 studies, which was enough to go off of, but considering we were looking for trials about pain control for a procedure that helps prevent pregnancy, that’s just not enough research,” Ms. Lakey said.
Discomfort associated with IUD placement ranges from mild to severe, can last for over a week, and includes cramping, bleeding, lightheadedness, nausea, and fainting. Some research suggests that providers may underestimate the level of pain the procedures cause.
“Unfortunately, the pain associated with IUD insertion and removal has been underplayed for a long time and many practitioners in the field likely haven’t counseled patients fully on what the procedure will feel like,” said Jennifer Chin, MD, an ob.gyn. and assistant professor of obstetrics and gynecology at the University of Washington in Seattle.
NSAIDs are not mentioned in the recently expanded guidelines on IUD placement from the US Centers for Disease Control and Prevention (CDC). The CDC recommends lidocaine paracervical blocks, gels, sprays, and creams, plus counseling women about pain ahead of the procedures.
IUDs are one of the most effective forms of birth control, with a failure rate below 1%.
Yet hearing about painful placement keeps many women from seeking out an IUD or replacing an existing device, Dr. Rowland said. The review adds to the body of evidence that current strategies are not working and that more research is needed, he said.
According to Dr. Chin, making IUDs more accessible means taking a more personalized approach to pain management while understanding that what may be a painless procedure for one patient may be excruciating for another.
Dr. Chin offers a range of options for her patients, including NSAIDs, lorazepam for anxiety, paracervical blocks, lidocaine jelly and spray, intravenous sedation, and general anesthesia. She also talks to her patients through the procedure and provides guided imagery and meditation.
“We should always make sure we’re prioritizing the patients and providing evidence-based, compassionate, and individualized care,” said Dr. Chin. “Each patient comes to us in a particular context and with a specific set of experiences and history that will make a difference in how we’re best able to take care of them.”
The authors reported no disclosures and no sources of funding. Dr. Chin reported no disclosures.
A version of this article first appeared on Medscape.com.
Long-Term Exposure to Road Traffic Noise and Air Pollution Linked to Infertility
TOPLINE:
Long-term exposure to particulate matter < 2.5 μm in diameter (PM2.5) is linked to a higher risk for infertility in men. Exposure to road traffic noise is associated with a higher risk for infertility in women aged > 35 years and possibly in men aged > 37 years.
METHODOLOGY:
- This nationwide prospective cohort study evaluated the association between long-term exposure to road traffic noise and PM2.5 and infertility in 526,056 men (mean age, 33.6 years) and 377,850 women (mean age, 32.7 years) who were cohabiting or married, had fewer than two children, and lived in Denmark between 2000 and 2017.
- Residential exposure to road traffic noise (most exposed facade of the home) and PM2.5 was estimated using validated models and linked to data from national registers.
- Diagnoses of infertility were identified in men and women from the Danish National Patient Register over a mean follow-up of 4.3 years and 4.2 years, respectively.
TAKEAWAY:
- Each 2.9 µg/m³ increase in the 5-year average exposure to PM2.5 was associated with a 24% increase in the risk for infertility in men aged 30-45 years (adjusted hazard ratio [aHR], 1.24); a brief worked example of reading these hazard ratios appears after this list.
- No significant association was found between exposure to PM2.5 and infertility in women.
- Each 10.2 dB increase in the 5-year average exposure to road traffic noise was associated with a 14% increase in the risk for infertility in women aged 35-45 years (aHR, 1.14; 95% CI, 1.10-1.18).
- Exposure to noise was associated with a reduced risk for infertility in men aged 30.0-36.9 years (aHR, 0.93; 95% CI, 0.91-0.96) and an increased risk in those aged 37-45 years (aHR, 1.06; 95% CI, 1.02-1.11).
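As an aid to reading the hazard ratios above, the short sketch below shows how an adjusted hazard ratio maps to the percent changes quoted in these bullets, and how a per-increment ratio compounds across larger exposure differences under the usual log-linear, proportional-hazards assumption. It is a generic worked example, not a reanalysis of the study data.

```python
# Generic worked example: reading hazard ratios as percent changes in risk.

def percent_change(hazard_ratio: float) -> float:
    """Percent increase (or decrease, if negative) implied by a hazard ratio."""
    return (hazard_ratio - 1.0) * 100.0


def scaled_hr(hr_per_increment: float, n_increments: float) -> float:
    """Hazard ratio for several exposure increments, assuming a log-linear effect."""
    return hr_per_increment ** n_increments


print(round(percent_change(1.24), 1))  # 24.0 -> "24% increase" per 2.9 ug/m3 PM2.5 (men 30-45)
print(round(percent_change(1.14), 1))  # 14.0 -> per 10.2 dB road traffic noise (women 35-45)
print(round(percent_change(0.93), 1))  # -7.0 -> a reduced risk (men 30.0-36.9 years)
print(round(scaled_hr(1.14, 2), 2))    # about 1.3: two 10.2 dB increments compound multiplicatively
```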
IN PRACTICE:
“As many Western countries are facing declining birth rates and increasing maternal age at the birth of a first child, knowledge on environmental pollutants affecting fertility is crucial,” the authors of the study wrote. “It suggests that political implementation of air pollution and noise mitigations may be important tools for improving birth rates in the Western world,” they added.
SOURCE:
The study, led by Mette Sorensen, of the Danish Cancer Institute in Copenhagen, Denmark, was published online in The BMJ.
LIMITATIONS:
The study’s reliance on register data meant information on lifestyle factors such as alcohol use, body mass index, and smoking was unavailable. The lack of data on exposure to noise and PM2.5 at work and during leisure activities may affect the size and statistical precision of risk estimates.
DISCLOSURES:
The study did not receive any external funding. The authors declared no relevant conflicts of interest.
A version of this article first appeared on Medscape.com.
FDA Investigates Tampons for Potential Lead and Metal Risks
The FDA has launched an investigation of the potential exposure to heavy metals when using tampons, the agency announced.
The move follows the publication earlier this year of concerning laboratory test results that detected the presence of more than a dozen metals in a variety of popular nonorganic and organic tampon products. That small study was a combined effort by researchers from Columbia University, Michigan State University, and the University of California, Berkeley.
“We want the public to know that before tampons can be legally sold in the US, they must meet FDA requirements for safety and effectiveness. Manufacturers must test the product and its component materials before, during, and after manufacturing,” the FDA wrote in the announcement of its own upcoming study. “Before a product is allowed onto the market, biocompatibility testing is undertaken by the manufacturing company, which is part of safety testing, and is reviewed by the FDA prior to market authorization.”
There will be two studies, the FDA said. One of the studies will involve laboratory tests to evaluate metals in tampons and potential exposure people may experience when using them. The other study will be a review of current research regarding the health effects of metals that may be found in tampons.
The earlier study, published by the journal Environment International, found levels of lead in every product the researchers tested and detectable levels of more than a dozen other metals like arsenic and cadmium.
The researchers tested 24 tampon products from a range of major brands as well as store brands. The tampons were purchased at stores and online between September 2022 and March 2023. Metal content tended to differ by whether or not a product was labeled as organic, the researchers reported. Lead concentrations were higher in nonorganic tampons, and organic tampons had higher levels of arsenic.
There is no safe level of lead exposure, the US Environmental Protection Agency says, and the effects are cumulative throughout the course of life. The study authors noted that the average age that girls begin menstruation is 12 years old, and the onset of menopause occurs, on average, at age 51. One study mentioned by the researchers estimated that between 52% and 86% of people who menstruate use tampons.
The FDA plans a more expansive set of analyses than the earlier study, the agency announced.
“While the study found metals in some tampons, the study did not test whether metals are released from tampons when used. It also did not test for metals being released, absorbed into the vaginal lining, and getting into the bloodstream during tampon use,” the FDA announcement stated. “The FDA’s laboratory study will measure the amount of metals that come out of tampons under conditions that more closely mimic normal use.”
The absorbent materials in tampons, like cotton, rayon, and viscose, are potential sources of the metals. Cotton plants are particularly known to readily take up metals from the soil, although there are other ways that metals may enter the products, like during the manufacturing process.
Exposure to metals found in the initial analysis can affect a range of body systems and processes, including the brain, the kidneys, the heart, blood, and the reproductive and immune systems.
The vagina, the researchers noted, is highly permeable, and substances absorbed there enter the body’s circulatory system without first being metabolized or filtered through the liver.
The FDA announcement did not specify a timeframe for the completion of its investigation.
A version of this article first appeared on WebMD.
Can AI Improve Cardiomyopathy Detection in Pregnant Women?
TOPLINE:
Artificial intelligence (AI)–guided screening using digital stethoscopes doubled the detection of left ventricular systolic dysfunction (LVSD) in pregnant and postpartum women in Nigeria. Cardiomyopathy during pregnancy and post partum is challenging to diagnose because of symptom overlap with normal pregnancy changes. AI-guided screening showed a significant improvement in diagnosis rates, compared with usual care.
METHODOLOGY:
- Researchers conducted an open-label, randomized clinical trial involving 1232 pregnant and postpartum women in Nigeria.
- Participants were randomized to either AI-guided screening using digital stethoscopes and 12-lead ECGs or usual care.
- The primary outcome was the identification of LVSD confirmed by echocardiography.
- Secondary outcomes were AI model performance across subgroups and the effectiveness of AI in identifying various levels of LVSD.
TAKEAWAY:
- AI-guided screening using digital stethoscopes detected LVSD in 4.1% of participants, compared with 2.0% of controls (P = .032); an illustrative calculation of this comparison appears after this list.
- The 12-lead AI-ECG model detected LVSD in 3.4% of participants in the intervention arm, compared with 2.0% of those in the control arm (P = .125).
- No serious adverse events related to study participation were reported.
- The study highlighted the potential of AI-guided screening to improve the diagnosis of pregnancy-related cardiomyopathy.
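To make the detection-rate comparison above concrete, the sketch below runs a standard two-proportion (chi-square) comparison under an assumed even split of the 1232 participants between arms. The per-arm counts and the choice of test are assumptions for illustration; the result will not exactly reproduce the P value reported by the trial.

```python
# Illustrative two-proportion comparison (assumed arm sizes; not the trial's own analysis).
from scipy.stats import chi2_contingency

n_per_arm = 616  # assumption: 1232 participants split roughly evenly between arms
lvsd_ai = round(0.041 * n_per_arm)     # ~25 LVSD cases in the AI-guided screening arm
lvsd_usual = round(0.020 * n_per_arm)  # ~12 LVSD cases in the usual-care arm

table = [
    [lvsd_ai, n_per_arm - lvsd_ai],
    [lvsd_usual, n_per_arm - lvsd_usual],
]
chi2, p_value, _, _ = chi2_contingency(table)
print(f"LVSD detected: {lvsd_ai}/{n_per_arm} vs {lvsd_usual}/{n_per_arm} (p ~ {p_value:.3f})")
```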
IN PRACTICE:
“Delays in the diagnosis of cardiomyopathy during the peripartum period is associated with poorer outcomes as such, it is imperative that we are able to identify cardiac dysfunction early so that appropriate care can be initiated to reduce associated adverse maternal and infant outcomes,” wrote the authors of the study.
SOURCE:
This study was led by Demilade A. Adedinsewo, MBChB, of the Mayo Clinic in Jacksonville, Florida. It was published online in Nature Medicine.
LIMITATIONS:
The study’s pragmatic design and enrollment at teaching hospitals with echocardiography capabilities limited generalizability. Two thirds of participants were in the third trimester or postpartum at study entry, which limited follow-up visits. The study did not require completion of all seven visits, which led to potential attrition bias. The selected cutoff for LVSD (left ventricular ejection fraction < 50%) did not match the original model specifications, which potentially affected results.
DISCLOSURES:
Dr. Adedinsewo disclosed receiving grants from the Mayo Clinic BIRCWH program funded by the National Institutes of Health. Two coauthors reported holding patents for AI algorithms licensed to Anumana, AliveCor, and Eko Health. Additional disclosures are noted in the original article.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
A Simple Blood Test May Predict Cancer Risk in T2D
TOPLINE:
Higher baseline levels of the inflammatory marker interleukin 6 (IL-6) are associated with an increased risk for obesity-related cancers in adults recently diagnosed with type 2 diabetes (T2D), potentially enabling the identification of higher-risk individuals through a simple blood test.
METHODOLOGY:
- T2D is associated with an increased risk for obesity-related cancers, including breast, renal, uterine, thyroid, ovarian, and gastrointestinal cancers, as well as multiple myeloma, possibly because of chronic low-grade inflammation.
- Researchers explored whether the markers of inflammation IL-6, tumor necrosis factor alpha (TNF-alpha), and high-sensitivity C-reactive protein (hsCRP) can serve as predictive biomarkers for obesity-related cancers in patients recently diagnosed with T2D.
- They identified patients with recent-onset T2D and no prior history of cancer participating in the ongoing Danish Centre for Strategic Research in Type 2 Diabetes cohort study.
- At study initiation, plasma levels of IL-6 and TNF-alpha were measured using Meso Scale Discovery assays, and serum levels of hsCRP were measured using immunofluorometric assays.
TAKEAWAY:
- Among 6,466 eligible patients (40.5% women; median age, 60.9 years), 327 developed obesity-related cancers over a median follow-up of 8.8 years.
- Each SD increase in log-transformed IL-6 levels was associated with a 19% increase in the risk for obesity-related cancers; a short worked example of this per-SD scaling appears after this list.
- The researchers did not find a strong association between TNF-alpha or hsCRP and obesity-related cancers.
- The addition of baseline IL-6 levels to other well-known risk factors for obesity-related cancers improved the performance of a cancer prediction model from 0.685 to 0.693, translating to a small but important increase in the ability to predict whether an individual would develop one of these cancers.
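For context on the per-SD finding above, the sketch below shows how a hazard ratio expressed per standard deviation of a log-transformed marker compounds when the marker is several standard deviations higher, assuming the usual log-linear proportional-hazards relationship. The numbers are a generic illustration, not a reanalysis of the cohort.

```python
# Illustration: a 19% higher risk per SD of log-IL-6 compounds multiplicatively.
hr_per_sd = 1.19  # from the takeaway above: +19% risk per SD of log-transformed IL-6

for n_sd in (1, 2, 3):
    hr = hr_per_sd ** n_sd
    print(f"{n_sd} SD higher log-IL-6 -> HR about {hr:.2f} ({(hr - 1) * 100:.0f}% higher risk)")

# Because IL-6 was log-transformed, one SD corresponds to multiplying the raw
# concentration by a fixed factor (exp of the SD of the log values), which
# depends on the cohort's distribution; that factor is not reported above.
```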
IN PRACTICE:
“In future, a simple blood test could identify those at higher risk of the cancers,” said the study’s lead author in an accompanying press release.
SOURCE:
The study was led by Mathilde D. Bennetsen, Steno Diabetes Center Odense, Odense University Hospital, Odense, Denmark, and published online on August 27 as an early release from the European Association for the Study of Diabetes (EASD) 2024 Annual Meeting.
LIMITATIONS:
No limitations were discussed in this abstract. However, the reliance on registry data may have introduced potential biases related to data accuracy and completeness.
DISCLOSURES:
The Danish Centre for Strategic Research in Type 2 Diabetes was supported by grants from the Danish Agency for Science and the Novo Nordisk Foundation. The authors declared no conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.