MRD: Powerful metric for CLL research
“MRD measurement is now a key feature of CLL clinical trials reporting. It can change CLL care by enabling approval of medication use in the wider (nontrial) patient population based on MRD data, without having to wait (ever-increasing) times for conventional trial outcomes, such as progression-free survival [PFS],” said study author Talha Munir, MD, of the department of hematology at the Leeds (England) Teaching Hospitals of the National Health Service Trust.
“It also has potential to direct our treatment duration and follow-up strategies based on MRD results taken during or at the end of treatment, and to direct new treatment strategies, such as intermittent (as opposed to fixed-duration or continuous) treatment,” Dr. Munir said in an interview.
The review study defined MRD according to the detectable proportion of residual CLL cells. (The current international consensus threshold for undetectable disease, U-MRD4, is fewer than 1 leukemic cell in 10,000 leukocytes.) The advantages and disadvantages of different MRD assays were analyzed. Multiparameter flow cytometry, an older technology, proved less sensitive than newer tests. It reliably measures to a sensitivity of U-MRD4 and is more widely available than next-generation sequencing (NGS) and next-generation real-time quantitative polymerase chain reaction (NG-PCR) tests.
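Restated as detection thresholds (my notation, not the review’s):

\[
\text{U-MRD4}:\ \frac{\text{residual CLL cells}}{\text{leukocytes}} < \frac{1}{10^{4}} = 10^{-4},
\qquad
\text{NGS-based assays}:\ \text{down to } \frac{1}{10^{6}} = 10^{-6}
\]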
“NGS has the most potential for use in laboratory practice. It doesn’t require patient-specific primers and can detect around 1 CLL cell in 1×10⁶ leukocytes. The biggest challenge is laboratory sequencing and bioinformatic capacity,” said lead study author Amelia Fisher, clinical research fellow at the division of cancer studies and pathology, University of Leeds.
“Multiple wells are required to gather adequate data to match the sensitivity of NGS. As this technology improves to match NGS sensitivity using fewer wells, once primers (bespoke to each patient) are designed it will provide a simple to use, rapid and easily reportable MRD tool, that could be scaled up in the event of MRD testing becoming routine practice,” explained Dr. Fisher.
The study also demonstrated how MRD can offer more in-depth insights into the success of treatments than PFS alone. In the MURANO clinical trial, which compared venetoclax-rituximab (VR) treatment with standard chemoimmunotherapy (SC) to treat relapsed or refractory CLL, PFS and overall survival (OS) remained significantly prolonged in the VR group at 5 years after therapy.
Analysis of MRD levels in the VR arm demonstrated that those with U-MRD4 had superior OS, with survival at 5 years of 95.3%, compared with those with higher rates of MRD (72.9%). A slower rate of MRD doubling time in the VR-treated patients, compared with the SC-treated patients, also buttressed the notion of moving from SC to VR treatment for the general CLL patient population.
Researchers cautioned that “a lot of the data is very recent, and therefore we do not have conventional trial outcomes, e.g., PFS and OS for all the studies. Some of the data we have is over a relatively short time period.”
An independent expert not associated with the study, Alessandra Ferrajoli, MD, associate medical director of the department of leukemia at the University of Texas MD Anderson Cancer Center, Houston, expressed agreement with the study’s main findings.
“It is very likely that MRD assessment will be incorporated as a standard measurement of treatment efficacy in patients with CLL in the near future. The technologies have evolved to high levels of sensitivity, and the methods are being successfully harmonized and standardized,” she said.
Neither the study authors nor Dr. Ferrajoli reported conflicts of interest.
FROM FRONTIERS IN ONCOLOGY
Cervical screening often stops at 65, but should it?
“Did you love your wife?” asks a character in “Rose,” a book by Martin Cruz Smith.
“No, but she became a fact through perseverance,” the man replied.
Medicine also has such relationships, it seems – tentative ideas that turned into fact simply by existing long enough.
Age 65 as the cutoff for cervical screening may be one such example. It has existed for 27 years with limited science to back it up. That may soon change with the launch of a $3.3 million study that is being funded by the National Institutes of Health (NIH). The study is intended to provide a more solid foundation for the benefits and harms of cervical screening for women older than 65.
It’s an important issue: 20% of all cervical cancer cases are found in women who are older than 65. Most of these patients have late-stage disease, which can be fatal. In the United States, 35% of cervical cancer deaths occur after age 65. But women in this age group are usually no longer screened for cervical cancer.
Back in 1996, the U.S. Preventive Services Task Force recommended that for women at average risk with adequate prior screening, cervical screening should stop at the age of 65. This recommendation has been carried forward year after year and has been incorporated into several other guidelines.
For example, current guidelines from the American Cancer Society, the American College of Obstetricians and Gynecologists, and the USPSTF recommend that cervical screening stop at age 65 for patients with adequate prior screening.
“Adequate screening” is defined as three consecutive normal Pap tests or two consecutive negative human papillomavirus tests or two consecutive negative co-tests within the prior 10 years, with the most recent screening within 5 years and with no precancerous lesions in the past 25 years.
This all sounds reasonable; however, for most women, medical records aren’t up to the task of providing a clean bill of cervical health over many decades.
Explained Sarah Feldman, MD, an associate professor in obstetrics, gynecology, and reproductive biology at Harvard Medical School, Boston: “You know, when a patient says to me at 65, ‘Should I continue screening?’ I say, ‘Do you have all your results?’ And they’ll say, ‘Well, I remember I had a sort of abnormal pap 15 years ago,’ and I say, ‘All right; well, who knows what that was?’ So I’ll continue screening.”
According to George Sawaya, MD, professor of obstetrics, gynecology, and reproductive sciences at the University of California, San Francisco, up to 60% of women do not meet the criteria to end screening at age 65. This means that each year in the United States, approximately 1.7 million women turn 65 and should, in theory, continue to undergo screening for cervical cancer.
Unfortunately, the evidence base for the harms and benefits of cervical screening after age 65 is almost nonexistent – at least by the current standards of evidence-based medicine.
“We need to be clear that we don’t really know the appropriateness of the screening after 65,” said Dr. Sawaya, “which is ironic, because cervical cancer screening is probably the most commonly implemented cancer screening test in the country because it starts so early and ends so late and it’s applied so frequently.”
Dr. Feldman agrees that the age 65 cutoff is “somewhat arbitrary.” She said, “Why don’t they want to consider it continuing past 65? I don’t really understand, I have to be honest with you.”
So what’s the scientific evidence backing up the 27-year-old recommendation?
In 2018, the USPSTF’s cervical-screening guidelines concluded “with moderate certainty that the benefits of screening in women older than 65 years who have had adequate prior screening and are not otherwise at high risk for cervical cancer do not outweigh the potential harms.”
This recommendation was based on a new decision model commissioned by the USPSTF. The model was needed because, as noted by the guidelines’ authors, “None of the screening trials enrolled women older than 65 years, so direct evidence on when to stop screening is not available.”
In 2020, the ACS carried out a fresh literature review and published its own recommendations. The ACS concluded that “the evidence for the effectiveness of screening beyond age 65 is limited, based solely on observational and modeling studies.”
As a result, the ACS assigned a “qualified recommendation” to the age-65 moratorium (defined as “less certainty about the balance of benefits and harms or about patients’ values and preferences”).
Most recently, the 2021 Updated Cervical Cancer Screening Guidelines, published by the American College of Obstetricians and Gynecologists, endorsed the recommendations of the USPSTF.
Dr. Sawaya said, “The whole issue about screening over 65 is complicated from a lot of perspectives. We don’t know a lot about the safety. We don’t really know a lot about patients’ perceptions of it. But we do know that there has to be an upper age limit after which screening is just simply imprudent.”
Dr. Sawaya acknowledges that there exists a “heck-why-not” attitude toward cervical screening after 65 among some physicians, given that the tests are quick and cheap and could save a life, but he sounds a note of caution.
“It’s like when we used to use old cameras: the film was cheap, but the developing was really expensive,” Dr. Sawaya said. “So it’s not necessarily about the tests being cheap, it’s about the cascade of events [that follow].”
Follow-up for cervical cancer can be more hazardous for a postmenopausal patient than for a younger woman, explained Dr. Sawaya, because the transformation zone of the cervix may be difficult to see on colposcopy. Instead of a straightforward 5-minute procedure in the doctor’s office, the older patient may need the operating room simply to provide the first biopsy.
In addition, treatments such as cone biopsy, loop excision, or ablation are also more worrying for older women, said Dr. Sawaya, “So you start thinking about the risks of anesthesia, you start thinking about the risks of bleeding and infection, etc. And these have not been well described in older people.”
To add to the uncertainty about the merits and risks of hunting out cervical cancer in older women, a lot has changed in women’s health since 1996.
Explained Dr. Sawaya, “This stake was put in the ground in 1996, ... but since that time, life expectancy has gained 5 years. So a logical person would say, ‘Oh, well, let’s just say it should be 70 now, right?’ [But] can we even use old studies to inform the current cohort of women who are entering this 65-year-and-older age group?”
To answer all these questions, a 5-year, $3.3 million study funded by the NIH through the National Cancer Institute is now underway.
The project, named Comparative Effectiveness Research to Validate and Improve Cervical Cancer Screening (CERVICCS 2), will be led by Dr. Sawaya and Michael Silverberg, PhD, associate director of the Behavioral Health, Aging and Infectious Diseases Section of Kaiser Permanente Northern California’s Division of Research.
It’s not possible to conduct a true randomized controlled trial in this field of medicine for ethical reasons, so CERVICCS 2 will emulate a randomized study by following the fate of approximately 280,000 women older than 65 who were long-term members of two large health systems during 2005-2022 – both before and after the crucial age-65 cutoff.
The California study will also look at the downsides of diagnostic procedures and surgical interventions that follow a positive screening result after the age of 65 and the personal experiences of the women involved.
Dr. Sawaya and Dr. Silverberg’s team will use software that emulates a clinical trial by utilizing observational data to compare the benefits and risks of screening continuation or screening cessation after age 65.
In effect, after 27 years of loyalty to a recommendation supported by low-quality evidence, medicine will finally have a reliable answer to the question: Should we continue to look for cervical cancer in women over 65?
Dr. Sawaya concluded: “There’s very few things that are packaged away and thought to be just the truth. And this is why we always have to be vigilant. ... And that’s what keeps science so interesting and exciting.”
Dr. Sawaya has disclosed no relevant financial relationships. Dr. Feldman writes for UpToDate and receives several NIH grants.
A version of this article first appeared on Medscape.com.
Magnesium-rich diet linked to lower dementia risk
Investigators studied more than 6,000 cognitively healthy individuals, aged 40-73, and found that those who consumed more than 550 mg of magnesium daily had a brain age approximately 1 year younger by age 55 years, compared with a person who consumed a normal magnesium intake (~360 mg per day).
“This research highlights the potential benefits of a diet high in magnesium and the role it plays in promoting good brain health,” lead author Khawlah Alateeq, a PhD candidate in neuroscience at Australian National University’s National Centre for Epidemiology and Population Health, said in an interview.
Clinicians “can use [the findings] to counsel patients on the benefits of increasing magnesium intake through a healthy diet and monitoring magnesium levels to prevent deficiencies,” she stated.
The study was published online in the European Journal of Nutrition.
Promising target
The researchers were motivated to conduct the study because of “the growing concern over the increasing prevalence of dementia,” Ms. Alateeq said.
“Since there is no cure for dementia, and the development of pharmacological treatment for dementia has been unsuccessful over the last 30 years, prevention has been suggested as an effective approach to address the issue,” she added.
Nutrition, Ms. Alateeq said, is a “modifiable risk factor that can influence brain health and is highly amenable to scalable and cost-effective interventions.” It represents “a promising target” for risk reduction at a population level.
Previous research shows individuals with lower magnesium levels are at higher risk for AD, while those with higher dietary magnesium intake may be at lower risk of progressing from normal aging to cognitive impairment.
Most previous studies, however, included participants older than age 60 years, and it’s “unclear when the neuroprotective effects of dietary magnesium become detectable,” the researchers note.
Moreover, dietary patterns change and fluctuate, potentially leading to changes in magnesium intake over time. These changes may have as much impact as absolute magnesium intake at any single point in time.
In light of the “current lack of understanding of when and to what extent dietary magnesium exerts its protective effects on the brain,” the researchers examined the association between magnesium intake trajectories over time, brain volumes, and white matter lesions.
They also examined the association between magnesium and several different blood pressure measures (mean arterial pressure, systolic blood pressure, diastolic blood pressure, and pulse pressure).
Since cardiovascular health, neurodegeneration, and brain shrinkage patterns differ between men and women, the researchers stratified their analyses by sex.
Brain volume differences
The researchers analyzed the dietary magnesium intake of 6,001 individuals (mean age, 55.3 years) selected from the UK Biobank – a prospective cohort study of participants aged 37-73 at baseline, who were assessed between 2005 and 2023.
For the current study, only participants with baseline DBP and SBP measurements and structural MRI scans were included. Participants were also required to be free of neurologic disorders and to have an available record of dietary magnesium intake.
Covariates included age, sex, education, health conditions, smoking status, body mass index, amount of physical activity, and alcohol intake.
Over a 16-month period, participants completed an online questionnaire five times. Their responses were used to calculate daily magnesium intake. Foods of particular interest included leafy green vegetables, legumes, nuts, seeds, and whole grains, all of which are magnesium rich.
They used latent class analysis (LCA) to “identify mutually exclusive subgroup (classes) of magnesium intake trajectory separately for men and women.”
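The study’s LCA code isn’t shown; as a rough illustrative sketch only (simulated data, with scikit-learn’s GaussianMixture used as a simple stand-in for a dedicated latent class/trajectory package), grouping five-wave intake trajectories into classes could look like this:

```python
# Illustrative sketch only -- not the study's actual analysis. It mimics the idea
# of latent class analysis on repeated magnesium-intake measurements by fitting a
# Gaussian mixture over per-person intake trajectories (5 questionnaire waves).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Simulate 5 intake measurements (mg/day) for 300 hypothetical participants,
# drawn from three trajectory patterns: high-decreasing, normal-stable, low-increasing.
high_dec = 550 + np.linspace(0, -60, 5) + rng.normal(0, 20, (100, 5))
normal_st = 350 + np.zeros(5) + rng.normal(0, 20, (100, 5))
low_inc = 250 + np.linspace(0, 60, 5) + rng.normal(0, 20, (100, 5))
X = np.vstack([high_dec, normal_st, low_inc])

# Fit a 3-class mixture; each row's class assignment is its "trajectory class".
gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
classes = gmm.predict(X)
for k in range(3):
    print(f"class {k}: n={np.sum(classes == k)}, mean trajectory={gmm.means_[k].round(0)}")
```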
Men had a slightly higher prevalence of BP medication and diabetes, compared with women, and postmenopausal women had a higher prevalence of BP medication and diabetes, compared with premenopausal women.
Compared with lower baseline magnesium intake, higher baseline dietary intake of magnesium was associated with larger brain volumes in several regions in both men and women.
The latent class analysis identified three classes of magnesium intake trajectory: high-decreasing, normal-stable, and low-increasing.
In women in particular, the “high-decreasing” trajectory was significantly associated with larger brain volumes, compared with the “normal-stable” trajectory, while the “low-increasing” trajectory was associated with smaller brain volumes.
Even an increase of 1 mg of magnesium per day (above 350 mg/day) made a difference in brain volume, especially in women.
Associations between magnesium and BP measures were “mostly nonsignificant,” the researchers say, and the neuroprotective effect of higher magnesium intake in the high-decreasing trajectory was greater in postmenopausal versus premenopausal women.
“Our models indicate that compared to somebody with a normal magnesium intake (~350 mg per day), somebody in the top quartile of magnesium intake (≥ 550 mg per day) would be predicted to have a ~0.20% larger GM [gray matter volume] and ~0.46% larger RHC [right hippocampal volume],” the authors summarize.
“In a population with an average age of 55 years, this effect corresponds to ~1 year of typical aging,” they note. “In other words, if this effect is generalizable to other populations, a 41% increase in magnesium intake may lead to significantly better brain health.”
Although the exact mechanisms underlying magnesium’s protective effects are “not yet clearly understood, there’s considerable evidence that magnesium levels are related to better cardiovascular health. Magnesium supplementation has been found to decrease blood pressure – and high blood pressure is a well-established risk factor for dementia,” said Ms. Alateeq.
Association, not causation
Yuko Hara, PhD, director of Aging and Prevention, Alzheimer’s Drug Discovery Foundation, noted that the study is observational and therefore shows an association, not causation.
“People eating a high-magnesium diet may also be eating a brain-healthy diet and getting high levels of nutrients/minerals other than magnesium alone,” suggested Dr. Hara, who was not involved with the study.
She noted that many foods are good sources of magnesium, including spinach, almonds, cashews, legumes, yogurt, brown rice, and avocados.
“Eating a brain-healthy diet (for example, the Mediterranean diet) is one of the Seven Steps to Protect Your Cognitive Vitality that ADDF’s Cognitive Vitality promotes,” she said.
Open Access funding was enabled and organized by the Council of Australian University Librarians and its Member Institutions. Ms. Alateeq, her co-authors, and Dr. Hara declare no relevant financial relationships.
A version of this article originally appeared on Medscape.com.
FROM EUROPEAN JOURNAL OF NUTRITION
Autism: Is it in the water?
This transcript has been edited for clarity.
Few diseases have stymied explanation like autism spectrum disorder (ASD). We know that the prevalence has been increasing dramatically, but we aren’t quite sure whether that is because of more screening and awareness or more fundamental changes. We know that much of the risk appears to be genetic, but there may be 1,000 genes involved in the syndrome. We know that certain environmental exposures, like pollution, might increase the risk – perhaps on a susceptible genetic background – but we’re not really sure which exposures are most harmful.
So, the search continues, across all domains of inquiry from cell culture to large epidemiologic analyses. And this week, a new player enters the field, and, as they say, it’s something in the water.
We’re talking about this paper, by Zeyan Liew and colleagues, appearing in JAMA Pediatrics.
Using the incredibly robust health data infrastructure in Denmark, the researchers were able to identify 8,842 children born between 2000 and 2013 with ASD and matched each one to five control kids of the same sex and age without autism.
They then mapped the location the mothers of these kids lived while they were pregnant – down to 5-meter resolution, actually – to groundwater lithium levels.
Once that was done, the analysis was straightforward. Would moms who were pregnant in areas with higher groundwater lithium levels be more likely to have kids with ASD?
The results show a rather steady and consistent association between higher lithium levels in groundwater and the prevalence of ASD in children.
We’re not talking huge numbers, but moms who lived in the areas of the highest quartile of lithium were about 46% more likely to have a child with ASD. That’s a relative risk, of course – this would be like an increase from 1 in 100 kids to 1.5 in 100 kids. But still, it’s intriguing.
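To make the arithmetic explicit (using the illustrative 1-in-100 baseline): being 46% more likely means multiplying the baseline risk by about 1.46, so 1.46 × 1 in 100 ≈ 1.5 in 100.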
But the case is far from closed here.
Groundwater concentration of lithium and the amount of lithium a pregnant mother ingests are not the same thing. It does turn out that virtually all drinking water in Denmark comes from groundwater sources – but not all lithium comes from drinking water. There are plenty of dietary sources of lithium as well. And, of course, there is medical lithium, but we’ll get to that in a second.
First, let’s talk about those lithium measurements. They were taken in 2013 – after all these kids were born. The authors acknowledge this limitation but show a high correlation between measured levels in 2013 and earlier measured levels from prior studies, suggesting that lithium levels in a given area are quite constant over time. That’s great – but if lithium levels are constant over time, this study does nothing to shed light on why autism diagnoses seem to be increasing.
Let’s put some numbers to the lithium concentrations the authors examined. The average was about 12 mcg/L.
As a reminder, a standard therapeutic dose of lithium used for bipolar disorder is like 600 mg. That means you’d need to drink more than 2,500 of those 5-gallon jugs that sit on your water cooler, per day, to approximate the dose you’d get from a lithium tablet. Of course, small doses can still cause toxicity – but I wanted to put this in perspective.
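To put numbers on that claim (a quick back-of-the-envelope check, assuming a 5-gallon jug holds about 18.9 L):

600 mg of lithium = 600,000 mcg
600,000 mcg ÷ 12 mcg/L = 50,000 L of groundwater
50,000 L ÷ 18.9 L per jug ≈ 2,600 jugs per day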
Also, we have some data on pregnant women who take medical lithium. An analysis of nine studies showed that first-trimester lithium use may be associated with congenital malformations – particularly some specific heart malformations – and some birth complications. But three of four separate studies looking at longer-term neurodevelopmental outcomes did not find any effect on development, attainment of milestones, or IQ. One study of 15 kids exposed to medical lithium in utero did note minor neurologic dysfunction in one child and a low verbal IQ in another – but that’s a very small study.
Of course, lithium levels vary around the world as well. The U.S. Geological Survey examined lithium content in groundwater in the United States, as you can see here.
Our numbers are pretty similar to Denmark’s – in the 0-60 mcg/L range. But an area in the Argentine Andes has levels as high as 1,600 mcg/L. A study of 194 babies from that area found that higher lithium exposure was associated with lower fetal size, but I haven’t seen follow-up on neurodevelopmental outcomes.
The point is that there is a lot of variability here. It would be really interesting to map groundwater lithium levels to autism rates around the world. As a teaser, I will point out that, if you look at worldwide autism rates, you may be able to convince yourself that they are higher in more arid climates, and arid climates tend to have more groundwater lithium. But I’m really reaching here. More work needs to be done.
And I hope it is done quickly. Lithium is in the midst of becoming a very important commodity thanks to the shift to electric vehicles. While we can hope that recycling will claim most of those batteries at the end of their life, some will escape reclamation and potentially put more lithium into the drinking water. I’d like to know how risky that is before it happens.
F. Perry Wilson, MD, MSCE, is an associate professor of medicine and director of Yale’s Clinical and Translational Research Accelerator. He has disclosed no relevant financial relationships. His science communication work can be found in the Huffington Post, on NPR, and here on Medscape. He tweets @fperrywilson and his new book, “How Medicine Works and When It Doesn’t”, is available now.
A version of this article originally appeared on Medscape.com.
The sacrifice of orthodoxy: Maintaining collegiality in psychiatry
Psychiatrists practice in a wide array of ways. We approach our work and our patients with beliefs and preconceptions that develop over time. Our training has significant influence, though our own personalities and biases also affect our understanding.
Psychiatrists have philosophical lenses through which they see patients. We can reflect and see some standard archetypes. We are familiar with the reductionistic pharmacologist, the somatic treatment specialist, the psychodynamic ‘guru,’ and the medicolegally paralyzed practitioner. It is without judgment that we lay these out, for our very point is that we have these constituent parts within our own clinical identities. The intensity with which we subscribe to these clinical sensibilities could contribute to a biased orthodoxy.
Orthodoxy can be defined as an accepted theory that stems from an authoritative entity. This is a well-known phenomenon that continues to be visible. For example, one can quickly peruse psychodynamic literature to find one school of thought criticizing another. It is not without some confrontation and even interpersonal rifts that the lineage of psychoanalytic theory has evolved. This has always been of interest to us. A core facet of psychoanalysis is empathy, truly knowing the inner state of a different person. And yet, the very bastions of this clinical sensibility frequently resort to veiled attacks on those in their field who have opposing views. This raises the question: If even enlightened institutions fail at a nonjudgmental approach toward their colleagues, what hope is there for the rest of us clinicians, mired in the thick of day-to-day clinical practice?
It is our contention that the odds are against us. Even the aforementioned critique of psychoanalytic orthodoxy is just another example of how we humans organize our experience. Even as we write an article in argument against unbridled critique, we find it difficult to do so without engaging in it. For to criticize another is to help shore up our own personal identities. This is especially the case when clinicians deal with issues that we feel strongly about. The human psyche has a need to organize its experience, as “our experience of ourselves is fundamental to how we operate in the world. Our subjective experience is the phenomenology of all that one might be aware of.”1
In this vein, we would like to cite attribution theory. This is a view of human behavior within social psychology. The Austrian psychologist Fritz Heider, PhD, investigated “the domain of social interactions, wondering how people perceive each other in interaction and especially how they make sense of each other’s behavior.”2 Attribution theory suggests that as humans organize our social interactions, we may make two basic assumptions. One is that our own behavior is highly affected by an environment that is beyond our control. The second is that when judging the behavior of others, we are more likely to attribute it to internal traits that they have. A classic example is automobile traffic. When we see someone driving erratically, we are more likely to blame them for being an inherently bad driver. However, if attention is called to our own driving, we are more likely to cite external factors such as rush hour, a bad driver around us, or a faulty vehicle.
We would like to reference one last model of human behavior. It has become customary within the field of neuroscience to view the brain as a predictive organ: “Theories of prediction in perception, action, and learning suggest that the brain serves to reduce the discrepancies between expectation and actual experience, i.e., by reducing the prediction error.”3 Perception itself has recently been described as a controlled hallucination, where the brain makes predictions of what it thinks it is about to see based on past experiences. Visual stimuli ultimately take time to enter our eyes and be processed in the brain – “predictions would need to preactivate neural representations that would typically be driven by sensory input, before the actual arrival of that input.”4 It thus seems to be an inherent method of the brain to anticipate visual and even social events to help human beings sustain themselves.
Having spoken of a psychoanalytic conceptualization of self-organization, the theory of attribution, and research into social neuroscience, we turn our attention back to the central question that this article would like to address.
When we find ourselves busy in rote clinical practice, we believe the likelihood of intercollegial mentalization is low; our ability to relate to our peers becomes strained. We ultimately do not practice in a vacuum. Psychiatrists, even those in a solo private practice, are ultimately part of a community of providers who, more or less, follow some emergent ‘standard of care.’ This can be a vague concept, but one that takes on a concrete form in the minds of certain clinicians and certainly in the setting of a medicolegal court. Yet, the psychiatrists that we know all have very stereotyped ways of practice. And at the heart of it, we all think that we are right.
We can use polypharmacy as an example. Imagine that you have a new patient intake, who tells you that they are transferring care from another psychiatrist. They inform you of their medication regimen. This patient presents on eight or more psychotropics. Many of us may have a visceral reaction at this point and, following the aforementioned attribution theory, we may ask ourselves what ‘quack’ of a doctor would do this. Yet some among us would think that a very competent psychopharmacologist was daring enough to use the full armamentarium of psychopharmacology to help this patient, who must be treatment refractory.
When speaking with such a patient, we would be quick to reflect on our own parsimonious use of medications. We would tell ourselves that we are responsible providers and would be quick to recommend discontinuation of medications. This would help us feel better about ourselves, and would of course assuage the ever-present medicolegal ‘big brother’ in our minds. It is through this very process that we affirm our self-identities. For if this patient’s previous physician was a bad psychiatrist, then we are a good psychiatrist. It is through this process that our clinical selves find confirmation.
We do not mean to reduce the complexities of human behavior to quick stereotypes. However, it is our belief that when confronted with clinical or philosophical disputes with our colleagues, the basic rules of human behavior will attempt to dissolve and override efforts at mentalization, collegiality, or interpersonal sensitivity. For to accept a clinical practice view that is different from ours would be akin to giving up the essence of our clinical identities. It could be compared to the fragmentation process of a vulnerable psyche when confronted with a reality that is at odds with preconceived notions and experiences.
While we may be able to appreciate the nuances and sensibilities of another provider, we believe it would be particularly difficult for most of us to actually attempt to practice in a fashion that is not congruent with our own organizers of experience. Whether or not our practice style is ‘perfect,’ it has worked for us. Social neuroscience and our understanding of the organization of the self would predict that we would hold onto our way of practice with all the mind’s defenses. Externalization, denial, and projection could all be called into action in this battle against existential fragmentation.
Do we seek to portray a clinical world where there is no hope for genuine modeling of clinical sensibilities to other psychiatrists? That is not our intention. Yet it seems that many of the theoretical frameworks that we subscribe to argue against this possibility. We would be hypocritical if we did not here state that our own theoretical frameworks are yet other examples of “organizers of experience.” Attribution theory, intersubjectivity, and social neuroscience are simply our ways of organizing the chaos of perceptions, ideas, and intricacies of human behavior.
If we accept that psychiatrists, like all human beings, are trapped in a subjective experience, then we can be more playful and flexible when interacting with our colleagues. We do not have to be as defensive of our practices and accusatory of others. If we practice daily according to some orthodoxy, then we color our experiences of the patient and of our colleagues’ ways of practice. We automatically start off on the wrong foot. And yet, to give up this orthodoxy would, by definition, be disorganizing and fragmenting to us. For as Nietzsche said, “truth is an illusion without which a certain species could not survive.”5
Dr. Khalafian practices full time as a general outpatient psychiatrist. He trained at the University of California, San Diego, for his psychiatric residency and currently works as a telepsychiatrist, serving an outpatient clinic population in northern California. Dr. Badre is a clinical and forensic psychiatrist in San Diego. He holds teaching positions at the University of California, San Diego, and the University of San Diego. He teaches medical education, psychopharmacology, ethics in psychiatry, and correctional care. Dr. Badre can be reached at his website, BadreMD.com. Dr. Badre and Dr. Khalafian have no conflicts of interest.
References
1. Buirski P and Haglund P. Making sense together: The intersubjective approach to psychotherapy. Northvale, NJ: Jason Aronson; 2001.
2. Malle BF. Attribution theories: How people make sense of behavior. In Chadee D (ed.), Theories in social psychology. pp. 72-95. Wiley-Blackwell; 2011.
3. Brown EC and Brune M. The role of prediction in social neuroscience. Front Hum Neurosci. 2012 May 24;6:147. doi: 10.3389/fnhum.2012.00147.
4. Blom T et al. Predictions drive neural representations of visual events ahead of incoming sensory information. Proc Natl Acad Sci USA. 2020 Mar 31;117(13):7510-7515. doi: 10.1073/pnas.1917777117.
5. Yalom I. The Gift of Therapy. Harper Perennial; 2002.
Melasma
THE COMPARISON
A Melasma on the face of a Hispanic woman, with hyperpigmentation on the cheeks, bridge of the nose, and upper lip.
B Melasma on the face of a Malaysian woman, with hyperpigmentation on the upper cheeks and bridge of the nose.
C Melasma on the face of an African woman, with hyperpigmentation on the upper cheeks and lateral to the eyes.
Melasma (also known as chloasma) is a pigmentary disorder that causes chronic symmetric hyperpigmentation on the face. In patients with darker skin tones, centrofacial areas are affected.1 Increased deposition of melanin distributed in the dermis leads to dermal melanosis. Newer research suggests that mast cell and keratinocyte interactions, altered gene regulation, neovascularization, and disruptions in the basement membrane cause melasma.2 Patients present with epidermal or dermal melasma or a combination of both (mixed melasma).3 Wood lamp examination is helpful to distinguish between epidermal and dermal melasma. Dermal and mixed melasma can be difficult to treat and require multimodal treatments.
Epidemiology
Melasma commonly affects women aged 20 to 40 years,4 with a female to male ratio of 9:1.5 Potential triggers of melasma include hormones (eg, pregnancy, oral contraceptives, hormone replacement therapy) and exposure to UV light.2,5 Melasma occurs in patients of all racial and ethnic backgrounds; however, the prevalence is higher in patients with darker skin tones.2
Key clinical features in people with darker skin tones
Melasma commonly manifests as symmetrically distributed, reticulated (lacy), dark brown to grayish brown patches on the cheeks, nose, forehead, upper lip, and chin in patients with darker skin tones.5 The pigment can be tan brown in patients with lighter skin tones. Given that postinflammatory hyperpigmentation and other pigmentary disorders can cause a similar appearance, a biopsy sometimes is needed to confirm the diagnosis, but melasma is diagnosed via physical examination in most patients. Melasma can be misdiagnosed as postinflammatory hyperpigmentation, solar lentigines, exogenous ochronosis, and Hori nevus.5
Worth noting
Prevention
• Daily sunscreen use is critical to prevent worsening of melasma. Sunscreen may not appear cosmetically elegant on darker skin tones, which creates a barrier to its use.6 Protection from both sunlight and visible light is necessary. Visible light, including light from light bulbs and device-emitted blue light, can worsen melasma. Iron oxides in tinted sunscreen offer protection from visible light.
• Physicians can recommend sunscreens that are more transparent or tinted for a better cosmetic match.
• Severe flares of melasma can occur with sun exposure despite good control with medications and laser modalities.
Treatment
• First-line therapies include topical hydroquinone 2% to 4%, tretinoin, azelaic acid, kojic acid, or ascorbic acid (vitamin C). A popular topical compound combines a steroid, tretinoin, and hydroquinone.1,5 Over-the-counter hydroquinone has been removed from the market due to safety concerns; however, it remains first line in the treatment of melasma. If hydroquinone is prescribed, treatment intervals of 6 to 8 weeks followed by a hydroquinone-free period are advised to reduce the risk for exogenous ochronosis (a paradoxical darkening of the skin).
• Chemical peels are second-line treatments that are effective for melasma. Improvement in epidermal melasma has been shown with chemical peels containing Jessner solution, salicylic acid, or α-hydroxy acid. Patients with dermal and mixed melasma have seen improvement with trichloroacetic acid 25% to 35% with or without Jessner solution.1
• Cysteamine is a topical treatment created from the degradation of coenzyme A. It disrupts the synthesis of melanin to create a more even skin tone. It may be recommended in combination with sunscreen as a first-line or second-line topical therapy.
• Oral tranexamic acid is a third-line treatment that is an analogue of lysine. It decreases prostaglandin production, which leads to a lower number of tyrosine precursors available for the creation of melanin. Tranexamic acid has been shown to lighten the appearance of melasma.7 The most common and dangerous adverse effect of tranexamic acid is blood clots, and this treatment should be avoided in those on combination (estrogen and progestin) contraceptives or those with a personal or family history of clotting disorders.8
• Fourth-line treatments such as lasers (performed by dermatologists) can destroy the deposition of pigment while avoiding destruction of epidermal keratinocytes.1,9,10 They also are commonly employed in refractory melasma. The most common lasers are nonablative fractionated lasers and low-fluence Q-switched lasers. The Q-switched Nd:YAG and picosecond lasers are safe for treating melasma in darker skin tones. Ablative fractionated lasers such as CO2 lasers and erbium:YAG lasers also have been used in the treatment of melasma; however, there is still an extremely high risk for postinflammatory dyspigmentation 1 to 2 months after the procedure.10
• Although there is still a risk for rebound hyperpigmentation after laser treatment, use of topical hydroquinone pretreatment may help decrease postoperative hyperpigmentation.1,5 Patients who are treated with the incorrect laser or overtreated may develop postinflammatory hyperpigmentation, rebound hyperpigmentation, or hypopigmentation.
Health disparity highlight
Melasma is a chronic pigmentation disorder that is most common in patients with skin of color and is cosmetically and psychologically burdensome,11 leading to decreased quality of life, emotional functioning, and self-esteem.12 Clinicians should counsel patients and work closely with them on long-term management. The treatment options for melasma are considered cosmetic and may be cost prohibitive for many patients to cover out-of-pocket. Topical treatments have been found to be the most cost-effective.13 Some compounding pharmacies and drug discount programs provide more affordable treatment pricing; however, some patients are still unable to afford these options.
References
1. Cunha PR, Kroumpouzos G. Melasma and vitiligo: novel and experimental therapies. J Clin Exp Derm Res. 2016;7:2. doi:10.4172/2155-9554.1000e106
2. Rajanala S, Maymone MBC, Vashi NA. Melasma pathogenesis: a review of the latest research, pathological findings, and investigational therapies. Dermatol Online J. 2019;25:13030/qt47b7r28c.
3. Grimes PE, Yamada N, Bhawan J. Light microscopic, immunohistochemical, and ultrastructural alterations in patients with melasma. Am J Dermatopathol. 2005;27:96-101.
4. Achar A, Rathi SK. Melasma: a clinico-epidemiological study of 312 cases. Indian J Dermatol. 2011;56:380-382.
5. Ogbechie-Godec OA, Elbuluk N. Melasma: an up-to-date comprehensive review. Dermatol Ther. 2017;7:305-318.
6. Morquette AJ, Waples ER, Heath CR. The importance of cosmetically elegant sunscreen in skin of color populations. J Cosmet Dermatol. 2022;21:1337-1338.
7. Taraz M, Nikham S, Ehsani AH. Tranexamic acid in treatment of melasma: a comprehensive review of clinical studies [published online January 30, 2017]. Dermatol Ther. doi:10.1111/dth.12465
8. Bala HR, Lee S, Wong C, et al. Oral tranexamic acid for the treatment of melasma: a review. Dermatol Surg. 2018;44:814-825.
9. Castanedo-Cazares JP, Hernandez-Blanco D, Carlos-Ortega B, et al. Near-visible light and UV photoprotection in the treatment of melasma: a double-blind randomized trial. Photodermatol Photoimmunol Photomed. 2014;30:35-42.
10. Trivedi MK, Yang FC, Cho BK. A review of laser and light therapy in melasma. Int J Womens Dermatol. 2017;3:11-20.
11. Dodmani PN, Deshmukh AR. Assessment of quality of life of melasma patients as per melasma quality of life scale (MELASQoL). Pigment Int. 2020;7:75-79.
12. Balkrishnan R, McMichael A, Camacho FT, et al. Development and validation of a health-related quality of life instrument for women with melasma. Br J Dermatol. 2003;149:572-577.
13. Alikhan A, Daly M, Wu J, et al. Cost-effectiveness of a hydroquinone/tretinoin/fluocinolone acetonide cream combination in treating melasma in the United States. J Dermatolog Treat. 2010;21:276-281.
THE COMPARISON
A Melasma on the face of a Hispanic woman, with hyperpigmentation on the cheeks, bridge of the nose, and upper lip.
B Melasma on the face of a Malaysian woman, with hyperpigmentation on the upper cheeks and bridge of the nose.
C Melasma on the face of an African woman, with hyperpigmentation on the upper cheeks and lateral to the eyes.
Melasma (also known as chloasma) is a pigmentary disorder that causes chronic symmetric hyperpigmentation on the face. In patients with darker skin tones, centrofacial areas are affected.1 Increased deposition of melanin distributed in the dermis leads to dermal melanosis. Newer research suggests that mast cell and keratinocyte interactions, altered gene regulation, neovascularization, and disruptions in the basement membrane cause melasma.2 Patients present with epidermal or dermal melasma or a combination of both (mixed melasma).3 Wood lamp examination is helpful to distinguish between epidermal and dermal melasma. Dermal and mixed melasma can be difficult to treat and require multimodal treatments.
Epidemiology
Melasma commonly affects women aged 20 to 40 years,4 with a female to male ratio of 9:1.5 Potential triggers of melasma include hormones (eg, pregnancy, oral contraceptives, hormone replacement therapy) and exposure to UV light.2,5 Melasma occurs in patients of all racial and ethnic backgrounds; however, the prevalence is higher in patients with darker skin tones.2
Key clinical features in people with darker skin tones
Melasma commonly manifests as symmetrically distributed, reticulated (lacy), dark brown to grayish brown patches on the cheeks, nose, forehead, upper lip, and chin in patients with darker skin tones.5 The pigment can be tan brown in patients with lighter skin tones. Given that postinflammatory hyperpigmentation and other pigmentary disorders can cause a similar appearance, a biopsy sometimes is needed to confirm the diagnosis, but melasma is diagnosed via physical examination in most patients. Melasma can be misdiagnosed as postinflammatory hyperpigmentation, solar lentigines, exogenous ochronosis, and Hori nevus.5
Worth noting
Prevention
• Daily sunscreen use is critical to prevent worsening of melasma. Sunscreen may not appear cosmetically elegant on darker skin tones, which creates a barrier to its use.6 Protection from both sunlight and visible light is necessary. Visible light, including light from light bulbs and device-emitted blue light, can worsen melasma. Iron oxides in tinted sunscreen offer protection from visible light.
• Physicians can recommend sunscreens that are more transparent or tinted for a better cosmetic match.
• Severe flares of melasma can occur with sun exposure despite good control with medications and laser modalities.
Treatment
• First-line therapies include topical hydroquinone 2% to 4%, tretinoin, azelaic acid, kojic acid, or ascorbic acid (vitamin C). A popular compounded topical combines a corticosteroid, tretinoin, and hydroquinone.1,5 Over-the-counter hydroquinone has been removed from the market due to safety concerns; however, hydroquinone remains a first-line prescription treatment for melasma. If hydroquinone is prescribed, treatment intervals of 6 to 8 weeks followed by a hydroquinone-free period are advised to reduce the risk for exogenous ochronosis (a paradoxical darkening of the skin).
• Chemical peels are second-line treatments that are effective for melasma. Improvement in epidermal melasma has been shown with chemical peels containing Jessner solution, salicylic acid, or α-hydroxy acid. Patients with dermal and mixed melasma have seen improvement with trichloroacetic acid 25% to 35% with or without Jessner solution.1
• Cysteamine is a topical treatment created from the degradation of coenzyme A. It disrupts the synthesis of melanin to create a more even skin tone. It may be recommended in combination with sunscreen as a first-line or second-line topical therapy.
• Oral tranexamic acid, a lysine analogue, is a third-line treatment. It decreases prostaglandin production, which reduces the tyrosine precursors available for melanin synthesis. Tranexamic acid has been shown to lighten the appearance of melasma.7 Its most serious adverse effect is thromboembolism; it should be avoided in patients taking combination (estrogen and progestin) contraceptives and in those with a personal or family history of clotting disorders.8
• Fourth-line treatments such as lasers (performed by dermatologists) can destroy deposited pigment while sparing epidermal keratinocytes.1,9,10 They commonly are employed in refractory melasma. The most common choices are nonablative fractionated lasers and low-fluence Q-switched lasers; the Q-switched Nd:YAG and picosecond lasers are safe for treating melasma in darker skin tones. Ablative fractionated lasers such as CO2 and erbium:YAG lasers also have been used to treat melasma; however, they carry an extremely high risk for postinflammatory dyspigmentation 1 to 2 months after the procedure.10
• Although there is still a risk for rebound hyperpigmentation after laser treatment, use of topical hydroquinone pretreatment may help decrease postoperative hyperpigmentation.1,5 Patients who are treated with the incorrect laser or overtreated may develop postinflammatory hyperpigmentation, rebound hyperpigmentation, or hypopigmentation.
Health disparity highlight
Melasma, most common in patients with skin of color, is a chronic pigmentation disorder that is cosmetically and psychologically burdensome,11 leading to decreased quality of life, emotional functioning, and self-esteem.12 Clinicians should counsel patients and work closely with them on long-term management. Treatment options for melasma are considered cosmetic and often are not covered by insurance, making them cost prohibitive for many patients paying out-of-pocket. Topical treatments have been found to be the most cost-effective.13 Some compounding pharmacies and drug discount programs offer more affordable pricing; however, some patients still are unable to afford these options.
- Cunha PR, Kroumpouzos G. Melasma and vitiligo: novel and experimental therapies. J Clin Exp Derm Res. 2016;7:2. doi:10.4172/2155-9554.1000e106
- Rajanala S, Maymone MBC, Vashi NA. Melasma pathogenesis: a review of the latest research, pathological findings, and investigational therapies. Dermatol Online J. 2019;25:13030/qt47b7r28c.
- Grimes PE, Yamada N, Bhawan J. Light microscopic, immunohistochemical, and ultrastructural alterations in patients with melasma. Am J Dermatopathol. 2005;27:96-101.
- Achar A, Rathi SK. Melasma: a clinico-epidemiological study of 312 cases. Indian J Dermatol. 2011;56:380-382.
- Ogbechie-Godec OA, Elbuluk N. Melasma: an up-to-date comprehensive review. Dermatol Ther. 2017;7:305-318.
- Morquette AJ, Waples ER, Heath CR. The importance of cosmetically elegant sunscreen in skin of color populations. J Cosmet Dermatol. 2022;21:1337-1338.
- Taraz M, Nikham S, Ehsani AH. Tranexamic acid in treatment of melasma: a comprehensive review of clinical studies [published online January 30, 2017]. Dermatol Ther. doi:10.1111/dth.12465
- Bala HR, Lee S, Wong C, et al. Oral tranexamic acid for the treatment of melasma: a review. Dermatol Surg. 2018;44:814-825.
- Castanedo-Cazares JP, Hernandez-Blanco D, Carlos-Ortega B, et al. Near-visible light and UV photoprotection in the treatment of melasma: a double-blind randomized trial. Photodermatol Photoimmunol Photomed. 2014;30:35-42.
- Trivedi MK, Yang FC, Cho BK. A review of laser and light therapy in melasma. Int J Womens Dermatol. 2017;3:11-20.
- Dodmani PN, Deshmukh AR. Assessment of quality of life of melasma patients as per melasma quality of life scale (MELASQoL). Pigment Int. 2020;7:75-79.
- Balkrishnan R, McMichael A, Camacho FT, et al. Development and validation of a health‐related quality of life instrument for women with melasma. Br J Dermatol. 2003;149:572-577.
- Alikhan A, Daly M, Wu J, et al. Cost-effectiveness of a hydroquinone/tretinoin/fluocinolone acetonide cream combination in treating melasma in the United States. J Dermatolog Treat. 2010;21:276-281.
Children ate more fruits and vegetables during longer meals: Study
Adding 10 minutes to family mealtimes increased children’s consumption of fruits and vegetables by approximately one portion, based on data from 50 parent-child dyads.
Family meals are known to affect children’s food choices and preferences and can be an effective setting for improving children’s nutrition, wrote Mattea Dallacker, PhD, of the University of Mannheim, Germany, and colleagues.
However, the effect of extending meal duration on increasing fruit and vegetable intake in particular has not been examined, they said.
In a study published in JAMA Network Open, the researchers provided two free evening meals to 50 parent-child dyads under each of two conditions. The control condition was the family's self-defined regular mealtime duration (mean, 20.83 minutes); the intervention condition extended that duration by 10 minutes (approximately 50%). The parents ranged in age from 22 to 55 years (mean, 43 years), and 72% were mothers. The children ranged in age from 6 to 11 years (mean, 8 years), with approximately equal numbers of boys and girls.
The study was conducted in a family meal laboratory setting in Berlin, and groups were randomized to the longer or shorter meal setting first. The primary outcome was the total number of pieces of fruit and vegetables eaten by the child as part of each of the two meals.
Both meals were the “typical German evening meal of sliced bread, cold cuts of cheese and meat, and bite-sized pieces of fruits and vegetables,” followed by a dessert course of chocolate pudding or fruit yogurt and cookies, the researchers wrote. Beverages were water and one sugar-sweetened beverage; the specific foods and beverages were based on the child’s preferences, reported in an online preassessment, and the foods were consistent for the longer and shorter meals. All participants were asked not to eat for 2 hours prior to arriving for their meals at the laboratory.
During longer meals, children ate an average of seven additional bite-sized pieces of fruits and vegetables, which translates to approximately a full portion (defined as 100 g, such as a medium apple), the researchers wrote. The difference was significant compared with the shorter meals for fruits (P = .01) and vegetables (P < .001).
Each bite-sized piece weighed approximately 10 g (6-10 g for grapes and tangerine segments; 10-14 g for cherry tomatoes; and 9-11 g for pieces of apple, banana, carrot, or cucumber). Other foods served with the meals included cheese, meats, butter, and sweet spreads.
Children also ate more slowly (defined as fewer bites per minute) during the longer meals, and they reported significantly greater satiety after the longer meals (P < .001 for both). The consumption of bread and cold cuts was similar for the two meal settings.
“Higher intake of fruits and vegetables during longer meals cannot be explained by longer exposure to food alone; otherwise, an increased intake of bread and cold cuts would have occurred,” the researchers wrote in their discussion. “One possible explanation is that the fruits and vegetables were cut into bite-sized pieces, making them convenient to eat.”
Further analysis showed that during the longer meals, more fruits and vegetables were consumed overall, but more vegetables were eaten from the start of the meal, while the additional fruit was eaten during the additional time at the end.
The findings were limited by several factors, primarily the use of a laboratory setting that does not generalize to natural eating environments, the researchers noted. Other potential limitations included the effect of video cameras on socially desirable behaviors and the limited ethnic and socioeconomic diversity of the study population, they said. The results were strengthened by the within-dyad design, which controlled for factors such as video observation, but more research is needed in more diverse groups and across longer time frames, the researchers said.
However, the results suggest that adding 10 minutes to a family mealtime can yield significant improvements in children’s diets, they said. They suggested strategies including playing music chosen by the child/children and setting rules that everyone must remain at the table for a certain length of time, with fruits and vegetables available on the table.
“If the effects of this simple, inexpensive, and low-threshold intervention prove stable over time, it could contribute to addressing a major public health problem,” the researchers concluded.
Findings intriguing, more data needed
The current study is important because fruit and vegetable intake in the majority of children falls below the recommended daily allowance, Karalyn Kinsella, MD, a pediatrician in private practice in Cheshire, Conn., said in an interview.
The key take-home message for clinicians is the continued need to stress the importance of family meals, said Dr. Kinsella. “Many children continue to be overbooked with activities, and it may be rare for many families to sit down together for a meal for any length of time.”
Don’t discount the potential effect of a longer school lunch on children’s fruit and vegetable consumption as well, she added. “Advocating for longer lunch time is important, as many kids report not being able to finish their lunch at school.”
The current study was limited by being conducted in a lab setting, which may have influenced children’s desire for different foods, “also they had fewer distractions, and were being offered favorite foods,” said Dr. Kinsella.
Looking ahead, “it would be interesting to see if this result carried over to nonpreferred fruits and veggies and made any difference for picky eaters,” she said.
The study received no outside funding. The open-access publication of the study (but not the study itself) was supported by the Max Planck Institute for Human Development Library Open Access Fund. The researchers had no financial conflicts to disclose. Dr. Kinsella had no financial conflicts to disclose and serves on the editorial advisory board of Pediatric News.
FROM JAMA NETWORK OPEN
Recurrent Oral and Gluteal Cleft Erosions
The Diagnosis: Lichen Planus Pemphigoides
Lichen planus pemphigoides (LPP) is a rare acquired autoimmune blistering disorder with an estimated worldwide prevalence of approximately 1 in 1,000,000 individuals.1 It often manifests with overlapping features of both lichen planus (LP) and bullous pemphigoid (BP). The condition usually presents in the fifth decade of life and has a slight female predominance.2 Although primarily idiopathic, it has been associated with certain medications and treatments, such as angiotensin-converting enzyme inhibitors, programmed cell death protein 1 inhibitors, programmed cell death ligand 1 inhibitors, labetalol, narrowband UVB, and psoralen plus UVA.3,4
Patients initially present with lesions of classic LP: pink-purple, flat-topped, pruritic, polygonal papules and plaques.5 After weeks to months, tense vesicles and bullae usually develop at the sites of LP as well as on uninvolved skin. One study found a mean lag time of about 8.3 months between the onset of LP and blistering,5 but concurrent presentations have been reported.1 In addition, oral mucosal involvement has been seen in 36% of cases. The most commonly affected sites are the extremities; however, involvement can be widespread.2
The pathogenesis of LPP currently is unknown. It has been proposed that in LP, injury of basal keratinocytes exposes hidden basement membrane and hemidesmosome antigens including BP180, a 180 kDa transmembrane protein of the basement membrane zone (BMZ),6 which triggers an immune response where T cells recognize the extracellular portion of BP180 and antibodies are formed against the likely autoantigen.1 One study has suggested that the autoantigen in LPP is the MCW-4 epitope within the C-terminal end of the NC16A domain of BP180.7
Histopathology of LPP reveals characteristics of both LP and BP. Typical features of LP on hematoxylin and eosin (H&E) staining include lichenoid lymphocytic interface dermatitis, sawtooth rete ridges, wedge-shaped hypergranulosis, and colloid bodies, as demonstrated in the biopsy of our patient's gluteal cleft lesion (quiz image 1), while the predominant feature of BP on H&E staining is a subepidermal bulla with eosinophils.2 Typically, direct immunofluorescence (DIF) shows linear deposits of IgG and/or C3 along the BMZ. Indirect immunofluorescence (IIF) often reveals IgG against the roof of the BMZ on a human split-skin substrate.1 Antibodies against BP180 or, less commonly, BP230 often are detected on enzyme-linked immunosorbent assay (ELISA). For our patient, IIF and ELISA tests were positive. Given the clinical presentation with recurrent oral and gluteal cleft erosions, the histologic findings, and the results of immunological testing, the diagnosis of LPP was made.
Topical steroids often are used to treat localized disease of LPP.8 Oral prednisone also may be given for widespread or unresponsive disease.9 Other treatments include azathioprine, mycophenolate mofetil, hydroxychloroquine, dapsone, tetracycline in combination with nicotinamide, acitretin, ustekinumab, baricitinib, and rituximab with intravenous immunoglobulin.3,8,10-12 Any potential medication culprits should be discontinued.9 Patients with oral involvement may require a soft diet to avoid further mucosal insult.10 Additionally, providers should consider dentistry, ophthalmology, and/or otolaryngology referrals depending on disease severity.
Bullous pemphigoid, the most common autoimmune blistering disease, has an estimated incidence of 10 to 43 per million individuals per year.2 Classically, it presents with tense bullae on the skin of the lower abdomen, thighs, groin, forearms, and axillae. Circulating antibodies against 2 BMZ proteins—BP180 and BP230—are important factors in BP pathogenesis.2 Diagnosis of BP is based on clinical features, histologic findings, and immunological studies including DIF, IIF, and ELISA. An eosinophil-rich subepidermal split typically is seen on H&E staining (Figure 1).
Direct immunofluorescence displays linear IgG and/or C3 staining at the BMZ. Indirect immunofluorescence on a human salt-split skin substrate commonly shows linear deposition on the roof of the blister,2 and IIF for IgG on monkey esophagus substrate shows linear BMZ deposition. Antibodies against the NC16A domain of BP180 (NC16A-BP180) are dominant, but antibodies against BP230 also are detected with ELISA.2 Further studies have indicated that the NC16A epitopes of BP180 targeted in BP are MCW-0-3,2 distinct from the MCW-4 autoantigen targeted in LPP.7
Paraneoplastic pemphigus (PNP) is another diagnosis to consider. Patients with PNP initially present with oral findings—most commonly chronic, erosive, and painful mucositis—followed by cutaneous involvement, which varies from the development of bullae to the formation of plaques similar to those of LP.13 The latter, in combination with oral erosions, may appear clinically similar to LPP. The results of DIF in conjunction with IIF and ELISA may help to further differentiate these disorders. Direct immunofluorescence in PNP typically reveals positive intercellular and/or BMZ IgG and C3, while DIF in LPP reveals deposition along the BMZ alone. Indirect immunofluorescence performed on rat bladder epithelium is particularly useful, as binding of IgG to rat bladder epithelium is characteristic of PNP and not seen in other disorders.14 Lastly, patients with PNP may develop IgG antibodies to various antigens such as desmoplakin I, desmoplakin II, envoplakin, periplakin, BP230, desmoglein 1, and desmoglein 3, which would not be expected in patients with LPP.15 Hematoxylin and eosin staining differs from LPP primarily in the location of the blister, which is intraepidermal. Acantholysis with hemorrhagic bullae can be seen (Figure 2).
Classic LP is an inflammatory disorder that mainly affects adults, with an estimated prevalence of less than 1%.16 The classic form presents with purple, flat-topped, pruritic, polygonal papules and plaques of varying size that often are characterized by Wickham striae. Lichen planus possesses a broad spectrum of subtypes involving different locations, though skin lesions usually are localized to the extremities. Despite an unknown etiology, activated T cells and T helper type 1 cytokines are considered key in keratinocyte injury. Compact orthokeratosis, wedge-shaped hypergranulosis, focal dyskeratosis, and colloid bodies typically are found on H&E staining, along with a dense bandlike lymphohistiocytic infiltrate at the dermoepidermal junction (DEJ)(Figure 3). Direct immunofluorescence typically shows a shaggy band of fibrinogen along the DEJ in addition to colloid bodies that stain with various autoantibodies, including IgM, IgG, IgA, and C3.16
Bullous LP is a rare variant of LP that commonly develops on the oral mucosa and the legs, with blisters confined to preexisting LP lesions.9 The pathogenesis is related to an epidermal inflammatory infiltrate that destroys the basal layer, followed by dermal-epidermal separations that cause blistering.17 Direct immunofluorescence, IIF, and ELISA are negative in bullous LP because the pathophysiology does not involve autoantibody production. Histopathology typically displays an extensive inflammatory infiltrate and degeneration of the basal keratinocytes, resulting in large dermal-epidermal separations called Max-Joseph spaces (Figure 4).17 Colloid bodies are prominent in bullous LP but rarely are seen in LPP; eosinophils are much more prominent in LPP than in bullous LP.18 Unlike in LPP, DIF usually is negative in bullous LP, though lichenoid lesions may exhibit globular deposition of IgM, IgG, and IgA in the colloid bodies of the lower epidermis and/or papillary dermis. As in classic LP, DIF of the biopsy specimen shows linear or shaggy deposits of fibrinogen at the DEJ.17
- Hübner F, Langan EA, Recke A. Lichen planus pemphigoides: from lichenoid inflammation to autoantibody-mediated blistering. Front Immunol. 2019;10:1389.
- Montagnon CM, Tolkachjov SN, Murrell DF, et al. Subepithelial autoimmune blistering dermatoses: clinical features and diagnosis. J Am Acad Dermatol. 2021;85:1-14.
- Hackländer K, Lehmann P, Hofmann SC. Successful treatment of lichen planus pemphigoides using acitretin as monotherapy. J Dtsch Dermatol Ges. 2014;12:818-819.
- Boyle M, Ashi S, Puiu T, et al. Lichen planus pemphigoides associated with PD-1 and PD-L1 inhibitors: a case series and review of the literature. Am J Dermatopathol. 2022;44:360-367.
- Zaraa I, Mahfoudh A, Sellami MK, et al. Lichen planus pemphigoides: four new cases and a review of the literature. Int J Dermatol. 2013;52:406-412.
- Bolognia J, Schaffer J, Cerroni L, eds. Dermatology. 4th ed. Elsevier; 2018.
- Zillikens D, Caux F, Mascaró JM Jr, et al. Autoantibodies in lichen planus pemphigoides react with a novel epitope within the C-terminal NC16A domain of BP180. J Invest Dermatol. 1999;113:117-121.
- Knisley RR, Petropolis AA, Mackey VT. Lichen planus pemphigoides treated with ustekinumab. Cutis. 2017;100:415-418.
- Liakopoulou A, Rallis E. Bullous lichen planus—a review. J Dermatol Case Rep. 2017;11:1-4.
- Weston G, Payette M. Update on lichen planus and its clinical variants. Int J Womens Dermatol. 2015;1:140-149.
- Moussa A, Colla TG, Asfour L, et al. Effective treatment of refractory lichen planus pemphigoides with a Janus kinase-1/2 inhibitor. Clin Exp Dermatol. 2022;47:2040-2041.
- Brennan M, Baldissano M, King L, et al. Successful use of rituximab and intravenous gamma globulin to treat checkpoint inhibitor-induced severe lichen planus pemphigoides. Skinmed. 2020;18:246-249.
- Kim JH, Kim SC. Paraneoplastic pemphigus: paraneoplastic autoimmune disease of the skin and mucosa. Front Immunol. 2019;10:1259.
- Stevens SR, Griffiths CE, Anhalt GJ, et al. Paraneoplastic pemphigus presenting as a lichen planus pemphigoides-like eruption. Arch Dermatol. 1993;129:866-869.
- Ohzono A, Sogame R, Li X, et al. Clinical and immunological findings in 104 cases of paraneoplastic pemphigus. Br J Dermatol. 2015;173:1447-1452.
- Tziotzios C, Lee JYW, Brier T, et al. Lichen planus and lichenoid dermatoses: clinical overview and molecular basis. J Am Acad Dermatol. 2018;79:789-804.
- Papara C, Danescu S, Sitaru C, et al. Challenges and pitfalls between lichen planus pemphigoides and bullous lichen planus. Australas J Dermatol. 2022;63:165-171.
- Tripathy DM, Vashisht D, Rathore G, et al. Bullous lichen planus vs lichen planus pemphigoides: a diagnostic dilemma. Indian Dermatol Online J. 2022;13:282-284.
The Diagnosis: Lichen Planus Pemphigoides
Lichen planus pemphigoides (LPP) is a rare acquired autoimmune blistering disorder with an estimated worldwide prevalence of approximately 1 in 1,000,000 individuals.1 It often manifests with overlapping features of both LP and bullous pemphigoid (BP). The condition usually presents in the fifth decade of life and has a slight female predominance.2 Although primarily idiopathic, it has been associated with certain medications and treatments, such as angiotensin-converting enzyme inhibitors, programmed cell death protein 1 inhibitors, programmed cell death ligand 1 inhibitors, labetalol, narrowband UVB, and psoralen plus UVA.3,4
Patients initially present with lesions of classic lichen planus (LP) with pink-purple, flat-topped, pruritic, polygonal papules and plaques.5 After weeks to months, tense vesicles and bullae usually develop on the sites of LP as well as on uninvolved skin. One study found a mean lag time of about 8.3 months for blistering to present after LP,5 but concurrent presentations of both have been reported.1 In addition, oral mucosal involvement has been seen in 36% of cases. The most commonly affected sites are the extremities; however, involvement can be widespread.2
The pathogenesis of LPP currently is unknown. It has been proposed that in LP, injury of basal keratinocytes exposes hidden basement membrane and hemidesmosome antigens including BP180, a 180 kDa transmembrane protein of the basement membrane zone (BMZ),6 which triggers an immune response where T cells recognize the extracellular portion of BP180 and antibodies are formed against the likely autoantigen.1 One study has suggested that the autoantigen in LPP is the MCW-4 epitope within the C-terminal end of the NC16A domain of BP180.7
Histopathology of LPP reveals characteristics of both LP as well as BP. Typical features of LP on hematoxylin and eosin (H&E) staining include lichenoid lymphocytic interface dermatitis, sawtooth rete ridges, wedge-shaped hypergranulosis, and colloid bodies, as demonstrated from the biopsy of our patient’s gluteal cleft lesion (quiz image 1), while the predominant feature of BP on H&E staining includes a subepidermal bulla with eosinophils.2 Typically, direct immunofluorescence (DIF) shows linear deposits of IgG and/or C3 along the BMZ. Indirect immunofluorescence (IIF) often reveals IgG against the roof of the BMZ in a human split-skin substrate.1 Antibodies against BP180 or uncommonly BP230 often are detected on enzyme-linked immunosorbent assay (ELISA). For our patient, IIF and ELISA tests were positive. Given the clinical presentation with recurrent oral and gluteal cleft erosions, histologic findings, and the results of our patient’s immunological testing, the diagnosis of LPP was made.
Topical steroids often are used to treat localized disease of LPP.8 Oral prednisone also may be given for widespread or unresponsive disease.9 Other treatments include azathioprine, mycophenolate mofetil, hydroxychloroquine, dapsone, tetracycline in combination with nicotinamide, acitretin, ustekinumab, baricitinib, and rituximab with intravenous immunoglobulin.3,8,10-12 Any potential medication culprits should be discontinued.9 Patients with oral involvement may require a soft diet to avoid further mucosal insult.10 Additionally, providers should consider dentistry, ophthalmology, and/or otolaryngology referrals depending on disease severity.
Bullous pemphigoid, the most common autoimmune blistering disease, has an estimated incidence of 10 to 43 per million individuals per year.2 Classically, it presents with tense bullae on the skin of the lower abdomen, thighs, groin, forearms, and axillae. Circulating antibodies against 2 BMZ proteins—BP180 and BP230—are important factors in BP pathogenesis.2 Diagnosis of BP is based on clinical features, histologic findings, and immunological studies including DIF, IIF, and ELISA. An eosinophil-rich subepidermal split typically is seen on H&E staining (Figure 1).
Direct immunofluorescence displays linear IgG and/ or C3 staining at the BMZ. Indirect immunofluorescence on a human salt-split skin substrate commonly shows linear BMZ deposition on the roof of the blister.2 Indirect immunofluorescence for IgG deposition on monkey esophagus substrate shows linear BMZ deposition. Antibodies against the NC16A domain of BP180 (NC16A-BP180) are dominant, but BP230 antibodies against BP230 also are detected with ELISA.2 Further studies have indicated that the NC16A epitopes of BP180 that are targeted in BP are MCW-0-3,2 different from the autoantigen MCW-4 that is targeted in LPP.7
Paraneoplastic pemphigus (PNP) is another diagnosis to consider. Patients with PNP initially present with oral findings—most commonly chronic, erosive, and painful mucositis—followed by cutaneous involvement, which varies from the development of bullae to the formation of plaques similar to those of LP.13 The latter, in combination with oral erosions, may appear clinically similar to LPP. The results of DIF in conjugation with IIF and ELISA may help to further differentiate these disorders. Direct immunofluorescence in PNP typically reveals positive intercellular and/or BMZ IgG and C3, while DIF in LPP reveals depositions along the BMZ alone. Indirect immunofluorescence performed on rat bladder epithelium is particularly useful, as binding of IgG to rat bladder epithelium is characteristic of PNP and not seen in other disorders.14 Lastly, patients with PNP may develop IgG antibodies to various antigens such as desmoplakin I, desmoplakin II, envoplakin, periplakin, BP230, desmoglein 1, and desmoglein 3, which would not be expected in LPP patients.15 Hematoxylin and eosin staining differs from LPP, primarily with the location of the blister being intraepidermal. Acantholysis with hemorrhagic bullae can be seen (Figure 2).
Classic LP is an inflammatory disorder that mainly affects adults, with an estimated incidence of less than 1%.16 The classic form presents with purple, flat-topped, pruritic, polygonal papules and plaques of varying size that often are characterized by Wickham striae. Lichen planus possesses a broad spectrum of subtypes involving different locations, though skin lesions usually are localized to the extremities. Despite an unknown etiology, activated T cells and T helper type 1 cytokines are considered key in keratinocyte injury. Compact orthokeratosis, wedge-shaped hypergranulosis, focal dyskeratosis, and colloid bodies typically are found on H&E staining, along with a dense bandlike lymphohistiocytic infiltrate at the dermoepidermal junction (DEJ)(Figure 3). Direct immunofluorescence typically shows a shaggy band of fibrinogen along the DEJ in addition to colloid bodies that stain with various autoantibodies including IgM, IgG, IgA, and C3.16
Bullous LP is a rare variant of LP that commonly develops on the oral mucosa and the legs, with blisters confined on pre-existing LP lesions.9 The pathogenesis is related to an epidermal inflammatory infiltrate that leads to basal layer destruction followed by dermal-epidermal separations that cause blistering.17 Bullous LP does not have positive DIF, IIF, or ELISA because the pathophysiology does not involve autoantibody production. Histopathology typically displays an extensive inflammatory infiltrate and degeneration of the basal keratinocytes, resulting in large dermal-epidermal separations called Max-Joseph spaces (Figure 4).17 Colloid bodies are prominent in bullous LP but rarely are seen in LPP; eosinophils also are much more prominent in LPP compared to bullous LP.18 Unlike in LPP, DIF usually is negative in bullous LP, though lichenoid lesions may exhibit globular deposition of IgM, IgG, and IgA in the colloid bodies of the lower epidermis and/or papillary dermis. Similar to LP, DIF of the biopsy specimen shows linear or shaggy deposits of fibrinogen at the DEJ.17
The Diagnosis: Lichen Planus Pemphigoides
Lichen planus pemphigoides (LPP) is a rare acquired autoimmune blistering disorder with an estimated worldwide prevalence of approximately 1 in 1,000,000 individuals.1 It often manifests with overlapping features of both LP and bullous pemphigoid (BP). The condition usually presents in the fifth decade of life and has a slight female predominance.2 Although primarily idiopathic, it has been associated with certain medications and treatments, such as angiotensin-converting enzyme inhibitors, programmed cell death protein 1 inhibitors, programmed cell death ligand 1 inhibitors, labetalol, narrowband UVB, and psoralen plus UVA.3,4
Patients initially present with lesions of classic lichen planus (LP) with pink-purple, flat-topped, pruritic, polygonal papules and plaques.5 After weeks to months, tense vesicles and bullae usually develop on the sites of LP as well as on uninvolved skin. One study found a mean lag time of about 8.3 months for blistering to present after LP,5 but concurrent presentations of both have been reported.1 In addition, oral mucosal involvement has been seen in 36% of cases. The most commonly affected sites are the extremities; however, involvement can be widespread.2
The pathogenesis of LPP currently is unknown. It has been proposed that in LP, injury of basal keratinocytes exposes hidden basement membrane and hemidesmosome antigens including BP180, a 180 kDa transmembrane protein of the basement membrane zone (BMZ),6 which triggers an immune response where T cells recognize the extracellular portion of BP180 and antibodies are formed against the likely autoantigen.1 One study has suggested that the autoantigen in LPP is the MCW-4 epitope within the C-terminal end of the NC16A domain of BP180.7
Histopathology of LPP reveals characteristics of both LP as well as BP. Typical features of LP on hematoxylin and eosin (H&E) staining include lichenoid lymphocytic interface dermatitis, sawtooth rete ridges, wedge-shaped hypergranulosis, and colloid bodies, as demonstrated from the biopsy of our patient’s gluteal cleft lesion (quiz image 1), while the predominant feature of BP on H&E staining includes a subepidermal bulla with eosinophils.2 Typically, direct immunofluorescence (DIF) shows linear deposits of IgG and/or C3 along the BMZ. Indirect immunofluorescence (IIF) often reveals IgG against the roof of the BMZ in a human split-skin substrate.1 Antibodies against BP180 or uncommonly BP230 often are detected on enzyme-linked immunosorbent assay (ELISA). For our patient, IIF and ELISA tests were positive. Given the clinical presentation with recurrent oral and gluteal cleft erosions, histologic findings, and the results of our patient’s immunological testing, the diagnosis of LPP was made.
Topical steroids often are used to treat localized disease of LPP.8 Oral prednisone also may be given for widespread or unresponsive disease.9 Other treatments include azathioprine, mycophenolate mofetil, hydroxychloroquine, dapsone, tetracycline in combination with nicotinamide, acitretin, ustekinumab, baricitinib, and rituximab with intravenous immunoglobulin.3,8,10-12 Any potential medication culprits should be discontinued.9 Patients with oral involvement may require a soft diet to avoid further mucosal insult.10 Additionally, providers should consider dentistry, ophthalmology, and/or otolaryngology referrals depending on disease severity.
Bullous pemphigoid, the most common autoimmune blistering disease, has an estimated incidence of 10 to 43 per million individuals per year.2 Classically, it presents with tense bullae on the skin of the lower abdomen, thighs, groin, forearms, and axillae. Circulating antibodies against 2 BMZ proteins—BP180 and BP230—are important factors in BP pathogenesis.2 Diagnosis of BP is based on clinical features, histologic findings, and immunological studies including DIF, IIF, and ELISA. An eosinophil-rich subepidermal split typically is seen on H&E staining (Figure 1).
Direct immunofluorescence displays linear IgG and/ or C3 staining at the BMZ. Indirect immunofluorescence on a human salt-split skin substrate commonly shows linear BMZ deposition on the roof of the blister.2 Indirect immunofluorescence for IgG deposition on monkey esophagus substrate shows linear BMZ deposition. Antibodies against the NC16A domain of BP180 (NC16A-BP180) are dominant, but BP230 antibodies against BP230 also are detected with ELISA.2 Further studies have indicated that the NC16A epitopes of BP180 that are targeted in BP are MCW-0-3,2 different from the autoantigen MCW-4 that is targeted in LPP.7
Paraneoplastic pemphigus (PNP) is another diagnosis to consider. Patients with PNP initially present with oral findings—most commonly chronic, erosive, and painful mucositis—followed by cutaneous involvement, which varies from the development of bullae to the formation of plaques similar to those of LP.13 The latter, in combination with oral erosions, may appear clinically similar to LPP. The results of DIF in conjugation with IIF and ELISA may help to further differentiate these disorders. Direct immunofluorescence in PNP typically reveals positive intercellular and/or BMZ IgG and C3, while DIF in LPP reveals depositions along the BMZ alone. Indirect immunofluorescence performed on rat bladder epithelium is particularly useful, as binding of IgG to rat bladder epithelium is characteristic of PNP and not seen in other disorders.14 Lastly, patients with PNP may develop IgG antibodies to various antigens such as desmoplakin I, desmoplakin II, envoplakin, periplakin, BP230, desmoglein 1, and desmoglein 3, which would not be expected in LPP patients.15 Hematoxylin and eosin staining differs from LPP, primarily with the location of the blister being intraepidermal. Acantholysis with hemorrhagic bullae can be seen (Figure 2).
Classic LP is an inflammatory disorder that mainly affects adults, with an estimated prevalence of less than 1%.16 The classic form presents with purple, flat-topped, pruritic, polygonal papules and plaques of varying size that often are characterized by Wickham striae. Lichen planus encompasses a broad spectrum of subtypes involving different locations, though skin lesions usually are localized to the extremities. Although the etiology is unknown, activated T cells and T helper type 1 cytokines are considered key drivers of keratinocyte injury. Compact orthokeratosis, wedge-shaped hypergranulosis, focal dyskeratosis, and colloid bodies typically are found on H&E staining, along with a dense bandlike lymphohistiocytic infiltrate at the dermoepidermal junction (DEJ)(Figure 3). Direct immunofluorescence typically shows a shaggy band of fibrinogen along the DEJ in addition to colloid bodies that stain with various immunoreactants including IgM, IgG, IgA, and C3.16
Bullous LP is a rare variant of LP that commonly develops on the oral mucosa and the legs, with blisters confined to pre-existing LP lesions.9 The pathogenesis involves an epidermal inflammatory infiltrate that destroys the basal layer, followed by dermal-epidermal separation that causes blistering.17 Because the pathophysiology does not involve autoantibody production, IIF and ELISA are negative, and unlike in LPP, DIF usually is negative, though lichenoid lesions may exhibit globular deposition of IgM, IgG, and IgA in the colloid bodies of the lower epidermis and/or papillary dermis; as in classic LP, DIF also may show linear or shaggy deposits of fibrinogen at the DEJ.17 Histopathology typically displays an extensive inflammatory infiltrate and degeneration of the basal keratinocytes, resulting in large dermal-epidermal separations called Max-Joseph spaces (Figure 4).17 Colloid bodies are prominent in bullous LP but rarely are seen in LPP; conversely, eosinophils are much more prominent in LPP than in bullous LP.18
1. Hübner F, Langan EA, Recke A. Lichen planus pemphigoides: from lichenoid inflammation to autoantibody-mediated blistering. Front Immunol. 2019;10:1389.
2. Montagnon CM, Tolkachjov SN, Murrell DF, et al. Subepithelial autoimmune blistering dermatoses: clinical features and diagnosis. J Am Acad Dermatol. 2021;85:1-14.
3. Hackländer K, Lehmann P, Hofmann SC. Successful treatment of lichen planus pemphigoides using acitretin as monotherapy. J Dtsch Dermatol Ges. 2014;12:818-819.
4. Boyle M, Ashi S, Puiu T, et al. Lichen planus pemphigoides associated with PD-1 and PD-L1 inhibitors: a case series and review of the literature. Am J Dermatopathol. 2022;44:360-367.
5. Zaraa I, Mahfoudh A, Sellami MK, et al. Lichen planus pemphigoides: four new cases and a review of the literature. Int J Dermatol. 2013;52:406-412.
6. Bolognia J, Schaffer J, Cerroni L, eds. Dermatology. 4th ed. Elsevier; 2018.
7. Zillikens D, Caux F, Mascaró JM Jr, et al. Autoantibodies in lichen planus pemphigoides react with a novel epitope within the C-terminal NC16A domain of BP180. J Invest Dermatol. 1999;113:117-121.
8. Knisley RR, Petropolis AA, Mackey VT. Lichen planus pemphigoides treated with ustekinumab. Cutis. 2017;100:415-418.
9. Liakopoulou A, Rallis E. Bullous lichen planus—a review. J Dermatol Case Rep. 2017;11:1-4.
10. Weston G, Payette M. Update on lichen planus and its clinical variants. Int J Womens Dermatol. 2015;1:140-149.
11. Moussa A, Colla TG, Asfour L, et al. Effective treatment of refractory lichen planus pemphigoides with a Janus kinase-1/2 inhibitor. Clin Exp Dermatol. 2022;47:2040-2041.
12. Brennan M, Baldissano M, King L, et al. Successful use of rituximab and intravenous gamma globulin to treat checkpoint inhibitor-induced severe lichen planus pemphigoides. Skinmed. 2020;18:246-249.
13. Kim JH, Kim SC. Paraneoplastic pemphigus: paraneoplastic autoimmune disease of the skin and mucosa. Front Immunol. 2019;10:1259.
14. Stevens SR, Griffiths CE, Anhalt GJ, et al. Paraneoplastic pemphigus presenting as a lichen planus pemphigoides-like eruption. Arch Dermatol. 1993;129:866-869.
15. Ohzono A, Sogame R, Li X, et al. Clinical and immunological findings in 104 cases of paraneoplastic pemphigus. Br J Dermatol. 2015;173:1447-1452.
16. Tziotzios C, Lee JYW, Brier T, et al. Lichen planus and lichenoid dermatoses: clinical overview and molecular basis. J Am Acad Dermatol. 2018;79:789-804.
17. Papara C, Danescu S, Sitaru C, et al. Challenges and pitfalls between lichen planus pemphigoides and bullous lichen planus. Australas J Dermatol. 2022;63:165-171.
18. Tripathy DM, Vashisht D, Rathore G, et al. Bullous lichen planus vs lichen planus pemphigoides: a diagnostic dilemma. Indian Dermatol Online J. 2022;13:282-284.
A 71-year-old woman with no relevant medical history presented with recurrent painful erosions on the gingivae and gluteal cleft of 1 year’s duration. She previously was diagnosed by her periodontist with erosive lichen planus and was prescribed topical and oral steroids with minimal improvement. She denied fever, chills, weakness, fatigue, vision changes, eye pain, and sore throat. Dermatologic examination revealed edematous and erythematous upper and lower gingivae with mild erosions, as well as thin, eroded, erythematous plaques within the gluteal cleft. Indirect immunofluorescence revealed IgG with epidermal localization on a human split-skin substrate, and an enzyme-linked immunosorbent assay revealed positive IgG to bullous pemphigoid (BP) 180 and negative IgG to BP230. A 4-mm punch biopsy of the gluteal cleft was performed.
Likely cause of mysterious hepatitis outbreak in children identified
Coinfection with AAV2 and a human adenovirus (HAdV), in particular, appears to leave some children more vulnerable to this acute hepatitis of unknown origin, researchers reported in three studies published online in Nature. Coinfections with Epstein-Barr virus (EBV), herpesvirus, and enterovirus also were found. Adeno-associated viruses are not considered pathogenic on their own and require a “helper” virus for productive infection.
“I am quite confident that we have identified the key viruses involved because we used a comprehensive metagenomic sequencing approach to look for potential infections from any virus or non-viral pathogen,” Charles Chiu, MD, PhD, senior author and professor of laboratory medicine and medicine/infectious diseases at the University of California, San Francisco, said in an interview.
Dr. Chiu and colleagues propose that lockdowns and social isolation during the COVID-19 pandemic left more children susceptible. A major aspect of immunity in childhood is the adaptive immune response – both cell-mediated and humoral – shaped in part by exposure to viruses and other pathogens early in life, Dr. Chiu said.
“Due to COVID-19, a large population of children did not experience this, so it is possible once restrictions were lifted, they were suddenly exposed over a short period of time to multiple viruses that, in a poorly trained immune system, would have increased their risk of developing severe disease,” he said.
This theory has been popular, especially because cases of unexplained acute hepatitis peaked during the height of the COVID-19 pandemic when isolation was common, William F. Balistreri, MD, who was not affiliated with the study, told this news organization. Dr. Balistreri is professor of pediatrics and director emeritus of the Pediatric Liver Care Center at Cincinnati Children’s Hospital Medical Center.
Identifying the culprits
Determining what factors might be involved was the main aim of the etiology study by Dr. Chiu and colleagues published online in Nature.
The journal simultaneously published a genomic study confirming the presence of AAV2 and other suspected viruses and a genomic and laboratory study further corroborating the results.
More than 1,000 children worldwide had been diagnosed with unexplained acute pediatric hepatitis as of August 2022. In the United States, there have been 358 cases, including 22 in which the child required a liver transplant and 13 in which the child died.
This new form of hepatitis, first detected in October 2021, does not fit into existing classifications of types A through E, so some researchers refer to the condition as acute non–A-E hepatitis of unknown etiology.
The investigators started with an important clue based on previous research: the role adenovirus might play. Dr. Chiu and colleagues assessed 27 blood, stool, and other samples from 16 affected children who each previously tested positive for adenoviruses. The researchers included cases of the condition identified up until May 22, 2022. The median age was 3 years, and approximately half were boys.
They compared viruses present in these children with those in 113 controls without the mysterious hepatitis. The control group consisted of 15 children who were hospitalized with a nonhepatitis inflammatory condition, 27 with a noninflammatory condition, 30 with acute hepatitis of known origin, 12 with acute gastroenteritis and an HAdV-positive stool sample, and 11 with acute gastroenteritis and an HAdV-negative stool sample, as well as 18 blood donors. The median age was 7 years.
The researchers assessed samples using multiple technologies, including metagenomic sequencing, tiling multiplex polymerase chain reaction (PCR) amplicon sequencing, metagenomic sequencing with probe capture viral enrichment, and virus-specific PCR. Many of these advanced techniques were not even available 5-10 years ago, Dr. Chiu said.
Key findings
Blood samples were available for 14 of the 16 children with acute hepatitis of unknown origin. Among this study group, AAV2 was found in 13 (93%). No other adeno-associated viruses were found. HAdV was detected in all 14 children: HAdV-41 in 11 children and HAdV-40, HAdV-2, and an untypeable strain in one child each. This finding was not intuitive because HAdVs are not commonly associated with hepatitis, according to the study.
AAV2 was much less common in the control group. For example, it was found in none of the children with hepatitis of known origin and in only four children with acute gastroenteritis and HAdV-positive stool (3.5% of all controls). Of note, neither AAV2 nor HAdV-41 was detected among the 30 pediatric controls with acute hepatitis of defined etiology or among the 42 hospitalized children without hepatitis, the researchers wrote.
In the search for other viruses in the study group, metagenomic sequencing detected EBV, also known as human herpesvirus (HHV)–4, in two children, cytomegalovirus (CMV) in one child, and HAdV type C in one child.
Analysis of whole blood revealed enterovirus A71 in one patient. HAdV type C also was detected in one child on the basis of a nasopharyngeal swab, and picobirnavirus was found in a stool sample from another patient.
Researchers conducted virus-specific PCR tests on both patient groups to identify additional viruses that may be associated with the unexplained acute hepatitis. EBV/HHV-4 was detected in 11 children (79%) in the study group vs. 1 child (0.88%) in the control group. HHV-6 was detected in seven children (50%) in the study group, compared with one case in the control group. CMV was not detected in any of the children in the study group vs. two children (1.8%) in the control group.
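The contrast between 11 of 14 study-group children (79%) and 1 of 113 controls (0.88%) can be made concrete with a standard 2x2 test. The sketch below runs Fisher’s exact test on those counts in Python; it is an illustration of the size of the effect, not the statistical analysis the authors performed.

```python
from scipy.stats import fisher_exact

# Counts taken from the percentages quoted above (EBV/HHV-4):
# study group, 11 of 14 detected; controls, 1 of 113 detected.
table = [[11, 3],    # study group: detected, not detected
         [1, 112]]   # control group: detected, not detected
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"Fisher exact: OR = {odds_ratio:.1f}, p = {p_value:.1e}")
```

On these counts the odds ratio is very large and the p value falls far below conventional thresholds, consistent with the significant differences the researchers describe.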
“Although we found significant differences in the relative proportions of EBV and HHV-6 in cases compared to controls, we do not believe that these viruses are the primary cause of acute severe hepatitis,” the researchers wrote. The viral loads of the two herpesviruses were very low, so the positive results could represent integrated proviral DNA rather than bona fide low-level herpesvirus infection. In addition, herpesviruses can be reactivated by an inflammatory condition.
“Nevertheless, it is striking that among the 16 cases (in the study group), dual, triple, or quadruple infections with AAV2, adenovirus, and one or both herpesviruses were detected in whole blood from at least 12 cases (75%),” the researchers wrote.
Management of suspected hepatitis
The study’s key messages for parents and health care providers “are awareness and reassurance,” Dr. Balistreri said in an interview.
Vigilance also is warranted if a child develops prodromal symptoms including respiratory and/or gastrointestinal signs such as nausea, vomiting, diarrhea, and abdominal pain, he said. If jaundice or scleral icterus is noted, then hepatitis should be suspected.
Some patients need hospitalization but recover quickly. In very rare instances, the inflammation may progress to liver failure requiring transplantation, Dr. Balistreri said.
“Reassurance is based on the good news that most children with acute hepatitis get better. If a case arises, it is good practice to keep the child well hydrated, offer a normal diet, and avoid medications that may be cleared by the liver,” Dr. Balistreri added.
“Of course, COVID-19 vaccination is strongly suggested,” he said.
Some existing treatments could help against unexplained acute hepatitis, Dr. Chiu said. “The findings suggest that antiviral therapy might be effective in these cases.”
Cidofovir can be effective against adenovirus, according to a report in The Lancet. Similarly, ganciclovir or valganciclovir may have activity against EBV/HHV-4 or HHV-6, Dr. Chiu said. “However, antiviral therapy is not available for AAV2.”
The three studies published in Nature “offer compelling evidence, from disparate centers, of a linkage of outbreak cases to infection by AAV2,” Dr. Balistreri said. The studies also suggest that liver injury was related to abnormal immune responses. This is an important clinical distinction, indicating a potential therapeutic approach to future cases – immunosuppression rather than anti-adenoviral agents, he said.
“We await further studies of this important concept,” Dr. Balistreri said.
Many unanswered questions remain about the condition’s etiology, he added. Is there a synergy or shared susceptibility related to SARS-CoV-2? Does the COVID-19 virus help trigger these infections, or does it increase the risk of severe disease once a child is infected? And are other epigenetic factors or viruses involved?
Moving forward
The next steps in the research could go beyond identifying the presence of these different viruses to determining which of them contribute most to the acute pediatric hepatitis, Dr. Chiu said.
The researchers also would like to test early results from the United Kingdom that identified a potential association of acute severe hepatitis with the presence of human leukocyte antigen genotype DRB1*04:01, he added.
They also might investigate other unintended potential clinical consequences of the COVID-19 pandemic, including long COVID and resurgence of infections from other viruses, such as respiratory syncytial virus, influenza, and enterovirus D68.
The study was supported by the Centers for Disease Control and Prevention, the National Institutes of Health, the Department of Homeland Security, and other grants. Dr. Chiu is a founder of Delve Bio and on the scientific advisory board for Delve Bio, Mammoth Biosciences, BiomeSense, and Poppy Health. Dr. Balistreri had no relevant disclosures.
A version of this article first appeared on Medscape.com.
Is vaping a gateway to cigarettes for kids?
Vaping may not be a gateway to long-term cigarette use for adolescents, a new study published in JAMA Network Open suggests.
Many studies have found that youth who vape are more likely to take up cigarette smoking, but whether that new habit lasts for a month or a lifetime has been unclear.
The percentage of adolescents who move on to smoking after starting to vape remains low, and those who do start smoking are unlikely to continue doing so for a long time, the new research shows.
“If they simply experiment with smoking but do not continue, their risks of smoking-related adverse health outcomes are low,” said Ruoyan Sun, PhD, assistant professor with the department of health policy and organization at the University of Alabama at Birmingham and the study’s lead author. “But if they do become regular or established smokers, then the risks can be substantial.”
Dr. Sun and her colleagues analyzed data from several waves of the longitudinal Population Assessment of Tobacco and Health study. Participants included 8,671 children and adolescents aged 12-17 years. Among teens who had ever vaped, 6% began smoking cigarettes and continued to smoke in the subsequent 3 years, the researchers found (95% confidence interval, 4.5%-8.0%), compared with 1.1% among teens who never vaped (95% CI, 0.8%-1.3%).
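For readers who want to see how such interval estimates relate to subgroup size, a back-of-envelope check is possible: if the 6% estimate with its 4.5%-8.0% interval came from a simple unweighted Wald calculation, it would imply a subgroup of roughly 700 ever-vapers. PATH estimates are survey-weighted, so the minimal sketch below is only an order-of-magnitude illustration under that simplifying assumption.

```python
# If the 6% estimate with 95% CI of 4.5%-8.0% came from a simple Wald
# interval, p +/- 1.96 * sqrt(p * (1 - p) / n), the implied subgroup
# size n can be recovered from the interval's half-width.
p = 0.06
half_width = (0.080 - 0.045) / 2               # 1.75 percentage points
n_implied = p * (1 - p) * (1.96 / half_width) ** 2
print(f"implied ever-vaper subgroup: n ~ {n_implied:.0f}")  # ~ 707
```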
“The real concern is whether vaping is inducing significant numbers of young people to become confirmed smokers,” said Dr. Sun. “The answer is that it does not.”
Previous studies using PATH data have suggested that adolescents who use e-cigarettes are up to 3.5 times more likely than nonusers to start smoking tobacco cigarettes and that they may continue to use both products.
But in the new study, despite the low overall number of cigarette smokers, teens who used e-cigarettes were 81% more likely than those who did not to continue smoking tobacco cigarettes after 3 years (odds ratio, 1.81; 95% CI, 1.03-3.18), the researchers found.
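The “81% more likely” figure and the reported interval are internally consistent: a confidence interval for an odds ratio is symmetric on the log scale, so the point estimate should sit at the geometric mean of the bounds. A minimal sketch, assuming a standard Wald-type interval:

```python
import math

# The reported 95% CI is 1.03-3.18; the geometric mean of the bounds
# recovers the point estimate, and the bounds' log-scale spread gives
# the implied standard error of the log odds ratio.
lo, hi = 1.03, 3.18
point = math.exp((math.log(lo) + math.log(hi)) / 2)
se_log_or = (math.log(hi) - math.log(lo)) / (2 * 1.96)
print(f"OR ~ {point:.2f}, SE(log OR) ~ {se_log_or:.2f}")
# prints OR ~ 1.81, matching "81% more likely"
```

Note that the lower bound only barely excludes 1.0, which helps explain why the absolute risk differences discussed below did not reach significance.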
Rachel Boykan, MD, clinical professor of pediatrics and attending physician at Stony Brook (N.Y.) Children’s Hospital, said that despite the findings, the overall messaging to patients remains the same: Vaping is linked to smoking.
“There is still a risk of smoking initiation among e-cigarette users – that is the take-home message,” Dr. Boykan, who was not affiliated with the study, said. “No risk of smoking initiation is acceptable. And of course, as we are learning, there are significant health risks with e-cigarette use alone.”
Among the entire cohort, approximately 4% of the adolescents began smoking cigarettes, and only 2.5% continued to smoke over the subsequent 3 years, the researchers found.
“Based on our odds ratio result, e-cigarette users are more likely to report continued cigarette smoking,” said Dr. Sun. “However, the risk differences were not significant.”
The low number of teens who continued to smoke also suggests that adolescents are more likely to quit than to become long-term smokers.
Nicotine dependence may adversely affect adolescents’ ability to learn, remember, and maintain attention. Early research suggests that long-term e-cigarette users may be at increased risk of developing some of the same conditions as tobacco smokers, such as chronic lung disease.
Brian Jenssen, MD, a pediatrician at Children’s Hospital of Philadelphia and assistant professor in the Perelman School of Medicine at the University of Pennsylvania, Philadelphia, said that the analysis is limited in part because it does not include changes in smoking and vaping trends since the pandemic started, “which seems to have increased the risk of smoking and vaping use.”
Data from the 2022 National Youth Tobacco Survey found that although the rate of middle and high school students who start using e-cigarettes has steadily decreased, those who vape report using the devices more frequently.
Subsequent cigarette use is also only one measure of the risk posed by vaping.
“The goal isn’t just about cigarettes,” said Dr. Jenssen, who was not affiliated with the new study. “The goal is about helping children live tobacco- and nicotine-free lives, and there seems to be an increasing intensity of use, which is causing its own health risks.”
The current study findings do not change how clinicians should counsel their patients, and they should continue to advise teens to abstain from vaping, he added.
Dr. Sun said it’s common for youth to experiment with multiple tobacco products.
“Clinicians should continue to monitor youth tobacco-use behaviors but with their concern being focused on youthful patients who sustain smoking instead of just trying cigarettes,” she said.
Some of the study authors received support from the National Cancer Institute of the National Institutes of Health and the U.S. Food and Drug Administration’s Center for Tobacco Products.
A version of this article first appeared on Medscape.com.