Chatbots Seem More Empathetic Than Docs in Cancer Discussions
Large language models (LLMs) such as ChatGPT have shown mixed results in the quality of their responses to consumer questions about cancer.
One recent study found that AI chatbots churn out incomplete, inaccurate, or even nonsensical cancer treatment recommendations, while another found that they generate largely accurate — if technical — responses to the most common cancer questions.
While researchers have seen success with purpose-built chatbots created to address patient concerns about specific cancers, the consensus to date has been that generalized models like ChatGPT remain works in progress and that physicians should avoid pointing patients to them, for now.
Yet new findings suggest that these chatbots may do better than individual physicians, at least on some measures, when it comes to answering queries about cancer. For research published May 16 in JAMA Oncology (doi: 10.1001/jamaoncol.2024.0836), David Chen, a medical student at the University of Toronto, and his colleagues isolated a random sample of 200 questions related to cancer care addressed to doctors on the public online forum Reddit. They then compared responses from oncologists with responses generated by three different AI chatbots. The blinded responses were rated for quality, readability, and empathy by six physicians, including oncologists and palliative and supportive care specialists.
Mr. Chen and colleagues’ research was modeled after a 2023 study that compared the quality of physician responses with chatbot responses to general medicine questions addressed to doctors on Reddit. That study found that the chatbots produced more empathetic-sounding answers, something Mr. Chen’s study also found across all three measures: quality, empathy, and readability.
Q&A With Author of New Research
Mr. Chen discussed his new study’s implications during an interview with this news organization.
Question: What is novel about this study?
Mr. Chen: We’ve seen many evaluations of chatbots that test for medical accuracy, but this study occurs in the domain of oncology care, where there are unique psychosocial and emotional considerations that are not precisely reflected in a general medicine setting. In effect, this study is putting these chatbots through a harder challenge.
Question: Why would chatbot responses seem more empathetic than those of physicians?
Mr. Chen: With the physician responses that we observed in our sample data set, we saw very high variation in the amount of apparent effort. Some physicians would put a lot of time and thought into their responses, and others wouldn’t do so as much. These chatbots don’t face fatigue or burnout the way humans do, so they’re able to consistently provide responses with less variation in empathy.
Question: Do chatbots just seem empathetic because they are chattier?
Mr. Chen: We did think of verbosity as a potential confounder in this study. So we set a word count limit for the chatbot responses to keep it in the range of the physician responses. That way, verbosity was no longer a significant factor.
Question: How were quality and empathy measured by the reviewers?
Mr. Chen: For our study we used two teams of readers, each composed of three physicians. In terms of the actual metrics we used, they were pilot metrics. There are no well-defined measurement scales or checklists that we could use to measure empathy; this is an emerging field of research. So we came up with our own set of ratings by consensus, and we feel that this is an area where future research should define a standardized set of guidelines.
Another novel aspect of this study is that we separated out different dimensions of quality and empathy. A quality response didn’t just mean it was medically accurate — quality also had to do with the focus and completeness of the response.
With empathy there are cognitive and emotional dimensions. Cognitive empathy means using critical thinking to understand the person’s emotions and thoughts and then adjusting a response to fit that. A patient may not want the best medically indicated treatment for their condition, because they want to preserve their quality of life. The chatbot may be able to adjust its recommendation in consideration of some of those humanistic elements that the patient is presenting with.
Emotional empathy is more about being supportive of the patient’s emotions by using expressions like ‘I understand where you’re coming from’ or ‘I can see how that makes you feel.’
Question: Why would physicians, not patients, be the best evaluators of empathy?
Mr. Chen: We’re actually very interested in evaluating patient ratings of empathy. We are conducting a follow-up study that evaluates patient ratings of empathy to the same set of chatbot and physician responses, to see if there are differences.
Question: Should cancer patients go ahead and consult chatbots?
Mr. Chen: Although we did observe increases in all of the metrics compared with physicians, this is a very specialized evaluation scenario where we’re using these Reddit questions and responses.
Naturally, we would need to do a trial, a head-to-head randomized comparison of physicians versus chatbots.
This pilot study does highlight the promising potential of these chatbots to suggest responses. But we can’t fully recommend that they should be used as standalone clinical tools without physicians.
This Q&A was edited for clarity.
FROM JAMA ONCOLOGY
Pediatric Dermatologists Beat ChatGPT on Board Questions
In an experiment that pitted the wits of pediatric dermatologists against ChatGPT, the clinicians came out ahead, results from a small single-center study showed.
“We were relieved to find that the pediatric dermatologists in our study performed better than ChatGPT on both multiple choice and case-based questions; however, the latest iteration of ChatGPT (4.0) was very close,” one of the study’s first authors, Charles Huang, a fourth-year medical student at Thomas Jefferson University, Philadelphia, said in an interview. “Something else that was interesting in our data was that the pediatric dermatologists performed much better than ChatGPT on questions related to procedural dermatology/surgical techniques, perhaps indicating that knowledge/reasoning gained through practical experience isn’t easily replicated in AI tools such as ChatGPT.”
For the study, which was published on May 9 in Pediatric Dermatology, Mr. Huang and co-first author Esther Zhang, BS, a medical student at the University of Pennsylvania, Philadelphia, and coauthors from the Department of Dermatology, Children’s Hospital of Philadelphia, asked five pediatric dermatologists to answer 24 text-based questions: 16 single-answer multiple-choice questions and two multiple-answer questions drawn from the American Board of Dermatology 2021 Certification Sample Test, and six free-response case-based questions drawn from the “Photoquiz” section of Pediatric Dermatology between July 2022 and July 2023. The researchers then processed the same set of questions through ChatGPT versions 3.5 and 4.0 and used statistical analysis to compare responses between the pediatric dermatologists and ChatGPT. A 5-point scale adapted from current AI tools was used to score replies to case-based questions.
On average, study participants had 5.6 years of clinical experience. Pediatric dermatologists performed significantly better than ChatGPT version 3.5 on multiple-choice and multiple-answer questions (91.4% vs 76.2%, respectively; P = .021) but not significantly better than ChatGPT version 4.0 (90.5%; P = .44). As for replies to case-based questions, the average performance based on the 5-point scale was 3.81 for pediatric dermatologists and 3.53 for ChatGPT overall. The mean scores were significantly greater for pediatric dermatologists than for ChatGPT version 3.5 (P = .039) but not ChatGPT version 4.0 (P = .43).
The researchers acknowledged certain limitations of the analysis, including the evolving nature of AI tools, which may affect the reproducibility of results with subsequent model updates. And, while participating pediatric dermatologists said they were unfamiliar with the questions and cases used in the study, “there is potential for prior exposure through other dermatology board examination review processes,” they wrote.
“AI tools such as ChatGPT and similar large language models can be a valuable tool in your clinical practice, but be aware of potential pitfalls such as patient privacy, medical inaccuracies, [and] intrinsic biases in the tools,” Mr. Huang told this news organization. “As these technologies continue to advance, it is essential for all of us as medical clinicians to gain familiarity and stay abreast of new developments, just as we adapted to electronic health records and the use of the Internet.”
Maria Buethe, MD, PhD, a pediatric dermatology fellow at Rady Children’s Hospital–San Diego, who was asked to comment on the study, said she found it “interesting” that ChatGPT’s version 4.0 started to produce comparable results to clinician responses in some of the tested scenarios.
“The authors propose a set of best practices for pediatric dermatology clinicians using ChatGPT and other AI tools,” said Dr. Buethe, who was senior author of a recent literature review on AI and its application to pediatric dermatology. It was published in SKIN The Journal of Cutaneous Medicine. “One interesting recommended use for AI tools is to utilize it to generate differential diagnosis, which can broaden the list of pathologies previously considered.”
Asked to comment on the study, Erum Ilyas, MD, who practices dermatology in King of Prussia, Pennsylvania, and is a member of the Society for Pediatric Dermatology, said she was not surprised that ChatGPT “can perform fairly well on multiple-choice questions as we find available in testing circumstances,” as presented in the study. “Just as board questions only support testing a base of medical knowledge and facts for clinicians to master, they do not necessarily provide real-life circumstances that apply to caring for patients, which is inherently nuanced.”
In addition, the study “highlights that ChatGPT can be an aid to support thinking through differentials based on data entered by a clinician who understands how to phrase queries, especially if provided with enough data while respecting patient privacy, in the context of fact checking responses,” Dr. Ilyas said. “This underscores the fact that AI tools can be helpful to clinicians in assimilating various data points entered. However, ultimately, the tool is only able to support an output based on the information it has access to.” She added, “ChatGPT cannot be relied on to provide a single diagnosis with the clinician still responsible for making a final diagnosis. The tool is not definitive and cannot assimilate data that is not entered correctly.”
The study was not funded, and the study authors reported having no disclosures. Dr. Buethe and Dr. Ilyas, who were not involved with the study, had no disclosures.
A version of this article appeared on Medscape.com.
In an experiment that pitted the wits of results from a small single-center study showed.
“We were relieved to find that the pediatric dermatologists in our study performed better than ChatGPT on both multiple choice and case-based questions; however, the latest iteration of ChatGPT (4.0) was very close,” one of the study’s first authors Charles Huang, a fourth-year medical student at Thomas Jefferson University, Philadelphia, said in an interview. “Something else that was interesting in our data was that the pediatric dermatologists performed much better than ChatGPT on questions related to procedural dermatology/surgical techniques, perhaps indicating that knowledge/reasoning gained through practical experience isn’t easily replicated in AI tools such as ChatGPT.”
For the study, which was published on May 9 in Pediatric Dermatology, Mr. Huang, and co-first author Esther Zhang, BS, a medical student at the University of Pennsylvania, Philadelphia, and coauthors from the Department of Dermatology, Children’s Hospital of Philadelphia, asked five pediatric dermatologists to answer 24 text-based questions including 16 single-answer, multiple-choice questions and two multiple answer questions drawn from the American Board of Dermatology 2021 Certification Sample Test and six free-response case-based questions drawn from the “Photoquiz” section of Pediatric Dermatology between July 2022 and July 2023. The researchers then processed the same set of questions through ChatGPT versions 3.5 and 4.0 and used statistical analysis to compare responses between the pediatric dermatologists and ChatGPT. A 5-point scale adapted from current AI tools was used to score replies to case-based questions.
On average, study participants had 5.6 years of clinical experience. Pediatric dermatologists performed significantly better than ChatGPT version 3.5 on multiple-choice and multiple answer questions (91.4% vs 76.2%, respectively; P = .021) but not significantly better than ChatGPT version 4.0 (90.5%; P = .44). As for replies to case-based questions, the average performance based on the 5-point scale was 3.81 for pediatric dermatologists and 3.53 for ChatGPT overall. The mean scores were significantly greater for pediatric dermatologists than for ChatGPT version 3.5 (P = .039) but not ChatGPT version 4.0 (P = .43).
The researchers acknowledged certain limitations of the analysis, including the evolving nature of AI tools, which may affect the reproducibility of results with subsequent model updates. And, while participating pediatric dermatologists said they were unfamiliar with the questions and cases used in the study, “there is potential for prior exposure through other dermatology board examination review processes,” they wrote.
“AI tools such as ChatGPT and similar large language models can be a valuable tool in your clinical practice, but be aware of potential pitfalls such as patient privacy, medical inaccuracies, [and] intrinsic biases in the tools,” Mr. Huang told this news organization. “As these technologies continue to advance, it is essential for all of us as medical clinicians to gain familiarity and stay abreast of new developments, just as we adapted to electronic health records and the use of the Internet.”
Maria Buethe, MD, PhD, a pediatric dermatology fellow at Rady Children’s Hospital–San Diego, who was asked to comment on the study, said she found it “interesting” that ChatGPT’s version 4.0 started to produce comparable results to clinician responses in some of the tested scenarios.
“The authors propose a set of best practices for pediatric dermatology clinicians using ChatGPT and other AI tools,” said Dr. Buethe, who was senior author of a recent literature review on AI and its application to pediatric dermatology. It was published in SKIN The Journal of Cutaneous Medicine. “One interesting recommended use for AI tools is to utilize it to generate differential diagnosis, which can broaden the list of pathologies previously considered.”
Asked to comment on the study, Erum Ilyas, MD, who practices dermatology in King of Prussia, Pennsylvania, and is a member of the Society for Pediatric Dermatology, said she was not surprised that ChatGPT “can perform fairly well on multiple-choice questions as we find available in testing circumstances,” as presented in the study. “Just as board questions only support testing a base of medical knowledge and facts for clinicians to master, they do not necessarily provide real-life circumstances that apply to caring for patients, which is inherently nuanced.”
In addition, the study “highlights that ChatGPT can be an aid to support thinking through differentials based on data entered by a clinician who understands how to phrase queries, especially if provided with enough data while respecting patient privacy, in the context of fact checking responses,” Dr. Ilyas said. “This underscores the fact that AI tools can be helpful to clinicians in assimilating various data points entered. However, ultimately, the tool is only able to support an output based on the information it has access to.” She added, “ChatGPT cannot be relied on to provide a single diagnosis with the clinician still responsible for making a final diagnosis. The tool is not definitive and cannot assimilate data that is not entered correctly.”
The study was not funded, and the study authors reported having no disclosures. Dr. Buethe and Dr. Ilyas, who were not involved with the study, had no disclosures.
A version of this article appeared on Medscape.com.
Global Analysis Identifies Drugs Associated With SJS-TEN in Children
TOPLINE:
METHODOLOGY:
- SJS and TEN are rare, life-threatening mucocutaneous reactions mainly associated with medications, but large pharmacovigilance studies of drugs associated with SJS-TEN in the pediatric population are still lacking.
- Using the WHO’s pharmacovigilance database (VigiBase) containing individual case safety reports from January 1967 to July 2022, researchers identified 7342 adverse drug reaction reports of SJS-TEN in children (younger than 18 years; median age, 9 years) on all six continents. Median onset was 5 days, and 3.2% of cases were fatal.
- They analyzed drugs reported as suspected treatments, and for each molecule, they performed a case–non-case study to assess a potential pharmacovigilance signal by computing the information component (IC).
- A positive IC value suggested more frequent reporting of a specific drug-adverse reaction pair. A positive IC025, a traditional threshold for statistical signal detection, is suggestive of a potential pharmacovigilance signal.
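The information component described above can be sketched in a few lines. This is a minimal illustration of the standard shrinkage-based IC used in VigiBase-style disproportionality analyses, with the IC025 lower credibility bound taken from Norén and colleagues' closed-form approximation; the report counts below are entirely hypothetical, not figures from this study.

```python
import math

def information_component(n_obs, n_drug, n_reaction, n_total):
    """Shrinkage-based IC for one drug-reaction pair.

    n_obs:      reports mentioning both the drug and the reaction
    n_drug:     reports mentioning the drug
    n_reaction: reports mentioning the reaction
    n_total:    all reports in the database
    """
    # Expected co-reporting count if drug and reaction were independent
    n_exp = n_drug * n_reaction / n_total
    # +0.5 shrinkage on both counts stabilises the estimate for rare pairs
    ic = math.log2((n_obs + 0.5) / (n_exp + 0.5))
    # Approximate 2.5th percentile of the IC credibility interval (IC025)
    ic025 = ic - 3.3 * (n_obs + 0.5) ** -0.5 - 2.0 * (n_obs + 0.5) ** -1.5
    return ic, ic025

# Hypothetical counts: a pair reported far more often than chance would predict
ic, ic025 = information_component(n_obs=120, n_drug=5000,
                                  n_reaction=2000, n_total=8_000_000)
print(f"IC = {ic:.2f}, IC025 = {ic025:.2f}")  # IC025 > 0 flags a potential signal
```

A pair is flagged as a potential pharmacovigilance signal when IC025 exceeds 0, i.e., when even the conservative lower bound still indicates more-frequent-than-expected co-reporting.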
TAKEAWAY:
- Overall, 165 drugs were associated with a diagnosis of SJS-TEN; antiepileptic and anti-infectious drugs were the most common drug classes represented.
- The five most frequently reported drugs were carbamazepine (11.7%), lamotrigine (10.6%), sulfamethoxazole-trimethoprim (9%), acetaminophen (8.4%), and phenytoin (6.6%). The five drugs with the highest IC025 were lamotrigine, carbamazepine, phenobarbital, phenytoin, and nimesulide.
- All antiepileptics, many antibiotic families, dapsone, antiretroviral drugs, some antifungal drugs, and nonsteroidal anti-inflammatory drugs were identified in reports, with penicillins the most frequently reported antibiotic family and sulfonamides having the strongest pharmacovigilance signal.
- Vaccines were not associated with significant signals.
IN PRACTICE:
The study provides an update on “the spectrum of drugs potentially associated with SJS-TEN in the pediatric population,” the authors concluded, and “underlines the importance of reporting to pharmacovigilance the suspicion of this severe side effect of drugs with the most precise and detailed clinical description possible.”
SOURCE:
The study, led by Pauline Bataille, MD, of the Department of Pediatric Dermatology, Hôpital Necker-Enfants Malades, Paris City University, France, was published online in the Journal of the European Academy of Dermatology and Venereology.
LIMITATIONS:
Limitations include the possibility that some cases could have had an infectious or idiopathic cause not related to a drug and the lack of detailed clinical data in the database.
DISCLOSURES:
This study did not receive any funding. The authors declared no conflict of interest.
A version of this article first appeared on Medscape.com.
Aquagenic Wrinkling Among Skin-Related Signs of Cystic Fibrosis
TOPLINE:
METHODOLOGY:
- Patients with CF, caused by a mutation in the CF Transmembrane Conductance Regulator (CFTR) gene, can develop diverse dermatologic manifestations.
- Researchers reviewed the literature and provided their own clinical experience regarding dermatologic manifestations of CF.
- They also reviewed the cutaneous side effects of CFTR modulators and antibiotics used to treat CF.
TAKEAWAY:
- Aquagenic wrinkling of the palm is common in individuals with CF, affecting up to 80% of patients (and 25% of CF gene carriers), and can be an early manifestation of CF. Treatments include topical medications (such as aluminum chloride, corticosteroids, and salicylic acid), botulinum toxin injections, and recently, CFTR-modulating treatments.
- CF nutrient deficiency dermatitis, often in a diaper distribution, usually appears in infancy and, before newborn screening was available, was sometimes the first sign of CF. It usually resolves with an adequate diet, pancreatic enzymes, and/or nutritional supplements. Zinc and essential fatty acid deficiencies can lead to acrodermatitis enteropathica–like symptoms and psoriasiform rashes, respectively.
- CF is also associated with vascular disorders, including cutaneous and, rarely, systemic vasculitis. Treatment includes topical and oral steroids and immune-modulating therapies.
- CFTR modulators, now the most common and highly effective treatment for CF, are associated with several skin reactions, which can be managed with treatments that include topical steroids and oral antihistamines. Frequent antibiotic treatment can also trigger skin reactions.
IN PRACTICE:
“Recognition and familiarity with dermatologic clinical manifestations of CF are important for multidisciplinary care” for patients with CF, the authors wrote, adding that “dermatology providers may play a significant role in the diagnosis and management of CF cutaneous comorbidities.”
SOURCE:
Aaron D. Smith, BS, from the University of Virginia (UVA) School of Medicine, Charlottesville, and coauthors were from the departments of dermatology and pulmonology/critical care medicine at UVA. The study was published online in the Journal of the American Academy of Dermatology.
LIMITATIONS:
The authors did not comment on the limitations of their review.
DISCLOSURES:
No funding was received for the review. The authors had no disclosures.
A version of this article first appeared on Medscape.com.
Upadacitinib Improves Standards of Care in Adults With Moderate to Severe Atopic Dermatitis
Key clinical point: Treatment with 15 mg or 30 mg upadacitinib demonstrated rapid and durable improvements in symptoms and quality of life in adults with moderate to severe atopic dermatitis (AD), based on a treat-to-target approach.
Major finding: Overall, >80%, >78%, and ≥87% of patients achieved the 3-month initial acceptable target, whereas ≥53%, >61%, and >73% of patients achieved the 6-month optimal target goal with 15 mg or 30 mg upadacitinib vs placebo at weeks 2, 16, and 52, respectively. The proportion of patients achieving a higher number of individual target criteria increased over time for both 3- and 6-month target goals.
Study details: This treat-to-target analysis of Measure Up 1 and Measure Up 2 phase 3 studies included 1282 adults with moderate to severe AD who were randomly assigned to receive 15 mg upadacitinib (n = 428), 30 mg upadacitinib (n = 424), or placebo (n = 430).
Disclosures: This study was funded by AbbVie. Five authors declared being employees of AbbVie or holding AbbVie stock, stock options, or patents. Several authors declared having ties with various sources, including AbbVie.
Source: Kwatra SG, de Bruin-Weller M, Silverberg JI, et al. Targeted combined endpoint improvement in patient and disease domains in atopic dermatitis: A treat-to-target analysis of adults with moderate-to-severe atopic dermatitis treated with upadacitinib. Acta Derm Venereol. 2024;104:adv18452 (May 6). doi: 10.2340/actadv.v104.18452
Dupilumab Boosts Clinical and Molecular Responses in Pediatric Atopic Dermatitis
Key clinical point: Dupilumab treatment was well-tolerated and demonstrated improved clinical and molecular responses in pediatric patients with moderate to severe atopic dermatitis (AD).
Major finding: Dupilumab significantly reduced Eczema Area and Severity Index, SCORing Atopic Dermatitis index, and Investigator’s Global Assessment scores at 3 and 6 months (all P < .05), along with significant reduction in AD-associated stratum corneum biomarker levels at 3 months (P < .01). Dupilumab showed good tolerability, with adverse events reported in only four patients.
Study details: This study included 314 pediatric patients with moderate to severe AD from the German TREATkids registry, of whom 87 received dupilumab.
Disclosures: TREATkids is the child and adolescent section of the TREATgermany registry, which is supported by AbbVie Deutschland GmbH & Co. KG, Almirall Hermal GmbH, Galderma S.A., LEO Pharma GmbH, Lilly Deutschland GmbH, Pfizer Inc., and Sanofi. Several authors declared receiving research grants, lecture, or consultancy fees from or having other ties with various sources, including the supporters of TREATgermany.
Source: Stölzl D, Sander N, Siegels D, et al, and the TREATgermany study group. Clinical and molecular response to dupilumab treatment in pediatric atopic dermatitis: Results of the German TREATkids registry. Allergy. 2024 (May 7). doi: 10.1111/all.16147
Lebrikizumab Shows Prompt Clinical Response in Moderate to Severe Atopic Dermatitis
Key clinical point: Lebrikizumab monotherapy rapidly and consistently reduced atopic dermatitis (AD) extent and severity in patients with moderate to severe AD across all Eczema Area and Severity Index (EASI) clinical signs and body regions.
Major finding: At week 16, lebrikizumab vs placebo led to greater improvements in EASI scores and clinical signs (both P < .001) across all body regions in ADvocate1 and ADvocate2, with improvements observed as early as week 2 for all signs except erythema on head/neck (P < .05) and lower extremity erythema, edema/papulation, and lichenification (all P < .001), which improved significantly only by week 4 in ADvocate2.
Study details: This post hoc analysis of ADvocate1 (n = 424) and ADvocate2 (n = 427) included adolescent and adult patients with moderate to severe AD who were randomly assigned to receive 250 mg lebrikizumab biweekly or placebo.
Disclosures: This study was funded by Dermira, Inc., a wholly owned subsidiary of Eli Lilly and Company. Several authors declared having various ties with Dermira, Eli Lilly, and others. Five authors declared being employees or stockholders of Eli Lilly.
Source: Simpson EL, de Bruin-Weller M, Hong HC, et al. Lebrikizumab provides rapid clinical responses across all Eczema Area and Severity Index body regions and clinical signs in adolescents and adults with moderate-to-severe atopic dermatitis. Dermatol Ther (Heidelb). 2024 (May 3). doi: 10.1007/s13555-024-01158-4
Causal Relationship Exists Between Atopic Dermatitis and Brain Cancer
Key clinical point: A causal relationship was observed between genetically related atopic dermatitis (AD) and brain cancer, delineating AD as a potential risk factor for brain cancer.
Major finding: The presence of AD led to an increased risk for brain cancer (odds ratio 1.0005; P = .0096); however, no significant causal association was observed on conducting reverse Mendelian randomization analysis.
Study details: This cohort study analyzed the data on AD-associated single nucleotide polymorphisms of patients with AD (n = 15,208) and control individuals without AD (n = 367,046) from the FinnGen database (10th release) and the summary data of patients with brain cancer (n = 606) and control individuals without cancer (n = 372,016) from the IEU Open GWAS database.
Disclosures: This study did not disclose any funding source. The authors declared no conflicts of interest.
Source: Xin Y, Yuan T, Wang J. The causal relationship between atopic dermatitis and brain cancer: A bidirectional Mendelian randomization study. Skin Res Technol. 2024;30(4):e13715. doi: 10.1111/srt.13715
Preventive Effect of Maternal Probiotic Supplementation in Atopic Dermatitis
Key clinical point: Maternal probiotic supplementation was effective in preventing atopic dermatitis (AD) in children regardless of their filaggrin (FLG) gene mutation status.
Major finding: Heterozygous FLG mutations were observed in 7% of children. The risk for AD after maternal probiotic supplementation was similar between children who expressed a FLG mutation (risk ratio [RR] 0.6; 95% CI 0.1-4.1) and those having a wild-type FLG (RR 0.6; 95% CI 0.4-0.9).
Study details: This exploratory study included data on 228 children from the Probiotic in the Prevention of Allergy among Children in Trondheim (ProPACT) study who did or did not have FLG mutations and whose mothers received probiotic or placebo milk from 36 weeks of gestation until 3 months after delivery, while breastfeeding.
Disclosures: This study was funded by the Liaison Committee between the Central Norway Regional Health Authority and the Norwegian University of Science and Technology, and the Norwegian Research Council. The authors declared no conflicts of interest.
Source: Zakiudin DP, Thyssen JP, Zachariae C, Videm V, Øien T, Simpson MR. Filaggrin mutation status and prevention of atopic dermatitis with maternal probiotic supplementation. Acta Derm Venereol. 2024;104:adv24360 (Apr 24). doi: 10.2340/actadv.v104.24360 Source
Pharmacological Interventions in Atopic Dermatitis Reduce Anxiety and Depression
Key clinical point: Pharmacological interventions aimed at reducing disease severity in patients with moderate to severe atopic dermatitis (AD) are also effective for improving anxiety and depression.
Major finding: Pharmacologic interventions for AD led to significant improvements in anxiety levels (standardized mean difference [SMD] −0.29; 95% CI −0.49 to −0.09) and depression severity (SMD −0.27; 95% CI −0.45 to −0.08) and an overall significant improvement in Hospital Anxiety and Depression scale scores (SMD −0.50; 95% CI −0.064 to −0.35).
Study details: This meta-analysis of seven phase 2b or 3 randomized controlled trials included 4723 patients with AD who were treated with either abrocitinib, baricitinib, dupilumab, tralokinumab, or placebo.
Disclosures: This study did not disclose any funding source. The authors declared no conflicts of interest.
Source: Hartono SP, Chatrath S, Aktas ON, et al. Interventions for anxiety and depression in patients with atopic dermatitis: A systematic review and meta-analysis. Sci Rep. 2024;14:8844 (Apr 17). Source