Alcohol dependence in teens tied to subsequent depression
TOPLINE
Alcohol dependence, but not consumption, at age 18 years increases the risk for depression at age 24 years.
METHODOLOGY
- The study included 3,902 mostly White adolescents, about 58% female, born in England from April 1991 to December 1992, who were part of the Avon Longitudinal Study of Parents and Children (ALSPAC), which examined genetic and environmental determinants of health and development.
- Participants completed the self-report Alcohol Use Disorders Identification Test (AUDIT) between the ages of 16 and 23 years, a period when average alcohol use increases rapidly.
- The primary outcome was the probability of depression at age 24 years, assessed with the Clinical Interview Schedule Revised (CIS-R), a self-administered computerized clinical assessment of common mental disorder symptoms during the past week.
- Researchers assessed frequency and quantity of alcohol consumption as well as alcohol dependence.
- Confounders included sex, housing type, maternal education and depressive symptoms, parents’ alcohol use, conduct problems at age 4 years, being bullied, and smoking status.
TAKEAWAYS
- After adjustments, alcohol dependence at age 18 years was associated with depression at age 24 years (unstandardized probit coefficient, 0.13; 95% confidence interval, 0.02-0.25; P = .019); see the illustrative probit sketch after this list.
- The relationship appeared to persist for alcohol dependence at each age of the growth curve (17-22 years).
- There was no evidence that frequency or quantity of alcohol consumption at age 18 was significantly associated with depression at age 24, suggesting these factors may not increase the risk for later depression unless there are also features of dependency.
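For readers less familiar with probit models, here is a minimal, hedged Python sketch of how an unstandardized probit coefficient like the one reported above is estimated. The data are simulated, and the variable names, prevalence, and effect size are illustrative assumptions; this is not the ALSPAC data or the authors' growth-curve analysis.

```python
# Illustrative sketch only: simulated data, assumed effect size and variable
# names. This is NOT the ALSPAC data or the authors' growth-curve model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 3902  # sample size reported in the study

# Hypothetical binary exposure: alcohol dependence at age 18 (assumed 10% prevalence)
dependence = rng.binomial(1, 0.10, size=n)

# Probit model: depression occurs when a latent normal liability crosses 0.
# The 0.13 mirrors the reported coefficient purely for illustration.
latent = -1.0 + 0.13 * dependence + rng.standard_normal(n)
depressed = (latent > 0).astype(int)

X = sm.add_constant(dependence)           # intercept + exposure
fit = sm.Probit(depressed, X).fit(disp=False)
print(fit.params)      # coefficient on the exposure estimates the probit effect
print(fit.conf_int())  # 95% confidence interval
```

Because the coefficient is on the latent (z-score) scale, a value of 0.13 corresponds to a modest shift in depression liability, not a 13% change in the probability of depression.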
IN PRACTICE
“Our findings suggest that preventing alcohol dependence during adolescence, or treating it early, could reduce the risk of depression,” which could have important public health implications, the researchers write.
STUDY DETAILS
The study was carried out by researchers at the University of Bristol; University College London; the Critical Thinking Unit, Public Health Directorate, NHS; and the University of Nottingham, all in the United Kingdom. It was published online in Lancet Psychiatry.
LIMITATIONS
There was substantial attrition in the ALSPAC cohort from birth to age 24 years. The sample was recruited from one U.K. region, and most participants were White. Measures of alcohol consumption and dependence excluded some features of abuse. Finally, because this was an observational study, the possibility of residual confounding cannot be excluded.
DISCLOSURES
The investigators report no relevant disclosures. The study received support from the UK Medical Research Council and Alcohol Research UK.
A version of this article first appeared on Medscape.com.
After backlash, publisher to retract article that surveyed parents of children with gender dysphoria, says coauthor
The move is “due to concerns about lack of informed consent,” according to tweets by one of the paper’s authors.
The article, “Rapid Onset Gender Dysphoria: Parent Reports on 1655 Possible Cases,” was published in March in the Archives of Sexual Behavior. It has not been cited in the scientific literature, according to Clarivate’s Web of Science, but Altmetric, which tracks the online attention papers receive, ranks the article in the top 1% of all articles of a similar age.
Rapid Onset Gender Dysphoria (ROGD) is, the article stated, a “controversial theory” that “common cultural beliefs, values, and preoccupations cause some adolescents (especially female adolescents) to attribute their social problems, feelings, and mental health issues to gender dysphoria,” and that “youth with ROGD falsely believe that they are transgender,” in part due to social influences.
Michael Bailey, a psychology professor at Northwestern University in Evanston, Ill., and the paper’s corresponding author, tweeted:
Bailey told Retraction Watch that he would “respond when [he] can” to our request for comment, following “new developments on our end.” Neither Springer Nature nor Kenneth Zucker, editor in chief of Archives of Sexual Behavior, has responded to similar requests.
The paper reported the results of a survey of parents who contacted the website ParentsofROGDKids.com, with which the first author is affiliated. According to the abstract, the authors found:
“Pre-existing mental health issues were common, and youths with these issues were more likely than those without them to have socially and medically transitioned. Parents reported that they had often felt pressured by clinicians to affirm their AYA [adolescent and young adult] child’s new gender and support their transition. According to the parents, AYA children’s mental health deteriorated considerably after social transition.”
Soon after publication, the paper attracted criticism that its method of gathering study participants was biased, and that the authors ignored information that didn’t support the theory of ROGD.
Archives of Sexual Behavior is the official publication of the International Academy of Sex Research, which tweeted on April 19:
The episode prompted a May 5 “Open Letter in Support of Dr. Kenneth Zucker and the Need to Promote Robust Scientific Debate” from the Foundation Against Intolerance and Racism that has now been signed by nearly 2,000 people.
On May 10, the following publisher’s note was added to the article:
“readers are alerted that concerns have been raised regarding methodology as described in this article. The publisher is currently investigating this matter and a further response will follow the conclusion of this investigation.”
Six days later, the publisher removed the article’s supplementary information “due to a lack of documented consent by study participants.”
The story may feel familiar to readers who recall what happened to another paper in 2018, the one in which Brown University’s Lisa Littman coined the term ROGD. Following a backlash, Brown took down a press release touting the results, and the paper was eventually republished with corrections.
Bailey has been accused of mistreating transgender research participants, but an investigation by bioethicist Alice Dreger found that of the many accusations, “almost none appear to have been legitimate.”
In a post on UnHerd earlier this month, Bailey responded to concerns that the study lacked approval from an Institutional Review Board (IRB) and that the way participants were recruited biased the results.
IRB approval was not necessary, Bailey wrote, because Suzanna Diaz, the first author who collected the data, was not affiliated with an institution that required it. “Suzanna Diaz” is a pseudonym for “the mother of a gender dysphoric child she believes has ROGD” who wishes to remain anonymous for the sake of her family, Bailey wrote.
The paper included the following statement about its ethical approval:
“The first author and creator of the survey is not affiliated with any university or hospital. Thus, she did not seek approval from an IRB. After seeing a presentation of preliminary survey results by the first author, the second author suggested the data to be analyzed and submitted as an academic article (he was not involved in collecting the data). The second author consulted with his university’s IRB, who declined to certify the study because data were already collected. However, they advised that publishing the results was likely ethical provided data were deidentified. Editor’s note: After I reviewed the manuscript, I concluded that its publication is ethically appropriate, consistent with Springer policy.”
In his UnHerd post, Bailey quoted from the journal’s submission guidelines:
“If a study has not been granted ethics committee approval prior to commencing, retrospective ethics approval usually cannot be obtained and it may not be possible to consider the manuscript for peer review. The decision on whether to proceed to peer review in such cases is at the Editor’s discretion.”
“Regarding the methodological limitations of the study, these were addressed forthrightly and thoroughly in our article,” Bailey wrote.
Adam Marcus, a cofounder of Retraction Watch, is an editor at this news organization.
A version of this article first appeared on RetractionWatch.com.
Cognitive decline risk in adult childhood cancer survivors
Among more than 2,300 adult survivors of childhood cancer and their siblings, who served as controls, new-onset memory impairment emerged more often in survivors decades later.
The increased risk was associated with the cancer treatment that was provided as well as modifiable health behaviors and chronic health conditions.
Even 35 years after being diagnosed, cancer survivors who never received chemotherapies or radiation therapies known to damage the brain reported far greater memory impairment than did their siblings, first author Nicholas Phillips, MD, told this news organization.
What the findings suggest is that “we need to educate oncologists and primary care providers on the risks our survivors face long after completion of therapy,” said Dr. Phillips, of the epidemiology and cancer control department at St. Jude Children’s Research Hospital, Memphis, Tenn.
The study was published online in JAMA Network Open.
Cancer survivors face an elevated risk for severe neurocognitive effects that can emerge 5-10 years following their diagnosis and treatment. However, it’s unclear whether new-onset neurocognitive problems can still develop a decade or more following diagnosis.
Over a long-term follow-up, Dr. Phillips and colleagues explored this question in 2,375 adult survivors of childhood cancer from the Childhood Cancer Survivor Study and 232 of their siblings.
Among the cancer cohort, 1,316 patients were survivors of acute lymphoblastic leukemia (ALL), 488 were survivors of central nervous system (CNS) tumors, and 571 had survived Hodgkin lymphoma.
The researchers determined the prevalence of new-onset neurocognitive impairment between baseline (23 years after diagnosis) and follow-up (35 years after diagnosis). New-onset neurocognitive impairment – present at follow-up but not at baseline – was defined as having a score in the worst 10% of the sibling cohort.
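To make that threshold definition concrete, the following hedged Python sketch applies a worst-decile cutoff derived from a simulated sibling distribution. The score scale and its direction (higher = better memory) are assumptions, not the actual Childhood Cancer Survivor Study neurocognitive measures; only the sample sizes are taken from the article.

```python
# Illustrative sketch only: simulated scores on an assumed scale where
# higher = better memory. These are NOT the actual CCSS neurocognitive data.
import numpy as np

rng = np.random.default_rng(1)
sibling_scores = rng.normal(50, 10, size=232)      # sibling controls
survivor_baseline = rng.normal(48, 10, size=2375)  # ~23 years after diagnosis
survivor_followup = rng.normal(46, 10, size=2375)  # ~35 years after diagnosis

# "Worst 10% of the sibling cohort" = 10th percentile on this assumed scale
cutoff = np.percentile(sibling_scores, 10)

# New-onset impairment: impaired at follow-up but not at baseline
new_onset = (survivor_followup < cutoff) & (survivor_baseline >= cutoff)
print(f"cutoff = {cutoff:.1f}, new-onset prevalence = {new_onset.mean():.1%}")
```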
A higher proportion of survivors had new-onset memory impairment at follow-up compared with siblings. Specifically, about 8% of siblings had new-onset memory trouble, compared with 14% of ALL survivors treated with chemotherapy only, 26% of ALL survivors treated with cranial radiation, 35% of CNS tumor survivors, and 17% of Hodgkin lymphoma survivors.
New-onset memory impairment was associated with cranial radiation among CNS tumor survivors (relative risk [RR], 1.97) and alkylator chemotherapy at or above 8,000 mg/m² among survivors of ALL who were treated without cranial radiation (RR, 2.80). The authors also found that smoking, low educational attainment, and low physical activity were associated with an elevated risk for new-onset memory impairment.
Dr. Phillips noted that current guidelines emphasize the importance of short-term monitoring of a survivor’s neurocognitive status on the basis of that person’s chemotherapy and radiation exposures.
However, “our study suggests that all survivors, regardless of their therapy, should be screened regularly for new-onset neurocognitive problems. And this screening should be done regularly for decades after diagnosis,” he said in an interview.
Dr. Phillips also noted the importance of communicating lifestyle modifications, such as not smoking and maintaining an active lifestyle.
“We need to start early and use the power of repetition when communicating with our survivors and their families,” Dr. Phillips said. “When our families and survivors hear the word ‘exercise,’ they think of gym memberships, lifting weights, and running on treadmills. But what we really want our survivors to do is stay active.”
What this means is engaging for about 2.5 hours a week in a range of activities, such as ballet, basketball, volleyball, bicycling, or swimming.
“And if our kids want to quit after 3 months, let them know that this is okay. They just need to replace that activity with another activity,” said Dr. Phillips. “We want them to find a fun hobby that they will enjoy that will keep them active.”
The study was supported by the National Cancer Institute. Dr. Phillips has disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM JAMA NETWORK OPEN
Is ChatGPT a friend or foe of medical publishing?
Updated guidelines from the International Committee of Medical Journal Editors (ICMJE) address the use of artificial intelligence (AI)-assisted technologies, such as ChatGPT, in scholarly publishing. These tools should not be listed as authors, and researchers must denote how AI-assisted technologies were used, the committee said.
These new guidelines are the latest effort by medical journals to define policies for using these large language models (LLMs) in scientific publication. While these AI-assisted tools can help with tasks such as writing, analyzing data, catching mistakes, and much more, they are also prone to errors, noted Casey Greene, PhD, a professor of biomedical informatics at the University of Colorado at Denver, Aurora. It is also not entirely clear how information is stored and processed in these kinds of tools, or who has access to that information, he noted.
At the same time, experts argue that these AI tools could have a positive impact on the field by limiting some of the linguistic disparities in scientific publishing as well as alleviating the burden of some monotonous or mechanical tasks that come along with manuscript writing.
What experts can agree on, though, is that the use of AI tools is here to stay. “This is going to become a common tool,” Dr. Greene said. “I don’t think there’s a way out of that at this point.”
A change in medical publishing
OpenAI released ChatGPT in November 2022. In its own words, ChatGPT is “a deep learning model that has been trained on a massive amount of text data to understand and generate humanlike text.” Enter a question or a prompt, and it will respond. For example, when asked how the AI tool can be used in scientific publishing, ChatGPT responded:
“ChatGPT can aid scientific publishing by generating ideas, clarifying concepts, conducting preliminary literature reviews, providing proofreading and editing suggestions, and assisting with outlining and organizing scientific papers. However, it should be used as a supportive tool alongside domain expertise and validation from trusted scientific sources.”
Just a few months after ChatGPT became available, researchers began using this tool in their own work. One individual, Som Biswas, MD, a radiologist at the University of Tennessee Health Science Center in Memphis, reportedly used ChatGPT to author 16 scientific articles in just 4 months, according to the Daily Beast. Five of these articles have been published in four different journals. Dr. Biswas declined to be interviewed for this article.
There were also reports of papers with ChatGPT as one of the listed authors, which sparked backlash. In response, JAMA, Nature, and Science all published editorials in January outlining their policies for using ChatGPT and other large language models in the scientific authoring process. Editors from the journals of the American College of Cardiology and the American College of Rheumatology also updated their policies to reflect the influence of AI authoring tools.
The consensus is that AI has no place on the author byline.
“We think that’s not appropriate, because coauthorship means that you are taking responsibility for the analysis and the generation of data that are included in a manuscript. A machine that is dictated by AI can’t take responsibility,” said Daniel Solomon, MD, MPH, a rheumatologist at Brigham and Women’s Hospital, Boston, and the editor in chief of the ACR journal Arthritis & Rheumatology.
Issues with AI
One of the big concerns around using AI in writing is that it can generate text that seems plausible but is untrue or not supported by data. For example, Dr. Greene and colleague Milton Pividori, PhD, also of the University of Colorado, were writing a journal article about new software they developed that uses a large language model to revise scientific manuscripts.
“We used the same software to revise that article and at one point, it added a line that noted that the large language model had been fine-tuned on a data set of manuscripts from within the same field. This makes a lot of sense, and is absolutely something you could do, but was not something that we did,” Dr. Greene said. “Without a really careful review of the content, it becomes possible to invent things that were not actually done.”
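For illustration, a minimal sketch of the kind of LLM revision step described above might look like the following. This is not Dr. Greene and Dr. Pividori's software; the model name and prompts are assumptions, and the guardrail instruction shows one way to discourage, though not prevent, the fabricated additions Dr. Greene describes.

```python
# Illustrative sketch only: not the authors' revision software. The model
# name and prompts are assumptions; requires `pip install openai` (>= 1.0)
# and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

paragraph = "Our results demonstrates that that the intervention improve outcomes."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed; substitute any chat model you have access to
    messages=[
        {
            "role": "system",
            "content": (
                "Revise the following scientific text for grammar and clarity. "
                "Do not add any claims, methods, or results that are not "
                "already present in the text."
            ),
        },
        {"role": "user", "content": paragraph},
    ],
)

print(response.choices[0].message.content)  # still requires careful human review
```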
In another case, ChatGPT falsely stated that a prominent law professor had been accused of sexual assault, citing a Washington Post article that did not exist.
“We live in a society where we are extremely concerned about fake news,” Dr. Pividori added, “and [these kinds of errors] could certainly exacerbate that in the scientific community, which is very concerning because science informs public policy.”
Another issue is the lack of transparency around how large language models like ChatGPT process and store data used to make queries.
“We have no idea how they are recording all the prompts and things that we input into ChatGPT and their systems,” Dr. Pividori said.
OpenAI recently addressed some privacy concerns by allowing users to turn off their chat history with the AI chatbot, so conversations cannot be used to train or improve the company’s models. But Dr. Greene noted that the terms of service “still remain pretty nebulous.”
Dr. Solomon is also concerned about researchers using these AI tools in authoring without knowing how they work. “The thing we are really concerned about is the fact that [LLMs] are a bit of a black box – people don’t really understand the methodologies,” he said.
A positive tool?
But despite these concerns, many think that these types of AI-assisted tools could have a positive impact on medical publishing, particularly for researchers for whom English is not their first language, noted Catherine Gao, MD, a pulmonary and critical care instructor at Northwestern University, Chicago. She recently led research comparing scientific abstracts written by ChatGPT and real abstracts and discovered that reviewers found it “surprisingly difficult” to differentiate the two.
“The majority of research is published in English,” she said in an email. “Responsible use of LLMs can potentially reduce the burden of writing for busy scientists and improve equity for those who are not native English speakers.”
Dr. Pividori agreed, adding that as a non-native English speaker, he spends much more time working on the structure and grammar of sentences when authoring a manuscript, compared with people who speak English as a first language. He noted that these tools can also be used to automate some of the more monotonous tasks that come along with writing manuscripts and allow researchers to focus on the more creative aspects.
In the future, “I want to focus more on the things that only a human can do and let these tools do all the rest of it,” he said.
New rules
But despite how individual researchers feel about LLMs, they agree that these AI tools are here to stay.
“I think that we should anticipate that they will become part of the medical research establishment over time, when we figure out how to use them appropriately,” Dr. Solomon said.
While the debate over how best to use AI in medical publications will continue, journal editors agree that all authors of a manuscript are solely responsible for the content of articles that used AI-assisted technology.
“Authors should carefully review and edit the result because AI can generate authoritative-sounding output that can be incorrect, incomplete, or biased,” the ICMJE guidelines state. “Authors should be able to assert that there is no plagiarism in their paper, including in text and images produced by the AI.” This includes appropriate attribution of all cited materials.
The committee also recommends that authors state, in both the cover letter and the submitted work, how AI was used in the manuscript writing process. Recently updated guidelines from the World Association of Medical Editors (WAME) recommend that all prompts used to generate new text or analytical work be provided in the submitted work. Dr. Greene also noted that if authors used an AI tool to revise their work, they can include a version of the manuscript untouched by LLMs.
It is similar to a preprint, he said, but rather than publishing a version of a paper prior to peer review, someone is showing a version of a manuscript before it was reviewed and revised by AI. “This type of practice could be a path that lets us benefit from these models,” he said, “without having the drawbacks that many are concerned about.”
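As a purely hypothetical illustration of the kind of prompt documentation the WAME guidance calls for, an author might keep a simple log alongside the manuscript. The field names and file format below are assumptions, not a standard required by WAME, the ICMJE, or any journal.

```python
# Purely hypothetical example: field names and file format are assumptions,
# not a format required by WAME, the ICMJE, or any journal.
import json
from datetime import datetime, timezone

entry = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "tool": "ChatGPT",
    "purpose": "grammar and clarity revision",
    "manuscript_section": "Methods, paragraph 2",
    "prompt": "Revise the following paragraph for grammar and clarity: ...",
}

# Append one JSON record per AI interaction to a log kept with the manuscript
with open("ai_use_log.jsonl", "a") as f:
    f.write(json.dumps(entry) + "\n")
```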
Dr. Solomon has financial relationships with AbbVie, Amgen, Janssen, CorEvitas, and Moderna. Both Dr. Greene and Dr. Pividori are inventors in the U.S. Provisional Patent Application No. 63/486,706 that the University of Colorado has filed for the “Publishing Infrastructure For AI-Assisted Academic Authoring” invention with the U.S. Patent and Trademark Office. Dr. Greene and Dr. Pividori also received a grant from the Alfred P. Sloan Foundation to improve their AI-based manuscript revision tool. Dr. Gao reported no relevant financial relationships.
A version of this article originally appeared on Medscape.com.
. These tools should not be listed as authors, and researchers must denote how AI-assisted technologies were used, the committee said.
These new guidelines are the latest effort for medical journals to define policies for using these large-scale language models (LLMs) in scientific publication. While these AI-assisted tools can help with tasks such as writing, analyzing data, catching mistakes, and much more, they are also prone to errors, noted Casey Greene, PhD, a professor of biomedical informatics at the University of Colorado at Denver, Aurora. It is also not totally clear how information is stored and processed in these kinds of tools, and who has access to that information, he noted.
At the same time, experts argue that these AI tools could have a positive impact on the field by limiting some of the linguistic disparities in scientific publishing as well as alleviating the burden of some monotonous or mechanical tasks that come along with manuscript writing.
What experts can agree on, though, is that the use of AI tools is here to stay. “This is going to become a common tool,” Dr. Greene said. “I don’t think there’s a way out of that at this point.”
A change in medical publishing
OpenAI released ChatGPT in November 2022. In its own words, ChatGPT is “a deep learning model that has been trained on a massive amount of text data to understand and generate humanlike text.” Enter a question or a prompt, and it will respond. For example, when asked how the AI tool can be used in scientific publishing, ChatGPT responded:
“ChatGPT can aid scientific publishing by generating ideas, clarifying concepts, conducting preliminary literature reviews, providing proofreading and editing suggestions, and assisting with outlining and organizing scientific papers. However, it should be used as a supportive tool alongside domain expertise and validation from trusted scientific sources.”
Just a few months after ChatGPT became available, researchers began using this tool in their own work. One individual, Som Biswas, MD, a radiologist at the University of Tennessee Health Science Center in Memphis, reportedly used ChatGPT to author 16 scientific articles in just 4 months, according to the Daily Beast. Five of these articles have been published in four different journals. Dr. Biswas declined to be interviewed for this article.
There were also reports of papers with ChatGPT as one of the listed authors, which sparked backlash. In response, JAMA, Nature, and Science all published editorials in January outlining their policies for using ChatGPT and other large language models in the scientific authoring process. Editors from the journals of the American College of Cardiology and the American College of Rheumatology also updated their policies to reflect the influence of AI authoring tools.
The consensus is that AI has no place on the author byline.
“We think that’s not appropriate, because coauthorship means that you are taking responsibility for the analysis and the generation of data that are included in a manuscript. A machine that is dictated by AI can’t take responsibility,” said Daniel Solomon, MD, MPH, a rheumatologist at Brigham and Women’s Hospital, Boston, and the editor in chief of the ACR journal Arthritis & Rheumatology.
Issues with AI
One of the big concerns around using AI in writing is that it can generate text that seems plausible but is untrue or not supported by data. For example, Dr. Greene and colleague Milton Pividori, PhD, also of the University of Colorado, were writing a journal article about new software they developed that uses a large language model to revise scientific manuscripts.
“We used the same software to revise that article and at one point, it added a line that noted that the large language model had been fine-tuned on a data set of manuscripts from within the same field. This makes a lot of sense, and is absolutely something you could do, but was not something that we did,” Dr. Greene said. “Without a really careful review of the content, it becomes possible to invent things that were not actually done.”
In another case, ChatGPT falsely stated that a prominent law professor had been accused of sexual assault, citing a Washington Post article that did not exist.
“We live in a society where we are extremely concerned about fake news,” Dr. Pividori added, “and [these kinds of errors] could certainly exacerbate that in the scientific community, which is very concerning because science informs public policy.”
Another issue is the lack of transparency around how large language models like ChatGPT process and store data used to make queries.
“We have no idea how they are recording all the prompts and things that we input into ChatGPT and their systems,” Dr. Pividori said.
OpenAI recently addressed some privacy concerns by allowing users to turn off their chat history with the AI chatbot, so conversations cannot be used to train or improve the company’s models. But Dr. Greene noted that the terms of service “still remain pretty nebulous.”
Dr. Solomon is also concerned with researchers using these AI tools in authoring without knowing how they work. “The thing we are really concerned about is that fact that [LLMs] are a bit of a black box – people don’t really understand the methodologies,” he said.
A positive tool?
But despite these concerns, many think that these types of AI-assisted tools could have a positive impact on medical publishing, particularly for researchers for whom English is not their first language, noted Catherine Gao, MD, a pulmonary and critical care instructor at Northwestern University, Chicago. She recently led research comparing scientific abstracts written by ChatGPT and real abstracts and discovered that reviewers found it “surprisingly difficult” to differentiate the two.
“The majority of research is published in English,” she said in an email. “Responsible use of LLMs can potentially reduce the burden of writing for busy scientists and improve equity for those who are not native English speakers.”
Dr. Pividori agreed, adding that as a non-native English speaker, he spends much more time working on the structure and grammar of sentences when authoring a manuscript, compared with people who speak English as a first language. He noted that these tools can also be used to automate some of the more monotonous tasks that come along with writing manuscripts and allow researchers to focus on the more creative aspects.
In the future, “I want to focus more on the things that only a human can do and let these tools do all the rest of it,” he said.
New rules
But despite how individual researchers feel about LLMs, they agree that these AI tools are here to stay.
“I think that we should anticipate that they will become part of the medical research establishment over time, when we figure out how to use them appropriately,” Dr. Solomon said.
While the debate of how to best use AI in medical publications will continue, journal editors agree that all authors of a manuscript are solely responsible for content in articles that used AI-assisted technology.
“Authors should carefully review and edit the result because AI can generate authoritative-sounding output that can be incorrect, incomplete, or biased,” the ICMJE guidelines state. “Authors should be able to assert that there is no plagiarism in their paper, including in text and images produced by the AI.” This includes appropriate attribution of all cited materials.
The committee also recommends that authors write in both the cover letter and submitted work how AI was used in the manuscript writing process. Recently updated guidelines from the World Association of Medical Editors recommend that all prompts used to generate new text or analytical work should be provided in submitted work. Dr. Greene also noted that if authors used an AI tool to revise their work, they can include a version of the manuscript untouched by LLMs.
It is similar to a preprint, he said, but rather than publishing a version of a paper prior to peer review, someone is showing a version of a manuscript before it was reviewed and revised by AI. “This type of practice could be a path that lets us benefit from these models,” he said, “without having the drawbacks that many are concerned about.”
Dr. Solomon has financial relationships with AbbVie, Amgen, Janssen, CorEvitas, and Moderna. Both Dr. Greene and Dr. Pividori are inventors in the U.S. Provisional Patent Application No. 63/486,706 that the University of Colorado has filed for the “Publishing Infrastructure For AI-Assisted Academic Authoring” invention with the U.S. Patent and Trademark Office. Dr. Greene and Dr. Pividori also received a grant from the Alfred P. Sloan Foundation to improve their AI-based manuscript revision tool. Dr. Gao reported no relevant financial relationships.
A version of this article originally appeared on Medscape.com.
Survival similar with hearts donated after circulatory or brain death
Survival was similar whether transplant recipients received hearts donated after circulatory death (DCD) or hearts donated after brain death (DBD), in the first randomized trial comparing the two approaches.
“This randomized trial showing recipient survival with DCD to be similar to DBD should lead to DCD becoming the standard of care alongside DBD,” lead author Jacob Schroder, MD, surgical director, heart transplantation program, Duke University Medical Center, Durham, N.C., said in an interview.
“This should enable many more heart transplants to take place and for us to be able to cast the net further and wider for donors,” he said.
The trial was published online in the New England Journal of Medicine.
Dr. Schroder estimated that only around one-fifth of the 120 U.S. heart transplant centers currently carry out DCD transplants, but he is hopeful that the publication of this study will encourage more transplant centers to do these DCD procedures.
“The problem is there are many low-volume heart transplant centers, which may not be keen to do DCD transplants as they are a bit more complicated and expensive than DBD heart transplants,” he said. “But we need to look at the big picture of how many lives can be saved by increasing the number of heart transplant procedures and the money saved by getting more patients off the waiting list.”
The authors explain that heart transplantation has traditionally been limited to the use of hearts obtained from donors after brain death, which allows in situ assessment of cardiac function and of the suitability for transplantation of the donor allograft before surgical procurement.
But because the need for heart transplants far exceeds the availability of suitable donors, the use of DCD hearts has been investigated and this approach is now being pursued in many countries. In the DCD approach, the heart will have stopped beating in the donor, and perfusion techniques are used to restart the organ.
There are two different approaches to restarting the heart in DCD. The first approach involves the heart being removed from the donor and reanimated, preserved, assessed, and transported with the use of a portable extracorporeal perfusion and preservation system (Organ Care System, TransMedics). The second involves restarting the heart in the donor’s body for evaluation before removal and transportation under the traditional cold storage method used for donations after brain death.
The current trial was designed to compare clinical outcomes in patients who had received a heart from a circulatory death donor using the portable extracorporeal perfusion method for DCD transplantation, with outcomes from the traditional method of heart transplantation using organs donated after brain death.
For the randomized, noninferiority trial, adult candidates for heart transplantation were assigned to receive a heart after the circulatory death of the donor or a heart from a donor after brain death if that heart was available first (circulatory-death group) or to receive only a heart that had been preserved with the use of traditional cold storage after the brain death of the donor (brain-death group).
The primary end point was the risk-adjusted survival at 6 months in the as-treated circulatory-death group, as compared with the brain-death group. The primary safety end point was serious adverse events associated with the heart graft at 30 days after transplantation.
A total of 180 patients underwent transplantation, 90 of whom received a heart donated after circulatory death and 90 who received a heart donated after brain death. A total of 166 transplant recipients were included in the as-treated primary analysis (80 who received a heart from a circulatory-death donor and 86 who received a heart from a brain-death donor).
The risk-adjusted 6-month survival in the as-treated population was 94% among recipients of a heart from a circulatory-death donor, as compared with 90% among recipients of a heart from a brain-death donor (P < .001 for noninferiority).
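For readers unfamiliar with noninferiority testing, here is a rough, illustrative Python sketch of how such a comparison works, using the group sizes and survival percentages reported above. The 20-percentage-point margin and the simple normal-approximation method are assumptions for illustration only; the article does not state the trial's prespecified margin, and the actual analysis was risk adjusted.

from math import sqrt

# As-treated groups reported above (sizes and 6-month survival).
n_dcd, p_dcd = 80, 0.94   # circulatory-death (DCD) recipients
n_dbd, p_dbd = 86, 0.90   # brain-death (DBD) recipients
margin = 0.20             # assumed noninferiority margin (illustrative)

diff = p_dcd - p_dbd      # positive values favor DCD
se = sqrt(p_dcd * (1 - p_dcd) / n_dcd + p_dbd * (1 - p_dbd) / n_dbd)
lower_bound = diff - 1.96 * se  # lower limit of a two-sided 95% CI

# Noninferiority is declared when the lower bound stays above -margin,
# that is, when DCD survival is at worst 'margin' points below DBD survival.
print(f"difference = {diff:+.3f}, lower 95% bound = {lower_bound:+.3f}")
print("noninferior" if lower_bound > -margin else "inconclusive")

With these numbers the lower bound is roughly -4 percentage points, comfortably above an assumed -20-point margin, which is why a 94% versus 90% survival difference can support a noninferiority claim.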
There were no substantial between-group differences in the mean per-patient number of serious adverse events associated with the heart graft at 30 days after transplantation.
Of 101 hearts from circulatory-death donors that were preserved with the use of the perfusion system, 90 were successfully transplanted according to the criteria for lactate trend and overall contractility of the donor heart, resulting in an overall utilization rate of 89%.
More patients who received a heart from a circulatory-death donor had moderate or severe primary graft dysfunction (22%) than those who received a heart from a brain-death donor (10%). However, graft failure that resulted in retransplantation occurred in two (2.3%) patients who received a heart from a brain-death donor versus zero patients who received a heart from a circulatory-death donor.
The researchers note that the higher incidence of primary graft dysfunction in the circulatory-death group is expected, given the period of warm ischemia that occurs in this approach. But they point out that this did not affect patient or graft survival at 30 days or 1 year.
“Primary graft dysfunction is when the heart doesn’t fully work immediately after transplant and some mechanical support is needed,” Dr. Schroder commented to this news organization. “This occurred more often in the DCD group, but this mechanical support is only temporary, and generally only needed for a day or two.
“It looks like it might take the heart a little longer to start fully functioning after DCD, but our results show this doesn’t seem to affect recipient survival.”
He added: “We’ve started to become more comfortable with DCD. Sometimes it may take a little longer to get the heart working properly on its own, but the rate of mechanical support is now much lower than when we first started doing these procedures. And cardiac MRI on recipients before discharge has shown that the DCD hearts are no more damaged than those from DBD donors.”
The authors also report six donor hearts in the DCD group with protocol deviations (functional warm ischemic time greater than 30 minutes or continuously rising lactate levels); none of these hearts showed primary graft dysfunction.
On this observation, Dr. Schroder said: “I think we need to do more work on understanding the ischemic time limits. The current 30-minute time limit was estimated in animal studies. We need to look more closely at data from actual DCD transplants. While 30 minutes may be too long for a heart from an older donor, the heart from a younger donor may be fine for a longer period of ischemic time as it will be healthier.”
“Exciting” results
In an editorial, Nancy K. Sweitzer, MD, PhD, vice chair of clinical research, department of medicine, and director of clinical research, division of cardiology, Washington University in St. Louis, describes the results of the current study as “exciting,” adding that, “They clearly show the feasibility and safety of transplantation of hearts from circulatory-death donors.”
However, Dr. Sweitzer points out that the sickest patients in the study – those who were United Network for Organ Sharing (UNOS) status 1 and 2 – were more likely to receive a DBD heart and the more stable patients (UNOS 3-6) were more likely to receive a DCD heart.
“This imbalance undoubtedly contributed to the success of the trial in meeting its noninferiority end point. Whether transplantation of hearts from circulatory-death donors is truly safe in our sickest patients with heart failure is not clear,” she says.
However, she concludes, “Although caution and continuous evaluation of data are warranted, the increased use of hearts from circulatory-death donors appears to be safe in the hands of experienced transplantation teams and will launch an exciting phase of learning and improvement.”
“A safely expanded pool of heart donors has the potential to increase fairness and equity in heart transplantation, allowing more persons with heart failure to have access to this lifesaving therapy,” she adds. “Organ donors and transplantation teams will save increasing numbers of lives with this most precious gift.”
The current study was supported by TransMedics. Dr. Schroder reports no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM THE NEW ENGLAND JOURNAL OF MEDICINE
Don’t screen, just listen
A recent study published in the journal Academic Pediatrics suggests that during health maintenance visits clinicians are giving too little attention to their patients’ sleep problems. Using a questionnaire, researchers surveyed patients’ caregivers’ concerns and observations regarding a variety of sleep problems. The investigators then reviewed the clinicians’ documentation of what transpired at the visit and found that while over 90% of the caregivers reported their child had at least one sleep-related problem, only 20% of the clinicians documented the problem. And only 12% documented a management plan regarding the sleep concerns.
I am always a bit skeptical about studies that rely on clinicians’ “documentation” because clinicians are busy people and don’t always remember to record things they’ve discussed. You and I know that the lawyers’ dictum “if it wasn’t documented it didn’t happen” is rubbish. However, I still find the basic finding of this study concerning. If we are failing to ask about or even listen to caregivers’ concerns about something as important as sleep, we are missing the boat ... a very large boat.
How could this be happening? First, sleep may have fallen victim to the bloated list of topics that well-intentioned single-issue preventive health advocates have tacked on to the health maintenance visit. It’s a burden that few of us can manage without cutting corners.
However, it is more troubling to me that so many clinicians have chosen sleep as one of those corners to cut. This oversight suggests to me that too many of us have failed to realize from our own observations that sleep is incredibly important to the health of our patients ... and to ourselves.
I will admit that I am extremely sensitive to the importance of sleep. Some might say my sensitivity borders on an obsession. But, the literature is clear and becoming more voluminous every year: sleep is important to the mental health of our patients and their caregivers, to things like obesity, to symptoms that suggest an attention-deficit/hyperactivity disorder, to school success, and to migraine ... to name just a few.
It may be that most of us realize the importance of sleep but feel our society has allowed itself to become so sleep deprived that there is little chance we can turn the ship around by spending just a few minutes trying to help a family undo their deeply ingrained sleep-unfriendly habits.
I am tempted to join those of you who see sleep deprivation as a “why bother” issue. But, I’m not ready to throw in the towel. Even simply sharing your observations about the importance of sleep in the whole wellness picture may have an effect.
One of the benefits of retiring in the same community in which I practiced for over 40 years is that at least every month or two I encounter a parent who thanks me for sharing my views on the importance of sleep. They may not recall the little tip or two I gave them, but it seems that urging them to put sleep near the top of their lifestyle priority list has made the difference for them.
If I have failed in getting you to join me in my crusade against sleep deprivation, at least take to heart the most basic message of this study: the investigators found that only 20% of clinicians addressed a concern that 90% of the caregivers shared. It happened to be sleep, but it could have been anything.
The authors of the study suggest that we need to be more assiduous in our screening for sleep problems. On the contrary. You and I know we don’t need more screening. We just need to be better listeners.
Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littmann stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at [email protected].
Three ‘synergistic’ problems when taking blood pressure
Insufficient blood pressure measurement during medical consultations, inadequate measurement technique, and a lack of validated automatic sphygmomanometers are three problems that together complicate the diagnosis and control of arterial hypertension in the Americas, according to the Pan American Health Organization (PAHO). Hypertension is a silent disease that affects 180 million people in the region and is the main risk factor for cardiovascular disease.
Jarbas Barbosa, MD, MPH, PhD, director of PAHO, said in an interview: “We don’t have specific data for each of these scenarios, but unfortunately, all three doubtless work together to make the situation worse.
“Often, the staff members at our primary care clinics are not prepared to diagnose and treat hypertension, because there aren’t national protocols to raise awareness and prepare them to provide this care to the correct standard. Also, they are often unqualified to take blood pressure readings properly,” he added.
This concern is reflected in the theme the organization chose for World Hypertension Day, which was observed on May 17: Measure your blood pressure accurately, control it, live longer! “We shouldn’t underestimate the importance of taking blood pressure,” warned Silvana Luciani, chief of PAHO’s noncommunicable diseases, violence, and injury prevention unit. But, the experts stressed, it must be done correctly.
Time no problem
It’s important to raise awareness of the value of blood pressure measurement for the general population. However, as multiple studies have shown, one barrier to detecting and controlling hypertension is that doctors and other health care professionals measure blood pressure less frequently in clinic than expected, or they use inappropriate techniques or obsolete or uncalibrated measurement devices.
“The importance of clinic blood pressure measurement has been recognized for many decades, but adherence to guidelines on proper, standardized blood pressure measurement remains uncommon in clinical practice,” concluded a consensus document signed by 25 experts from 13 institutions in the United States, Australia, Germany, the United Kingdom, Canada, Italy, Belgium, and Greece.
The first problem lies in the low number of measurements. A recent study in Argentina of nearly 3,000 visits to the doctor’s office at nine health care centers showed that doctors took blood pressure readings in only one of every seven encounters. Even cardiologists, the specialists with the best performance, did so only half of the time.
“Several factors can come into play: lack of awareness, medical inertia, or lack of appropriate equipment. But it is not for lack of time. How long does it take to take blood pressure three times within a 1-minute interval, with the patient seated and their back supported, as indicated? Four minutes. That’s not very much,” Judith Zilberman, MD, PhD, said in an interview. Dr. Zilberman leads the department of hypertension and the women’s cardiovascular disease area at Argerich Hospital in Buenos Aires and is the former chair of the Argentinian Society of Hypertension.
Patricio López-Jaramillo, MD, PhD, said in an interview that the greatest obstacle is the lack of awareness among physicians and other health care staff about the importance of taking proper blood pressure measurements. Dr. López-Jaramillo is president and scientific director of the MASIRA Research Institute at the University of Santander in Bucaramanga, Colombia, and first author of the Manual Práctico de Diagnóstico y Manejo de la Hipertensión Arterial (Practice Guidelines for Diagnosing and Managing Hypertension), published by the Latin American Hypertension Society.
“Medical schools are also responsible for this. They go over this topic very superficially during undergraduate and, even worse, postgraduate training. The lack of time to take correct measurements, or the lack of appropriate instruments, is secondary to this lack of awareness among most health care staff members,” added Dr. López-Jaramillo, who is one of the researchers of the PURE epidemiologic study. Since 2002, it has followed a cohort of 225,000 participants from 27 high-, mid-, and low-income countries.
Dr. Zilberman added that it would be good practice for all primary care physicians to take blood pressure readings regardless of the reason for the visit and whether patients have been diagnosed with hypertension or not. “If a woman goes to her gynecologist because she wants to get pregnant, her blood pressure should also be taken! And any other specialist should interview the patient, ascertain her history, what medications she’s on, and then ask if her blood pressure has been taken recently,” she recommended.
Measure well
The second factor to consider is that a correct technique should be used to take blood pressure readings in the doctor’s office or clinic so as not to produce inaccurate results that could lead to underdiagnosis, overdiagnosis, or a poor assessment of the patient’s response to prescribed treatments. An observational study performed in Uruguay in 2017 showed that only 5% of 302 blood pressure measurements followed appropriate procedures.
A new fact sheet from the PAHO lists the following eight requirements for obtaining an accurate reading: don’t have a conversation, support the arm at heart level, put the cuff on a bare arm, use the correct cuff size, support the feet, keep the legs uncrossed, ensure the patient has an empty bladder, and support the back.
Though most guidelines recommend taking three readings, the “pragmatic” approach proposed in the international consensus accepts at least two readings separated by a minimum of 30 seconds, which are then averaged. There is evidence that simplified protocols can be used, at least for population screening.
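As a concrete illustration of this simplified protocol, the short Python sketch below averages two readings taken at least 30 seconds apart. The 140/90 mm Hg follow-up threshold is borrowed from the home-measurement guidance quoted later in this article; applying that cutoff to an averaged office reading is an assumption for illustration, not part of the consensus protocol.

from statistics import mean

def average_readings(readings):
    """Average a list of (systolic, diastolic) readings in mm Hg."""
    return mean(r[0] for r in readings), mean(r[1] for r in readings)

# Two readings standing in for output from a validated automated device,
# taken at least 30 seconds apart per the pragmatic protocol:
readings = [(146, 94), (142, 90)]
sys_avg, dia_avg = average_readings(readings)

# Readings above 140/90 mm Hg would prompt a doctor's visit under the
# home-measurement guidance cited below.
needs_followup = sys_avg > 140 or dia_avg > 90
print(f"average {sys_avg:.0f}/{dia_avg:.0f} mm Hg; follow up: {needs_followup}")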
The authors of the new document also recommend preparing the patient before taking the measurement. The patient should be asked not to smoke, exercise, or consume alcohol or caffeine for at least 30 minutes beforehand. He or she should rest for a period of 3-5 minutes without speaking or being spoken to before the measurement is taken.
Lastly, clinically validated automated measurement devices should be used, as called for by the PAHO HEARTS initiative in the Americas. “The classic aneroid sphygmomanometer used for the auscultatory method, which is still used far too often at office visits in the region, has many weaknesses – not only the device itself but also the way it’s used (human error). This produces a rounded, approximate reading,” stressed Dr. Zilberman.
Automated devices also minimize interactions with the patient by reducing distractions during the preparation and measurement phases and freeing up time for the health care professional. “To [check for a] fever, we use the appropriate thermometer in the appropriate location. We should do the same for blood pressure,” she added.
The STRIDE-BP database, which is affiliated with the European Society of Hypertension, the International Society of Hypertension, and the World Hypertension League, contains an updated list of validated devices for measuring blood pressure.
The signers of the consensus likewise recognized that, beyond taking blood pressure measurements during office visits, the best measurements are those taken at home outside the context of medical care (doctor’s office or clinic) and that the same recommendations are directly applicable. “Few diseases can be detected so easily as with a simple at-home assessment performed by the individual himself or herself. If after three consecutive measurements, readings above 140/90 mm Hg are obtained, the individual should see the doctor to set up a comprehensive treatment program,” said Pablo Rodríguez, MD, secretary of the Argentinian Society of Hypertension. From now through September 14 (Day for Patients With Hypertension), the society is conducting a campaign to take blood pressure measurements at different locations across the country.
Dr. Zilberman and Dr. López-Jaramillo disclosed no relevant financial relationships.
This article was translated from the Medscape Spanish Edition. A version appeared on Medscape.com.
Insufficient blood pressure measurement during medical consultation, use of an inadequate technique for its determination, and lack of validated automatic sphygmomanometers are three problems that convergently complicate the diagnosis and control of arterial hypertension in the Americas, a silent disease that affects 180 million people in the region and is the main risk factor for cardiovascular diseases, said the Pan American Health Organization.
Jarbas Barbosa, MD, MPH, PhD, director of PAHO, said in an interview: “We don’t have specific data for each of these scenarios, but unfortunately, all three doubtless work together to make the situation worse.
“Often, the staff members at our primary care clinics are not prepared to diagnose and treat hypertension, because there aren’t national protocols to raise awareness and prepare them to provide this care to the correct standard. Also, they are often unqualified to take blood pressure readings properly,” he added.
This concern is reflected in the theme the organization chose for World Hypertension Day, which was observed on May 17: Measure your blood pressure accurately, control it, live longer! “We shouldn’t underestimate the importance of taking blood pressure,” warned Silvana Luciani, chief of PAHO’s noncommunicable diseases, violence, and injury prevention unit. But, the experts stressed, it must be done correctly.
Time no problem
It’s important to raise awareness of the value of blood pressure measurement for the general population. However, as multiple studies have shown, one barrier to detecting and controlling hypertension is that doctors and other health care professionals measure blood pressure less frequently in clinic than expected, or they use inappropriate techniques or obsolete or uncalibrated measurement devices.
“The importance of clinic blood pressure measurement has been recognized for many decades, but adherence to guidelines on proper, standardized blood pressure measurement remains uncommon in clinical practice,” concluded a consensus document signed by 25 experts from 13 institutions in the United States, Australia, Germany, the United Kingdom, Canada, Italy, Belgium, and Greece.
The first problem lies in the low quantity of measurements. A recent study in Argentina of nearly 3,000 visits to the doctor’s office at nine health care centers showed that doctors took blood pressure readings in only once in every seven encounters. Even cardiologists, the specialists with the best performance, did so only half of the time.
“Several factors can come into play: lack of awareness, medical inertia, or lack of appropriate equipment. But it is not for lack of time. How long does it take to take blood pressure three times within a 1-minute interval, with the patient seated and their back supported, as indicated? Four minutes. That’s not very much,” said Judith Zilberman, MD, PhD, said in an interview. Dr. Zilberman leads the department of hypertension and the women’s cardiovascular disease area at the Argerich Hospital in Buenos Aires, and is the former chair of the Argentinian Society of Hypertension.
Patricio López-Jaramillo, MD, PhD, said in an interview that the greatest obstacle is the lack of awareness among physicians and other health care staff about the importance of taking proper blood pressure measurements. Dr. López-Jaramillo is president and scientific director of the MASIRA Research Institute at the University of Santander in Bucaramanga, Colombia, and first author of the Manual Práctico de Diagnóstico y Manejo de la Hipertensión Arterial (Practice Guidelines for Diagnosing and Managing Hypertension), published by the Latin American Hypertension Society.
“Medical schools are also responsible for this. They go over this topic very superficially during undergraduate and, even worse, postgraduate training. The lack of time to take correct measurements, or the lack of appropriate instruments, is secondary to this lack of awareness among most health care staff members,” added Dr. López-Jaramillo, who is one of the researchers of the PURE epidemiologic study. Since 2002, it has followed a cohort of 225,000 participants from 27 high-, mid-, and low-income countries.
Dr. Zilberman added that it would be good practice for all primary care physicians to take blood pressure readings regardless of the reason for the visit and whether patients have been diagnosed with hypertension or not. “If a woman goes to her gynecologist because she wants to get pregnant, her blood pressure should also be taken! And any other specialist should interview the patient, ascertain her history, what medications she’s on, and then ask if her blood pressure has been taken recently,” she recommended.
Measure well
The second factor to consider is that a correct technique should be used to take blood pressure readings in the doctor’s office or clinic so as not to produce inaccurate results that could lead to underdiagnosis, overdiagnosis, or a poor assessment of the patient’s response to prescribed treatments. An observational study performed in Uruguay in 2017 showed that only 5% of 302 blood pressure measurements followed appropriate procedures.
A new fact sheet from the PAHO lists the following eight requirements for obtaining an accurate reading: don’t have a conversation, support the arm at heart level, put the cuff on a bare arm, use the correct cuff size, support the feet, keep the legs uncrossed, ensure the patient has an empty bladder, and support the back.
Though most guidelines recommend taking three readings, the “pragmatic” approach proposed in the international consensus accepts at least two readings separated by a minimum of 30 seconds, which are then averaged. There is evidence that simplified protocols can be used, at least for population screening.
The authors of the new document also recommend preparing the patient before taking the measurement. The patient should be asked not to smoke, exercise, or consume alcohol or caffeine for at least 30 minutes beforehand. He or she should rest for a period of 3-5 minutes without speaking or being spoken to before the measurement is taken.
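For clinics that log readings electronically, the consensus arithmetic is simple to encode. The short Python sketch below is illustrative only: the function name, data layout, and validation messages are our own assumptions, not part of the consensus document.

```python
from statistics import mean

# Hypothetical helper illustrating the "pragmatic" consensus protocol described
# above: at least two readings taken at least 30 seconds apart, then averaged.
# Names and structure are illustrative, not from the consensus document.

def average_clinic_reading(readings: list[tuple[float, float]],
                           seconds_between: float) -> tuple[float, float]:
    """Average systolic/diastolic pairs (mm Hg) from a single clinic visit."""
    if len(readings) < 2:
        raise ValueError("Protocol requires at least two readings")
    if seconds_between < 30:
        raise ValueError("Readings must be separated by at least 30 seconds")
    systolic = mean(r[0] for r in readings)
    diastolic = mean(r[1] for r in readings)
    return round(systolic, 1), round(diastolic, 1)

# Example: two readings taken 30 seconds apart
print(average_clinic_reading([(142, 92), (138, 88)], seconds_between=30))
# (140.0, 90.0)
```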
Lastly, clinically validated automated measurement devices should be used, as called for by the PAHO HEARTS initiative in the Americas. “The sphygmomanometer or classic aneroid tensiometer for the auscultatory method, which is still used way too often at doctor’s office visits in the region, has many weaknesses – not only the device itself but also the way it’s used (human error). This produces a rounded, approximate reading,” stressed Dr. Zilberman.
Automated devices also minimize interactions with the patient by reducing distractions during the preparation and measurement phases and freeing up time for the health care professional. “To [check for a] fever, we use the appropriate thermometer in the appropriate location. We should do the same for blood pressure,” she added.
The STRIDE-BP database, which is affiliated with the European Society of Hypertension, the International Society of Hypertension, and the World Hypertension League, contains an updated list of validated devices for measuring blood pressure.
The signers of the consensus likewise recognized that, beyond taking blood pressure measurements during office visits, the best measurements are those taken at home outside the context of medical care (doctor’s office or clinic) and that the same recommendations are directly applicable. “Few diseases can be detected so easily as with a simple at-home assessment performed by the individual himself or herself. If after three consecutive measurements, readings above 140/90 mm Hg are obtained, the individual should see the doctor to set up a comprehensive treatment program,” said Pablo Rodríguez, MD, secretary of the Argentinian Society of Hypertension. From now through September 14 (Day for Patients With Hypertension), the society is conducting a campaign to take blood pressure measurements at different locations across the country.
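Dr. Rodríguez’s home-screening rule of thumb is just as easy to express in code. The sketch below is a hypothetical illustration of that threshold check, not a clinical tool; the function name and data format are assumptions.

```python
# Illustrative only: encodes the rule of thumb quoted above, that three
# consecutive home readings above 140/90 mm Hg warrant a visit to the doctor.

def should_see_doctor(readings: list[tuple[float, float]]) -> bool:
    """Return True if the last three consecutive readings exceed 140/90 mm Hg."""
    if len(readings) < 3:
        return False
    return all(sys > 140 or dia > 90 for sys, dia in readings[-3:])

print(should_see_doctor([(150, 95), (148, 92), (145, 91)]))  # True
print(should_see_doctor([(150, 95), (135, 85), (145, 91)]))  # False
```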
Dr. Zilberman and Dr. López-Jaramillo disclosed no relevant financial relationships.
This article was translated from the Medscape Spanish Edition. A version appeared on Medscape.com.
When could you be sued for AI malpractice? You’re likely using it now
The ways in which artificial intelligence (AI) may transform the future of medicine are making headlines across the globe. But chances are, you’re already using AI in your practice every day – you may just not realize it.
And whether you recognize the presence of AI or not, the technology could be putting you in danger of a lawsuit, legal experts say.
“For physicians, AI has also not yet drastically changed or improved the way care is provided or consumed,” said Michael LeTang, chief nursing informatics officer and vice president of risk management and patient safety at Healthcare Risk Advisors, part of TDC Group. “Consequently, it may seem like AI is not present in their work streams, but in reality, it has been utilized in health care for several years. As AI technologies continue to develop and become more sophisticated, we can expect them to play an increasingly significant role in health care.”
Today, most AI applications in health care use narrow AI, which is designed to complete a single task without human assistance, as opposed to artificial general intelligence (AGI), which pertains to human-level reasoning and problem solving across a broad spectrum. Here are some ways doctors are using AI throughout the day – sometimes being aware of its assistance, and sometimes being unaware:
- Many doctors use electronic health records (EHRs) with integrated AI that include computerized clinical decision support tools designed to reduce the risk of diagnostic error and to integrate decision-making in the medication ordering function.
- Cardiologists, pathologists, and dermatologists use AI in the interpretation of vast amounts of images, tracings, and complex patterns.
- Surgeons are using AI-enhanced surgical robotics for orthopedic surgeries, such as joint replacement and spine surgery.
- A growing number of doctors are using ChatGPT to assist in drafting prior authorization letters for insurers. Experts say more doctors are also experimenting with ChatGPT to support medical decision-making.
- Within oncology, physicians use machine learning techniques in the form of computer-aided detection systems for early breast cancer detection.
- AI algorithms are often used by health systems for workflow, staffing optimization, population management, and care coordination.
- Some systems within EHRs use AI to indicate high-risk patients.
- Physicians are using AI applications for the early recognition of sepsis, including EHR-integrated decision tools, such as Hospital Corporation of America Healthcare’s Sepsis Prediction and Optimization of Therapy and the Sepsis Early Risk Assessment algorithm.
- About 30% of radiologists use AI in their practice to analyze x-rays and CT scans.
- Epic Systems recently announced a partnership with Microsoft to integrate ChatGPT into MyChart, Epic’s patient portal system. Pilot hospitals will utilize ChatGPT to automatically generate responses to patient-generated questions sent via the portal.
The growth of AI in health care has been enormous, and it’s only going to continue, said Ravi B. Parikh, MD, an assistant professor in the department of medical ethics and health policy and medicine at the University of Pennsylvania, Philadelphia.
“What’s really critical is that physicians, clinicians, and nurses using AI are provided with the tools to understand how artificial intelligence works and, most importantly, understand that they are still accountable for making the ultimate decision,” Mr. LeTang said. “The information is not always going to be the right thing to do or the most accurate thing to do. They’re still liable for making a bad decision, even if AI is driving that.”
What are the top AI legal dangers of today?
A pressing legal risk is becoming too reliant on the suggestions that AI-based systems provide, which can lead to poor care decisions, said Kenneth Rashbaum, a New York–based cybersecurity attorney with more than 25 years of experience in medical malpractice defense.
This can occur, for example, when using clinical support systems that leverage AI, machine learning, or statistical pattern recognition. Today, clinical support systems are commonly administered through EHRs and other computerized clinical workflows. In general, such systems match a patient’s characteristics to a computerized clinical knowledge base. An assessment or recommendation is then presented to the physician for a decision.
“If the clinician blindly accepts it without considering whether it’s appropriate for this patient at this time with this presentation, the clinician may bear some responsibility if there is an untoward result,” Mr. Rashbaum said.
“A common claim even in the days before the EMR [electronic medical record] and AI, was that the clinician did not take all available information into account in rendering treatment, including history of past and present condition, as reflected in the records, communication with past and other present treating clinicians, lab and radiology results, discussions with the patient, and physical examination findings,” he said. “So, if the clinician relied upon the support prompt to the exclusion of these other sources of information, that could be a very strong argument for the plaintiff.”
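To make the pattern Mr. Rashbaum describes concrete, the hypothetical Python sketch below shows the basic shape of such a rule-matching step: patient characteristics are compared against a knowledge base, and any matching recommendations are surfaced for the clinician to accept or override. The rules, field names, and thresholds are invented for illustration and do not come from any specific CDS product.

```python
# A minimal, hypothetical sketch of a clinical decision support (CDS)
# rule-matching step. All rules and fields below are invented examples.

KNOWLEDGE_BASE = [
    {"if": {"age_over": 65, "on_med": "warfarin"},
     "recommend": "Review fall risk before renewing anticoagulant"},
    {"if": {"egfr_below": 30, "on_med": "metformin"},
     "recommend": "Flag: metformin generally avoided at this renal function"},
]

def match_rules(patient: dict) -> list[str]:
    """Return recommendations whose conditions match the patient record."""
    hits = []
    for rule in KNOWLEDGE_BASE:
        cond = rule["if"]
        if "age_over" in cond and patient.get("age", 0) <= cond["age_over"]:
            continue  # age condition not met; skip this rule
        if "egfr_below" in cond and patient.get("egfr", 999) >= cond["egfr_below"]:
            continue  # renal-function condition not met
        if "on_med" in cond and cond["on_med"] not in patient.get("meds", []):
            continue  # medication condition not met
        hits.append(rule["recommend"])
    return hits  # the clinician, not the system, makes the final decision

print(match_rules({"age": 72, "egfr": 25, "meds": ["warfarin", "metformin"]}))
```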
Chatbots, such as OpenAI’s ChatGPT, are another form of AI raising legal red flags. ChatGPT, trained on a massive set of text data, can carry out conversations, write code, draft emails, and answer any question posed. The chatbot has gained considerable credibility for accurately diagnosing rare conditions in seconds, and it recently passed the U.S. Medical Licensing Examination.
It’s unclear how many doctors are signing onto the ChatGPT website daily, but physicians are actively using the chatbot, particularly for assistance with prior authorization letters and to support decision-making processes in their practices, said Mr. LeTang.
When physicians ask ChatGPT a question, however, they should be mindful that ChatGPT could “hallucinate,” a term that refers to a generated response that sounds plausible but is factually incorrect or is unrelated to the context, explains Harvey Castro, MD, an emergency physician, ChatGPT health care expert, and author of the 2023 book “ChatGPT and Healthcare: Unlocking the Potential of Patient Empowerment.”
Acting on ChatGPT’s response without vetting the information places doctors at serious risk of a lawsuit, he said.
“Sometimes, the response is half true and half false,” he said. “Say, I go outside my specialty of emergency medicine and ask it about a pediatric surgical procedure. It could give me a response that sounds medically correct, but then I ask a pediatric cardiologist, and he says, ‘We don’t even do this. This doesn’t even exist!’ Physicians really have to make sure they are vetting the information provided.”
In response to ChatGPT’s growing usage by health care professionals, hospitals and practices are quickly implementing guidelines, policies, and restrictions that caution physicians about the accuracy of ChatGPT-generated information, adds Mr. LeTang.
Emerging best practices include avoiding the input of patient health information, personally identifiable information, or any data that could be commercially valuable or considered the intellectual property of a hospital or health system, he said.
“Another crucial guideline is not to rely solely on ChatGPT as a definitive source for clinical decision-making; physicians must exercise their professional judgment,” he said. “If best practices are not adhered to, the associated risks are present today. However, these risks may become more significant as AI technologies continue to evolve and become increasingly integrated into health care.”
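One way to operationalize the first of those best practices is a pre-flight check that blocks a draft prompt containing obvious identifiers. The Python sketch below is a minimal illustration under that assumption; the patterns shown are nowhere near exhaustive, and real de-identification requires vetted tooling and institutional policy.

```python
import re

# Hypothetical pre-flight check reflecting the best practices described above:
# flag obvious patient identifiers before any text is sent to an external
# chatbot. Patterns are illustrative only, not a complete PHI screen.

PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:#]?\s*\d{6,}\b", re.IGNORECASE),
    "dob": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of identifier patterns found in a draft prompt."""
    return [name for name, pat in PHI_PATTERNS.items() if pat.search(prompt)]

draft = "Draft a prior auth letter for MRN: 00123456, DOB 04/12/1957."
flags = check_prompt(draft)
if flags:
    print(f"Blocked: possible identifiers found ({', '.join(flags)})")
```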
The potential for misdiagnosis by AI systems and the risk of unnecessary procedures if physicians do not thoroughly evaluate and validate AI predictions are other dangers.
As an example, Mr. LeTang described a case in which a physician documents in the EHR that a patient has presented to the emergency department with chest pains and other signs of a heart attack, and an AI algorithm predicts that the patient is experiencing an active myocardial infarction. If the physician then sends the patient for stenting or an angioplasty without other concrete evidence or tests to confirm the diagnosis, the doctor could later face a misdiagnosis complaint if the costly procedures were unnecessary.
“That’s one of the risks of using artificial intelligence,” he said. “A large percentage of malpractice claims is failure to diagnose, delayed diagnosis, or inaccurate diagnosis. What falls in the category of failure to diagnose is sending a patient for an unnecessary procedure or having an adverse event or bad outcome because of the failure to diagnose.”
So far, no AI lawsuits have been filed, but they may make an appearance soon, said Sue Boisvert, senior patient safety risk manager at The Doctors Company, a national medical liability insurer.
“There are hundreds of AI programs currently in use in health care,” she said. “At some point, a provider will make a decision that is contrary to what the AI recommended. The AI may be wrong, or the provider may be wrong. Either way, the provider will neglect to document their clinical reasoning, a patient will be harmed, and we will have the first AI claim.”
Upcoming AI legal risks to watch for
Lawsuits that allege biased patient care by physicians on the basis of algorithmic bias may also be forthcoming, analysts warn.
Much has been written about algorithmic bias that compounds and worsens inequities in socioeconomic status, ethnicity, sexual orientation, and gender in health systems. In 2019, a groundbreaking article in Science shed light on commonly used algorithms that are considered racially biased and how health care professionals often use such information to make medical decisions.
No claims involving AI bias have come down the pipeline yet, but it’s an area to watch, said Ms. Boisvert. She noted a website that highlights complaints and accusations of AI bias, including in health care.
“We need to be sure the training of the AI is appropriate, current, and broad enough so that there is no bias in the AI when it’s participating in the decision-making,” said Ms. Boisvert. “Imagine if the AI is diagnosing based on a dataset that is not local. It doesn’t represent the population at that particular hospital, and it’s providing inaccurate information to the physicians who are then making decisions about treatment.”
In pain management, for example, there are known differences in how patients experience pain, Ms. Boisvert said. If AI was being used to develop an algorithm for how a particular patient’s postoperative pain should be managed, and the algorithm did not include the differences, the pain control for a certain patient could be inappropriate. A poor outcome resulting from the treatment could lead to a claim against the physician or hospital that used the biased AI system, she said.
In the future, as AI becomes more integrated and accepted in medicine, there may be a risk of legal complaints against doctors for not using AI, said Saurabh Jha, MD, an associate professor of radiology at the University of Pennsylvania, Philadelphia, and a scholar of AI in radiology.
“Ultimately, we might get to a place where AI starts helping physicians detect more or reduce the miss of certain conditions, and it becomes the standard of care,” Dr. Jha said. “For example, if it became part of the standard of care for pulmonary embolism [PE] detection, and you didn’t use it for PE detection, and there was a miss. That could put you at legal risk. We’re not at that stage yet, but that is one future possibility.”
Dr. Parikh envisions an even cloudier liability landscape as the potential grows for AI to control patient care decisions. In such a scenario, rather than just issuing an alert or prediction to a physician, the AI system could trigger an action.
For instance, an algorithm trained to predict sepsis could, once triggered, initiate a nurse-led rapid response or a change in patient care outside the clinician’s control, said Dr. Parikh, who coauthored a recent article on AI and medical liability in The Milbank Quarterly.
“That’s still very much the minority of how AI is being used, but as evidence is growing that AI-based diagnostic tools perform equivalent or even superior to physicians, these autonomous workflows are being considered,” Dr. Parikh said. “When the ultimate action upon the patient is more determined by the AI than what the clinician does, then I think the liability picture gets murkier, and we should be thinking about how we can respond to that from a liability framework.”
How you can prevent AI-related lawsuits
The first step to preventing an AI-related claim is being aware of when and how you are using AI.
Ensure you’re informed about how the AI was trained, Ms. Boisvert stresses.
“Ask questions!” she said. “Is the AI safe? Are the recommendations accurate? Does the AI perform better than current systems? In what way? What databases were used, and did the programmers consider bias? Do I understand how to use the results?”
Never blindly trust the AI but rather view it as a data point in a medical decision, said Dr. Parikh. Ensure that other sources of medical information are properly accessed and that best practices for your specialty are still being followed.
When using any form of AI, document your usage, adds Mr. Rashbaum. A record that clearly outlines how the physician incorporated the AI is critical if a claim later arises in which the doctor is accused of AI-related malpractice, he said.
“Indicating how the AI tool was used, why it was used, and that it was used in conjunction with available clinical information and the clinician’s best judgment could reduce the risk of being found responsible as a result of AI use in a particular case,” he said.
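What might such a record look like? The sketch below is one hypothetical structure for capturing the elements Mr. Rashbaum lists: which tool was consulted, why, what it suggested, and how its output was weighed against other clinical information. The field names are our own assumptions, not a legal or regulatory standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative structure for documenting AI usage in a clinical decision.
# Field names are assumptions, not a legal or regulatory standard.

@dataclass
class AIUsageNote:
    tool: str                # e.g., "EHR sepsis prediction model v2.1"
    purpose: str             # why the tool was consulted
    output_summary: str      # what the tool suggested
    corroborating_info: str  # labs, imaging, exam findings also considered
    final_decision: str      # the clinician's own judgment
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

note = AIUsageNote(
    tool="EHR sepsis early-warning alert",
    purpose="Alert fired on vitals trend; assessed for possible sepsis",
    output_summary="High-risk score based on heart rate and temperature trend",
    corroborating_info="Lactate 1.1, no hypotension, alternative source found",
    final_decision="Alert overridden; picture consistent with viral illness",
)
print(note)
```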
Use chatbots, such as ChatGPT, the way they were intended, as support tools, rather than definitive diagnostic instruments, adds Dr. Castro.
“Doctors should also be well-trained in interpreting and understanding the suggestions provided by ChatGPT and should use their clinical judgment and experience alongside the AI tool for more accurate decision-making,” he said.
In addition, because no AI insurance product exists on the market, physicians and organizations using AI – particularly for direct health care – should evaluate their current insurance or insurance-like products to determine where a claim involving AI might fall and whether the policy would respond, said Ms. Boisvert. The AI vendor/manufacturer will likely have indemnified themselves in the purchase and sale agreement or contract, she said.
It will also become increasingly important for medical practices, hospitals, and health systems to put in place strong data governance strategies, Mr. LeTang said.
“AI relies on good data,” he said. “A data governance strategy is a key component to making sure we understand where the data is coming from, what it represents, how accurate it is, if it’s reproducible, what controls are in place to ensure the right people have the right access, and that if we’re starting to use it to build algorithms, that it’s deidentified.”
While no malpractice claims associated with the use of AI have yet surfaced, this may change as courts catch up on the backlog of malpractice claims that were delayed because of COVID-19, and even more so as AI becomes more prevalent in health care, Mr. LeTang said.
“Similar to the attention that autonomous driving systems, like Tesla, receive when the system fails and accidents occur, we can be assured that media outlets will widely publicize AI-related medical adverse events,” he said. “It is crucial for health care professionals, AI developers, and regulatory authorities to work together to ensure the responsible use of AI in health care, with patient safety as the top priority. By doing so, they can mitigate the risks associated with AI implementation and minimize the potential for legal disputes arising from AI-related medical errors.”
A version of this article first appeared on Medscape.com.
The ways in which artificial intelligence (AI) may transform the future of medicine is making headlines across the globe. But chances are, you’re already using AI in your practice every day – you may just not realize it.
And whether you recognize the presence of AI or not, the technology could be putting you in danger of a lawsuit, legal experts say.
“For physicians, AI has also not yet drastically changed or improved the way care is provided or consumed,” said Michael LeTang, chief nursing informatics officer and vice president of risk management and patient safety at Healthcare Risk Advisors, part of TDC Group. “Consequently, it may seem like AI is not present in their work streams, but in reality, it has been utilized in health care for several years. As AI technologies continue to develop and become more sophisticated, we can expect them to play an increasingly significant role in health care.”
Today, most AI applications in health care use narrow AI, which is designed to complete a single task without human assistance, as opposed to artificial general intelligence (AGI), which pertains to human-level reasoning and problem solving across a broad spectrum. Here are some ways doctors are using AI throughout the day – sometimes being aware of its assistance, and sometimes being unaware:
- Many doctors use electronic health records (EHRs) with integrated AI that include computerized clinical decision support tools designed to reduce the risk of diagnostic error and to integrate decision-making in the medication ordering function.
- Cardiologists, pathologists, and dermatologists use AI in the interpretation of vast amounts of images, tracings, and complex patterns.
- Surgeons are using AI-enhanced surgical robotics for orthopedic surgeries, such as joint replacement and spine surgery.
- A growing number of doctors are using ChatGPT to assist in drafting prior authorization letters for insurers. Experts say more doctors are also experimenting with ChatGPT to support medical decision-making.
- Within oncology, physicians use machine learning techniques in the form of computer-aided detection systems for early breast cancer detection.
- AI algorithms are often used by health systems for workflow, staffing optimization, population management, and care coordination.
- Some systems within EHRs use AI to indicate high-risk patients.
- Physicians are using AI applications for the early recognition of sepsis, including EHR-integrated decision tools, such as the Hospital Corporation of America Healthcare’s Sepsis Prediction and Optimization Therapy and the Sepsis Early Risk Assessment algorithm.
- About 30% of radiologists use AI in their practice to analyze x-rays and CT scans.
- Epic Systems recently announced a partnership with Microsoft to integrate ChatGPT into MyChart, Epic’s patient portal system. Pilot hospitals will utilize ChatGPT to automatically generate responses to patient-generated questions sent via the portal.
The growth of AI in health care has been enormous, and it’s only going to continue, said Ravi B. Parikh, MD, an assistant professor in the department of medical ethics and health policy and medicine at the University of Pennsylvania, Philadelphia.
“What’s really critical is that physicians, clinicians, and nurses using AI are provided with the tools to understand how artificial intelligence works and, most importantly, understand that they are still accountable for making the ultimate decision,” Mr. LeTang said, “The information is not always going to be the right thing to do or the most accurate thing to do. They’re still liable for making a bad decision, even if AI is driving that.”
What are the top AI legal dangers of today?
A pressing legal risk is becoming too reliant on the suggestions that AI-based systems provide, which can lead to poor care decisions, said Kenneth Rashbaum, a New York–based cybersecurity attorney with more than 25 years of experience in medical malpractice defense.
This can occur, for example, when using clinical support systems that leverage AI, machine learning, or statistical pattern recognition. Today, clinical support systems are commonly administered through EHRs and other computerized clinical workflows. In general, such systems match a patient’s characteristics to a computerized clinical knowledge base. An assessment or recommendation is then presented to the physician for a decision.
“If the clinician blindly accepts it without considering whether it’s appropriate for this patient at this time with this presentation, the clinician may bear some responsibility if there is an untoward result,” Mr. Rashbaum said.
“A common claim even in the days before the EMR [electronic medical record] and AI, was that the clinician did not take all available information into account in rendering treatment, including history of past and present condition, as reflected in the records, communication with past and other present treating clinicians, lab and radiology results, discussions with the patient, and physical examination findings,” he said. “So, if the clinician relied upon the support prompt to the exclusion of these other sources of information, that could be a very strong argument for the plaintiff.”
Chatbots, such OpenAI’s ChatGPT, are another form of AI raising legal red flags. ChatGPT, trained on a massive set of text data, can carry out conversations, write code, draft emails, and answer any question posed. The chatbot has gained considerable credibility for accurately diagnosing rare conditions in seconds, and it recently passed the U.S. Medical Licensing Examination.
It’s unclear how many doctors are signing onto the ChatGPT website daily, but physicians are actively using the chatbot, particularly for assistance with prior authorization letters and to support decision-making processes in their practices, said Mr. LeTang.
When physicians ask ChatGPT a question, however, they should be mindful that ChatGPT could “hallucinate,” a term that refers to a generated response that sounds plausible but is factually incorrect or is unrelated to the context, explains Harvey Castro, MD, an emergency physician, ChatGPT health care expert, and author of the 2023 book “ChatGPT and Healthcare: Unlocking the Potential of Patient Empowerment.”
Acting on ChatGPT’s response without vetting the information places doctors at serious risk of a lawsuit, he said.
“Sometimes, the response is half true and half false,” he said. “Say, I go outside my specialty of emergency medicine and ask it about a pediatric surgical procedure. It could give me a response that sounds medically correct, but then I ask a pediatric cardiologist, and he says, ‘We don’t even do this. This doesn’t even exist!’ Physicians really have to make sure they are vetting the information provided.”
In response to ChatGPT’s growing usage by health care professionals, hospitals and practices are quickly implementing guidelines, policies, and restrictions that caution physicians about the accuracy of ChatGPT-generated information, adds Mr. LeTang.
Emerging best practices include avoiding the input of patient health information, personally identifiable information, or any data that could be commercially valuable or considered the intellectual property of a hospital or health system, he said.
“Another crucial guideline is not to rely solely on ChatGPT as a definitive source for clinical decision-making; physicians must exercise their professional judgment,” he said. “If best practices are not adhered to, the associated risks are present today. However, these risks may become more significant as AI technologies continue to evolve and become increasingly integrated into health care.”
The potential for misdiagnosis by AI systems and the risk of unnecessary procedures if physicians do not thoroughly evaluate and validate AI predictions are other dangers.
As an example, Mr. LeTang described a case in which a physician documents in the EHR that a patient has presented to the emergency department with chest pains and other signs of a heart attack, and an AI algorithm predicts that the patient is experiencing an active myocardial infarction. If the physician then sends the patient for stenting or an angioplasty without other concrete evidence or tests to confirm the diagnosis, the doctor could later face a misdiagnosis complaint if the costly procedures were unnecessary.
“That’s one of the risks of using artificial intelligence,” he said. “A large percentage of malpractice claims is failure to diagnose, delayed diagnosis, or inaccurate diagnosis. What falls in the category of failure to diagnose is sending a patient for an unnecessary procedure or having an adverse event or bad outcome because of the failure to diagnose.”
So far, no AI lawsuits have been filed, but they may make an appearance soon, said Sue Boisvert, senior patient safety risk manager at The Doctors Company, a national medical liability insurer.
“There are hundreds of AI programs currently in use in health care,” she said. “At some point, a provider will make a decision that is contrary to what the AI recommended. The AI may be wrong, or the provider may be wrong. Either way, the provider will neglect to document their clinical reasoning, a patient will be harmed, and we will have the first AI claim.”
Upcoming AI legal risks to watch for
Lawsuits that allege biased patient care by physicians on the basis of algorithmic bias may also be forthcoming, analysts warn.
Much has been written about algorithmic bias that compounds and worsens inequities in socioeconomic status, ethnicity, sexual orientation, and gender in health systems. In 2019, a groundbreaking article in Science shed light on commonly used algorithms that are considered racially biased and how health care professionals often use such information to make medical decisions.
No claims involving AI bias have come down the pipeline yet, but it’s an area to watch, said Ms. Boisvert. She noted a website that highlights complaints and accusations of AI bias, including in health care.
“We need to be sure the training of the AI is appropriate, current, and broad enough so that there is no bias in the AI when it’s participating in the decision-making,” said Ms. Boisvert. “Imagine if the AI is diagnosing based on a dataset that is not local. It doesn’t represent the population at that particular hospital, and it’s providing inaccurate information to the physicians who are then making decisions about treatment.”
In pain management, for example, there are known differences in how patients experience pain, Ms. Boisvert said. If AI was being used to develop an algorithm for how a particular patient’s postoperative pain should be managed, and the algorithm did not include the differences, the pain control for a certain patient could be inappropriate. A poor outcome resulting from the treatment could lead to a claim against the physician or hospital that used the biased AI system, she said.
In the future, as AI becomes more integrated and accepted in medicine, there may be a risk of legal complaints against doctors for not using AI, said Saurabh Jha, MD, an associate professor of radiology at the University of Pennsylvania, Philadelphia, and a scholar of AI in radiology.
“Ultimately, we might get to a place where AI starts helping physicians detect more or reduce the miss of certain conditions, and it becomes the standard of care,” Dr. Jha said. “For example, if it became part of the standard of care for pulmonary embolism [PE] detection, and you didn’t use it for PE detection, and there was a miss. That could put you at legal risk. We’re not at that stage yet, but that is one future possibility.”
Dr. Parikh envisions an even cloudier liability landscape as the potential grows for AI to control patient care decisions. In such a scenario, rather than just issuing an alert or prediction to a physician, the AI system could trigger an action.
For instance, if an algorithm is trained to predict sepsis and, once triggered, the AI could initiate a nurse-led rapid response or a change in patient care outside the clinician’s control, said Dr. Parikh, who coauthored a recent article on AI and medical liability in The Milbank Quarterly.
“That’s still very much the minority of how AI is being used, but as evidence is growing that AI-based diagnostic tools perform equivalent or even superior to physicians, these autonomous workflows are being considered,” Dr. Parikh said. “When the ultimate action upon the patient is more determined by the AI than what the clinician does, then I think the liability picture gets murkier, and we should be thinking about how we can respond to that from a liability framework.”
How you can prevent AI-related lawsuits
The first step to preventing an AI-related claim is being aware of when and how you are using AI.
Ensure you’re informed about how the AI was trained, Ms. Boisvert stresses.
“Ask questions!” she said. “Is the AI safe? Are the recommendations accurate? Does the AI perform better than current systems? In what way? What databases were used, and did the programmers consider bias? Do I understand how to use the results?”
Never blindly trust the AI but rather view it as a data point in a medical decision, said Dr. Parikh. Ensure that other sources of medical information are properly accessed and that best practices for your specialty are still being followed.
When using any form of AI, document your usage, adds Mr. Rashbaum. A record that clearly outlines how the physician incorporated the AI is critical if a claim later arises in which the doctor is accused of AI-related malpractice, he said.
“Indicating how the AI tool was used, why it was used, and that it was used in conjunction with available clinical information and the clinician’s best judgment could reduce the risk of being found responsible as a result of AI use in a particular case,” he said.
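One way to make that documentation habitual is to capture the same fields every time an AI tool is consulted. The record below is a hypothetical sketch along the lines Mr. Rashbaum describes; the field names and example values are our assumptions, not any EHR’s actual schema.

```python
# Hypothetical structure for charting AI use. The fields mirror the
# advice above (which tool, why, and how it was weighed against
# clinical judgment) but are illustrative, not an EHR standard.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AIUseNote:
    tool_name: str           # which AI tool was consulted
    purpose: str             # why it was used for this patient
    ai_output: str           # what the tool recommended
    clinical_reasoning: str  # how the output was weighed with other data
    followed_ai: bool        # whether the final decision matched the AI
    timestamp: str = ""

    def __post_init__(self):
        if not self.timestamp:
            self.timestamp = datetime.now(timezone.utc).isoformat()

note = AIUseNote(
    tool_name="(hypothetical) PE-detection assist",
    purpose="second read of CT pulmonary angiogram",
    ai_output="no embolism flagged",
    clinical_reasoning="concurs with radiologist read and D-dimer result",
    followed_ai=True,
)
print(note)
```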
Use chatbots, such as ChatGPT, the way they were intended, as support tools, rather than definitive diagnostic instruments, adds Dr. Castro.
“Doctors should also be well-trained in interpreting and understanding the suggestions provided by ChatGPT and should use their clinical judgment and experience alongside the AI tool for more accurate decision-making,” he said.
In addition, because no AI insurance product exists on the market, physicians and organizations using AI – particularly for direct health care – should evaluate their current insurance or insurance-like products to determine where a claim involving AI might fall and whether the policy would respond, said Ms. Boisvert. The AI vendor/manufacturer will likely have indemnified themselves in the purchase and sale agreement or contract, she said.
It will also become increasingly important for medical practices, hospitals, and health systems to put in place strong data governance strategies, Mr. LeTang said.
“AI relies on good data,” he said. “A data governance strategy is a key component to making sure we understand where the data is coming from, what it represents, how accurate it is, if it’s reproducible, what controls are in place to ensure the right people have the right access, and that if we’re starting to use it to build algorithms, that it’s deidentified.”
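As a minimal sketch of how Mr. LeTang’s checklist might be made concrete, each dataset feeding an algorithm could carry a governance record answering those questions. The structure and field names below are our illustrative assumptions, not an established standard.

```python
# Minimal governance record per dataset, mirroring the questions above:
# provenance, what the data represents, accuracy, reproducibility,
# access control, and deidentification. Field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class DatasetGovernanceRecord:
    source: str                    # where the data comes from
    represents: str                # the population/events it covers
    accuracy_validated: bool       # has accuracy been checked?
    reproducible_pipeline: bool    # can the extract be regenerated?
    authorized_roles: list = field(default_factory=list)  # who may access it
    deidentified: bool = False     # required before model training

    def ready_for_model_training(self) -> bool:
        return (self.accuracy_validated
                and self.reproducible_pipeline
                and self.deidentified)

record = DatasetGovernanceRecord(
    source="inpatient EHR extract, 2020-2023",
    represents="adult admissions at one academic center",
    accuracy_validated=True,
    reproducible_pipeline=True,
    authorized_roles=["data-engineer", "clinical-informaticist"],
    deidentified=True,
)
print(record.ready_for_model_training())  # True
```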
While no malpractice claims associated with the use of AI have yet surfaced, this may change as courts catch up on the backlog of malpractice claims that were delayed because of COVID-19, and even more so as AI becomes more prevalent in health care, Mr. LeTang said.
“Similar to the attention that autonomous driving systems, like Tesla, receive when the system fails and accidents occur, we can be assured that media outlets will widely publicize AI-related medical adverse events,” he said. “It is crucial for health care professionals, AI developers, and regulatory authorities to work together to ensure the responsible use of AI in health care, with patient safety as the top priority. By doing so, they can mitigate the risks associated with AI implementation and minimize the potential for legal disputes arising from AI-related medical errors.”
A version of this article first appeared on Medscape.com.
The enemy of carcinogenic fumes is my friendly begonia
Sowing the seeds of cancer prevention
Are you looking to add to your quality of life, even though pets are not your speed? Might we suggest something with lower maintenance? Something a little greener?
Indoor plants can purify the air that comes from outside. Researchers at the University of Technology Sydney, in partnership with the plantscaping company Ambius, showed that a “green wall” made up of mixed indoor plants was able to suck up 97% of “the most toxic compounds” from the air in just 8 hours. We’re talking about lung-irritating, headache-inducing, cancer risk–boosting compounds from gasoline fumes, including benzene.
Public health initiatives often strive to reduce cardiovascular and obesity risks, but breathing seems pretty important too. According to the World Health Organization, household air pollution is responsible for about 2.5 million global premature deaths each year. And since 2020 we’ve become accustomed to spending more time inside and at home.
“This new research proves that plants should not just be seen as ‘nice to have,’ but rather a crucial part of every workplace wellness plan,” Ambius General Manager Johan Hodgson said in a statement released by the university.
So don’t spend hundreds of dollars on a fancy air filtration system when a wall of plants can do that for next to nothing. Find what works for you and your space and become a plant parent today! Your lungs will thank you.
But officer, I had to swerve to miss the duodenal ampulla
Tiny video capsule endoscopes have been around for many years, but they have one big weakness: The ingestible cameras’ journey through the GI tract is passively driven by gravity and the natural movement of the body, so they often miss potential problem areas.
Not anymore. That flaw has been addressed by medical technology company AnX Robotica, which has taken endoscopy to the next level by adding that wondrous directional control device of the modern electronic age, a joystick.
The new system “uses an external magnet and hand-held video game style joysticks to move the capsule in three dimensions,” which allows physicians to “remotely drive a miniature video capsule to all regions of the stomach to visualize and photograph potential problem areas,” according to Andrew C. Meltzer, MD, of George Washington University and associates, who conducted a pilot study funded by AnX Robotica.
The video capsule provided a 95% rate of visualization in the stomachs of 40 patients who were examined at a medical office building by an emergency medicine physician who had no previous specialty training in endoscopy. “Capsules were driven by the ER physician and then the study reports were reviewed by an attending gastroenterologist who was physically off site,” the investigators said in a written statement.
The capsule operator did receive some additional training, and development of artificial intelligence to self-drive the capsule is in the works, but for now, we’re talking about a device controlled by a human using a joystick. And we all know that 50-year-olds are not especially known for their joystick skills. For that we need real experts. Yup, we need to put those joystick-controlled capsule endoscopes in the hands of teenage gamers. Who wants to go first?
Maybe AI isn’t ready for the big time after all
“How long before some intrepid stockholder says: ‘Hey, instead of paying doctors, why don’t we just use the free robot instead?’ ” Those words appeared on LOTME but a month ago. After all, the AI is supposed to be smarter and more empathetic than a doctor. And did we mention it’s free? Or at least extremely cheap. Cheaper than, say, a group of recently unionized health care workers.
In early May, the paid employees manning the National Eating Disorders Association emergency hotline voted to unionize, as they felt overwhelmed and underpaid. Apparently, paying six people an extra few thousand a year was too much for NEDA’s leadership, as they decided a few weeks later to fire those workers, fully closing down the hotline. Instead of talking to a real person, people “calling in” for support would be met with Tessa, a wellness chatbot that would hopefully guide them through their crisis. Key word, hopefully.
In perhaps the least surprising twist of the year, NEDA was forced to walk back its decision about a week after its initial announcement. It all started with a viral Instagram post from a woman who called in and received the following advice from Tessa: Lose 1-2 pounds a week, count calories and work for a 500- to 1,000-calorie deficit, weigh herself weekly, and restrict her diet. Unfortunately, all of these suggestions were things that led to the development of the woman’s eating disorder.
Naturally, NEDA responded with good grace, accusing the woman of lying. A NEDA vice president even left some nasty comments on the post, but hastily deleted them a day later when NEDA announced it was shutting down Tessa “until further notice for a complete investigation.” NEDA’s CEO insisted they hadn’t seen that behavior from Tessa before, calling it a “bug” and insisting the bot would only be down temporarily until the triggers causing the bug were fixed.
In the aftermath, several doctors and psychologists chimed in, terming the rush to automate human roles dangerous and risky. After all, much of what makes these hotlines effective is the volunteers speaking from their own experience. An unsupervised bot doesn’t seem to have what it takes to deal with a mental health crisis, but we’re betting that Tessa will be back. As a wise cephalopod once said: Nobody gives a care about the fate of labor as long as they can get their instant gratification.
You can’t spell existential without s-t-e-n-t
This week, we’re including a special “bonus” item that, to be honest, has nothing to do with stents. That’s why our editor is making us call this a “bonus” (and making us use quote marks, too): It doesn’t really have anything to do with stents or health care or those who practice health care. Actually, his exact words were, “You can’t just give the readers someone else’s ****ing list and expect to get paid for it.” Did we mention that he looks like Jack Nicklaus but acts like BoJack Horseman?
Anywaaay, we’re pretty sure that the list in question – “America’s Top 10 Most Googled Existential Questions” – says something about the human condition, just not about stents:
1. Why is the sky blue?
2. What do dreams mean?
3. What is the meaning of life?
4. Why am I so tired?
5. Who am I?
6. What is love?
7. Is a hot dog a sandwich?
8. What came first, the chicken or the egg?
9. What should I do?
10. Do animals have souls?
‘Never worry alone’: Expand your child mental health comfort zone using supports
That mantra echoed through my postgraduate medical training, and I share it with patients to encourage them to reach out for help. But providers are often alone in the exam room with patients about whom they are, legitimately, very worried.
Dr. Rettew’s column last month detailed the systems that are changing (slowly!) to better facilitate the interface between mental health and primary care. Supports are increasingly available at both the clinic level and the state level. Regardless of where your practice is in the process of integration, this moment seems like a great opportunity to review a few favorites.
Who you gonna call?
Child Psychiatry Access Programs, sometimes called Psychiatry Access Lines, are almost everywhere!1 If you haven’t called one yet, click on your state and call! You will have immediate access to mental health resources that are curated and available in your state, child psychiatry expertise, and a way to connect families in need with targeted treatments. A long-term side effect of CPAP utilization may include improved system coordination on behalf of kids.
What about screening?
The AAP has an excellent mental health minute on screening.2 Pediatricians screen thoughtfully for psychosocial and medical concerns. Primary and secondary screenings for mental health are becoming ubiquitous in practices as a first step toward diagnosis and treatment. Primary, or initial, screening can catch concerns in your patient population. Common tools include the Strengths and Difficulties Questionnaire (SDQ, ages 2-17) and the Pediatric Symptom Checklist (PSC-14, ages 4-17). Subscale scores help point care in the right direction.
Once we know there is a mental health problem through screening or interview, secondary mental health screening and rating scales help find a specific diagnosis. Some basics include the PHQ-A for depression (ages 11-17), the GAD-7 for general anxiety (ages 11+), the SCARED for specific anxiety (ages 8-18), and the Vanderbilt (ages 6+) or SNAP-IV (ages 5+) parent/teacher scales for ADHD/ODD/CD/anxiety/depressive symptoms. The CY-BOCS symptom checklist (ages 6-17) is excellent to determine the extent of OCD symptoms. The asQ (ages 10+) and Columbia (C-SSRS, ages 11+) are must-use screeners to help prevent suicide. Screeners and rating scales are found on many CPAP websites, such as New York’s.3 A site full of these can seem overwhelming, but once you get comfortable with a few favorites, expanding your repertoire little by little makes providing care a lot easier!
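As one small example of how a rating-scale total translates into a severity read, the sketch below maps a GAD-7 score onto its published bands (0-4 minimal, 5-9 mild, 10-14 moderate, 15-21 severe). The bands are standard for the GAD-7; everything else here is an illustrative sketch, not clinical guidance.

```python
# Map a GAD-7 total (0-21) to its published severity bands.
# The bands are standard for the GAD-7; next steps in care should
# come from clinical judgment, not this sketch.

def gad7_severity(total: int) -> str:
    if not 0 <= total <= 21:
        raise ValueError("GAD-7 totals range from 0 to 21")
    if total <= 4:
        return "minimal"
    if total <= 9:
        return "mild"
    if total <= 14:
        return "moderate"
    return "severe"

for score in (3, 8, 12, 17):
    print(score, gad7_severity(score))
```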
Treating to target?
When you are fairly certain of the diagnosis, you can feel more confident treating. Diagnoses can be tools: find the best-fitting one, and in a few years, with more information, a different tool might be a better fit.
Some favorite treatment resources include the CPAP guidebook from your state (for example, Washington’s4 and Virginia’s5) and the AACAP parent medication guides.6 They detail evidence-based treatments, including medications, and can help both professionals and families with high health care literacy. The medication tracking form found at the back of each guide is especially useful. Another great book is the DSM-5 Pocket Guide for Child and Adolescent Mental Health.7 Some screeners can be repeated to see whether treatment is working, as the AIMS model suggests “treat to target”8 specific symptoms until they improve.
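To illustrate the treat-to-target idea with serial screeners, here is a minimal, hypothetical tracker. It flags response once scores fall at least 50% from baseline, a common convention for depression rating scales; the example PHQ-A values are invented, and the actual target for any patient should come from the AIMS resources cited above.

```python
# Illustrative treat-to-target tracker: repeat a screener over time
# and flag response once the latest score has fallen at least 50%
# from baseline (a common convention for depression scales; confirm
# targets against the AIMS resources before clinical use).

def response_reached(scores: list, reduction: float = 0.5) -> bool:
    """True once the latest score is <= (1 - reduction) * baseline."""
    if len(scores) < 2 or scores[0] == 0:
        return False
    return scores[-1] <= scores[0] * (1 - reduction)

phq_a_scores = [18, 15, 11, 8]  # hypothetical serial PHQ-A totals
print(response_reached(phq_a_scores))  # True: 8 <= 9
```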
How to provide help with few resources?
There is knowing what your patient needs, like a specific therapy, and then there is the challenge of connecting the patient with help. Getting a family started on a first step of treatment while they are on a waiting list can be transformative. One example is treatment for oppositional defiant disorder (ODD); parents can start with the first step, “special time,”9 even before a therapist is available. Or, if a family is struggling with OCD, they can start an exposure and response prevention (ERP) workbook10 or look at the iocdf.org website before seeing a specialized therapist. We all know how unsatisfactory a wait-list is as a treatment plan; it is so empowering to start the family with first steps.
What about connections for us providers?
Leveraging your own relationship with patients who have mental health challenges can be powerful, and staying connected with others is vital to maintaining your own emotional well-being. Having a therapist, being active in your medical association chapters, gardening, and connecting your practice to local mental health providers and schools can be rejuvenating. Improving the systems around us prevents burnout and keeps us connected.
And finally ...
So, join the movement to help our fields work better together; walk out of that exam room and listen to your worry about your patients and the systems that support them. Reach out for help, toward child psychiatry access lines, the AAP, AACAP, and other collective agents of change. Share what is making your lives and your patients’ lives easier so we can amplify these together. Let’s worry together, and make things better.
Dr. Margaret Spottswood is a child psychiatrist practicing in an integrated care clinic at the Community Health Centers of Burlington, Vt., a Federally Qualified Health Center. She is also the medical director of the Vermont Child Psychiatry Access Program and a clinical assistant professor in the department of psychiatry at the University of Vermont, Burlington.
References
1. National Network of Child Psychiatry Access Programs. Child Psychiatry Access Programs in the United States. https://www.nncpap.org/map. Accessed 2023 Mar 14.
2. American Academy of Pediatrics. Screening Tools: Pediatric Mental Health Minute Series. https://www.aap.org/en/patient-care/mental-health-minute/screening-tools.
3. New York ProjectTEACH. Child Clinical Rating Scales. https://projectteachny.org/child-rating-scales.
4. Hilt R, Barclay R. Seattle Children’s Primary Care Principles for Child Mental Health. https://www.seattlechildrens.org/globalassets/documents/healthcare-professionals/pal/wa/wa-pal-care-guide.pdf.
5. Virginia Mental Health Access Program. VMAP Guidebook. https://vmap.org/guidebook.
6. American Academy of Child and Adolescent Psychiatry. Parents’ Medication Guides. https://www.aacap.org/AACAP/Families_and_Youth/Family_Resources/Parents_Medication_Guides.aspx.
7. Hilt RJ, Nussbaum AM. DSM-5 Pocket Guide to Child and Adolescent Mental Health. Arlington, Va.: American Psychiatric Association Publishing, 2015.
8. Advanced Integration Mental Health Solutions. Measurement-Based Treatment to Target. https://aims.uw.edu/resource-library/measurement-based-treatment-target.
9. Vermont Child Psychiatry Access Program. Caregiver Guide: Special Time With Children. https://www.chcb.org/wp-content/uploads/2023/03/Special-Time-with-Children-for-Caregivers.pdf.
10. Reuter T. Standing Up to OCD Workbook for Kids. New York: Simon and Schuster, 2019.