
ORLANDO — Experts shed light on the applications, benefits, and pitfalls of artificial intelligence (AI) during the Merritt-Putnam Symposium at the annual meeting of the American Epilepsy Society (AES).

In a session titled “Artificial Intelligence Fundamentals and Breakthrough Applications in Epilepsy,” University of Pittsburgh neurologist and assistant professor Wesley Kerr, MD, PhD, provided an overview of AI as well as its applications in neurology. He began by addressing perhaps one of the most controversial topics regarding AI in the medical community: clinicians’ fear of being replaced by technology.

“Artificial intelligence will not replace clinicians, but clinicians assisted by artificial intelligence will replace clinicians without artificial intelligence,” he told the audience.
 

To Optimize AI, Clinicians Must Lay the Proper Foundation

Dr. Kerr’s presentation focused on providing audience members with tools to help them evaluate new technologies, recognize benefits, and identify key costs and limitations associated with AI implementation and integration into clinical practice.

Before delving deeper, one must first understand basic terminology regarding AI. Without this knowledge, clinicians may inadvertently introduce bias or errors, or fail to understand how to best leverage the technology to enhance the quality of their practice while improving patient outcomes.

Machine learning (ML) describes the process of using data to learn a specific task. Deep learning (DL) stacks multiple layers of ML to improve performance on the task. Lastly, generative AI generates content such as text, images, and media.
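
To make the distinction concrete, the sketch below (not from the presentation; it uses scikit-learn on simulated data) contrasts a classic single-model ML approach with a deep model that stacks several layers for the same task.

```python
# Illustrative only: simulated data, not clinical data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Classic machine learning: a single model learns the task directly from data.
ml_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Deep learning: several stacked layers learn the same task.
dl_model = MLPClassifier(hidden_layer_sizes=(64, 32, 16), max_iter=2000,
                         random_state=0).fit(X_train, y_train)

print("ML accuracy:", ml_model.score(X_test, y_test))
print("Deep model accuracy:", dl_model.score(X_test, y_test))
```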

Using AI effectively in clinical applications involves selecting the features most related to the prediction target (for example, disease factors) and grouping features into categories based on shared characteristics, such as their composition within a population. Critically, these steps should be performed on training data only.
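
A minimal sketch of that "training data only" rule, assuming a scikit-learn workflow on simulated data: the feature selector and the grouping step are fit on the training split alone and merely applied to the test split.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=30, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Select the features most related to prediction -- fit on training data only.
selector = SelectKBest(f_classif, k=10).fit(X_train, y_train)

# Group (cluster) cases on those features -- again, fit on training data only.
grouper = KMeans(n_clusters=3, n_init=10, random_state=0).fit(
    selector.transform(X_train))

# The held-out test split is only transformed, never used for fitting,
# so no information leaks from the evaluation data into the model.
test_groups = grouper.predict(selector.transform(X_test))
```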

Fully understanding ML/AI allows clinicians to evaluate it the way they would a diagnostic test, weighing accuracy, sensitivity, and specificity along with positive and negative predictive values.
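
A minimal sketch of that diagnostic-test framing, using made-up counts from a hypothetical confusion matrix rather than real results:

```python
# Hypothetical counts: true/false positives and negatives from a validation set.
tp, fp, fn, tn = 80, 15, 10, 195

accuracy    = (tp + tn) / (tp + fp + fn + tn)
sensitivity = tp / (tp + fn)   # how many true cases the model catches
specificity = tn / (tn + fp)   # how many non-cases it correctly rules out
ppv         = tp / (tp + fp)   # positive predictive value
npv         = tn / (tn + fn)   # negative predictive value

print(f"accuracy={accuracy:.2f}  sensitivity={sensitivity:.2f}  "
      f"specificity={specificity:.2f}  PPV={ppv:.2f}  NPV={npv:.2f}")
```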
 

Data Fidelity and Integrity Hinge on Optimal Data Inputs

In the case of epilepsy, calibration curves can provide practical guidance for predicting impending seizures.
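
As a rough sketch of what a calibration curve checks (simulated data, not a real seizure-prediction model): do the predicted probabilities match how often events actually occur?

```python
# Simulated example: a model's predicted probabilities vs. observed event rates.
from sklearn.calibration import calibration_curve
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=15, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
predicted_risk = model.predict_proba(X_test)[:, 1]

# Each bin compares the mean predicted risk with the fraction of true events.
observed, mean_predicted = calibration_curve(y_test, predicted_risk, n_bins=5)
for pred, obs in zip(mean_predicted, observed):
    print(f"predicted risk ~{pred:.2f} -> observed rate {obs:.2f}")
```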

“ML/AI needs gold-standard labels for evaluation,” Dr. Kerr said. He went on to stress the importance of quality data inputs to optimize the fidelity of AI’s predictive analytics.

“If you input garbage, you’ll get garbage out,” he said. “So a lot of garbage going in means a lot of garbage out.”

Such “garbage” can result in missed or erroneous diagnoses, or even faulty predictions. Even when the data are complete, AI can draw incorrect conclusions based on trends for which it lacks proper context.

Dr. Kerr used epilepsy trends in the Black population to illustrate this problem.

“One potential bias is that AI can figure out a patient is Black without being told, and based on data that Black patients are less likely to get epilepsy surgery,” he said, “AI would say they don’t need it because they’re Black, which isn’t true.”

In other words, ML/AI can use systemic determinants of health, such as race, to learn what Dr. Kerr referred to as an “inappropriate association.”

For that reason, ML/AI users must test for bias.
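
One simple, hypothetical way to run such a bias test is to compare the model's recommendations and miss rates across subgroups; the field names and toy data below are illustrative only.

```python
import pandas as pd

# Toy data: model recommendations vs. gold-standard labels, by group.
results = pd.DataFrame({
    "race":      ["Black", "Black", "Black", "White", "White", "White"],
    "predicted": [0, 0, 1, 1, 1, 0],   # model's surgery recommendation
    "actual":    [1, 0, 1, 1, 1, 1],   # gold-standard label
})

for race, group in results.groupby("race"):
    recommend_rate = group["predicted"].mean()
    positives = group[group["actual"] == 1]
    # False-negative rate: true cases the model failed to flag.
    fnr = (positives["predicted"] == 0).mean()
    print(f"{race}: recommendation rate {recommend_rate:.2f}, "
          f"false-negative rate {fnr:.2f}")
```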

Such data are often retrieved from electronic health records (EHRs), which serve as an important source of ML/AI input. EHRs also represent a major source of missed potential for improving prompt treatment: according to Dr. Kerr, 20% of academic neurologists’ notes omit seizure frequency, and 30% omit the age of onset.

In addition, International Classification of Diseases (ICD) codes create another hurdle, depending on the type of code used. For example, identifying epilepsy from a G40 code or two R56 codes is reliable, whereas distinguishing focal to bilateral from generalized epilepsy proves more challenging.
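
A hypothetical sketch of that coding rule (count a patient as having epilepsy if any G40 code is present, or if R56 appears at least twice); the specific codes shown are illustrative:

```python
def likely_epilepsy(icd_codes: list[str]) -> bool:
    """Flag probable epilepsy from a patient's list of ICD-10 codes."""
    has_g40 = any(code.startswith("G40") for code in icd_codes)
    r56_count = sum(code.startswith("R56") for code in icd_codes)
    return has_g40 or r56_count >= 2

print(likely_epilepsy(["G40.909"]))         # True: a single epilepsy code suffices
print(likely_epilepsy(["R56.9", "R56.9"]))  # True: two unspecified-seizure codes
print(likely_epilepsy(["R56.9"]))           # False: one R56 code alone is not enough
```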

AI Improves Efficiency in Natural Language Generation

Large language models (LLMs) can produce first drafts, saving time on formatting, image selection, and construction. ChatGPT, from OpenAI, is perhaps the most famous tool in this category; another is Google’s Bard. LLMs are trained on “the whole internet,” using publicly accessible text.

In these cases, prompts serve as input data. Output data are predictions of the first and subsequent words.
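
A minimal sketch of that prompt-in, predicted-words-out loop, using a small open model (GPT-2 via the Hugging Face transformers library) rather than any tool discussed in the session:

```python
# Illustrative only; GPT-2 is a small public model, not a clinical tool.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Common side effects of antiseizure medications include"
output = generator(prompt, max_new_tokens=20, num_return_sequences=1)

# The model predicts the next word, then the next, building on the prompt.
print(output[0]["generated_text"])
```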

Many users appreciate the foundation LLMs provide for facilitating and collating research and summarizing ideas. The LLM-generated text serves as a first draft, saving users time on clerical tasks such as formatting, image selection, and structure. Nevertheless, these tools still require human supervision to screen for hallucinations and to add specialized content.

“LLMs are a great starting place to save time but are loaded with errors,” Dr. Kerr said.

Even if the tools could produce error-free content, ethics still come into play when AI-generated content is used without alteration. Any ML/AI output that has not been modified or supervised is considered plagiarism.

Yet, interestingly enough, Dr. Kerr found that patients respond more positively to AI-generated replies than to those written by physicians.

“Patients felt that AI was more sensitive and compassionate because it was longer-winded and humans are short,” he said. He went on to argue that AI might actually prove useful in helping physicians to improve the quality of their patient interactions.

Dr. Kerr left the audience with these key takeaways:

  • ML/AI is just one type of clinical tool with benefits and limitations. The technology offers the advantages of freeing up the clinician’s time to focus on more human-centered tasks, improving clinical decisions in challenging situations, and improving efficiency.
  • However, healthcare systems should understand that ML/AI is not 100% foolproof, as the software’s knowledge is limited to its training exposure, and proper use requires supervision.

FROM AES 2023
