
Artificial intelligence has arrived at medical offices, whether or not clinicians feel ready for it.

AI may make care more accurate, more efficient, and more cost-effective, but it can also cause harm, according to Benjamin Collins, MD, of Vanderbilt University Medical Center, Nashville, Tenn., who spoke on the subject at the annual meeting of the Society of General Internal Medicine.

Understanding the nuances of AI is all the more important because the algorithms are developing so quickly.

“When I submitted this workshop, there was no ChatGPT,” said Dr. Collins, referring to Chat Generative Pre-trained Transformer, a recently released natural language processing model. “A lot has already changed.”
 

Biased data

Biased data are perhaps the biggest pitfall of AI algorithms, Dr. Collins said. If garbage data go in, garbage predictions come out.

If the dataset that trains the algorithm underrepresents a particular gender or ethnic group, for example, the algorithm may not respond accurately to prompts. When an AI tool compounds existing inequalities related to socioeconomic status, ethnicity, or sexual orientation, the algorithm is biased, according to Harvard researchers.

“People often assume that artificial intelligence is free of bias due to the use of scientific processes in its development,” he said. “But whatever flaws exist in data collection and old data can lead to poor representation or underrepresentation in the data used to train the AI tool.”

Racial minorities are underrepresented in clinical studies; as a result, the data fed into an AI tool can skew its results for these patients.

The Framingham Heart Study, for example, which began in 1948, examined heart disease in mainly White participants. Its findings were used to build a sex-specific algorithm for estimating a patient’s 10-year cardiovascular risk. While the risk score was accurate for White patients, it was less accurate for Black patients.

A study published in Science in 2019 revealed bias in an algorithm that used health care costs as a proxy for health needs. Because less money was spent on Black patients who had the same level of need as their White counterparts, the output inaccurately showed that Black patients were healthier and thus did not require extra care.
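
A minimal, hypothetical sketch may help make that mechanism concrete: simulate two groups with identical health needs but unequal spending, then let an algorithm use cost as its risk score. The groups, numbers, and model below are invented for illustration only; this is not the study’s actual data or code.

```python
# Illustrative sketch only: why using health care cost as a proxy for health
# need can understate need in a group that incurs lower spending at the same
# level of need. All numbers here are invented for demonstration.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# True need: number of chronic conditions, identically distributed in both groups.
group = rng.integers(0, 2, n)          # 0 = group A, 1 = group B
need = rng.poisson(2.0, n)

# Observed spending: group B accrues ~30% less cost at the same level of need
# (for example, because of unequal access to care), plus noise.
cost = need * np.where(group == 1, 700.0, 1000.0) + rng.normal(0.0, 300.0, n)

# The "algorithm" treats cost as its risk score and flags the top 10% of
# patients for an extra-care program.
flagged = cost >= np.quantile(cost, 0.90)

# Despite identical need, the lower-spending group is flagged far less often,
# and its members must be sicker to make the cut.
for g, label in [(0, "group A"), (1, "group B")]:
    print(f"{label}: flagged {flagged[group == g].mean():.1%}, "
          f"mean conditions among flagged = {need[flagged & (group == g)].mean():.2f}")
```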

Developers can also be a source of bias, inasmuch as AI often reflects preexisting human biases, Dr. Collins said.

“Algorithmic bias presents a clear risk of harm that clinicians must weigh against the benefits of using AI,” Dr. Collins said. “That risk of harm is often disproportionately distributed to marginalized populations.”

As clinicians use AI algorithms to diagnose and detect disease, predict outcomes, and guide treatment, trouble comes when those algorithms perform well for some patients and poorly for others. This gap can exacerbate existing disparities in health care outcomes.

Dr. Collins advised clinicians to push to find out what data were used to train an AI algorithm, how bias might have influenced the model, and whether the developers risk-adjusted for that bias. If the training data are not available, clinicians should press their employers and the AI developers for more information about the system.

Clinicians may face the so-called black box phenomenon, which occurs when developers cannot or will not explain what data went into an AI model, Dr. Collins said.

According to Stanford (Calif.) University, AI must be trained on large datasets of images that have been annotated by human experts. Those datasets can cost millions of dollars to create, meaning corporations often fund them and do not always share the data publicly.

Some groups, such as Stanford’s Center for Artificial Intelligence in Medicine and Imaging, are working to acquire annotated datasets so researchers who train AI models can know where the data came from.

Paul Haidet, MD, MPH, an internist at Penn State College of Medicine, Hershey, sees the technology as a tool that requires careful handling.

“It takes a while to learn how to use a stethoscope, and AI is like that,” Dr. Haidet said. “The thing about AI, though, is that it can be just dropped into a system and no one knows how it works.”

Dr. Haidet said he likes knowing how the sausage is made, something AI developers are often reluctant to disclose.

“If you’re just putting blind faith in a tool, that’s scary,” Dr. Haidet said.

Transparency and ‘explainability’

The ability to explain what goes into tools is essential to maintaining trust in the health care system, Dr. Collins said.

“Part of knowing how much trust to place in the system is the transparency of those systems and the ability to audit how well the algorithm is performing,” Dr. Collins said. “The system should also regularly report to users the level of certainty with which it is providing an output rather than providing a simple binary output.”
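
As a purely hypothetical sketch of that recommendation (the class, field names, and threshold below are invented and not taken from any specific product), an AI tool could return its estimated probability alongside any thresholded label so the uncertainty stays visible:

```python
# Hypothetical sketch: surface a probability and the decision threshold,
# not just a bare yes/no label, so clinicians can see how certain the model is.
from dataclasses import dataclass

@dataclass
class RiskOutput:
    probability: float   # model's estimated probability of the condition
    flagged: bool        # thresholded label derived from the probability
    threshold: float     # decision threshold used to produce the label

def report(probability: float, threshold: float = 0.5) -> RiskOutput:
    """Wrap a raw model probability in an output that keeps the uncertainty visible."""
    return RiskOutput(probability=probability,
                      flagged=probability >= threshold,
                      threshold=threshold)

print(report(0.62))  # RiskOutput(probability=0.62, flagged=True, threshold=0.5)
```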

Dr. Collins recommends that providers also develop an understanding of the limits of AI regulation, which might include learning how a system was approved and how it is monitored.

“The FDA has oversight over some applications of AI in health care for software as a medical device, but there’s currently no dedicated process to evaluate the systems for the presence of bias,” Dr. Collins said. “The gaps in regulation leave the door open for the use of AI in clinical care that contains significant biases.”

Dr. Haidet likened AI tools to the Global Positioning System: a good GPS will let users see alternate routes, opt out of toll roads or highways, and see why a route has changed. But users need to understand how to read the map so they can tell when something seems amiss.

Dr. Collins and Dr. Haidet report no relevant financial relationships.

A version of this article first appeared on Medscape.com.
