When OpenAI released ChatGPT publicly in November 2022, some doctors decided to try out the free AI tool that learns language and writes human-like text. Some physicians found the chatbot made mistakes and stopped using it, while others were happy with the results and plan to use it more often.
“We’ve played around with it. It was very early on in AI and we noticed it gave us incorrect information with regards to clinical guidance,” said Monalisa Tailor, MD, an internal medicine physician at Norton Healthcare in Louisville, Ky. “We decided not to pursue it further,” she said.
Orthopedic spine surgeon Daniel Choi, MD, who owns a small medical/surgical practice on Long Island, New York, tested the chatbot’s performance with a few administrative tasks, including writing a job listing for an administrator and prior authorization letters.
He was enthusiastic. “A well-polished job posting that would usually take me 2-3 hours to write was done in 5 minutes,” Dr. Choi said. “I was blown away by the writing – it was much better than anything I could write.”
The chatbot can also automate administrative tasks in doctors’ practices, from appointment scheduling and billing to clinical documentation, saving doctors time and money, experts say.
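As a rough illustration only, the sketch below shows how a practice might call the OpenAI Python library (v1.x) to draft a reusable appointment-reminder template. The model name, clinic details, and placeholder scheme are assumptions made for this example, and no patient information is sent in the prompt; real scheduling or billing automation would involve far more than this.

```python
# Illustrative sketch only: drafting a generic appointment-reminder template with the
# OpenAI Python library (v1.x). The model name and placeholders are assumptions for
# this example, and the prompt deliberately contains no patient information.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable


def draft_reminder_template(clinic_name: str, office_phone: str) -> str:
    """Ask the model for a reusable template with placeholders, so protected
    health information never needs to be sent to the service."""
    prompt = (
        "Write a brief, friendly appointment reminder for " + clinic_name + ". "
        "Use the placeholders {PATIENT_FIRST_NAME}, {APPOINTMENT_DATE}, and "
        "{APPOINTMENT_TIME}, and tell patients to call " + office_phone +
        " if they need to reschedule."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0.3,
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(draft_reminder_template("Example Family Medicine", "(555) 555-0100"))
```

Keeping the prompt generic and filling in patient-specific details inside the practice’s own systems is one way to keep identifiable information out of the chatbot entirely.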
Most physicians are proceeding cautiously. About 10% of the more than 500 medical group leaders who responded to a March poll by the Medical Group Management Association said their practices regularly use AI tools.
More than half of the respondents not using AI said they first want more evidence that the technology works as intended.
“None of them work as advertised,” said one respondent.
MGMA practice management consultant Dawn Plested acknowledged that many of the physician practices she’s worked with are still wary. “I have yet to encounter a practice that is using any AI tool, even something as low-risk as appointment scheduling,” she said.
Physician groups may be concerned about the costs and logistics of integrating ChatGPT with their electronic health record (EHR) systems, said Ms. Plested.
Doctors may also be skeptical of AI based on their experience with EHRs, she said.
“They were promoted as a panacea to many problems; they were supposed to automate business practice, reduce staff and clinicians’ work, and improve billing/coding/documentation. Unfortunately, they have become a major source of frustration for doctors,” said Ms. Plested.
Drawing the line at patient care
Patients are worried about their doctors relying on AI for their care, according to a Pew Research Center poll released in February. About 60% of U.S. adults say they would feel uncomfortable if their own health care professional relied on artificial intelligence to do things like diagnose disease and recommend treatments; about 40% say they would feel comfortable with this.
“We have not yet gone into using ChatGPT for clinical purposes and will be very cautious with these types of applications due to concerns about inaccuracies,” Dr. Choi said.
Practice leaders reported in the MGMA poll that the most common uses of AI were nonclinical, such as:
- Patient communications, including call center answering services that help triage calls, sorting and distributing incoming fax messages, and outreach such as appointment reminders and marketing materials.
- Capturing clinical documentation, often with natural language processing or speech recognition platforms that act as virtual scribes (a bare-bones transcription sketch follows this list).
- Improving billing operations and predictive analytics.
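To make the virtual-scribe item above concrete, the sketch below transcribes a dictated note with OpenAI’s hosted Whisper endpoint. It is an assumption-laden illustration rather than any vendor’s actual product: the file name and model are placeholders, and commercial ambient-scribe platforms layer summarization, templating, and EHR integration on top of basic transcription.

```python
# Illustrative sketch only: transcribing a dictated clinical note with OpenAI's hosted
# Whisper endpoint (openai Python library v1.x). File name and model are placeholders;
# commercial virtual-scribe tools add summarization and EHR integration on top of this.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable


def transcribe_dictation(audio_path: str) -> str:
    """Send a recorded dictation to the transcription endpoint and return plain text."""
    with open(audio_path, "rb") as audio_file:
        result = client.audio.transcriptions.create(
            model="whisper-1",  # hosted Whisper model name
            file=audio_file,
        )
    return result.text


if __name__ == "__main__":
    # "visit_dictation.mp3" is a hypothetical local recording made with patient consent.
    print(transcribe_dictation("visit_dictation.mp3"))
```

Note that sending recorded patient encounters to any outside service raises the same privacy questions discussed later in this article.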
Some doctors told The New York Times that ChatGPT helped them communicate with patients in a more compassionate way.
They used chatbots “to find words to break bad news and express concerns about a patient’s suffering, or to just more clearly explain medical recommendations,” the story noted.
Is regulation needed?
Some legal scholars and medical groups say that AI should be regulated to protect patients and doctors from risks such as medical errors.
“It’s very important to evaluate the accuracy, safety, and privacy of large language models (LLMs) before integrating them into the medical system. The same should be true of any new medical tool,” said Mason Marks, MD, JD, a health law professor at the Florida State University College of Law in Tallahassee.
In mid-June, the American Medical Association approved two resolutions calling for greater government oversight of AI. The AMA will develop proposed state and federal regulations and work with the federal government and other organizations to protect patients from false or misleading AI-generated medical advice.
Dr. Marks pointed to existing federal rules that apply to AI. “The Federal Trade Commission already has regulation that can potentially be used to combat unfair or deceptive trade practices associated with chatbots,” he said.
In addition, “the U.S. Food and Drug Administration can also regulate these tools, but it needs to update how it approaches risk when it comes to AI. The FDA has an outdated view of risk as physical harm, for instance, from traditional medical devices. That view of risk needs to be updated and expanded to encompass the unique harms of AI,” Dr. Marks said.
There should also be more transparency about how LLM software is used in medicine, he said. “That could be a norm implemented by the LLM developers and it could also be enforced by federal agencies. For instance, the FDA could require developers to be more transparent regarding training data and methods, and the FTC could require greater transparency regarding how consumer data might be used and opportunities to opt out of certain uses,” said Dr. Marks.
What should doctors do?
Dr. Marks advised doctors to be cautious when using ChatGPT and other LLMs, especially for medical advice. “The same would apply to any new medical tool, but we know that the current generation of LLMs [is] particularly prone to making things up, which could lead to medical errors if relied on in clinical settings,” he said.
There is also potential for breaches of patient confidentiality if doctors input clinical information. ChatGPT and OpenAI-enabled tools may not be compliant with the Health Insurance Portability and Accountability Act, which set national standards to protect individuals’ medical records and individually identifiable health information.
“The best approach is to use chatbots cautiously and with skepticism. Don’t input patient information, confirm the accuracy of information produced, and don’t use them as replacements for professional judgment,” Dr. Marks recommended.
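As a minimal sketch of Dr. Marks’ first rule, and assuming a hypothetical in-house helper rather than any real compliance tool, the code below screens draft text for a few obvious identifiers before it is allowed anywhere near a chatbot. Pattern matching like this catches only a narrow slice of protected health information and is no substitute for proper de-identification or a HIPAA business associate agreement.

```python
# Hypothetical guard, not a real compliance tool: screen text for a few obvious
# identifiers (dates, phone numbers, SSN-like numbers, MRN-style IDs) before it is
# sent to a chatbot. Regexes like these miss most protected health information;
# they only illustrate the "don't input patient information" rule.
import re

IDENTIFIER_PATTERNS = {
    "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE),
}


def find_identifiers(text: str) -> list[str]:
    """Return the names of any identifier patterns found in the text."""
    return [name for name, pattern in IDENTIFIER_PATTERNS.items() if pattern.search(text)]


def safe_to_send(text: str) -> bool:
    """Refuse to send anything that trips one of the crude checks above."""
    hits = find_identifiers(text)
    if hits:
        print("Blocked: possible identifiers found (" + ", ".join(hits) + "). Remove them first.")
        return False
    return True


if __name__ == "__main__":
    draft = "Summarize discharge instructions for the patient seen 04/12/2023, MRN 123456."
    if safe_to_send(draft):
        pass  # only here would the text be handed to a chatbot
```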
Ms. Plested suggested that doctors who want to experiment with AI start with a low-risk tool such as appointment reminders that could save staff time and money. “I never recommend they start with something as high-stakes as coding/billing,” she said.
A version of this article appeared on Medscape.com.