In a 2023 study published in the Annals of Emergency Medicine, European researchers fed the AI system ChatGPT information on 30 ER patients. Details included physician notes on the patients’ symptoms, physical exams, and lab results. ChatGPT made the correct diagnosis in 97% of patients, compared with 87% for human doctors.
AI 1, Physicians 0
JAMA Cardiology reported in 2021 that an AI trained on nearly a million ECGs performed comparably to or exceeded cardiologist clinical diagnoses and the MUSE (GE Healthcare) system’s automated ECG analysis for most diagnostic classes.
AI 2, Physicians 0
Google’s medically focused AI model (Med-PaLM2) scored 85%+ when answering US Medical Licensing Examination–style questions. That’s an “expert” physician level and far beyond the accuracy threshold needed to pass the actual exam.
AI 3, Physicians 0
A new AI tool that uses an online finger-tapping test outperformed primary care physicians when assessing the severity of Parkinson’s disease.
AI 4, Physicians 0
JAMA Ophthalmology reported in 2024 that a chatbot outperformed glaucoma specialists and matched retina specialists in diagnostic and treatment accuracy.
AI 5, Physicians 0
Should we stop? Because we could go on. In the last few years, these AI vs Physician studies have proliferated, and guess who’s winning?
65% of Doctors Are Concerned
Now, the standard answer with anything AI-and-Medicine goes something like this: AI is coming, and it will be a transformative tool for physicians and improve patient care.
But the underlying unanswered question is: How do physicians actually feel about that?
The Medscape 2023 Physician and AI Report surveyed 1043 US physicians about their views on AI. In total, 65% are concerned about AI making diagnosis and treatment decisions, but 56% are enthusiastic about having it as an adjunct.
Cardiologists, anesthesiologists, and radiologists are most enthusiastic about AI, whereas family physicians and pediatricians are the least enthusiastic.
To get a more personal view of how physicians and other healthcare professionals are feeling about this transformative tech, I spoke with a variety of practicing doctors, a psychotherapist, and a third-year Harvard Medical School student.
‘Abysmally Poor Understanding’
Alfredo A. Sadun, MD, PhD, has been a neuro-ophthalmologist for nearly 50 years. A graduate of MIT and vice-chair of ophthalmology at UCLA, he’s long been fascinated by AI’s march into medicine. He’s watched it accomplish things that no ophthalmologist can do, such as identify gender, age, and risk for heart attack and stroke from retinal scans. But he doesn’t see the same level of interest and comprehension among the medical community.
“There’s still an abysmally poor understanding of AI among physicians in general,” he said. “It’s striking because these are intelligent, well-educated people. But we tend to draw conclusions based on what we’re familiar with, and most doctors’ experience with computers involves EHRs [electronic health records] and administrative garbage. It’s the reason they’re burning out.”
Easing the Burden
Anthony Philippakis, MD, PhD, left his cardiology practice in 2015 to become the chief data officer at the Broad Institute of MIT and Harvard. While there, he helped develop an AI-based method for identifying patients at risk for atrial fibrillation. Now, he’s a general partner at Google Ventures with the goal of bridging the gap between data sciences and medicine. His perspective on AI is unique, given that he’s seen the issue from both sides.
“I am not a bitter physician, but to be honest, when I was practicing, way too much of my time was spent staring at screens and not enough laying hands on patients,” he said. “Can you imagine what it would be like to speak to the EHR naturally and say, ‘Please order the following labs for this patient and notify me when the results come in.’ Boy, would that improve healthcare and physician satisfaction. Every physician I know is excited and optimistic about that. Almost everyone I’ve talked to feels like AI could take a lot of the stuff they don’t like doing off their plates.”
Indeed, the dividing line between physician support for AI and physician suspicion or skepticism of AI is just that. In our survey, more than three quarters of physicians said they would consider using AI for office administrative tasks, scheduling, EHRs, researching medical conditions, and even summarizing a patient’s record before a visit. But far fewer are supportive of it delivering diagnoses and treatments. This, despite an estimated 800,000 Americans dying or becoming permanently disabled each year because of diagnostic error.
Could AI Have Diagnosed This?
John D. Nuschke, MD, has been a primary care physician in Allentown, Pennsylvania, for 40 years. He’s a jovial general physician who insists his patients call him Jack. He’s recently started using an AI medical scribe called Freed. With the patient’s permission, it listens in on the visit and generates notes, saving Dr. Nuschke time and helping him focus on the person. He likes that type of assistance, but when it comes to AI replacing him, he’s skeptical.
“I had this patient I diagnosed with prostate cancer,” he explained. “He got treated and was fine for 5 years. Then, he started losing weight and feeling awful — got weak as a kitten. He went back to his urologist and oncologist who thought he had metastatic prostate cancer. He went through PET scans and blood work, but there was no sign his cancer had returned. So the specialists sent him back to me, and the second he walked in, I saw he was floridly hyperthyroid. I could tell across the room just by looking at him. Would AI have been able to make that diagnosis? Does AI do physical exams?”
Dr. Nuschke said he’s also had several instances where patients received their cancer diagnosis from the lab through an automated patient-portal system rather than from him. “That’s an AI of sorts, and I found it distressing,” he said.
Empathy From a Robot
All the doctors I spoke to were hopeful that by freeing them from the burden of administrative work, they would be able to return to the reason they got into this business in the first place — to spend more time with patients in need and support them with grace and compassion.
But suppose AI could do that too?
In a 2023 study conducted at the University of California San Diego and published in JAMA Internal Medicine, three licensed healthcare professionals compared the responses of ChatGPT and physicians to real-world health questions. The panel rated the AI’s answers nearly four times higher in quality and almost 10 times more empathetic than physicians’ replies.
A similar 2024 study in Nature found that Google’s large-language model AI matched or surpassed physician diagnostic accuracy in all six of the medical specialties considered. Plus, it outperformed doctors in 24 of 26 criteria for conversation quality, including politeness, explanation, honesty, and expressing care and commitment.
Nathaniel Chin, MD, is a gerontologist at the University of Wisconsin and advisory board member for the Alzheimer’s Foundation of America. Although he admits that studies like these “sadden me,” he’s also a realist. “There was hesitation among physicians at the beginning of the pandemic to virtual care because we missed the human connection,” he explained, “but we worked our way around that. We need to remember that what makes a chatbot strong is that it’s not human. It doesn’t burn out, it doesn’t get tired, it can look at data very quickly, and it doesn’t have to go home to a family and try to balance work with other aspects of life. A human being is very complex, whereas a chatbot has one single purpose.”
“Even if you don’t have AI in your space now or don’t like the idea of it, that doesn’t matter,” he added. “It’s coming. But it needs to be done right. If AI is implemented by clinicians for clinicians, it has great potential. But if it’s implemented by businesspeople for business reasons, perhaps not.”
‘The Ones Who Use the Tools the Best Will Be the Best’
One branch of medicine that stands to be dramatically affected by AI is mental health. Because bots are natural data-crunchers, they are becoming adept at analyzing the many subtle clues (phrasing in social media posts and text messages, smartwatch biometrics, therapy session videos…) that could indicate depression or other psychological disorders. In fact, AI’s availability via smartphone apps could help democratize and destigmatize the practice.
“There is a day ahead — probably within 5 years — when a patient won’t be able to tell the difference between a real therapist and an AI therapist,” said Ken Mallon, MS, LMFT, a clinical psychotherapist and data scientist in San Jose, California. “That doesn’t worry me, though. It’s hard on therapists’ egos, but new technologies get developed. Things change. People who embrace these tools will benefit from them. The ones who use the tools the best will be the best.”
Time to Restructure Med School
Aditya Jain is in his third year at Harvard Medical School. At age 24, he’s heading into this brave new medical world with excitement and anxiety. Excitement because he sees AI revolutionizing healthcare on every level. Although the current generations of physicians and patients may grumble about its onset, he believes younger ones will feel comfortable with “DocGPT.” He’s excited that his generation of physicians will be the “translators and managers of this transition” and redefine “what it means to be a doctor.”
His anxiety, however, stems from the fact that AI has come on so fast that “it has not yet crossed the threshold of medical education,” he said. “Medical schools still largely prepare students to work as solo clinical decision makers. Most of my first 2 years were spent on pattern recognition and rote memorization, skills that AI can and will master.”
Indeed, Mr. Jain said AI was not a part of his first- or second-year curriculum. “I talk to students who are a year older than me, graduating, heading to residency, and they tell me they wish they had gotten a better grasp of how to use these technologies in medicine and in their practice. They were surprised to hear that people in my year hadn’t started using ChatGPT. We need to expend a lot more effort within the field, within academia, among practicing physicians, to figure out what our role will be in a world where AI is matching or even exceeding human intelligence. And then we need to restructure medical education to better accomplish these goals.”
So Are You Ready for AI to Be a Better Doctor Than You?
“Yes, I am,” said Dr. Philippakis without hesitation. “When I was going through my medical training, I was continually confronted with the reality that I personally was not smart enough to keep all the information in my head that could be used to make a good decision for a patient. We have now reached a point where the amount of information that is important and useful in the practice of medicine outstrips what a human being can know. The opportunity to enable physicians with AI to remedy that situation is a good thing for doctors and, most importantly, a good thing for patients. I believe the future of medicine belongs not so much to the AI practitioner but to the AI-enabled practitioner.”
“Quick story,” added Dr. Chin. “I asked ChatGPT two questions. The first was ‘Explain the difference between Alzheimer’s and dementia’ because that’s the most common misconception in my field. And it gave me a pretty darn good answer — one I would use in a presentation with some tweaking. Then I asked it, ‘Are you a better doctor than me?’ And it replied, ‘My purpose is not to replace you, my purpose is to be supportive of you and enhance your ability.’ ”
A version of this article appeared on Medscape.com.