A recent medical meeting I attended included multiple sessions on the use of artificial intelligence (AI), a mere preview, I suspect, of what is to come for both patients and physicians.
I vow not to be a contrarian, but I have concerns. If we’d known how cell phones would permeate nearly every waking moment of our lives, would we have built in more protections from the outset?
Although anyone can see the enormous potential of AI in medicine, harnessing its wonders without guarding against its dangers would be tantamount to texting and driving.
A palpable disruption of the workaday human interaction is a given. CEOs who mind the bottom line will seek every opportunity to cut personnel wherever machine learning can deliver. As our dependence on algorithms increases, our ability to interpret electrocardiograms and perform echocardiographic calculations will wane. Subtle case information will go undetected. Nuanced subconscious alerts about a patient’s condition will go unnoticed.
These realities are never reflected in the pronouncements of the companies that develop and promote AI.
The 2-minute echo
In September 2020, Carolyn Lam, MBBS, PhD, and James Hare, MBA, founders of the AI tech company US2.AI, told Healthcare Transformers that AI advances in echocardiology will turn “a manual process of 30 minutes, 250 clicks, with up to 21% variability among fully trained sonographers analyzing the same exam, into an AI-automated process taking 2 minutes, 1 click, with 0% variability.”
Let’s contrast this 2-minute human-machine interaction with the standard 20- to 30-minute human-to-human echocardiography procedure.
Take Mrs. Smith, for instance. She is referred for echocardiography for shortness of breath. She’s shown to a room and instructed to lie down on a table, where she undergoes a brief AI-directed acquisition of images and then a cheery dismissal from the imaging lab. Medical corporate chief financial officers will salivate at the efficiency, the decrease in personnel costs, and the sharp increase in throughput for the echo lab schedule.
But suppose Mrs. Smith instead gets a standard 30-minute sonographer-directed exam, and an astute echocardiographer notes a left ventricular ejection fraction of 38%. A conversation with the patient reveals that she lost her son a few weeks ago. Upon completion of the study, the patient stands up and then adds, “I hope I can sleep in my bed tonight.” Thinking there may be more to the patient’s insomnia than grief-driven anxiety, the sonographer asks her to explain. “I had to sleep in a chair last night because I couldn’t breathe,” Mrs. Smith replies.
The sonographer reasons correctly that Mrs. Smith is likely a few weeks past an acute coronary syndrome for which she didn’t seek attention and is now in heart failure. The consulting cardiologist is alerted. Mrs. Smith is worked into the office schedule a week earlier than planned, and a costly inpatient stay for acute heart failure, or worse, is avoided.
Here’s a real-life example (some details have been changed to protect the patient’s identity): Mr. Rodriguez was referred for echocardiography because of dizziness. The sonographer notes significant mitral regurgitation and a decline in left ventricular ejection fraction from moderately impaired to severely reduced. When the sonographer inquires about a fresh bruise over Mr. Rodriguez’s left eye, he replies that he “must have fallen, but can’t remember.” The sonographer also notes runs of nonsustained ventricular tachycardia on the echo telemetry, and after a phone call from the echo lab to the ordering physician, Mr. Rodriguez is admitted. Instead of chancing sudden death at home while awaiting follow-up, he undergoes catheterization and gets an implantable cardioverter defibrillator.
These scenarios illustrate that a 2-minute visit for AI-directed acquisition of echocardiogram images will never provide the safeguards of a conversation with a human. Any attempt to downplay the importance of these human interactions is misguided.
Sometimes we embrace the latest advances in medicine while failing to tend to the most rudimentary necessities of data analysis and reporting. Catherine M. Otto, MD, director of the heart valve clinic and a professor of cardiology at the University of Washington Medical Center, Seattle, is a fan of the basics.
At the recent annual congress of the European Society of Cardiology, she commented on the AI-ENHANCED trial, which used an AI decision support algorithm to identify patients with moderate to severe aortic stenosis, which is associated with poor survival if left untreated. She correctly highlighted that while we are discussing the merits of AI-driven assessment of aortic stenosis, we are doing so in an era when many echo interpreters exclude critical information. The vital findings of aortic valve area, Vmax, and ejection fraction are often nowhere to be seen on reports. We should attend to our basic flaws in interpretation and reporting before we shift our focus to AI.
Flawed algorithms
Incorrect AI algorithms that are broadly adopted could negatively affect the health of millions.
Perhaps the most unsettling claim comes from causaLens: “Causal AI is the only technology that can reason and make choices like humans do,” its website states. It’s a tantalizing tagline, and it is categorically untrue.
The mysterious and complex neurophysiology of reasoning still eludes our understanding, but one thing is certain: medical reasoning originates with listening, seeing, and touching.
As AI infiltrates mainstream medicine, opportunities for hearing, observing, and palpating will be greatly reduced.
Folkert Asselbergs from University Medical Center Utrecht, the Netherlands, who has cautioned against overhyping AI, was the discussant for an ESC study on the use of causal AI to improve cardiovascular risk estimation.
He flashed a slide of a 2019 Science article on racial bias in an algorithm that U.S. health care systems use. Remedying that bias “would increase the percentage of Black people receiving additional help from 17.7% to 46.5%,” according to the authors.
Successful integration of AI-driven technology will come only if we build human interaction into every patient encounter.
I hope I don’t live to see the rise of the physician cyborg.
Artificial intelligence could be the greatest boon since the invention of the stethoscope, but it will be our downfall if we stop administering a healthy dose of humanity to every patient encounter.
Melissa Walton-Shirley, MD, is a clinical cardiologist in Nashville, Tenn., who has retired from full-time invasive cardiology. She disclosed no relevant conflicts of interest.
A version of this article first appeared on Medscape.com.