Artificial intelligence, COVID-19, and the future of pandemics
Editor’s note: This article has been provided by The Doctors Company, the exclusively endorsed medical malpractice carrier for the Society of Hospital Medicine.
Artificial intelligence (AI) has proven valuable in the COVID-19 pandemic and shows promise for mitigating future health care crises. During the pandemic’s first wave in New York, for example, Mount Sinai Health System used an algorithm to help identify patients ready for discharge. Such systems can help overburdened hospitals manage personnel and the flow of supplies during a medical crisis so they can continue to provide superior patient care.1
Pandemic applications have demonstrated AI’s potential not only to lift administrative burdens, but also to give physicians back what Eric Topol, MD, founder and director of Scripps Research Translational Institute and author of Deep Medicine, calls “the gift of time.”2 More time with patients contributes to clear communication and positive relationships, which lower the odds of medical errors, enhance patient safety, and potentially reduce physicians’ risks of certain types of litigation.3
However, physicians and health systems will need to approach AI with caution. Many unknowns remain, including liability risks and the possibility of worsening preexisting bias. The law will need to evolve to account for AI-related liability scenarios, some of which have yet to be imagined.
Like any emerging technology, AI brings risk, but its promise of benefit should outweigh the probability of negative consequences – provided we remain aware of and mitigate the potential for AI-induced adverse events.
AI’s pandemic success limited due to fragmented data
Innovation is the key to success in any crisis, and many health care providers have shown their ability to innovate with AI during the pandemic. For example, researchers at the University of California, San Diego, health system who had been designing an AI program to help doctors spot pneumonia on chest x-rays retooled their application to assist physicians fighting the coronavirus.4
Meanwhile, AI has been used to identify COVID-19–specific symptoms: A computer sifting medical records elevated anosmia, the loss of the sense of smell, from an anecdotal connection to an officially recognized early symptom of the virus.5 This information now helps physicians distinguish COVID-19 from influenza.
However, holding back more innovation is the fragmentation of health care data in the United States. Most AI applications for medicine rely on machine learning; that is, they train on historical patient data to recognize patterns. Therefore, “Everything that we’re doing gets better with a lot more annotated datasets,” Dr. Topol says. Unfortunately, because of our disparate systems, we don’t have centralized data.6 And even if our data were centralized, researchers lack enough reliable COVID-19 data to perfect algorithms in the short term.
Or, put in bleaker terms by the Washington Post: “One of the biggest challenges has been that much data remains siloed inside incompatible computer systems, hoarded by business interests and tangled in geopolitics.”7
The good news is that the machine learning and data science platform Kaggle is hosting the COVID-19 Open Research Dataset, or CORD-19, which contains well over 100,000 scholarly articles on COVID-19, SARS, and other relevant infections.8 In lieu of a true central repository of anonymized health data, such large datasets can help train new AI applications in search of new diagnostic tools and therapies.
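To make the dataset's usefulness concrete, here is a minimal Python sketch of how a researcher might screen CORD-19 for papers on a specific symptom such as anosmia. The file path and the title/abstract/publish_time column names reflect the dataset's public releases but can change between versions, so treat them as assumptions.

```python
# Minimal sketch: screening the CORD-19 metadata file for papers that mention
# a symptom of interest. Path and column names are assumptions based on the
# dataset's public releases; adjust them to the version you download.
import pandas as pd

meta = pd.read_csv("CORD-19/metadata.csv", low_memory=False)

# Keep papers whose title or abstract mentions anosmia (loss of smell).
mask = (
    meta["title"].str.contains("anosmia", case=False, na=False)
    | meta["abstract"].str.contains("anosmia", case=False, na=False)
)
hits = meta.loc[mask, ["title", "publish_time"]]
print(f"{len(hits)} candidate papers found")
print(hits.head())
```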
AI introduces new questions around liability
While AI may eventually be assigned legal personhood, it is not, in fact, a person: It is a tool wielded by individual clinicians, by teams, by health systems, or even by multiple systems collaborating. Our current liability laws are not ready for the era of digital medicine.
AI algorithms are not perfect. Because we know that diagnostic error is already a major allegation in malpractice claims, we must ask: What happens when a patient alleges that diagnostic error occurred because a physician or physicians leaned too heavily on AI?
In the United States, testing delays have threatened the safety of patients, physicians, and the public by delaying the diagnosis of COVID-19. But again, health care providers have applied real innovation, generating novel and useful ideas and putting them into practice, to address this problem. For example, researchers at Mount Sinai became the first in the country to combine AI with imaging and clinical data, producing an algorithm that can detect COVID-19 from computed tomography scans of the chest together with patient information and exposure history.9
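The sketch below shows one generic way to fuse imaging features with clinical variables in a single model, written in PyTorch. It is illustrative only, not the Mount Sinai algorithm; the layer sizes, input shapes, and the 10 clinical features are assumptions chosen for readability.

```python
# Illustrative sketch (not the published Mount Sinai model): fusing features
# from a chest CT slice with tabular clinical/exposure data. All dimensions
# here are assumptions for readability.
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    def __init__(self, n_clinical: int = 10):
        super().__init__()
        # Tiny CNN standing in for a real CT feature extractor.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),  # -> 8 * 4 * 4 = 128 features
        )
        # Small MLP for clinical variables (age, exposure history, labs...).
        self.mlp = nn.Sequential(nn.Linear(n_clinical, 32), nn.ReLU())
        # Joint head produces a single COVID-19 probability.
        self.head = nn.Sequential(nn.Linear(128 + 32, 1), nn.Sigmoid())

    def forward(self, ct_slice, clinical):
        features = torch.cat([self.cnn(ct_slice), self.mlp(clinical)], dim=1)
        return self.head(features)

model = FusionNet()
ct = torch.randn(2, 1, 64, 64)   # batch of 2 single-channel CT slices
clin = torch.randn(2, 10)        # 10 clinical features per patient
print(model(ct, clin).shape)     # torch.Size([2, 1])
```

The design point is simply that neither data stream alone carries the whole signal; concatenating learned representations lets the classifier weigh imaging and clinical context jointly.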
AI in health care can help mitigate bias – or worsen it
Machine learning is only as good as the information provided to train the machine. Models trained on partial datasets can skew toward demographics that turned up more often in the data – for example, White race or men over 60. There is concern that “analyses based on faulty or biased algorithms could exacerbate existing racial gaps and other disparities in health care.”10 Already during the pandemic’s first waves, multiple AI systems used to classify x-rays have been found to show racial, gender, and socioeconomic biases.11
Such bias raises the risk of poor recommendations, including false positives and false negatives. It is critical that system builders be able to explain and qualify their training data, and that those who best understand AI-related system risks are the ones who shape health care systems or alter applications to mitigate AI-related harms.12
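One practical form that qualification can take is a subgroup audit: comparing a model's error rates across demographic groups before deployment. The Python sketch below uses synthetic data to show the shape of such an audit; in practice, the labels and predictions would come from a held-out test set annotated with group membership.

```python
# Synthetic sketch of a subgroup audit: comparing error rates across groups.
# Real audits use a held-out test set with demographic labels; large gaps
# between groups are a red flag to investigate before deployment.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": rng.choice(["A", "B"], size=1000),  # hypothetical demographic label
    "y_true": rng.integers(0, 2, size=1000),     # ground-truth diagnosis
    "y_pred": rng.integers(0, 2, size=1000),     # model prediction
})

def error_rates(g: pd.DataFrame) -> pd.Series:
    positives = max((g["y_true"] == 1).sum(), 1)
    negatives = max((g["y_true"] == 0).sum(), 1)
    fn = ((g["y_true"] == 1) & (g["y_pred"] == 0)).sum() / positives
    fp = ((g["y_true"] == 0) & (g["y_pred"] == 1)).sum() / negatives
    return pd.Series({"false_negative_rate": fn, "false_positive_rate": fp})

print(df.groupby("group")[["y_true", "y_pred"]].apply(error_rates))
```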
AI can help spot the next outbreak
More than a week before the World Health Organization released its first warning about a novel coronavirus, the AI platform BlueDot, created in Toronto, spotted an unusual cluster of pneumonia cases in Wuhan, China. Meanwhile, at Boston Children’s Hospital, the AI application HealthMap was scanning social media and news sites for signs of disease clusters, and it, too, flagged the first signs of what would become the COVID-19 outbreak – days before the WHO’s first formal alert.13
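Platforms like BlueDot and HealthMap rely on large-scale natural language processing of news, travel, and social media data, but the core statistical idea, flagging counts that jump well above a rolling baseline, can be shown in a few lines. The toy Python sketch below is a deliberately simplified stand-in with made-up daily report counts.

```python
# Toy sketch of the statistical idea behind syndromic surveillance: flag days
# where report counts rise well above a rolling baseline. The counts are
# invented; real platforms draw on far richer data sources.
import pandas as pd

counts = pd.Series([3, 4, 2, 5, 3, 4, 3, 5, 4, 18, 27],
                   name="daily_pneumonia_reports")

baseline = counts.rolling(window=7, min_periods=7).mean().shift(1)
spread = counts.rolling(window=7, min_periods=7).std().shift(1)
alerts = counts[counts > baseline + 3 * spread]  # > 3 sigma over baseline

print(alerts)  # the spike days an analyst would triage first
```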
These innovative applications of AI in health care show real promise for detecting future outbreaks of new viruses early. Earlier detection will allow health care providers and public health officials to get information out sooner, reducing the load on health systems and, ultimately, saving lives.
Dr. Anderson is chairman and chief executive officer, The Doctors Company and TDC Group.
References
1. Gold A. Coronavirus tests the value of artificial intelligence in medicine. Fierce Biotech. 2020 May 22.
2. Topol E. Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. New York: Hachette Book Group; 2019:285.
3. The Doctors Company. The Algorithm Will See You Now: How AI’s Healthcare Potential Outweighs Its Risk. 2020 Jan.
4. Gold A. Coronavirus tests the value of artificial intelligence in medicine. Fierce Biotech. 2020 May 22.
5. Cha AE. Artificial intelligence and COVID-19: Can the machines save us? Washington Post. 2020 Nov 1.
6. Reuter E. Hundreds of AI solutions proposed for pandemic, but few are proven. MedCity News. 2020 May 28.
7. Cha AE. Artificial intelligence and COVID-19: Can the machines save us? Washington Post. 2020 Nov 1.
8. Lee K. COVID-19 will accelerate the AI health care revolution. Wired. 2020 May 22.
9. Mei X et al. Artificial intelligence–enabled rapid diagnosis of patients with COVID-19. Nat Med. 2020 May 19;26:1224-8. doi: 10.1038/s41591-020-0931-3.
10. Cha AE. Artificial intelligence and COVID-19: Can the machines save us? Washington Post. 2020 Nov 1.
11. Wiggers K. Researchers find evidence of racial, gender, and socioeconomic bias in chest X-ray classifiers. The Machine: Making Sense of AI. 2020 Oct 21.
12. The Doctors Company. The Algorithm Will See You Now: How AI’s Healthcare Potential Outweighs Its Risk. 2020 Jan.
13. Sewalk K. Innovative disease surveillance platforms detected early warning signs for novel coronavirus outbreak (nCoV-2019). The Disease Daily. 2020 Jan 31.