Neuroimaging in the Era of Artificial Intelligence: Current Applications


Artificial intelligence (AI) in medicine has shown significant promise, particularly in neuroimaging. AI refers to computer systems designed to perform tasks that normally require human intelligence.1 Machine learning (ML), a field in which computers learn from data without being specifically programmed, is the AI subset responsible for its success in matching or even surpassing humans in certain tasks.2

Supervised learning, a subset of ML, trains an algorithm on annotated data.3 The program uses the characteristics of a training data set to predict a specific outcome or target when exposed to a sample data set of the same type. Unsupervised learning finds naturally occurring patterns or groupings within unannotated data.4 With deep learning (DL) algorithms, computers learn the features that optimally represent the data for the problem at hand.5 Both ML and DL are meant to emulate neural networks in the brain, giving rise to artificial neural networks composed of nodes structured within input, hidden, and output layers.
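
The supervised paradigm described above can be made concrete with a deliberately simple sketch: a 1-nearest-neighbor classifier that predicts a label for a new sample from the annotated example it most resembles. The feature values and labels below are invented for illustration and do not correspond to any real imaging data or clinical algorithm.

```python
# Illustrative supervised learning: a 1-nearest-neighbor classifier.
# Each training example pairs a feature vector (two invented
# image-derived measurements) with an annotated label.

def predict_1nn(train_x, train_y, sample):
    """Return the label of the training point closest to `sample`."""
    distances = [
        (sum((a - b) ** 2 for a, b in zip(x, sample)), y)
        for x, y in zip(train_x, train_y)
    ]
    return min(distances)[1]

# Annotated (labeled) training set -- the "learning from annotated
# data" step of supervised ML.
train_x = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]
train_y = ["normal", "normal", "lesion", "lesion"]

print(predict_1nn(train_x, train_y, (0.85, 0.95)))  # "lesion"
```

An unsupervised method, by contrast, would receive only `train_x` and be asked to discover the two groupings itself.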

The DL neural network differs from a conventional one by having many hidden layers, rather than a single layer, that extract patterns within the data.6 Convolutional neural networks (CNNs) are the most prevalent DL architecture used in medical imaging. A CNN’s hidden layers apply convolution and pooling operations to break down an image into features containing the most valuable information. The fully connected layer applies high-level reasoning before the output layer provides predictions for the image. This framework has applications within radiology, such as predicting a lesion category or condition from an image, determining whether a specific pixel belongs to background or a target class, and predicting the location of lesions.1
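
As a heavily simplified sketch of the convolution and pooling operations named above, the pure-Python fragment below slides a tiny edge-detecting kernel across a 4×4 "image" and then downsamples the resulting feature map. Real CNNs learn their kernel weights during training rather than using a hand-picked kernel; all values here are illustrative.

```python
# Illustrative CNN building blocks: 2D convolution (feature
# extraction) followed by max pooling (downsampling).

def conv2d(image, kernel):
    """Slide `kernel` over `image`, producing a feature map."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw)
            ))
        out.append(row)
    return out

def max_pool(fmap, size=2):
    """Keep the strongest response in each size x size window."""
    return [
        [max(fmap[i + di][j + dj] for di in range(size) for dj in range(size))
         for j in range(0, len(fmap[0]) - size + 1, size)]
        for i in range(0, len(fmap) - size + 1, size)
    ]

image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
edge_kernel = [[-1, 1]]  # responds to left-to-right intensity changes

fmap = conv2d(image, edge_kernel)  # peaks where the vertical edge sits
print(max_pool(fmap))              # [[1], [1]]
```

Stacking many such convolution-pooling stages, with learned kernels, is what lets a deep network build increasingly abstract image features.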

AI promises to increase efficiency and reduce errors. By accelerating data processing and image interpretation, AI technology may help radiologists improve the quality of patient care.6 This article discusses the current applications and future integration of AI in neuroradiology.

Neuroimaging Applications

AI can improve the quality of neuroimaging and reduce the clinical and systemic burdens of imaging workflows. AI can predict patient wait times for computed tomography (CT), magnetic resonance imaging (MRI), ultrasound, and X-ray imaging.7 An ML-based model identified the variables that most affected patient wait times, including proximity to federal holidays and severity of the patient’s condition, and calculated how long patients would be delayed after their scheduled appointment time. This modality could allow more efficient patient scheduling and reveal areas of patient processing that could be changed, potentially improving patient satisfaction and outcomes for time-sensitive neurologic conditions.

AI can save patient and health care practitioner time on repeat MRIs. An estimated 20% of MRI scans require a repeat series—a massive loss of time and funds for both patients and the health care system.8 A DL approach can determine whether an MRI is clinically usable or too unclear to interpret, requiring repetition.9 This initial screening measure can prevent patients from making return visits and neuroradiologists from reading inconclusive images. AI offers the opportunity to reduce time and costs by optimizing the health care process before imaging is obtained.

Speeding Up Neuroimaging

AI can reduce the time spent performing imaging. Because MRIs consume time and resources, compressed sensing (CS) is commonly used. CS preferentially maintains in-plane resolution at the expense of through-plane resolution to produce a scan with a single, usable viewpoint that preserves signal-to-noise ratio (SNR). CS, however, limits interpretation to single directions and can create aliasing artifacts. An AI algorithm known as synthetic multi-orientation resolution enhancement works in real time to reduce aliasing and improve resolution in these compressed scans.10 This AI improved resolution of white matter lesions in patients with multiple sclerosis (MS) on FLAIR (fluid-attenuated inversion recovery) images, and permitted multiview reconstruction from these limited scans.

The tasks of reconstruction and anti-aliasing carry high computational costs that vary inversely with the extent of scanning compression, potentially negating the time and resource savings of CS. DL modalities have been developed to reduce these operational loads and further improve image resolution in several directions from CS. One such deep residual learning AI was trained on compressed MRIs and used the framelet method to create a CNN that could rapidly remove globally and deeply coherent aliasing artifacts.11 Compared with synthetic multi-orientation resolution enhancement, this system uses a pretrained, pretested AI that does not require additional time during scanning for computational analysis, thereby multiplying the time benefit of CS while retaining the benefits of multidirectional reconstruction and increased resolution. The methodology suffers, however, from inherent degradation of perceptual image quality in its reconstructions because the CNN’s L2 loss function, which minimizes mean squared error, causes blurring by averaging all possible outcomes of signal distribution during reconstruction. To combat this, researchers developed another AI that uses a different loss function in a generative adversarial network to retain image quality while offering reconstruction times several hundred times faster than current CS-MRI structures.12 So-called sparse-coding methods promise further reductions in reconstruction time, with the possibility of processing completed online with a lightweight architecture rather than on a local system.13
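
The blurring behavior of the L2 loss described above can be shown with a toy calculation: when two sharp reconstructions are equally consistent with the measured data, the single output that minimizes mean squared error is their average rather than either sharp option. The pixel values are illustrative only.

```python
# Why an L2 (mean-squared-error) loss blurs reconstructions: when
# several sharp outputs are equally plausible, the value that
# minimizes MSE is their average.

def mse(prediction, targets):
    """Mean squared error of one prediction against all plausible targets."""
    return sum((prediction - t) ** 2 for t in targets) / len(targets)

# Two equally plausible "sharp" pixel values at the same location.
plausible = [0.0, 1.0]

candidates = [0.0, 0.25, 0.5, 0.75, 1.0]
best = min(candidates, key=lambda p: mse(p, plausible))
print(best)  # 0.5 -- neither sharp value, but their blurred average
```

A generative adversarial loss avoids this averaging by penalizing outputs that a discriminator can distinguish from real sharp images, which is why the GAN-based reconstructions cited above preserve perceptual quality.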

Neuroimaging of acute cases benefits most directly from these technologies because MRI, with its high resolution and SNR, begins to approach CT imaging time scales. This could have important implications in clinical care, particularly for stroke imaging and evaluating spinal cord compression. CS-MRI optimization represents one of the greatest areas of neuroimaging cost savings and neurologic care improvement in the modern radiology era.

Reducing Contrast and Radiation Doses

AI can read CT, MRI, and positron emission tomography (PET) studies performed with reduced contrast or without contrast, without significant loss in sensitivity for detecting lesions. With MRI, gadolinium-based contrast can cause injection site reactions, allergic reactions, metal deposition throughout the body, and, in the most severe instances, nephrogenic systemic fibrosis.14 DL has been applied to brain MRIs performed with 10% of a full dose of contrast without significant degradation of image quality; neuroradiologists rated the AI-synthesized images no lower than their full-dose counterparts for several MRI indications.15 Low-dose contrast imaging, regardless of modality, generates greater noise with a significantly reduced signal. With AI applied, however, researchers found that the software suppressed motion and aliasing artifacts and improved image quality, perhaps evidence that this low-dose modality is less vulnerable to the most common pitfalls of MRI.

Recently, low-dose MRI moved into the spotlight when Subtle Medical’s SubtleGAD software received a National Institutes of Health grant and an expedited pathway to phase 2 clinical trials.16 SubtleGAD, a DL AI that enables low-dose MRI interpretation, might allow contrast MRI for patients with advanced kidney disease or contrast allergies. At some point, contrast might not be necessary for MRI at all: DL AI applied to noncontrast MRIs for detecting MS lesions was found to be preliminarily effective, with 78% lesion detection sensitivity.17

PET-MRI combines simultaneous PET and MRI acquisition and has been used to evaluate neurologic disorders. PET-MRI can detect amyloid plaques in Alzheimer disease 10 to 20 years before clinical signs of dementia emerge.18 PET-MRI has sparked DL AI development to decrease the dose of the IV radioactive tracer 18F-florbetaben used in imaging, reducing radiation exposure and imaging costs. This reduction is critical if PET-MRI is to be used widely.19-21

An initial CNN could reconstruct low-dose amyloid scans to full-dose resolution, albeit with greater susceptibility to some artifacts and motion blurring.22 Similar to the synthetic multi-orientation resolution enhancement CNN, this program showed signal blurring from the L2 loss function, which was corrected in a later AI that used a generative adversarial network to minimize perceptual loss.23 This new AI demonstrated greater image resolution, better feature preservation, and higher radiologist ratings than the previous AI and was capable of reconstructing low-dose PET scans to full-dose resolution without an accompanying MRI. Applications of this algorithm are far-reaching, potentially allowing neuroimaging of brain tumors at more frequent intervals with higher resolution and lower total radiation exposure.

AI also has been applied to neurologic CT to reduce radiation exposure.24 Because it is critical to abide by the principles of ALARA (as low as reasonably achievable), the ability of AI to reduce radiation exposure holds significant promise. A CNN has been used to transform low-dose CTs of anthropomorphic models with calcium inserts and cardiac patients to normal-dose CTs, with the goal of improving the SNR.25 By training a noise-discriminating CNN and a noise-generating CNN together in a generative adversarial network, the AI improved image feature preservation during transformation. This algorithm has a direct application in imaging cerebral vasculature, including calcification that can explain lacunar infarcts and tracking systemic atherosclerosis.26

Another CNN has been applied to remove more complex noise patterns arising from beam hardening and photon starvation, phenomena common in low-dose CT. This algorithm extracts the directional components of artifacts and compares them with known artifact patterns, allowing highly specific suppression of unwanted signals.27 In June 2019, the US Food and Drug Administration (FDA) approved ClariPi, a deep CNN program for advanced denoising and resolution improvement of low- and ultra-low-dose CTs.28 Beyond low-dose settings, this AI could reduce artifacts in all CT imaging modalities and improve the therapeutic value of procedures, including cerebral angiograms and emergency cranial scans. As the average CT radiation dose decreased from 12 mSv in 2009 to 1.5 mSv in 2014 and continues to fall, these algorithms will become increasingly necessary to retain the high resolution and diagnostic power expected of neurologic CTs.29,30

Downstream Applications

Downstream applications refer to AI use after a radiologic study is acquired, mostly image interpretation. More than 70% of FDA-approved AI medical devices are in radiology, and many of these relate to image analysis.6,31 Although AI is not limited to black-and-white image interpretation, it is hypothesized that one of the reasons radiology is inviting to AI is because gray-scale images lend themselves to standardization.3 Moreover, most radiology departments already use AI-friendly picture archiving and communication systems.31,32

AI has been applied to a range of radiologic modalities, including MRI, CT, ultrasonography, PET, and mammography.32-38 AI also has been applied specifically to radiography, including the interpretation of tuberculosis, pneumonia, lung lesions, and COVID-19.33,39-45 AI also can assist in triage and patient screening, provide a rapid “second opinion,” shorten the time to diagnosis, monitor disease progression, and predict prognosis.37-39,43,45-47 Downstream applications of AI in neuroradiology and neurology include using CT to aid in detecting hemorrhage or ischemic stroke; using MRI to automatically segment lesions, such as tumors or MS lesions; assisting in early diagnosis and predicting prognosis in MS; assisting in treating paralysis, including from spinal cord injury; determining seizure type and localizing the area of seizure onset; and using cameras, wearable devices, and smartphone applications to diagnose and assess treatment response in neurodegenerative disorders, such as Parkinson or Alzheimer disease (Figure).37,48-56



Several AI tools have been deployed in the clinical setting, particularly for triaging intracranial hemorrhage and moving those studies to the top of the radiologist’s worklist. In 2020, the Centers for Medicare and Medicaid Services (CMS) began reimbursing Viz.ai’s AI-based Viz ContaCT (Viz LVO) under a new International Statistical Classification of Diseases, Tenth Revision procedure code.57

Viz LVO automatically detects large vessel occlusions, flags the occlusion on CT angiogram, alerts the stroke team (interventional radiologist, neuroradiologist, and neurologist), and transmits images through a secure application to the stroke team members’ mobile devices—all in less than 6 minutes from study acquisition to alarm notification.48 Additional software can quantify and measure perfusion in affected brain areas.48 This could have implications for quantifying and targeting areas of ischemic penumbra that could be salvaged after a stroke, then using that information to plan targeted treatment or intervention. Because trials such as DAWN and DEFUSE 3 have shown better stroke outcomes when the therapeutic window for endovascular thrombectomy is extended, the ability to identify appropriate candidates is essential.58,59 AI tools that assess the ischemic penumbra with quantitative parameters (mean transit time, cerebral blood volume, cerebral blood flow, mismatch ratio) have benefited image interpretation. Medtronic RAPID software can provide quantitative assessment of CT perfusion. AI tools could also provide an automatic ASPECT score, a quantitative measure of potential ischemic zones that aids in identifying appropriate candidates for thrombectomy.
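
As a toy illustration of one quantitative parameter named above, the hypothetical function below computes a perfusion mismatch ratio from invented core and hypoperfusion volumes. Clinical software such as RAPID derives these volumes from perfusion maps and applies validated selection thresholds, none of which are reproduced here.

```python
# Illustrative perfusion mismatch ratio: the volume of hypoperfused
# (potentially salvageable) tissue relative to the infarct core.
# All volumes below are invented, not clinical values.

def mismatch_ratio(core_ml, hypoperfused_ml):
    """Ratio of total hypoperfused tissue volume to infarct core volume."""
    if core_ml == 0:
        return float("inf")  # no measurable core: ratio is undefined/large
    return hypoperfused_ml / core_ml

# Hypothetical patient: 20 mL core, 80 mL hypoperfused territory.
print(mismatch_ratio(core_ml=20, hypoperfused_ml=80))  # 4.0
```

A larger ratio suggests a larger penumbra relative to the core, which is the kind of quantity trial-based selection criteria weigh when identifying thrombectomy candidates.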

Several FDA-approved AI tools help quantify brain structures in neuroradiology, including quantitative analysis through MRI for analysis of anatomy and PET for analysis of functional uptake, assisting in more accurate and more objective detection and monitoring of conditions such as atrophy, dementia, trauma, seizure disorders, and MS.48 The growing number of FDA-approved AI technologies and the recent CMS-approved reimbursement for an AI tool indicate a changing landscape that is more accepting of downstream applications of AI in neuroradiology. As AI continues to integrate into medical regulation and finance, we predict AI will continue to play a prominent role in neuroradiology.

Practical and Ethical Considerations

In any discussion of the benefits of AI, it is prudent to address its shortcomings. Chief among these is overfitting, which occurs when an AI is too closely aligned with its training dataset and prone to error when applied to novel cases. Often this is a byproduct of a small training set.60 Neuroradiology, particularly with uncommon, advanced imaging methods, has a smaller number of available studies.61 Even with more prevalent imaging modalities, such as head CT, the work of collecting training scans from patients with the prerequisite disease processes, particularly if these processes are rare, can limit the number of datapoints collected. Neuroradiologists should understand how an AI tool was generated, including the size and variety of the training dataset used, to best gauge the clinical applicability and fitness of the system.
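
Overfitting can be caricatured with a model that simply memorizes its small training set: it scores perfectly on the data it has seen but fails on a novel case just outside it. The data and labels below are invented for illustration.

```python
# Illustrative overfitting: a "model" that memorizes its training set
# achieves perfect training accuracy yet misclassifies a novel case.

def fit_memorizer(train):
    """Fit by memorizing every example; guess "normal" for anything new."""
    table = dict(train)
    return lambda x: table.get(x, "normal")

# Tiny training set (a byproduct of which is overfitting).
train = [((0.9, 0.8), "lesion"), ((0.1, 0.2), "normal")]
model = fit_memorizer(train)

train_acc = sum(model(x) == y for x, y in train) / len(train)
print(train_acc)            # 1.0 -- flawless on training data
print(model((0.88, 0.82)))  # "normal" -- a near-lesion case, misclassified
```

A larger and more varied training set would force the model to generalize rather than memorize, which is exactly why the size and variety of an AI tool's training data matter to its clinical fitness.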

Another point of concern for AI clinical decision support tools’ implementation is automation bias—the tendency for clinicians to favor machine-generated decisions and ignore contrary data or conflicting human decisions.62 This situation often arises when radiologists experience overwhelming patient loads or are in underresourced settings, where there is little ability to review every AI-based diagnosis. Although AI might be of benefit in such conditions by reducing physician workload and streamlining the diagnostic process, there is the propensity to improperly rely on a tool meant to augment, not replace, a radiologist’s judgment. Such cases have led to adverse outcomes for patients, and legal precedence shows that this constitutes negligence.63 Maintaining awareness of each tool’s limitations and proper application is the only remedy for such situations.

Ethically, we must consider the opaqueness of ML-developed neuroimaging AIs. For many systems, the specific process by which an AI arrives at its conclusions is unknown. This AI “black box” can conceal potential errors and biases that are masked by overall positive performance metrics. The lack of understanding about how a tool functions in the zero-failure clinical setting understandably gives radiologists pause. The question must be asked: Is it ethical to use a system that is a relatively unknown quantity? Entities including US state governments, Canada, and the European Union have produced an answer: each has implemented policies requiring that health care AIs use some method to display to end users the process by which they arrive at conclusions.64-68

The 21st Century Cures Act declares that to attain approval, clinical AIs must demonstrate this explainability to clinicians and patients.69 The response has been an explosion in the development of explainable AI. Systems that use heatmaps to visualize the areas where AI attention most often rests, generate labels for the most heavily weighted features of radiographic images, and create full diagnostic reports to justify AI conclusions aim to meet the goal of transparency and inspire confidence in clinical end users.70 The ability to understand the “thought process” of a system also proves useful for error correction and retooling. A trend toward under- or overdetecting conditions, flagging seemingly irrelevant image regions, or low reproducibility can be better addressed when it is clear how the AI draws its false conclusions. Through an iterative process of testing and redesign, false-positive and false-negative rates can be reduced, the need for human intervention can be lowered to an appropriate minimum, and patient outcomes can be improved.71

Data collection raises another ethical concern. To train functional clinical decision support tools, massive amounts of patient demographic, laboratory, and imaging data are required. With incentives to develop the most powerful AI systems, record collection can venture down a path where patient autonomy and privacy are threatened. Radiologists have a duty to ensure data mining serves patients and improves the practice of radiology while protecting patients’ personal information.62 Policies have placed similar limits on the access to and use of patient records.64-69 Patients have the right to request an explanation of the AI systems their data have been used to train. Approval for data acquisition requires the use of explainable AI, implementation of standardized data security protocols, and adequate proof of communal benefit from the clinical decision support tool. Establishment of state-mandated protections bodes well for a future in which developers can access enormous caches of data while patients and health care professionals are assured that no identifying information has escaped a well-regulated space. For the individual radiologist, the knowledge that each datum represents a human life, a person who has made themselves vulnerable by seeking relief for what ails them, should serve as a lasting reminder to operate with the utmost care when handling sensitive information.

Conclusions

The demonstrated applications of AI in neuroimaging are numerous and varied, and it is reasonable to assume that its implementation will increase as the technology matures. AI use for detecting important neurologic conditions holds promise in combatting ever greater imaging volumes and providing timely diagnoses. As medicine witnesses the continuing adoption of AI, it is important that practitioners possess an understanding of its current and emerging uses.

References

1. Chartrand G, Cheng PM, Vorontsov E, et al. Deep learning: a primer for radiologists. Radiographics. 2017;37(7):2113-2131. doi:10.1148/rg.2017170077

2. King BF Jr. Guest editorial: discovery and artificial intelligence. AJR Am J Roentgenol. 2017;209(6):1189-1190. doi:10.2214/AJR.17.19178

3. Syed AB, Zoga AC. Artificial intelligence in radiology: current technology and future directions. Semin Musculoskelet Radiol. 2018;22(5):540-545. doi:10.1055/s-0038-1673383

4. Deo RC. Machine learning in medicine. Circulation. 2015;132(20):1920-1930. doi:10.1161/CIRCULATIONAHA.115.001593

5. Litjens G, Kooi T, Bejnordi BE, et al. A survey on deep learning in medical image analysis. Med Image Anal. 2017;42:60-88. doi:10.1016/j.media.2017.07.005

6. Pesapane F, Codari M, Sardanelli F. Artificial intelligence in medical imaging: threat or opportunity? Radiologists again at the forefront of innovation in medicine. Eur Radiol Exp. 2018;2(1):35. doi:10.1186/s41747-018-0061-6

7. Curtis C, Liu C, Bollerman TJ, Pianykh OS. Machine learning for predicting patient wait times and appointment delays. J Am Coll Radiol. 2018;15(9):1310-1316. doi:10.1016/j.jacr.2017.08.021

8. Andre JB, Bresnahan BW, Mossa-Basha M, et al. Toward quantifying the prevalence, severity, and cost associated with patient motion during clinical MR examinations. J Am Coll Radiol. 2015;12(7):689-695. doi:10.1016/j.jacr.2015.03.007

9. Sreekumari A, Shanbhag D, Yeo D, et al. A deep learning-based approach to reduce rescan and recall rates in clinical MRI examinations. AJNR Am J Neuroradiol. 2019;40(2):217-223. doi:10.3174/ajnr.A5926

10. Zhao C, Shao M, Carass A, et al. Applications of a deep learning method for anti-aliasing and super-resolution in MRI. Magn Reson Imaging. 2019;64:132-141. doi:10.1016/j.mri.2019.05.038

11. Lee D, Yoo J, Tak S, Ye JC. Deep residual learning for accelerated MRI using magnitude and phase networks. IEEE Trans Biomed Eng. 2018;65(9):1985-1995. doi:10.1109/TBME.2018.2821699

12. Mardani M, Gong E, Cheng JY, et al. Deep generative adversarial neural networks for compressive sensing MRI. IEEE Trans Med Imaging. 2019;38(1):167-179. doi:10.1109/TMI.2018.2858752

13. Dong C, Loy CC, He K, Tang X. Image super-resolution using deep convolutional networks. IEEE Trans Pattern Anal Mach Intell. 2016;38(2):295-307. doi:10.1109/TPAMI.2015.2439281

14. Sammet S. Magnetic resonance safety. Abdom Radiol (NY). 2016;41(3):444-451. doi:10.1007/s00261-016-0680-4

15. Gong E, Pauly JM, Wintermark M, Zaharchuk G. Deep learning enables reduced gadolinium dose for contrast-enhanced brain MRI. J Magn Reson Imaging. 2018;48(2):330-340. doi:10.1002/jmri.25970

16. NIH awards Subtle Medical, Inc. $1.6 million grant to improve safety of MRI exams by reducing gadolinium dose using AI. Press release. Subtle Medical; September 18, 2019. Accessed March 14, 2022. https://www.biospace.com/article/releases/nih-awards-subtle-medical-inc-1-6-million-grant-to-improve-safety-of-mri-exams-by-reducing-gadolinium-dose-using-ai

17. Narayana PA, Coronado I, Sujit SJ, Wolinsky JS, Lublin FD, Gabr RE. Deep learning for predicting enhancing lesions in multiple sclerosis from noncontrast MRI. Radiology. 2020;294(2):398-404. doi:10.1148/radiol.2019191061

18. Jack CR Jr, Knopman DS, Jagust WJ, et al. Hypothetical model of dynamic biomarkers of the Alzheimer’s pathological cascade. Lancet Neurol. 2010;9(1):119-128. doi:10.1016/S1474-4422(09)70299-6

19. Gatidis S, Würslin C, Seith F, et al. Towards tracer dose reduction in PET studies: simulation of dose reduction by retrospective randomized undersampling of list-mode data. Hell J Nucl Med. 2016;19(1):15-18. doi:10.1967/s002449910333

20. Kaplan S, Zhu YM. Full-dose PET image estimation from low-dose PET image using deep learning: a pilot study. J Digit Imaging. 2019;32(5):773-778. doi:10.1007/s10278-018-0150-3

21. Xu J, Gong E, Pauly J, Zaharchuk G. 200x low-dose PET reconstruction using deep learning. arXiv:1712.04119. Accessed February 16, 2022. https://arxiv.org/pdf/1712.04119.pdf

22. Chen KT, Gong E, de Carvalho Macruz FB, et al. Ultra-low-dose 18F-florbetaben amyloid PET imaging using deep learning with multi-contrast MRI inputs. Radiology. 2019;290(3):649-656. doi:10.1148/radiol.2018180940

23. Ouyang J, Chen KT, Gong E, Pauly J, Zaharchuk G. Ultra-low-dose PET reconstruction using generative adversarial network with feature matching and task-specific perceptual loss. Med Phys. 2019;46(8):3555-3564. doi:10.1002/mp.13626

24. Brenner DJ, Hall EJ. Computed tomography—an increasing source of radiation exposure. N Engl J Med. 2007;357(22):2277-2284. doi:10.1056/NEJMra072149

25. Wolterink JM, Leiner T, Viergever MA, Isgum I. Generative adversarial networks for noise reduction in low-dose CT. IEEE Trans Med Imaging. 2017;36(12):2536-2545. doi:10.1109/TMI.2017.2708987

26. Sohn YH, Cheon HY, Jeon P, Kang SY. Clinical implication of cerebral artery calcification on brain CT. Cerebrovasc Dis. 2004;18(4):332-337. doi:10.1159/000080772

27. Kang E, Min J, Ye JC. A deep convolutional neural network using directional wavelets for low-dose X-ray CT reconstruction. Med Phys. 2017;44(10):e360-e375. doi:10.1002/mp.12344

28. ClariPi gets FDA clearance for AI-powered CT image denoising solution. Published June 24, 2019. Accessed February 16, 2022. https://www.itnonline.com/content/claripi-gets-fda-clearance-ai-powered-ct-image-denoising-solution

29. Hausleiter J, Meyer T, Hermann F, et al. Estimated radiation dose associated with cardiac CT angiography. JAMA. 2009;301(5):500-507. doi:10.1001/jama.2009.54

30. Al-Mallah M, Aljizeeri A, Alharthi M, Alsaileek A. Routine low-radiation-dose coronary computed tomography angiography. Eur Heart J Suppl. 2014;16(suppl B):B12-B16. doi:10.1093/eurheartj/suu024

31. Benjamens S, Dhunnoo P, Meskó B. The state of artificial intelligence-based FDA-approved medical devices and algorithms: an online database. NPJ Digit Med. 2020;3:118. doi:10.1038/s41746-020-00324-0

32. Talebi-Liasi F, Markowitz O. Is artificial intelligence going to replace dermatologists? Cutis. 2020;105(1):28-31.

33. Khan O, Bebb G, Alimohamed NA. Artificial intelligence in medicine: what oncologists need to know about its potential—and its limitations. Oncology Exchange. 2017;16(4):8-13. http://www.oncologyex.com/pdf/vol16_no4/feature_khan-ai.pdf

34. Liu X, Faes L, Kale AU, et al. A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis. Lancet Digit Health. 2019;1(6):e271-e297. doi:10.1016/S2589-7500(19)30123-2

35. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. 2019;25(1):44-56. doi:10.1038/s41591-018-0300-7

36. Salim M, Wåhlin E, Dembrower K, et al. External evaluation of 3 commercial artificial intelligence algorithms for independent assessment of screening mammograms. JAMA Oncol. 2020;6(10):1581-1588. doi:10.1001/jamaoncol.2020.3321

37. Arbabshirani MR, Fornwalt BK, Mongelluzzo GJ, et al. Advanced machine learning in action: identification of intracranial hemorrhage on computed tomography scans of the head with clinical workflow integration. NPJ Digit Med. 2018;1(1):1-7. doi:10.1038/s41746-017-0015-z

38. Sheth D, Giger ML. Artificial intelligence in the interpretation of breast cancer on MRI. J Magn Reson Imaging. 2020;51(5):1310-1324. doi:10.1002/jmri.26878

39. Borkowski AA, Viswanadhan NA, Thomas LB, Guzman RD, Deland LA, Mastorides SM. Using artificial intelligence for COVID-19 chest X-ray diagnosis. Fed Pract. 2020;37(9):398-404. doi:10.12788/fp.0045

40. Kermany DS, Goldbaum M, Cai W, et al. Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell. 2018;172(5):1122-1131.e9. doi:10.1016/j.cell.2018.02.010

41. Nam JG, Park S, Hwang EJ, et al. Development and validation of deep learning-based automatic detection algorithm for malignant pulmonary nodules on chest radiographs. Radiology. 2019;290(1):218-228. doi:10.1148/radiol.2018180237

42. Zech JR, Badgeley MA, Liu M, Costa AB, Titano JJ, Oermann EK. Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: a cross-sectional study. PLoS Med. 2018;15(11):e1002683. doi:10.1371/journal.pmed.1002683

43. Lakhani P, Sundaram B. Deep learning at chest radiography: automated classification of pulmonary tuberculosis by using convolutional neural networks. Radiology. 2017;284(2):574-582. doi:10.1148/radiol.2017162326

44. Rajpurkar P, Joshi A, Pareek A, et al. CheXpedition: investigating generalization challenges for translation of chest X-ray algorithms to the clinical setting. arXiv:2002.11379. Accessed February 16, 2022. https://arxiv.org/pdf/2002.11379.pdf

45. He J, Baxter SL, Xu J, Xu J, Zhou X, Zhang K. The practical implementation of artificial intelligence technologies in medicine. Nat Med. 2019;25(1):30-36. doi:10.1038/s41591-018-0307-0

46. Meyer-Bäse A, Morra L, Meyer-Bäse U, Pinker K. Current status and future perspectives of artificial intelligence in magnetic resonance breast imaging. Contrast Media Mol Imaging. 2020;2020:6805710. doi:10.1155/2020/6805710

47. Booth AL, Abels E, McCaffrey P. Development of a prognostic model for mortality in COVID-19 infection using machine learning. Mod Pathol. 2021;34(3):522-531. doi:10.1038/s41379-020-00700-x

48. Bash S. Enhancing neuroimaging with artificial intelligence. Applied Radiology. 2020;49(1):20-21.

49. Jiang F, Jiang Y, Zhi H, et al. Artificial intelligence in healthcare: past, present and future. Stroke Vasc Neurol. 2017;2(4):230-243. doi:10.1136/svn-2017-000101

50. Valliani AA, Ranti D, Oermann EK. Deep learning and neurology: a systematic review. Neurol Ther. 2019;8(2):351-365. doi:10.1007/s40120-019-00153-8

51. Gupta R, Krishnam SP, Schaefer PW, Lev MH, Gonzalez RG. An east coast perspective on artificial intelligence and machine learning: part 2: ischemic stroke imaging and triage. Neuroimaging Clin N Am. 2020;30(4):467-478. doi:10.1016/j.nic.2020.08.002

52. Belić M, Bobić V, Badža M, Šolaja N, Đurić-Jovičić M, Kostić VS. Artificial intelligence for assisting diagnostics and assessment of Parkinson’s disease-A review. Clin Neurol Neurosurg. 2019;184:105442. doi:10.1016/j.clineuro.2019.105442

53. An S, Kang C, Lee HW. Artificial intelligence and computational approaches for epilepsy. J Epilepsy Res. 2020;10(1):8-17. doi:10.14581/jer.20003

54. Pavel AM, Rennie JM, de Vries LS, et al. A machine-learning algorithm for neonatal seizure recognition: a multicentre, randomised, controlled trial. Lancet Child Adolesc Health. 2020;4(10):740-749. doi:10.1016/S2352-4642(20)30239-X

55. Afzal HMR, Luo S, Ramadan S, Lechner-Scott J. The emerging role of artificial intelligence in multiple sclerosis imaging. Mult Scler. 2020;1352458520966298. doi:10.1177/1352458520966298

56. Bouton CE. Restoring movement in paralysis with a bioelectronic neural bypass approach: current state and future directions. Cold Spring Harb Perspect Med. 2019;9(11):a034306. doi:10.1101/cshperspect.a034306

57. Hassan AE. New technology add-on payment (NTAP) for Viz LVO: a win for stroke care. J Neurointerv Surg. 2020;neurintsurg-2020-016897. doi:10.1136/neurintsurg-2020-016897

58. Nogueira RG, Jadhav AP, Haussen DC, et al; DAWN Trial Investigators. Thrombectomy 6 to 24 hours after stroke with a mismatch between deficit and infarct. N Engl J Med. 2018;378:11-21. doi:10.1056/NEJMoa1706442

59. Albers GW, Marks MP, Kemp S, et al; DEFUSE 3 Investigators. Thrombectomy for stroke at 6 to 16 hours with selection by perfusion imaging. N Engl J Med. 2018;378:708-718. doi:10.1056/NEJMoa1713973

60. Bi WL, Hosny A, Schabath MB, et al. Artificial intelligence in cancer imaging: clinical challenges and applications. CA Cancer J Clin. 2019;69(2):127-157. doi:10.3322/caac.21552 

61. Wagner MW, Namdar K, Biswas A, Monah S, Khalvati F, Ertl-Wagner BB. Radiomics, machine learning, and artificial intelligence-what the neuroradiologist needs to know. Neuroradiology. 2021;63(12):1957-1967. doi:10.1007/s00234-021-02813-9 

62. Geis JR, Brady AP, Wu CC, et al. Ethics of artificial intelligence in radiology: summary of the Joint European and North American Multisociety Statement. J Am Coll Radiol. 2019;16(11):1516-1521. doi:10.1016/j.jacr.2019.07.028

63. Kingston J. Artificial intelligence and legal liability. arXiv:1802.07782. https://arxiv.org/ftp/arxiv/papers/1802/1802.07782.pdf

64. Council of the European Union, General Data Protection Regulation. Official Journal of the European Union. Accessed February 16, 2022. https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32016R0679

65. Consumer Privacy Protection Act of 2017, HR 4081, 115th Cong (2017). Accessed February 10, 2022. https://www.congress.gov/bill/115th-congress/house-bill/4081

66. Cal. Civ. Code § 1798.198(a) (2018). California Consumer Privacy Act of 2018.

67. Va. Code Ann. § 59.1 (2021). Consumer Data Protection Act. Accessed February 10, 2022. https://lis.virginia.gov/cgi-bin/legp604.exe?212+ful+SB1392ER+pdf

68. Colo. Rev. Stat. § 6-1-1301 (2021). Colorado Privacy Act. Accessed February 10, 2022. https://leg.colorado.gov/sites/default/files/2021a_190_signed.pdf

69. 21st Century Cures Act, Pub L No. 114-255 (2016). Accessed February 10, 2022. https://www.govinfo.gov/content/pkg/PLAW-114publ255/html/PLAW-114publ255.htm

70. Huff DT, Weisman AJ, Jeraj R. Interpretation and visualization techniques for deep learning models in medical imaging. Phys Med Biol. 2021;66(4):04TR01. doi:10.1088/1361-6560/abcd17

71. Thrall JH, Li X, Li Q, et al. Artificial intelligence and machine learning in radiology: opportunities, challenges, pitfalls, and criteria for success. J Am Coll Radiol. 2018;15(3, pt B):504-508. doi:10.1016/j.jacr.2017.12.026

Author and Disclosure Information

Robert Monsoura; Mudit Duttaa; Ahmed-Zayn Mohameda; Andrew Borkowski, MDa,b; and Narayan A. Viswanadhan, MDa,b
Correspondence: Robert Monsour ([email protected])

aUniversity of South Florida Morsani College of Medicine, Tampa, Florida
bJames A. Haley Veterans’ Hospital, Tampa, Florida

Author disclosures

The authors report no actual or potential conflicts of interest or outside sources of funding with regard to this article.

Disclaimer

The opinions expressed herein are those of the authors and do not necessarily reflect those of Federal Practitioner, Frontline Medical Communications Inc., the US Government, or any of its agencies. This article may discuss unlabeled or investigational use of certain drugs. Please review the complete prescribing information for specific drugs or drug combinations—including indications, contraindications, warnings, and adverse effects—before administering pharmacologic therapy to patients.

Issue
Federal Practitioner - 39(1)s
Page Number
S14-S20



Artificial intelligence (AI) in medicine has shown significant promise, particularly in neuroimaging. AI refers to computer systems designed to perform tasks that normally require human intelligence.1 Machine learning (ML), a field in which computers learn from data without being specifically programmed, is the AI subset responsible for its success in matching or even surpassing humans in certain tasks.2

Supervised learning, a subset of ML, uses an algorithm with annotated data from which to learn.3 The program will use the characteristics of a training data set to predict a specific outcome or target when exposed to a sample data set of the same type. Unsupervised learning finds naturally occurring patterns or groupings within the data.4 With deep learning (DL) algorithms, computers learn the features that optimally represent the data for the problem at hand.5 Both ML and DL are meant to emulate neural networks in the brain, giving rise to artificial neural networks composed of nodes structured within input, hidden, and output layers.
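The supervised/unsupervised distinction can be made concrete with a toy sketch (synthetic data; the nearest-centroid and k-means pairing is chosen for brevity and is not drawn from the cited studies):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-D "imaging features": two well-separated groups of 50 points.
a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(50, 2))
b = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(50, 2))
X = np.vstack([a, b])
y = np.array([0] * 50 + [1] * 50)  # annotations, seen only by supervised learning

# Supervised: learn class centroids from the annotated training set, then
# classify a new sample by its nearest centroid.
centroids = np.array([X[y == k].mean(axis=0) for k in (0, 1)])
new_sample = np.array([2.8, 3.1])
pred = int(np.argmin(np.linalg.norm(centroids - new_sample, axis=1)))

# Unsupervised: k-means recovers the same grouping without ever seeing y.
means = X[[0, -1]].copy()  # arbitrary initial guesses
for _ in range(10):
    assign = np.argmin(
        np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2), axis=1)
    means = np.array([X[assign == k].mean(axis=0) for k in (0, 1)])
```

Both procedures end with two cluster centers near (0, 0) and (3, 3); only the supervised one can attach a meaning (a diagnosis, say) to each group, because only it saw the annotations.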

The DL neural network differs from a conventional one by having many hidden layers instead of just 1 layer that extracts patterns within the data.6 Convolutional neural networks (CNNs) are the most prevalent DL architecture used in medical imaging. A CNN’s hidden layers apply convolution and pooling operations to break down an image into features containing the most valuable information. A fully connected layer then applies high-level reasoning before the output layer provides predictions for the image. This framework has applications within radiology, such as predicting a lesion category or condition from an image, determining whether a specific pixel belongs to background or a target class, and predicting the location of lesions.1
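A minimal sketch of the convolution and pooling operations in pure NumPy (a single hand-set edge filter; real CNNs learn many such filters from data):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution: slide the kernel and sum elementwise products.
    (Strictly, this is cross-correlation; DL libraries use the same form.)"""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling: keep the strongest response per window."""
    h, w = fmap.shape[0] // size, fmap.shape[1] // size
    return fmap[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

# Toy 6x6 "image" with a vertical edge down the middle.
img = np.zeros((6, 6))
img[:, 3:] = 1.0

edge_kernel = np.array([[-1.0, 1.0]])  # responds to left-to-right brightening
features = conv2d(img, edge_kernel)    # 6x5 map, nonzero only at the edge
pooled = max_pool(features)            # 3x2 summary passed to deeper layers
```

The feature map fires only where the edge sits, and pooling compresses it into a small summary, which is the information a fully connected layer would then reason over.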

AI promises to increase efficiency and reduce errors. With increased data processing and image interpretation, AI technology may help radiologists improve the quality of patient care.6 This article discusses the current applications and future integration of AI in neuroradiology.

Neuroimaging Applications

AI can improve the quality of neuroimaging and reduce the clinical and systemic loads of other imaging modalities. AI can predict patient wait times for computed tomography (CT), magnetic resonance imaging (MRI), ultrasound, and X-ray imaging.7 An ML-based model identified the variables that most affected patient wait times, including proximity to federal holidays and severity of the patient’s condition, and calculated how long patients would be delayed beyond their scheduled appointment time. Such models could allow more efficient patient scheduling and reveal stages of patient processing that could be changed, potentially improving patient satisfaction and outcomes for time-sensitive neurologic conditions.
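The internals of the cited wait-time model are not described; as an illustrative stand-in, an ordinary least-squares fit on synthetic data shows how such variables could yield delay predictions (the feature names and effect sizes below are invented for the demonstration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Hypothetical features echoing the variables the study found influential.
days_to_holiday = rng.integers(0, 30, size=n).astype(float)
severity = rng.integers(1, 5, size=n).astype(float)  # coded acuity, 1-4

# Synthetic ground truth: delays grow near holidays and for low-acuity patients.
delay_min = 40 - 1.0 * days_to_holiday - 5.0 * severity + rng.normal(0, 3, n)

# Least squares as a stand-in for the (unspecified) ML model.
X = np.column_stack([np.ones(n), days_to_holiday, severity])
coef, *_ = np.linalg.lstsq(X, delay_min, rcond=None)

predicted = X @ coef  # estimated delay, in minutes, for each visit
```

The fitted coefficients recover the planted effects, which is exactly the kind of output that lets a department see which scheduling levers matter most.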

AI can also save patients and health care practitioners the time lost to repeat MRIs. An estimated 20% of MRI scans require a repeat series—a massive loss of time and funds for both patients and the health care system.8 A DL approach can determine whether an MRI is clinically usable or unclear enough to require repetition.9 This initial screening measure can prevent patients from making return visits and neuroradiologists from reading inconclusive images. AI offers the opportunity to reduce the time and costs incurred by optimizing the health care process before imaging is obtained.

Speeding Up Neuroimaging

AI can reduce the time spent performing imaging. Because MRIs consume time and resources, compressed sensing (CS) is commonly used. CS preferentially maintains in-plane resolution at the expense of through-plane resolution to produce a scan with a single, usable viewpoint that preserves signal-to-noise ratio (SNR). CS, however, limits interpretation to single directions and can create aliasing artifacts. An AI algorithm known as synthetic multi-orientation resolution enhancement works in real time to reduce aliasing and improve resolution in these compressed scans.10 This AI improved resolution of white matter lesions in patients with multiple sclerosis (MS) on FLAIR (fluid-attenuated inversion recovery) images, and permitted multiview reconstruction from these limited scans.
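Why undersampled acquisitions produce aliasing can be seen in a few lines of NumPy (a generic Fourier illustration, not the cited enhancement algorithm):

```python
import numpy as np

# 1-D "scan line" with a single bright feature.
n = 64
signal = np.zeros(n)
signal[10] = 1.0

k_space = np.fft.fft(signal)

# 2x undersampling: keep every other k-space sample, zero the rest.
undersampled = np.zeros_like(k_space)
undersampled[::2] = k_space[::2]

# Direct reconstruction shows the aliasing ghost: the feature reappears
# shifted by n/2 samples, with each copy at half intensity.
recon = np.fft.ifft(undersampled).real
```

Halving the sampled k-space halves scan time but folds a ghost copy of every structure into the image, which is precisely the artifact that reconstruction AIs are trained to suppress.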

Reconstruction and anti-aliasing tasks come with high computational costs that vary inversely with the extent of scanning compression, potentially negating the time and resource savings of CS. DL modalities have been developed to reduce these computational loads and further improve multidirectional image resolution from CS data. One such deep residual learning AI was trained with compressed MRIs and used the framelet method to create a CNN that could rapidly remove global and deeply coherent aliasing artifacts.11 Compared with synthetic multi-orientation resolution enhancement, this system uses a pretrained, pretested AI that does not require additional computation time during scanning, thereby multiplying the time benefit of CS while retaining the benefits of multidirectional reconstruction and increased resolution. This methodology suffers from inherent degradation of perceptual image quality in its reconstructions because the CNN uses an L2 loss function to reduce mean squared error, which causes blurring by averaging all possible outcomes of signal distribution during reconstruction. To combat this, researchers have developed another AI that uses a different loss function in a generative adversarial network to retain image quality while offering reconstruction times several hundred times faster than current CS-MRI structures.12 So-called sparse-coding methods promise further reductions in reconstruction time, with the possibility of processing completed online with a lightweight architecture rather than on a local system.13
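The blurring mechanism of an L2 (mean-squared-error) loss can be demonstrated directly: when two sharp reconstructions are equally consistent with the measured data, the MSE-optimal single output is their pixelwise average (a toy 1-D example, not taken from the cited work):

```python
import numpy as np

# Two equally plausible sharp reconstructions of the same undersampled data:
# an edge one pixel to the left vs one pixel to the right.
edge_left = np.array([0.0, 0.0, 1.0, 1.0, 1.0])
edge_right = np.array([0.0, 0.0, 0.0, 1.0, 1.0])
candidates = np.stack([edge_left, edge_right])

# Scan over blends of the two candidates and keep the one with the lowest
# mean squared error against both; this mimics what L2 training rewards.
best_err, best_out = np.inf, None
for w in np.linspace(0, 1, 101):
    out = w * edge_left + (1 - w) * edge_right
    err = np.mean((candidates - out) ** 2)
    if err < best_err:
        best_err, best_out = err, out
```

The winner is the 50/50 blend with a soft 0.5 step where both inputs had a sharp edge: the sharp detail has been averaged into blur, which is why perceptual and adversarial losses were introduced.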

Neuroimaging of acute cases benefits most directly from these technologies because MRIs and their high resolution and SNR begin to approach CT imaging time scales. This could have important implications in clinical care, particularly for stroke imaging and evaluating spinal cord compression. CS-MRI optimization represents one of the greatest areas of neuroimaging cost savings and neurologic care improvement in the modern radiology era.

Reducing Contrast and Radiation Doses

AI can interpret CT, MRI, and positron emission tomography (PET) performed with reduced or no contrast, without significant loss of sensitivity for detecting lesions. With MRI, gadolinium-based contrast can cause injection site reactions, allergic reactions, metal deposition throughout the body, and, in the most severe instances, nephrogenic systemic fibrosis.14 DL has been applied to brain MRIs performed with 10% of a full dose of contrast without significant degradation of image quality. Neuroradiologists did not rate the AI-synthesized images for several MRI indications lower than their full-dose counterparts.15 Low-dose contrast imaging, regardless of modality, generates greater noise with a significantly reduced signal. However, with AI applied, researchers found that the software suppressed motion and aliasing artifacts and improved image quality, perhaps evidence that this low-dose modality is less vulnerable to the most common pitfalls of MRI.

Recently, low-dose MRI moved into the spotlight when Subtle Medical SubtleGAD software received a National Institutes of Health grant and an expedited pathway to phase 2 clinical trials.16 SubtleGAD, a DL AI that enables low-dose MRI interpretation, might allow contrast MRI for patients with advanced kidney disease or contrast allergies. At some point, contrast with MRI might not be necessary because DL AI applied to noncontrast MRs for detecting MS lesions was found to be preliminarily effective with 78% lesion detection sensitivity.17

PET-MRI combines simultaneous PET and MRI and has been used to evaluate neurologic disorders. PET-MRI can detect amyloid plaques in Alzheimer disease 10 to 20 years before clinical signs of dementia emerge.18 PET-MRI has sparked DL AI development aimed at decreasing the dose of the IV radioactive tracer 18F-florbetaben used in imaging to reduce radiation exposure and imaging costs. This reduction is critical if PET-MRI is to be used widely.19-21

An initial CNN could reconstruct low-dose amyloid scans to full-dose resolution, albeit with a greater susceptibility to some artifacts and motion blurring.22 Similar to the synthetic multi-orientation resolution enhancement CNN, this program showed signal blurring from the L2 loss function, which was corrected in a later AI that used a generative adversarial network to minimize perceptual loss.23 This new AI demonstrated greater image resolution, feature preservation, and radiologist rating over the previous AI and was capable of reconstructing low-dose PET scans to full-dose resolution without an accompanying MRI. Applications of this algorithm are far-reaching, potentially allowing neuroimaging of brain tumors at more frequent intervals with higher resolution and lower total radiation exposure.

AI also has been applied to neurologic CT to reduce radiation exposure.24 Because it is critical to abide by the principle of ALARA (as low as reasonably achievable), the ability of AI to reduce radiation exposure holds significant promise. A CNN has been used to transform low-dose CTs of anthropomorphic models with calcium inserts and of cardiac patients into normal-dose CTs, with the goal of improving the SNR.25 By training a noise-discriminating CNN and a noise-generating CNN together in a generative adversarial network, the AI improved image feature preservation during transformation. This algorithm has direct applications in imaging the cerebral vasculature, including identifying calcification that can explain lacunar infarcts and tracking systemic atherosclerosis.26
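The GAN itself is beyond a short sketch, but the SNR objective it optimizes is easy to demonstrate: simulate a noisy low-dose scan of a phantom with a calcium-insert-like disc and verify that even a simple stand-in denoiser (a 3x3 mean filter here, where the cited work trains a GAN) raises the measured SNR:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic phantom: a uniform disc on a dark background.
yy, xx = np.mgrid[:64, :64]
phantom = ((yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2).astype(float)

# Low-dose acquisition modeled crudely as added Gaussian noise.
low_dose = phantom + rng.normal(0, 0.5, phantom.shape)

def snr(img, signal_mask):
    """Mean signal over the noise spread in the background."""
    return img[signal_mask].mean() / img[~signal_mask].std()

# Stand-in denoiser: 3x3 mean filter applied via shifted weighted sums.
k = np.ones((3, 3)) / 9.0
padded = np.pad(low_dose, 1, mode="edge")
denoised = sum(
    padded[i:i + 64, j:j + 64] * k[i, j] for i in range(3) for j in range(3))

mask = phantom > 0.5
```

A mean filter buys its SNR gain by blurring edges; the appeal of the GAN approach is obtaining the noise reduction while the discriminator forces the output to keep feature detail.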

Another CNN has been applied to remove more complex noise patterns arising from the phenomena of beam hardening and photon starvation common in low-dose CT. This algorithm extracts the directional components of artifacts and compares them to known artifact patterns, allowing for highly specific suppression of unwanted signals.27 In June 2019, the US Food and Drug Administration (FDA) approved ClariPi, a deep CNN program for advanced denoising and resolution improvement of low- and ultra-low-dose CTs.28 Beyond low-dose settings, this AI could reduce artifacts across CT imaging modalities and improve the therapeutic value of procedures, including cerebral angiograms and emergency cranial scans. As the average CT radiation dose decreased from 12 mSv in 2009 to 1.5 mSv in 2014 and continues to fall, these algorithms will become increasingly necessary to retain the high resolution and diagnostic power expected of neurologic CTs.29,30

Downstream Applications

Downstream applications refer to AI use after a radiologic study is acquired, mostly image interpretation. More than 70% of FDA-approved AI medical devices are in radiology, and many of these relate to image analysis.6,31 Although AI is not limited to black-and-white image interpretation, it is hypothesized that one of the reasons radiology is inviting to AI is that gray-scale images lend themselves to standardization.3 Moreover, most radiology departments already use AI-friendly picture archiving and communication systems.31,32

AI has been applied to a range of radiologic modalities, including MRI, CT, ultrasonography, PET, and mammography.32-38 AI also has been specifically applied to radiography, including the interpretation of tuberculosis, pneumonia, lung lesions, and COVID-19.33,39-45 AI also can assist with triage, screening patients, rapidly providing a “second opinion,” shortening the time needed to attain a diagnosis, monitoring disease progression, and predicting prognosis.37-39,43,45-47 Downstream applications of AI in neuroradiology and neurology include using CT to aid in detecting hemorrhage or ischemic stroke; using MRI to automatically segment lesions, such as tumors or MS lesions; assisting in early diagnosis and predicting prognosis in MS; assisting in treating paralysis, including from spinal cord injury; determining seizure type and localizing the area of seizure onset; and using cameras, wearable devices, and smartphone applications to diagnose and assess treatment response in neurodegenerative disorders, such as Parkinson or Alzheimer diseases (Figure).37,48-56



Several AI tools have been deployed in the clinical setting, particularly triaging intracranial hemorrhage and moving these studies to the top of the radiologist’s worklist. In 2020 the Centers for Medicare and Medicaid Services (CMS) began reimbursing Viz.ai software’s AI-based Viz ContaCT (Viz LVO) with a new International Statistical Classification of Diseases, Tenth Revision procedure code.57

Viz LVO automatically detects large vessel occlusions, flags the occlusion on CT angiogram, alerts the stroke team (interventional radiologist, neuroradiologist, and neurologist), and transmits images through a secure application to the stroke team members’ mobile devices—all in less than 6 minutes from study acquisition to alarm notification.48 Additional software can quantify and measure perfusion in affected brain areas.48 This could have implications for quantifying and targeting areas of ischemic penumbra that could be salvaged after a stroke, then using that information to plan targeted treatment and/or intervention. Because trials such as DAWN and DEFUSE 3 have shown better stroke outcomes from extending the therapeutic window for endovascular thrombectomy, the ability to identify appropriate candidates is essential.58,59 AI tools that assess the ischemic penumbra with quantitative parameters (mean transit time, cerebral blood volume, cerebral blood flow, mismatch ratio) have benefited image interpretation. Medtronic RAPID software can provide quantitative assessment of CT perfusion. AI tools also could provide an automated ASPECTS (Alberta Stroke Program Early CT Score), a quantitative measure for assessing potential ischemic zones that aids in identifying appropriate candidates for thrombectomy.
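For illustration, a hedged sketch of how such quantitative selection criteria might be computed from voxelwise perfusion maps (the data are synthetic; the Tmax > 6 s hypoperfusion and CBF < 30% core thresholds follow DEFUSE 3-style selection, and the voxel volume is an assumed value):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical voxelwise perfusion maps for one patient.
shape = (64, 64, 20)
tmax = rng.exponential(scale=2.0, size=shape)        # time-to-maximum, seconds
cbf = rng.normal(loc=50.0, scale=15.0, size=shape)   # cerebral blood flow

voxel_ml = 0.008                      # assumed voxel volume in mL

hypoperfused = tmax > 6.0             # Tmax > 6 s: tissue at risk
core = cbf < 0.3 * np.median(cbf)     # CBF < 30% of reference: likely core

penumbra_ml = voxel_ml * np.count_nonzero(hypoperfused & ~core)
core_ml = voxel_ml * np.count_nonzero(core)
mismatch_ratio = np.count_nonzero(hypoperfused) / max(np.count_nonzero(core), 1)
```

A small core with a large mismatch ratio is the profile that flags a patient as a promising thrombectomy candidate in the extended time window; software such as RAPID automates exactly this kind of volumetric summary.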

Several FDA-approved AI tools help quantify brain structures in neuroradiology, including quantitative analysis through MRI for analysis of anatomy and PET for analysis of functional uptake, assisting in more accurate and more objective detection and monitoring of conditions such as atrophy, dementia, trauma, seizure disorders, and MS.48 The growing number of FDA-approved AI technologies and the recent CMS-approved reimbursement for an AI tool indicate a changing landscape that is more accepting of downstream applications of AI in neuroradiology. As AI continues to integrate into medical regulation and finance, we predict AI will continue to play a prominent role in neuroradiology.

Practical and Ethical Considerations

In any discussion of the benefits of AI, it is prudent to address its shortcomings. Chief among these is overfitting, which occurs when an AI is too closely aligned with its training dataset and prone to error when applied to novel cases. Often this is a byproduct of a small training set.60 Neuroradiology, particularly with uncommon, advanced imaging methods, has a smaller number of available studies.61 Even with more prevalent imaging modalities, such as head CT, the work of collecting training scans from patients with the prerequisite disease processes, particularly if these processes are rare, can limit the number of datapoints collected. Neuroradiologists should understand how an AI tool was generated, including the size and variety of the training dataset used, to best gauge the clinical applicability and fitness of the system.

Another point of concern for AI clinical decision support tools’ implementation is automation bias—the tendency for clinicians to favor machine-generated decisions and ignore contrary data or conflicting human decisions.62 This situation often arises when radiologists experience overwhelming patient loads or are in underresourced settings, where there is little ability to review every AI-based diagnosis. Although AI might be of benefit in such conditions by reducing physician workload and streamlining the diagnostic process, there is the propensity to improperly rely on a tool meant to augment, not replace, a radiologist’s judgment. Such cases have led to adverse outcomes for patients, and legal precedence shows that this constitutes negligence.63 Maintaining awareness of each tool’s limitations and proper application is the only remedy for such situations.

Ethically, we must consider the opaqueness of ML-developed neuroimaging AIs. For many systems, the specific process by which an AI arrives at its conclusions is unknown. This AI “black box” can conceal potential errors and biases that are masked by overall positive performance metrics. The lack of understanding about how a tool functions in the zero-failure clinical setting understandably gives radiologists pause. The question must be asked: Is it ethical to use a system that is a relatively unknown quantity? Governments, including US states, Canada, and the European Union, have produced an answer: each has implemented policies requiring that health care AIs use some method to display to end users the process by which they arrive at conclusions.64-68

The 21st Century Cures Act declares that to attain approval, clinical AIs must demonstrate this explainability to clinicians and patients.69 The response has been an explosion in the development of explainable AI. Systems that visualize the areas where AI attention most often rests with heatmaps, generate labels for the most heavily weighted features of radiographic images, and create full diagnostic reports to justify AI conclusions aim to meet the goal of transparency and inspiring confidence in clinical end users.70 The ability to understand the “thought process” of a system proves useful for error correction and retooling. A trend toward under- or overdetecting conditions, flagging seemingly irrelevant image regions, or low reproducibility can be better addressed when it is clear how the AI is drawing its false conclusions. With an iterative process of testing and redesigning, false positive and negative rates can be reduced, the need for human intervention can be lowered to an appropriate minimum, and patient outcomes can be improved.71
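One widely used model-agnostic explanation technique, occlusion sensitivity, can be sketched briefly (the toy "model" and lesion location below are invented for the demonstration):

```python
import numpy as np

def occlusion_heatmap(image, score_fn, patch=4):
    """Slide an occluding patch over the image and record how much the
    model's score drops when each region is masked out. Large drops mark
    the regions the model relies on for its decision."""
    base = score_fn(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0
            heat[i // patch, j // patch] = base - score_fn(occluded)
    return heat

# Stand-in "model": scores the mean intensity of a fixed lesion-like region.
lesion = (slice(8, 12), slice(8, 12))
score = lambda img: float(img[lesion].mean())

img = np.zeros((16, 16))
img[lesion] = 1.0
heat = occlusion_heatmap(img, score)  # only the lesion's cell lights up
```

Overlaying such a heatmap on the study lets a radiologist check whether the AI is attending to the pathology or to an irrelevant confounder, which is the practical core of the transparency requirement.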

Data collection raises another ethical concern. To train functional clinical decision support tools, massive amounts of patient demographic, laboratory, and imaging data are required. With incentives to develop the most powerful AI systems, record collection can venture down a path where patient autonomy and privacy are threatened. Radiologists have a duty to ensure that data mining serves patients and improves the practice of radiology while protecting patients’ personal information.62 Policies have placed similar limits on the access to and use of patient records.64-69 Patients have the right to request explanation of the AI systems their data have been used to train. Approval for data acquisition requires the use of explainable AI, standardized data security protocol implementation, and adequate proof of communal benefit from the clinical decision support tool. Establishment of state-mandated protections bodes well for a future in which developers can access enormous caches of data while patients and health care professionals are assured that no identifying information has escaped a well-regulated space. For the individual radiologist, the knowledge that each datum represents a human life, a person who has made themselves vulnerable by seeking relief for what ails them, should serve as a lasting reminder to operate with the utmost care when handling sensitive information.

Conclusions

The demonstrated applications of AI in neuroimaging are numerous and varied, and it is reasonable to assume that its implementation will increase as the technology matures. AI use for detecting important neurologic conditions holds promise in combatting ever greater imaging volumes and providing timely diagnoses. As medicine witnesses the continuing adoption of AI, it is important that practitioners possess an understanding of its current and emerging uses.

Artificial intelligence (AI) in medicine has shown significant promise, particularly in neuroimaging. AI refers to computer systems designed to perform tasks that normally require human intelligence.1 Machine learning (ML), a field in which computers learn from data without being specifically programmed, is the AI subset responsible for its success in matching or even surpassing humans in certain tasks.2

Supervised learning, a subset of ML, uses an algorithm with annotated data from which to learn.3 The program will use the characteristics of a training data set to predict a specific outcome or target when exposed to a sample data set of the same type. Unsupervised learning finds naturally occurring patterns or groupings within the data.4 With deep learning (DL) algorithms, computers learn the features that optimally represent the data for the problem at hand.5 Both ML and DL are meant to emulate neural networks in the brain, giving rise to artificial neural networks composed of nodes structured within input, hidden, and output layers.

The DL neural network differs from a conventional one by having many hidden layers instead of just 1 layer that extracts patterns within the data.6 Convolutional neural networks (CNNs) are the most prevalent DL architecture used in medical imaging. CNN’s hidden layers apply convolution and pooling operations to break down an image into features containing the most valuable information. The connecting layer applies high-level reasoning before the output layer provides predictions for the image. This framework has applications within radiology, such as predicting a lesion category or condition from an image, determining whether a specific pixel belongs to background or a target class, and predicting the location of lesions.1

AI promises to increase efficiency and reduces errors. With increased data processing and image interpretation, AI technology may help radiologists improve the quality of patient care.6 This article discusses the current applications and future integration of AI in neuroradiology.

Neuroimaging Applications

AI can improve the quality of neuroimaging and reduce the clinical and systemic loads of other imaging modalities. AI can predict patient wait times for computed tomography (CT), magnetic resonance imaging (MRI), ultrasound, and X-ray imaging.7 A ML-based AI has detected the variables that most affected patient wait times, including proximity to federal holidays and severity of the patient’s condition, and calculated how long patients would be delayed after their scheduled appointment time. This AI modality could allow more efficient patient scheduling and reveal areas of patient processing that could be changed, potentially improving patient satisfaction and outcomes for time-sensitive neurologic conditions.

AI can save patient and health care practitioner time for repeat MRIs. An estimated 20% of MRI scans require a repeat series—a massive loss of time and funds for both patients and the health care system.8 A DL approach can determine whether an MRI is usable clinically or unclear enough to require repetition.9 This initial screening measure can prevent patients from making return visits and neuroradiologists from reading inconclusive images. AI offers the opportunity to reduce time and costs incurred by optimizing the health care process before imaging is obtained.

Speeding Up Neuroimaging

AI can reduce the time spent performing imaging. Because MRIs consume time and resources, compressed sensing (CS) is commonly used. CS preferentially maintains in-plane resolution at the expense of through-plane resolution to produce a scan with a single, usable viewpoint that preserves signal-to-noise ratio (SNR). CS, however, limits interpretation to single directions and can create aliasing artifacts. An AI algorithm known as synthetic multi-orientation resolution enhancement works in real time to reduce aliasing and improve resolution in these compressed scans.10 This AI improved resolution of white matter lesions in patients with multiple sclerosis (MS) on FLAIR (fluid-attenuated inversion recovery) images, and permitted multiview reconstruction from these limited scans.

Tasks of reconstructing and anti-aliasing come with high computational costs that vary inversely with the extent of scanning compression, potentially negating the time and resource savings of CS. DL AI modalities have been developed to reduce operational loads and further improve image resolution in several directions from CS. One such deep residual learning AI was trained with compressed MRIs and used the framelet method to create a CNN that could rapidly remove global and deeply coherent aliasing artifacts.11 This system, compared with synthetic multi-orientation resolution enhancement, uses a pretrained, pretested AI that does not require additional time during scanning for computational analysis, thereby multiplying the time benefit of CS while retaining the benefits of multidirectional reconstruction and increased resolution. This methodology suffers from inherent degradation of perceptual image quality in its reconstructions because of the L2 loss function the CNN uses to reduce mean squared error, which causes blurring by averaging all possible outcomes of signal distribution during reconstruction. To combat this, researchers have developed another AI to reduce reconstruction times that uses a different loss function in a generative adversarial network to retain image quality, while offering reconstruction times several hundred times faster than current CS-MRI structures.12 So-called sparse-coding methods promise further reduction in reconstruction times, with the possibility of processing completed online with a lightweight architecture rather than on a local system.13

Neuroimaging of acute cases benefits most directly from these technologies because MRIs and their high resolution and SNR begin to approach CT imaging time scales. This could have important implications in clinical care, particularly for stroke imaging and evaluating spinal cord compression. CS-MRI optimization represents one of the greatest areas of neuroimaging cost savings and neurologic care improvement in the modern radiology era.

 

 

Reducing Contrast and Radiation Doses

AI has the ability to read CT, MRI, and positron emission tomography (PET) with reduced or without contrast without significant loss in sensitivity for detecting lesions. With MRI, gadolinium-based contrast can cause injection site reactions, allergic reactions, metal deposition throughout the body, and nephrogenic systemic fibrosis in the most severe instances.14 DL has been applied to brain MRIs performed with 10% of a full dose of contrast without significant degradation of image quality. Neuroradiologists did not rate the AI-synthesized images for several MRI indications lower than their full-dose counterparts.15 Low-dose contrast imaging, regardless of modality, generates greater noise with a significantly reduced signal. However, with AI applied, researchers found that the software suppressed motion and aliasing artifacts and improved image quality, perhaps evidence that this low-dose modality is less vulnerable to the most common pitfalls of MRI.

Recently, low-dose MRI moved into the spotlight when Subtle Medical's SubtleGAD software received a National Institutes of Health grant and an expedited pathway to phase 2 clinical trials.16 SubtleGAD, a DL AI that enables low-dose MRI interpretation, might allow contrast MRI for patients with advanced kidney disease or contrast allergies. At some point, contrast might not be necessary for MRI at all: DL AI applied to noncontrast MRIs for detecting MS lesions was found to be preliminarily effective, with 78% lesion detection sensitivity.17

PET-MRI combines simultaneous PET and MRI acquisition and has been used to evaluate neurologic disorders. PET-MRI can detect amyloid plaques in Alzheimer disease 10 to 20 years before clinical signs of dementia emerge.18 PET-MRI has sparked DL AI development aimed at decreasing the dose of the IV radioactive tracer 18F-florbetaben to reduce radiation exposure and imaging costs. This reduction is critical if PET-MRI is to become widely used.19-21

An initial CNN could reconstruct low-dose amyloid scans to full-dose resolution, albeit with greater susceptibility to some artifacts and motion blurring.22 Similar to the synthetic multi-orientation resolution enhancement CNN, this program showed signal blurring from the L2 loss function, which was corrected in a later AI that used a generative adversarial network to minimize perceptual loss.23 This new AI demonstrated greater image resolution, better feature preservation, and higher radiologist ratings than the previous AI and was capable of reconstructing low-dose PET scans to full-dose resolution without an accompanying MRI. Applications of this algorithm are far-reaching, potentially allowing neuroimaging of brain tumors at more frequent intervals with higher resolution and lower total radiation exposure.

AI also has been applied to neurologic CT to reduce radiation exposure.24 Because it is critical to abide by the principle of ALARA (as low as reasonably achievable), the ability of AI to reduce radiation exposure holds significant promise. A CNN has been used to transform low-dose CTs of anthropomorphic models with calcium inserts and of cardiac patients into normal-dose CTs, with the goal of improving the SNR.25 By training a noise-discriminating CNN and a noise-generating CNN together in a generative adversarial network, the AI improved image feature preservation during transformation. This algorithm has a direct application in imaging cerebral vasculature, including detecting calcification that can explain lacunar infarcts and tracking systemic atherosclerosis.26
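The SNR improvement that motivates low-dose CT denoising can be made concrete with a toy sketch. This is not a GAN: a simple moving-average filter stands in for the learned denoiser, and the "attenuation profile" and noise level are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "attenuation profile": a smooth base with a sharp bump
# standing in for a calcification on a low-dose CT line profile.
x = np.linspace(0, 1, 512)
clean = 0.2 + 0.05 * np.sin(2 * np.pi * x)
clean[240:272] += 0.5   # hypothetical calcified segment

# Low photon count -> high noise; simulated as additive Gaussian noise.
noisy = clean + rng.normal(0, 0.08, clean.shape)

def snr_db(signal, estimate):
    """SNR of an estimate against the known clean signal, in dB."""
    noise_power = np.sum((estimate - signal) ** 2)
    return 10 * np.log10(np.sum(signal ** 2) / noise_power)

# Moving-average filter as a crude denoiser stand-in: uncorrelated
# noise power drops roughly by the window length, at the cost of some
# edge blur (the trade-off the GAN approach is designed to avoid).
k = 9
denoised = np.convolve(noisy, np.ones(k) / k, mode="same")

print(snr_db(clean, noisy))     # low-dose SNR
print(snr_db(clean, denoised))  # improved SNR after smoothing
```

The residual edge blur at the calcification is exactly the feature loss that motivates training the denoiser adversarially, so that features are preserved while noise is removed.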

Another CNN has been applied to remove more complex noise patterns arising from the beam hardening and photon starvation common in low-dose CT. This algorithm extracts the directional components of artifacts and compares them to known artifact patterns, allowing highly specific suppression of unwanted signals.27 In June 2019, the US Food and Drug Administration (FDA) approved ClariPi, a deep CNN program for advanced denoising and resolution improvement of low- and ultra-low-dose CTs.28 Beyond low-dose settings, this AI could reduce artifacts in all CT imaging modalities and improve the therapeutic value of procedures, including cerebral angiograms and emergency cranial scans. As the average CT radiation dose decreased from 12 mSv in 2009 to 1.5 mSv in 2014 and continues to fall, these algorithms will become increasingly necessary to retain the high resolution and diagnostic power expected of neurologic CTs.29,30

Downstream Applications

Downstream applications refer to AI use after a radiologic study is acquired, mostly image interpretation. More than 70% of FDA-approved AI medical devices are in radiology, and many of these relate to image analysis.6,31 Although AI is not limited to black-and-white image interpretation, it is hypothesized that one of the reasons radiology is inviting to AI is because gray-scale images lend themselves to standardization.3 Moreover, most radiology departments already use AI-friendly picture archiving and communication systems.31,32

AI has been applied to a range of radiologic modalities, including MRI, CT, ultrasonography, PET, and mammography.32-38 AI also has been applied specifically to radiography, including the interpretation of tuberculosis, pneumonia, lung lesions, and COVID-19.33,39-45 In addition, AI can assist with triage, patient screening, rapidly providing a “second opinion,” shortening the time to diagnosis, monitoring disease progression, and predicting prognosis.37-39,43,45-47 Downstream applications of AI in neuroradiology and neurology include using CT to aid in detecting hemorrhage or ischemic stroke; using MRI to automatically segment lesions, such as tumors or MS lesions; assisting in early diagnosis and predicting prognosis in MS; assisting in treating paralysis, including from spinal cord injury; determining seizure type and localizing the area of seizure onset; and using cameras, wearable devices, and smartphone applications to diagnose and assess treatment response in neurodegenerative disorders, such as Parkinson or Alzheimer disease (Figure).37,48-56



Several AI tools have been deployed in the clinical setting, particularly for triaging intracranial hemorrhage and moving these studies to the top of the radiologist’s worklist. In 2020, the Centers for Medicare and Medicaid Services (CMS) began reimbursing Viz.ai’s AI-based Viz ContaCT (Viz LVO) under a new International Statistical Classification of Diseases, Tenth Revision procedure code.57


Viz LVO automatically detects large vessel occlusions, flags the occlusion on CT angiogram, alerts the stroke team (interventional radiologist, neuroradiologist, and neurologist), and transmits images through a secure application to the stroke team members’ mobile devices, all in less than 6 minutes from study acquisition to alarm notification.48 Additional software can quantify and measure perfusion in affected brain areas.48 This could have implications for quantifying and targeting areas of ischemic penumbra that could be salvaged after a stroke and then using that information to plan targeted treatment and/or intervention. Because trials such as DAWN and DEFUSE 3 have shown improved stroke outcomes by extending the therapeutic window for endovascular thrombectomy, the ability to identify appropriate candidates is essential.58,59 AI tools that assess the ischemic penumbra with quantitative parameters (mean transit time, cerebral blood volume, cerebral blood flow, mismatch ratio) have benefited image interpretation. Medtronic RAPID software can provide quantitative assessment of CT perfusion. AI tools also could provide an automatic ASPECTS (Alberta Stroke Program Early CT Score), a quantitative measure for assessing potential ischemic zones that aids in identifying appropriate candidates for thrombectomy.
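Once perfusion maps are thresholded, the quantitative selection parameters mentioned above (core volume, mismatch ratio, mismatch volume) reduce to simple voxel arithmetic. The following hypothetical sketch uses DEFUSE 3-style thresholds; the array shapes, voxel size, and lesion geometry are invented, and this is not output from RAPID or any commercial tool:

```python
import numpy as np

# Hypothetical perfusion maps for a 64 x 64 x 20 volume with
# 2 x 2 x 5 mm voxels (all values illustrative).
voxel_ml = (2 * 2 * 5) / 1000.0          # voxel volume in mL

tmax = np.zeros((64, 64, 20))            # time-to-maximum, seconds
rcbf = np.ones((64, 64, 20))             # CBF relative to contralateral side

tmax[20:40, 20:44, 5:15] = 8.0           # hypoperfused territory (Tmax > 6 s)
rcbf[26:34, 28:36, 8:12] = 0.2           # ischemic core (rCBF < 30%)

core_ml = (rcbf < 0.30).sum() * voxel_ml
hypoperf_ml = (tmax > 6.0).sum() * voxel_ml
mismatch_ratio = hypoperf_ml / core_ml
mismatch_volume_ml = hypoperf_ml - core_ml

# DEFUSE 3-style selection: core < 70 mL, ratio >= 1.8, mismatch >= 15 mL
eligible = (core_ml < 70) and (mismatch_ratio >= 1.8) and (mismatch_volume_ml >= 15)
print(core_ml, hypoperf_ml, mismatch_ratio, eligible)
```

In practice the hard part is producing reliable Tmax and rCBF maps from raw perfusion data; the thresholding and volume arithmetic shown here are the comparatively simple final step.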

Several FDA-approved AI tools help quantify brain structures in neuroradiology, including quantitative MRI analysis of anatomy and PET analysis of functional uptake, assisting in more accurate and objective detection and monitoring of conditions such as atrophy, dementia, trauma, seizure disorders, and MS.48 The growing number of FDA-approved AI technologies and the recent CMS-approved reimbursement for an AI tool indicate a changing landscape that is more accepting of downstream applications of AI in neuroradiology. As AI continues to integrate into medical regulation and finance, we predict it will continue to play a prominent role in neuroradiology.

Practical and Ethical Considerations

In any discussion of the benefits of AI, it is prudent to address its shortcomings. Chief among these is overfitting, which occurs when an AI is aligned too closely with its training dataset and is prone to error when applied to novel cases, often a byproduct of a small training set.60 Neuroradiology, particularly for uncommon, advanced imaging methods, has a smaller number of available studies.61 Even with more prevalent imaging modalities, such as head CT, the work of collecting training scans from patients with the prerequisite disease processes, particularly if those processes are rare, can limit the number of datapoints collected. Neuroradiologists should understand how an AI tool was generated, including the size and variety of its training dataset, to best gauge the clinical applicability and fitness of the system.
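The link between small training sets and overfitting described above can be demonstrated in a few lines: a model with as many free parameters as training samples fits the noise exactly and then errs on novel cases. This is an illustrative 1-D regression, not an imaging model; the data and noise level are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Five noisy training samples standing in for a small training set;
# the true underlying relationship is y = 2x.
x_train = np.arange(5.0)
y_train = 2.0 * x_train + rng.normal(0, 0.5, 5)

# A degree-4 polynomial has as many parameters as training points:
# it interpolates the noise exactly, so training error is ~ 0.
flexible = np.polyfit(x_train, y_train, deg=4)
simple = np.polyfit(x_train, y_train, deg=1)

# Novel cases between the training samples, scored against the truth.
x_test = x_train + 0.5
y_true = 2.0 * x_test

def mse(coeffs, x, y):
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

train_err = mse(flexible, x_train, y_train)
test_err = mse(flexible, x_test, y_true)
print(train_err, test_err)           # near-zero train error, larger test error
print(mse(simple, x_test, y_true))   # the simpler model, for comparison
```

The same failure mode scales up: a deep network trained on a handful of scans can memorize those scans while generalizing poorly, which is why the size and variety of the training dataset matter when judging a tool.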

Another concern in implementing AI clinical decision support tools is automation bias: the tendency for clinicians to favor machine-generated decisions and ignore contrary data or conflicting human decisions.62 This situation often arises when radiologists face overwhelming patient loads or work in underresourced settings with little ability to review every AI-based diagnosis. Although AI might benefit such conditions by reducing physician workload and streamlining the diagnostic process, there is a propensity to rely improperly on a tool meant to augment, not replace, a radiologist’s judgment. Such cases have led to adverse outcomes for patients, and legal precedent shows that this constitutes negligence.63 Maintaining awareness of each tool’s limitations and proper application is the only remedy for such situations.

Ethically, we must consider the opaqueness of ML-developed neuroimaging AIs. For many systems, the specific process by which the AI arrives at its conclusions is unknown. This AI “black box” can conceal potential errors and biases that are masked by overall positive performance metrics. The lack of understanding about how a tool functions in the zero-failure clinical setting understandably gives radiologists pause. The question must be asked: Is it ethical to use a system that is a relatively unknown quantity? Several governments, including US states, Canada, and the European Union, have produced an answer: each has implemented policies requiring that health care AIs use some method to display to end users the process by which they arrive at their conclusions.64-68

The 21st Century Cures Act declares that to attain approval, clinical AIs must demonstrate this explainability to clinicians and patients.69 The response has been an explosion in the development of explainable AI. Systems that visualize with heatmaps the image areas where AI attention most often rests, generate labels for the most heavily weighted features of radiographic images, and create full diagnostic reports to justify AI conclusions aim to meet the goal of transparency and inspire confidence in clinical end users.70 The ability to understand the “thought process” of a system also proves useful for error correction and retooling. A trend toward under- or overdetecting conditions, flagging seemingly irrelevant image regions, or low reproducibility can be better addressed when it is clear how the AI is drawing its false conclusions. Through an iterative process of testing and redesigning, false-positive and false-negative rates can be reduced, the need for human intervention can be lowered to an appropriate minimum, and patient outcomes can be improved.71

Data collection raises another ethical concern. Training functional clinical decision support tools requires massive amounts of patient demographic, laboratory, and imaging data. With incentives to develop the most powerful AI systems, record collection can venture down a path where patient autonomy and privacy are threatened. Radiologists have a duty to ensure that data mining serves patients and improves the practice of radiology while protecting patients’ personal information.62 Policies have placed similar limits on the access to and use of patient records.64-69 Patients have the right to request an explanation of the AI systems their data have been used to train. Approval for data acquisition requires the use of explainable AI, implementation of standardized data security protocols, and adequate proof of communal benefit from the clinical decision support tool. Establishment of state-mandated protections bodes well for a future in which developers can access enormous caches of data while patients and health care professionals are assured that no identifying information has escaped a well-regulated space. For the individual radiologist, the knowledge that each datum represents a human life, a person made vulnerable by seeking relief for what ails them, should serve as a lasting reminder to operate with utmost care when handling sensitive information.

Conclusions

The demonstrated applications of AI in neuroimaging are numerous and varied, and it is reasonable to assume that its implementation will increase as the technology matures. AI use for detecting important neurologic conditions holds promise in combatting ever greater imaging volumes and providing timely diagnoses. As medicine witnesses the continuing adoption of AI, it is important that practitioners possess an understanding of its current and emerging uses.

References

1. Chartrand G, Cheng PM, Vorontsov E, et al. Deep learning: a primer for radiologists. Radiographics. 2017;37(7):2113-2131. doi:10.1148/rg.2017170077

2. King BF Jr. Guest editorial: discovery and artificial intelligence. AJR Am J Roentgenol. 2017;209(6):1189-1190. doi:10.2214/AJR.17.19178

3. Syed AB, Zoga AC. Artificial intelligence in radiology: current technology and future directions. Semin Musculoskelet Radiol. 2018;22(5):540-545. doi:10.1055/s-0038-1673383

4. Deo RC. Machine learning in medicine. Circulation. 2015;132(20):1920-1930. doi:10.1161/CIRCULATIONAHA.115.001593

5. Litjens G, Kooi T, Bejnordi BE, et al. A survey on deep learning in medical image analysis. Med Image Anal. 2017;42:60-88. doi:10.1016/j.media.2017.07.005

6. Pesapane F, Codari M, Sardanelli F. Artificial intelligence in medical imaging: threat or opportunity? Radiologists again at the forefront of innovation in medicine. Eur Radiol Exp. 2018;2(1):35. doi:10.1186/s41747-018-0061-6

7. Curtis C, Liu C, Bollerman TJ, Pianykh OS. Machine learning for predicting patient wait times and appointment delays. J Am Coll Radiol. 2018;15(9):1310-1316. doi:10.1016/j.jacr.2017.08.021

8. Andre JB, Bresnahan BW, Mossa-Basha M, et al. Toward quantifying the prevalence, severity, and cost associated with patient motion during clinical MR examinations. J Am Coll Radiol. 2015;12(7):689-695. doi:10.1016/j.jacr.2015.03.007

9. Sreekumari A, Shanbhag D, Yeo D, et al. A deep learning-based approach to reduce rescan and recall rates in clinical MRI examinations. AJNR Am J Neuroradiol. 2019;40(2):217-223. doi:10.3174/ajnr.A5926

10. Zhao C, Shao M, Carass A, et al. Applications of a deep learning method for anti-aliasing and super-resolution in MRI. Magn Reson Imaging. 2019;64:132-141. doi:10.1016/j.mri.2019.05.038

11. Lee D, Yoo J, Tak S, Ye JC. Deep residual learning for accelerated MRI using magnitude and phase networks. IEEE Trans Biomed Eng. 2018;65(9):1985-1995. doi:10.1109/TBME.2018.2821699

12. Mardani M, Gong E, Cheng JY, et al. Deep generative adversarial neural networks for compressive sensing MRI. IEEE Trans Med Imaging. 2019;38(1):167-179. doi:10.1109/TMI.2018.2858752

13. Dong C, Loy CC, He K, Tang X. Image super-resolution using deep convolutional networks. IEEE Trans Pattern Anal Mach Intell. 2016;38(2):295-307. doi:10.1109/TPAMI.2015.2439281

14. Sammet S. Magnetic resonance safety. Abdom Radiol (NY). 2016;41(3):444-451. doi:10.1007/s00261-016-0680-4

15. Gong E, Pauly JM, Wintermark M, Zaharchuk G. Deep learning enables reduced gadolinium dose for contrast-enhanced brain MRI. J Magn Reson Imaging. 2018;48(2):330-340. doi:10.1002/jmri.25970

16. NIH awards Subtle Medical, Inc. $1.6 million grant to improve safety of MRI exams by reducing gadolinium dose using AI. Press release. September 18, 2019. Accessed March 14, 2022. https://www.biospace.com/article/releases/nih-awards-subtle-medical-inc-1-6-million-grant-to-improve-safety-of-mri-exams-by-reducing-gadolinium-dose-using-ai

17. Narayana PA, Coronado I, Sujit SJ, Wolinsky JS, Lublin FD, Gabr RE. Deep learning for predicting enhancing lesions in multiple sclerosis from noncontrast MRI. Radiology. 2020;294(2):398-404. doi:10.1148/radiol.2019191061

18. Jack CR Jr, Knopman DS, Jagust WJ, et al. Hypothetical model of dynamic biomarkers of the Alzheimer’s pathological cascade. Lancet Neurol. 2010;9(1):119-128. doi:10.1016/S1474-4422(09)70299-6

19. Gatidis S, Würslin C, Seith F, et al. Towards tracer dose reduction in PET studies: simulation of dose reduction by retrospective randomized undersampling of list-mode data. Hell J Nucl Med. 2016;19(1):15-18. doi:10.1967/s002449910333

20. Kaplan S, Zhu YM. Full-dose PET image estimation from low-dose PET image using deep learning: a pilot study. J Digit Imaging. 2019;32(5):773-778. doi:10.1007/s10278-018-0150-3

21. Xu J, Gong E, Pauly J, Zaharchuk G. 200x low-dose PET reconstruction using deep learning. arXiv:1712.04119. Accessed February 16, 2022. https://arxiv.org/pdf/1712.04119.pdf

22. Chen KT, Gong E, de Carvalho Macruz FB, et al. Ultra-low-dose 18F-florbetaben amyloid PET imaging using deep learning with multi-contrast MRI inputs. Radiology. 2019;290(3):649-656. doi:10.1148/radiol.2018180940

23. Ouyang J, Chen KT, Gong E, Pauly J, Zaharchuk G. Ultra-low-dose PET reconstruction using generative adversarial network with feature matching and task-specific perceptual loss. Med Phys. 2019;46(8):3555-3564. doi:10.1002/mp.13626

24. Brenner DJ, Hall EJ. Computed tomography—an increasing source of radiation exposure. N Engl J Med. 2007;357(22):2277-2284. doi:10.1056/NEJMra072149

25. Wolterink JM, Leiner T, Viergever MA, Isgum I. Generative adversarial networks for noise reduction in low-dose CT. IEEE Trans Med Imaging. 2017;36(12):2536-2545. doi:10.1109/TMI.2017.2708987

26. Sohn YH, Cheon HY, Jeon P, Kang SY. Clinical implication of cerebral artery calcification on brain CT. Cerebrovasc Dis. 2004;18(4):332-337. doi:10.1159/000080772

27. Kang E, Min J, Ye JC. A deep convolutional neural network using directional wavelets for low-dose X-ray CT reconstruction. Med Phys. 2017;44(10):e360-e375. doi:10.1002/mp.12344

28. ClariPi gets FDA clearance for AI-powered CT image denoising solution. Published June 24, 2019. Accessed February 16, 2022. https://www.itnonline.com/content/claripi-gets-fda-clearance-ai-powered-ct-image-denoising-solution

29. Hausleiter J, Meyer T, Hermann F, et al. Estimated radiation dose associated with cardiac CT angiography. JAMA. 2009;301(5):500-507. doi:10.1001/jama.2009.54

30. Al-Mallah M, Aljizeeri A, Alharthi M, Alsaileek A. Routine low-radiation-dose coronary computed tomography angiography. Eur Heart J Suppl. 2014;16(suppl B):B12-B16. doi:10.1093/eurheartj/suu024

31. Benjamens S, Dhunnoo P, Meskó B. The state of artificial intelligence-based FDA-approved medical devices and algorithms: an online database. NPJ Digit Med. 2020;3:118. doi:10.1038/s41746-020-00324-0

32. Talebi-Liasi F, Markowitz O. Is artificial intelligence going to replace dermatologists? Cutis. 2020;105(1):28-31.

33. Khan O, Bebb G, Alimohamed NA. Artificial intelligence in medicine: what oncologists need to know about its potential—and its limitations. Oncology Exchange. 2017;16(4):8-13. http://www.oncologyex.com/pdf/vol16_no4/feature_khan-ai.pdf

34. Liu X, Faes L, Kale AU, et al. A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis. Lancet Digit Health. 2019;1(6):e271-e297. doi:10.1016/S2589-7500(19)30123-2

35. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. 2019;25(1):44-56. doi:10.1038/s41591-018-0300-7

36. Salim M, Wåhlin E, Dembrower K, et al. External evaluation of 3 commercial artificial intelligence algorithms for independent assessment of screening mammograms. JAMA Oncol. 2020;6(10):1581-1588. doi:10.1001/jamaoncol.2020.3321

37. Arbabshirani MR, Fornwalt BK, Mongelluzzo GJ, et al. Advanced machine learning in action: identification of intracranial hemorrhage on computed tomography scans of the head with clinical workflow integration. NPJ Digit Med. 2018;1(1):1-7. doi:10.1038/s41746-017-0015-z

38. Sheth D, Giger ML. Artificial intelligence in the interpretation of breast cancer on MRI. J Magn Reson Imaging. 2020;51(5):1310-1324. doi:10.1002/jmri.26878

39. Borkowski AA, Viswanadhan NA, Thomas LB, Guzman RD, Deland LA, Mastorides SM. Using artificial intelligence for COVID-19 chest X-ray diagnosis. Fed Pract. 2020;37(9):398-404. doi:10.12788/fp.0045

40. Kermany DS, Goldbaum M, Cai W, et al. Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell. 2018;172(5):1122-1131.e9. doi:10.1016/j.cell.2018.02.010

41. Nam JG, Park S, Hwang EJ, et al. Development and validation of deep learning-based automatic detection algorithm for malignant pulmonary nodules on chest radiographs. Radiology. 2019;290(1):218-228. doi:10.1148/radiol.2018180237

42. Zech JR, Badgeley MA, Liu M, Costa AB, Titano JJ, Oermann EK. Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: a cross-sectional study. PLoS Med. 2018;15(11):e1002683. doi:10.1371/journal.pmed.1002683

43. Lakhani P, Sundaram B. Deep learning at chest radiography: automated classification of pulmonary tuberculosis by using convolutional neural networks. Radiology. 2017;284(2):574-582. doi:10.1148/radiol.2017162326

44. Rajpurkar P, Joshi A, Pareek A, et al. CheXpedition: investigating generalization challenges for translation of chest X-Ray algorithms to the clinical setting. arXiv preprint arXiv:200211379. Accessed February 16, 2022. https://arxiv.org/pdf/2002.11379.pdf

45. He J, Baxter SL, Xu J, Xu J, Zhou X, Zhang K. The practical implementation of artificial intelligence technologies in medicine. Nat Med. 2019;25(1):30-36. doi:10.1038/s41591-018-0307-0

46. Meyer-Bäse A, Morra L, Meyer-Bäse U, Pinker K. Current status and future perspectives of artificial intelligence in magnetic resonance breast imaging. Contrast Media Mol Imaging. 2020;2020:6805710. doi:10.1155/2020/6805710

47. Booth AL, Abels E, McCaffrey P. Development of a prognostic model for mortality in COVID-19 infection using machine learning. Mod Pathol. 2020;4(3):522-531. doi:10.1038/s41379-020-00700-x

48. Bash S. Enhancing neuroimaging with artificial intelligence. Applied Radiology. 2020;49(1):20-21.

49. Jiang F, Jiang Y, Zhi H, et al. Artificial intelligence in healthcare: past, present and future. Stroke Vasc Neurol. 2017;2(4):230-243. doi:10.1136/svn-2017-000101

50. Valliani AA, Ranti D, Oermann EK. Deep learning and neurology: a systematic review. Neurol Ther. 2019;8(2):351-365. doi:10.1007/s40120-019-00153-8

51. Gupta R, Krishnam SP, Schaefer PW, Lev MH, Gonzalez RG. An east coast perspective on artificial intelligence and machine learning: part 2: ischemic stroke imaging and triage. Neuroimaging Clin N Am. 2020;30(4):467-478. doi:10.1016/j.nic.2020.08.002

52. Belić M, Bobić V, Badža M, Šolaja N, Đurić-Jovičić M, Kostić VS. Artificial intelligence for assisting diagnostics and assessment of Parkinson’s disease-A review. Clin Neurol Neurosurg. 2019;184:105442. doi:10.1016/j.clineuro.2019.105442

53. An S, Kang C, Lee HW. Artificial intelligence and computational approaches for epilepsy. J Epilepsy Res. 2020;10(1):8-17. doi:10.14581/jer.20003

54. Pavel AM, Rennie JM, de Vries LS, et al. A machine-learning algorithm for neonatal seizure recognition: a multicentre, randomised, controlled trial. Lancet Child Adolesc Health. 2020;4(10):740-749. doi:10.1016/S2352-4642(20)30239-X

55. Afzal HMR, Luo S, Ramadan S, Lechner-Scott J. The emerging role of artificial intelligence in multiple sclerosis imaging. Mult Scler. 2020;1352458520966298. doi:10.1177/1352458520966298

56. Bouton CE. Restoring movement in paralysis with a bioelectronic neural bypass approach: current state and future directions. Cold Spring Harb Perspect Med. 2019;9(11):a034306. doi:10.1101/cshperspect.a034306

57. Hassan AE. New technology add-on payment (NTAP) for Viz LVO: a win for stroke care. J Neurointerv Surg. 2020;neurintsurg-2020-016897. doi:10.1136/neurintsurg-2020-016897

58. Nogueira RG, Jadhav AP, Haussen DC, et al; DAWN Trial Investigators. Thrombectomy 6 to 24 hours after stroke with a mismatch between deficit and infarct. N Engl J Med. 2018;378:11-21. doi:10.1056/NEJMoa1706442

59. Albers GW, Marks MP, Kemp S, et al; DEFUSE 3 Investigators. Thrombectomy for stroke at 6 to 16 hours with selection by perfusion imaging. N Engl J Med. 2018;378:708-718. doi:10.1056/NEJMoa1713973

60. Bi WL, Hosny A, Schabath MB, et al. Artificial intelligence in cancer imaging: clinical challenges and applications. CA Cancer J Clin. 2019;69(2):127-157. doi:10.3322/caac.21552 

61. Wagner MW, Namdar K, Biswas A, Monah S, Khalvati F, Ertl-Wagner BB. Radiomics, machine learning, and artificial intelligence-what the neuroradiologist needs to know. Neuroradiology. 2021;63(12):1957-1967. doi:10.1007/s00234-021-02813-9 

62. Geis JR, Brady AP, Wu CC, et al. Ethics of artificial intelligence in radiology: summary of the Joint European and North American Multisociety Statement. J Am Coll Radiol. 2019;16(11):1516-1521. doi:10.1016/j.jacr.2019.07.028

63. Kingston J. Artificial intelligence and legal liability. arXiv:1802.07782. https://arxiv.org/ftp/arxiv/papers/1802/1802.07782.pdf

64. Council of the European Union, General Data Protection Regulation. Official Journal of the European Union. Accessed February 16, 2022. https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32016R0679

65. Consumer Privacy Protection Act of 2017, HR 4081, 115th Cong (2017). Accessed February 10, 2022. https://www.congress.gov/bill/115th-congress/house-bill/4081

66. Cal. Civ. Code § 1798.198(a) (2018). California Consumer Privacy Act of 2018.

67. Va. Code Ann. § 59.1 (2021). Consumer Data Protection Act. Accessed February 10, 2022. https://lis.virginia.gov/cgi-bin/legp604.exe?212+ful+SB1392ER+pdf

68. Colo. Rev. Stat. § 6-1-1301 (2021). Colorado Privacy Act. Accessed February 10, 2022. https://leg.colorado.gov/sites/default/files/2021a_190_signed.pdf

69. 21st Century Cures Act, Pub L No. 114-255 (2016). Accessed February 10, 2022. https://www.govinfo.gov/content/pkg/PLAW-114publ255/html/PLAW-114publ255.htm

70. Huff DT, Weisman AJ, Jeraj R. Interpretation and visualization techniques for deep learning models in medical imaging. Phys Med Biol. 2021;66(4):04TR01. doi:10.1088/1361-6560/abcd17

71. Thrall JH, Li X, Li Q, et al. Artificial intelligence and machine learning in radiology: opportunities, challenges, pitfalls, and criteria for success. J Am Coll Radiol. 2018;15(3, pt B):504-508. doi:10.1016/j.jacr.2017.12.026


38. Sheth D, Giger ML. Artificial intelligence in the interpretation of breast cancer on MRI. J Magn Reson Imaging. 2020;51(5):1310-1324. doi:10.1002/jmri.26878

39. Borkowski AA, Viswanadhan NA, Thomas LB, Guzman RD, Deland LA, Mastorides SM. Using artificial intelligence for COVID-19 chest X-ray diagnosis. Fed Pract. 2020;37(9):398-404. doi:10.12788/fp.0045

40. Kermany DS, Goldbaum M, Cai W, et al. Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell. 2018;172(5):1122-1131.e9. doi:10.1016/j.cell.2018.02.010

41. Nam JG, Park S, Hwang EJ, et al. Development and validation of deep learning-based automatic detection algorithm for malignant pulmonary nodules on chest radiographs. Radiology. 2019;290(1):218-228. doi:10.1148/radiol.2018180237

42. Zech JR, Badgeley MA, Liu M, Costa AB, Titano JJ, Oermann EK. Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: a cross-sectional study. PLoS Med. 2018;15(11):e1002683. doi:10.1371/journal.pmed.1002683

43. Lakhani P, Sundaram B. Deep learning at chest radiography: automated classification of pulmonary tuberculosis by using convolutional neural networks. Radiology. 2017;284(2):574-582. doi:10.1148/radiol.2017162326

44. Rajpurkar P, Joshi A, Pareek A, et al. CheXpedition: investigating generalization challenges for translation of chest X-Ray algorithms to the clinical setting. arXiv preprint arXiv:200211379. Accessed February 16, 2022. https://arxiv.org/pdf/2002.11379.pdf

45. He J, Baxter SL, Xu J, Xu J, Zhou X, Zhang K. The practical implementation of artificial intelligence technologies in medicine. Nat Med. 2019;25(1):30-36. doi:10.1038/s41591-018-0307-0

46. Meyer-Bäse A, Morra L, Meyer-Bäse U, Pinker K. Current status and future perspectives of artificial intelligence in magnetic resonance breast imaging. Contrast Media Mol Imaging. 2020;2020:6805710. doi:10.1155/2020/6805710

47. Booth AL, Abels E, McCaffrey P. Development of a prognostic model for mortality in COVID-19 infection using machine learning. Mod Pathol. 2020;4(3):522-531. doi:10.1038/s41379-020-00700-x

48. Bash S. Enhancing neuroimaging with artificial intelligence. Applied Radiology. 2020;49(1):20-21.

49. Jiang F, Jiang Y, Zhi H, et al. Artificial intelligence in healthcare: past, present and future. Stroke Vasc Neurol. 2017;2(4):230-243. doi:10.1136/svn-2017-000101

50. Valliani AA, Ranti D, Oermann EK. Deep learning and neurology: a systematic review. Neurol Ther. 2019;8(2):351-365. doi:10.1007/s40120-019-00153-8

51. Gupta R, Krishnam SP, Schaefer PW, Lev MH, Gonzalez RG. An east coast perspective on artificial intelligence and machine learning: part 2: ischemic stroke imaging and triage. Neuroimaging Clin N Am. 2020;30(4):467-478. doi:10.1016/j.nic.2020.08.002

52. Belić M, Bobić V, Badža M, Šolaja N, Đurić-Jovičić M, Kostić VS. Artificial intelligence for assisting diagnostics and assessment of Parkinson’s disease-A review. Clin Neurol Neurosurg. 2019;184:105442. doi:10.1016/j.clineuro.2019.105442

53. An S, Kang C, Lee HW. Artificial intelligence and computational approaches for epilepsy. J Epilepsy Res. 2020;10(1):8-17. doi:10.14581/jer.20003

54. Pavel AM, Rennie JM, de Vries LS, et al. A machine-learning algorithm for neonatal seizure recognition: a multicentre, randomised, controlled trial. Lancet Child Adolesc Health. 2020;4(10):740-749. doi:10.1016/S2352-4642(20)30239-X

55. Afzal HMR, Luo S, Ramadan S, Lechner-Scott J. The emerging role of artificial intelligence in multiple sclerosis imaging. Mult Scler. 2020;1352458520966298. doi:10.1177/1352458520966298

56. Bouton CE. Restoring movement in paralysis with a bioelectronic neural bypass approach: current state and future directions. Cold Spring Harb Perspect Med. 2019;9(11):a034306. doi:10.1101/cshperspect.a034306

57. Hassan AE. New technology add-on payment (NTAP) for Viz LVO: a win for stroke care. J Neurointerv Surg. 2020;neurintsurg-2020-016897. doi:10.1136/neurintsurg-2020-016897

58. Nogueira RG , Jadhav AP , Haussen DC , et al; DAWN Trial Investigators. Thrombectomy 6 to 24 hours after stroke with a mismatch between deficit and infarct. N Engl J Med. 2018;378:11–21. doi:10.1056/NEJMoa1706442

59. Albers GW , Marks MP , Kemp S , et al; DEFUSE 3 Investigators. Thrombectomy for stroke at 6 to 16 hours with selection by perfusion imaging. N Engl J Med. 2018;378:708–18. doi:10.1056/NEJMoa1713973

60. Bi WL, Hosny A, Schabath MB, et al. Artificial intelligence in cancer imaging: clinical challenges and applications. CA Cancer J Clin. 2019;69(2):127-157. doi:10.3322/caac.21552 

61. Wagner MW, Namdar K, Biswas A, Monah S, Khalvati F, Ertl-Wagner BB. Radiomics, machine learning, and artificial intelligence-what the neuroradiologist needs to know. Neuroradiology. 2021;63(12):1957-1967. doi:10.1007/s00234-021-02813-9 

62. Geis JR, Brady AP, Wu CC, et al. Ethics of artificial intelligence in radiology: summary of the Joint European and North American Multisociety Statement. J Am Coll Radiol. 2019;16(11):1516-1521. doi:10.1016/j.jacr.2019.07.028

63. Kingston J. Artificial intelligence and legal liability. arXiv:1802.07782. https://arxiv.org/ftp/arxiv/papers/1802/1802.07782.pdf

64. Council of the European Union, General Data Protection Regulation. Official Journal of the European Union. Accessed February 16, 2022. https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32016R0679

65. Consumer Privacy Protection Act of 2017, HR 4081, 115th Cong (2017). Accessed February 10, 2022. https://www.congress.gov/bill/115th-congress/house-bill/4081

66. Cal. Civ. Code § 1798.198(a) (2018). California Consumer Privacy Act of 2018.

67. Va. Code Ann. § 59.1 (2021). Consumer Data Protection Act. Accessed February 10, 2022. https://lis.virginia.gov/cgi-bin/legp604.exe?212+ful+SB1392ER+pdf

68. Colo. Rev. Stat. § 6-1-1301 (2021). Colorado Privacy Act. Accessed February 10, 2022. https://leg.colorado.gov/sites/default/files/2021a_190_signed.pdf

69. 21st Century Cures Act, Pub L No. 114-255 (2016). Accessed February 10, 2022. https://www.govinfo.gov/content/pkg/PLAW-114publ255/html/PLAW-114publ255.htm

70. Huff DT, Weisman AJ, Jeraj R. Interpretation and visualization techniques for deep learning models in medical imaging. Phys Med Biol. 2021;66(4):04TR01. doi:10.1088/1361-6560/abcd17

71. Thrall JH, Li X, Li Q, et al. Artificial intelligence and machine learning in radiology: opportunities, challenges, pitfalls, and criteria for success. J Am Coll Radiol. 2018;15(3, pt B):504-508. doi:10.1016/j.jacr.2017.12.026

Issue
Federal Practitioner - 39(1)s
Page Number
S14-S20

Artificial Intelligence: Review of Current and Future Applications in Medicine

Article Type
Changed
Mon, 11/08/2021 - 15:36

Artificial Intelligence (AI) was first described in 1956 and refers to machines that learn as they receive and process information, enabling them to “think” like humans.1 AI’s impact in medicine is increasing; currently, at least 29 AI medical devices and algorithms are approved by the US Food and Drug Administration (FDA) in a variety of areas, including radiograph interpretation, managing glucose levels in patients with diabetes mellitus, analyzing electrocardiograms (ECGs), and diagnosing sleep disorders, among others.2 Significantly, in 2020, the Centers for Medicare and Medicaid Services (CMS) announced the first reimbursement to hospitals for an AI platform, a model for early detection of strokes.3 AI is rapidly becoming an integral part of health care, and its role will only increase in the future (Table).

Table. Key Historical Events in Artificial Intelligence Development With a Focus on Health Care Applications

As knowledge in medicine expands exponentially, AI has great potential to assist with handling complex patient care data. The concept of exponential growth is not an intuitive one. As Bini described, with exponential growth the volume of knowledge amassed over the past 10 years may now be amassed in perhaps only 1 year.1 Likewise, equivalent advances over the past year may take just a few months. This phenomenon is partly due to the law of accelerating returns, which states that advances feed on themselves, continually increasing the rate of further advances.4 The volume of medical data doubles every 2 to 5 years.5 Fortunately, the field of AI is growing exponentially as well and can help health care practitioners (HCPs) keep pace, allowing the continued delivery of effective health care.

In this report, we review common terminology, principles, and general applications of AI, followed by current and potential applications of AI for selected medical specialties. Finally, we discuss AI’s future in health care, along with potential risks and pitfalls.


AI Overview

AI refers to machine programs that can “learn” or think based on past experiences. This functionality contrasts with simple rules-based programming available to health care for years. An example of rules-based programming is the warfarindosing.org website developed by Barnes-Jewish Hospital at Washington University Medical Center, which guides initial warfarin dosing.6,7 The prescriber inputs detailed patient information, including age, sex, height, weight, tobacco history, medications, laboratory results, and genotype if available. The application then calculates recommended warfarin dosing regimens to avoid over- or underanticoagulation. While the dosing algorithm may be complex, it depends entirely on preprogrammed rules. The program does not learn to reach its conclusions and recommendations from patient data.
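The contrast with learning systems can be made concrete. The sketch below is a hypothetical, drastically simplified rules-based dose function in the spirit of the example above; the factors and adjustment values are invented for illustration and are not the actual warfarindosing.org algorithm.

```python
def initial_warfarin_dose(age: int, weight_kg: float, takes_amiodarone: bool) -> float:
    """Toy rules-based dose estimate (mg/day). Every rule is
    preprogrammed; nothing is learned from patient data.
    Factors and adjustment values are illustrative only."""
    dose = 5.0                      # hypothetical baseline dose
    if age >= 65:
        dose -= 1.0                 # fixed, hand-written age adjustment
    if weight_kg < 60:
        dose -= 0.5                 # fixed low-weight adjustment
    if takes_amiodarone:
        dose *= 0.7                 # fixed drug-interaction rule
    return round(dose, 1)
```

However complex such a rule set becomes, its behavior is fully specified in advance; it never changes in response to the patients it sees, which is exactly the limitation ML addresses.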

In contrast, one of the most common subsets of AI is machine learning (ML). ML describes a program that “learns from experience and improves its performance as it learns.”1 With ML, the computer is initially provided with a training data set—data with known outcomes or labels. Because the initial data are input from known samples, this type of AI is known as supervised learning.8-10 As an example, we recently reported using ML to diagnose various types of cancer from pathology slides.11 In one experiment, we captured images of colon adenocarcinoma and normal colon (these 2 groups represent the training data set). Unlike traditional programming, we did not define characteristics that would differentiate colon cancer from normal; rather, the machine learned these characteristics independently by assessing the labeled images provided. A second data set (the validation data set) was used to evaluate the program and fine-tune the ML training model’s parameters. Finally, the program was presented with new images of cancer and normal cases for final assessment of accuracy (test data set). Our program learned to recognize differences from the images provided and was able to differentiate normal and cancer images with > 95% accuracy.
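The train/validate/test workflow described above can be sketched with a deliberately tiny model; the 2-dimensional "features" and data points below are invented stand-ins for image features, and nearest-centroid classification stands in for the far more complex models actually used.

```python
# Minimal supervised-learning sketch: the program learns class
# characteristics from labeled examples, then is scored on unseen data.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

def fit(training_set):
    """Learn one centroid per label from labeled examples."""
    by_label = {}
    for features, label in training_set:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, features):
    """Assign the label of the nearest learned centroid."""
    def dist2(c):
        return (c[0] - features[0]) ** 2 + (c[1] - features[1]) ** 2
    return min(model, key=lambda label: dist2(model[label]))

train = [((1, 1), "normal"), ((2, 1), "normal"),   # labeled training set
         ((8, 9), "cancer"), ((9, 8), "cancer")]
test = [((1, 2), "normal"), ((9, 9), "cancer")]    # held-out test set

model = fit(train)
accuracy = sum(predict(model, f) == y for f, y in test) / len(test)
```

Note that the differentiating characteristics were never specified; the model derived them from the labeled examples, which is the essence of supervised learning.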

Advances in computer processing have allowed for the development of artificial neural networks (ANNs). While there are several types of ANNs, the most common types used for image classification and segmentation are known as convolutional neural networks (CNNs).9,12-14 These programs are designed to work similarly to the human brain, specifically the visual cortex.15,16 As data are acquired, they are processed by various layers in the program. Much like neurons in the brain, each layer decides whether to advance information to the next.13,14 CNNs can be many layers deep, leading to the term deep learning: “computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction.”1,13,17
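The convolution and pooling operations at the heart of a CNN can be illustrated in a few lines; the 4 × 4 "image" and fixed edge filter below are toy inputs, and real CNNs learn their filter weights during training rather than having them supplied.

```python
# Toy illustration of the two core CNN layer operations: a convolution
# that extracts a local feature, then max pooling that downsamples it.

def convolve2d(image, kernel):
    k = len(kernel)
    out_size = len(image) - k + 1
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(k) for b in range(k))
             for j in range(out_size)]
            for i in range(out_size)]

def max_pool(fmap, size=2):
    return [[max(fmap[i + a][j + b]
                 for a in range(size) for b in range(size))
             for j in range(0, len(fmap[0]) - size + 1, size)]
            for i in range(0, len(fmap) - size + 1, size)]

image = [[0, 0, 1, 1],          # toy "image": dark left, bright right
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
edge_filter = [[-1, 1], [-1, 1]]  # responds to vertical dark-to-bright edges

features = convolve2d(image, edge_filter)  # strong response at the edge
pooled = max_pool(features)                # condensed summary of the response
```

A real CNN stacks many such layers, so later layers respond to progressively more abstract combinations of the simple features detected early on.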

ANNs can process larger volumes of data. This advance has led to the development of unstructured or unsupervised learning. With this type of learning, inputting defined features (ie, predetermined answers) as in the training data set described above is no longer required.1,8,10,14 The advantage of unsupervised learning is that the program can be presented raw data and extract meaningful interpretations without human input, often with less bias than may exist with supervised learning.1,18 If shown enough data, the program can extract relevant features to make conclusions independently without predefined definitions, potentially uncovering markers not previously known. For example, several studies have used unsupervised learning to search patient data to assess readmission risks of patients with congestive heart failure.10,19,20 The AI compiled features independently, including some not previously defined, and predicted which patients were at greater risk for readmission better than traditional methods did.
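As a minimal sketch of unsupervised learning, the k-means clustering routine below finds two natural groupings in unlabeled points with no predefined answers; the data points are invented, and a readmission-risk model would of course use real patient features in many more dimensions.

```python
# Unsupervised-learning sketch: k-means discovers groupings in
# unlabeled 2-D points; no outcomes or labels are ever provided.

def kmeans(points, k, iterations=10):
    centers = points[:k]                      # naive initialization
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:                      # assign each point to nearest center
            nearest = min(range(k),
                          key=lambda c: (p[0] - centers[c][0]) ** 2 +
                                        (p[1] - centers[c][1]) ** 2)
            clusters[nearest].append(p)
        centers = [                           # move centers to cluster means
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else centers[i]
            for i, cl in enumerate(clusters)]
    return centers, clusters

points = [(1, 1), (1, 2), (2, 1), (8, 8), (9, 8), (8, 9)]
centers, clusters = kmeans(points, k=2)       # two groupings emerge on their own
```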

Figure. Artificial Intelligence Health Care Applications


A more detailed description of the various terminologies and techniques of AI is beyond the scope of this review.9,10,17,21 However, in this basic overview, we describe 4 general areas in which AI impacts health care (Figure).


Health Care Applications

Image analysis has seen the most AI health care applications.8,15 AI has shown potential in interpreting many types of medical images, including pathology slides, radiographs of various types, retina and other eye scans, and photographs of skin lesions. Many studies have demonstrated that AI can interpret these images as accurately as or even better than experienced clinicians.9,13,22-29 Studies have suggested AI interpretation of radiographs may better distinguish COVID-19 pneumonia from pneumonia of other causes, and AI interpretation of pathology slides may detect specific genetic mutations not previously identifiable without additional molecular tests.11,14,23,24,30-32

The second area in which AI can impact health care is improving workflow and efficiency. AI has improved surgery scheduling, saving significant revenue, and decreased patient wait times for appointments.1 AI can screen and triage radiographs, allowing attention to be directed to critical patients. This use would be valuable in many busy clinical settings, such as the recent COVID-19 pandemic.8,23 Similarly, AI can screen retina images to prioritize urgent conditions.25 AI has improved pathologists’ efficiency when used to detect breast metastases.33 Finally, AI may reduce medical errors, thereby ensuring patient safety.8,9,34

A third health care benefit of AI is in public health and epidemiology. AI can assist with clinical decision-making and diagnoses in low-income countries and areas with limited health care resources and personnel.25,29 AI can improve identification of infectious outbreaks, such as tuberculosis, malaria, dengue fever, and influenza.29,35-40 AI has been used to predict transmission patterns of the Zika virus and the current COVID-19 pandemic.41,42 Applications can stratify the risk of outbreaks based on multiple factors, including age, income, race, atypical geographic clusters, and seasonal factors like rainfall and temperature.35,36,38,43 AI has been used to assess morbidity and mortality, such as predicting disease severity with malaria and identifying treatment failures in tuberculosis.29

Finally, AI can dramatically impact health care through its capacity to process large data sets or disconnected volumes of patient information, so-called big data.44-46 An example is the widespread use of electronic health records (EHRs) such as the Computerized Patient Record System used in Veterans Affairs medical centers (VAMCs). Much patient information exists as written text: HCP notes, laboratory and radiology reports, medication records, etc. Natural language processing (NLP) allows platforms to sort through extensive volumes of data on complex patients at rates much faster than human capability, which has great potential to assist with diagnosis and treatment decisions.9
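A rudimentary sketch of the kind of task NLP performs on clinical text follows; the pattern-matching approach and the note are invented simplifications, and production systems use learned language models, but the input and output shapes are representative.

```python
# Minimal NLP sketch: pull structured findings out of free-text notes,
# flagging simple negation ("no", "denies") in the same sentence.
import re

def extract_findings(note, vocabulary):
    """Return known clinical concepts mentioned in a note,
    mapped to True (asserted) or False (negated)."""
    findings = {}
    for sentence in re.split(r"[.;]\s*", note.lower()):
        negated = bool(re.search(r"\b(no|denies|without)\b", sentence))
        for term in vocabulary:
            if term in sentence:
                findings[term] = not negated
    return findings

note = ("Patient denies chest pain. Longstanding atrial fibrillation; "
        "now with shortness of breath.")
vocab = ["chest pain", "atrial fibrillation", "shortness of breath"]

found = extract_findings(note, vocab)
```

Even this crude version shows why NLP is valuable for EHR data: it converts narrative text into structured, queryable facts at machine speed.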

Medical literature is being produced at rates that exceed our ability to digest it. More than 200,000 cancer-related articles were published in 2019 alone.14 The NLP capabilities of AI have the potential to rapidly sort through this extensive literature and relate it to specific language in patient records to guide therapy.46 IBM Watson, a supercomputer based on ML and NLP, demonstrates this concept with many potential applications, only some of which relate to health care.1,9 Watson has an oncology component to assimilate multiple aspects of patient care, including clinical notes, pathology results, radiograph findings, staging, and a tumor’s genetic profile. It coordinates these inputs from the EHR and mines medical literature and research databases to recommend treatment options.1,46 AI can assess and compile far greater patient data and therapeutic options than would be feasible for individual clinicians, thus providing customized patient care.47 Watson has partnered with numerous medical centers, including MD Anderson Cancer Center and Memorial Sloan Kettering Cancer Center, with variable success.44,47-49 While the full potential of Watson appears not yet realized, such AI-driven approaches will likely play an important role in leveraging the hidden value in the expanding volume of health care information.

Medical Specialty Applications

Radiology

Currently > 70% of FDA-approved AI medical devices are in the field of radiology.2 Most radiology departments have used AI-friendly digital imaging for years, such as the picture archiving and communication systems used by numerous health care systems, including VAMCs.2,15 Gray-scale images common in radiology lend themselves to standardization, although AI is not limited to black-and-white image interpretation.15

An abundance of literature describes plain radiograph interpretation using AI. One FDA-approved platform improved X-ray diagnosis of wrist fractures when used by emergency medicine clinicians.2,50 AI has been applied to chest X-ray (CXR) interpretation of many conditions, including pneumonia, tuberculosis, malignant lung lesions, and COVID-19.23,25,28,44,51-53 For example, Nam and colleagues suggested AI is better at diagnosing malignant pulmonary nodules from CXRs than are trained radiologists.28

In addition to plain radiographs, AI has been applied to many other imaging technologies, including ultrasounds, positron emission tomography, mammograms, computed tomography (CT), and magnetic resonance imaging (MRI).15,26,44,48,54-56 A large study demonstrated that ML platforms significantly reduced the time to diagnose intracranial hemorrhages on CT and identified subtle hemorrhages missed by radiologists.55 Other studies have claimed that AI programs may be better than radiologists in detecting cancer in screening mammograms, and 3 FDA-approved devices focus on mammogram interpretation.2,15,54,57 There is also great interest in MRI applications to detect and predict prognosis for breast cancer based on imaging findings.21,56

Aside from providing accurate diagnoses, other studies focus on AI radiograph interpretation to assist with patient screening, triage, improving time to final diagnosis, providing a rapid “second opinion,” and even monitoring disease progression and offering insights into prognosis.8,21,23,52,55,56,58 These features help in busy urban centers but may play an even greater role in areas with limited access to health care or trained specialists such as radiologists.52


Cardiology

Cardiology has the second highest number of FDA-approved AI applications.2 Many cardiology AI platforms involve image analysis, as described in several recent reviews.45,59,60 AI has been applied to echocardiography to measure ejection fractions, detect valvular disease, and assess heart failure from hypertrophic and restrictive cardiomyopathy and amyloidosis.45,48,59 Applications for cardiac CT scans and CT angiography have successfully quantified calcified and noncalcified coronary artery plaques, assessed vessel lumens and myocardial perfusion, and performed coronary artery calcium scoring.45,59,60 Likewise, AI applications for cardiac MRI have been used to quantify ejection fraction, assess large-vessel flow, and measure cardiac scar burden.45,59

For years, ECG devices have provided automated interpretation of limited accuracy using preprogrammed parameters.48 However, the application of AI allows ECG interpretation on par with that of trained cardiologists. Numerous such AI applications exist, and 2 FDA-approved devices perform ECG interpretation.2,61-64 One of these devices incorporates an AI-powered stethoscope to detect atrial fibrillation and heart murmurs.65

Pathology

The advancement of whole slide imaging, wherein entire slides can be scanned and digitized at high speed and resolution, creates great potential for AI applications in pathology.12,24,32,33,66 A landmark study demonstrating the potential of AI for assessing whole slide imaging examined sentinel lymph node metastases in patients with breast cancer.22 Multiple algorithms in the study demonstrated that AI was equivalent or superior to pathologists in detecting metastases, especially when the pathologists were time-constrained, consistent with a normal working environment. Significantly, the most accurate and efficient diagnoses were achieved when the pathologist and AI interpretations were used together.22,33

AI has shown promise in diagnosing many other entities, including cancers of the prostate (including Gleason scoring), lung, colon, breast, and skin.11,12,24,27,32,67 In addition, AI has shown great potential in scoring biomarkers important for prognosis and treatment, such as immunohistochemistry (IHC) labeling of Ki-67 and PD-L1.32 Pathologists can have difficulty classifying certain tumors or determining the site of origin for metastases, often having to rely on IHC with limited success. The unique features of image analysis with AI have the potential to assist in classifying difficult tumors and identifying sites of origin for metastatic disease based on morphology alone.11

Oncology depends heavily on molecular pathology testing to dictate treatment options and determine prognosis. Preliminary studies suggest that AI interpretation alone has the potential to delineate whether certain molecular mutations are present in tumors from various sites.11,14,24,32 One study combined histology and genomic results for AI interpretation that improved prognostic predictions.68 In addition, AI analysis may have potential in predicting tumor recurrence or prognosis based on cellular features, as demonstrated for lung cancer and melanoma.67,69,70

Ophthalmology

AI applications for ophthalmology have focused on diabetic retinopathy, age-related macular degeneration, glaucoma, retinopathy of prematurity, age-related and congenital cataracts, and retinal vein occlusion.71-73 Diabetic retinopathy is a leading cause of blindness and has been studied by numerous platforms with good success, most having used color fundus photography.71,72 One study showed AI could diagnose diabetic retinopathy and diabetic macular edema with specificities similar to those of ophthalmologists.74 In 2018, the FDA approved the AI platform IDx-DR. This diagnostic system classifies retinal images and recommends referral for patients determined to have “more than mild diabetic retinopathy” and reexamination within a year for other patients.8,75 Significantly, the platform’s recommendations do not require confirmation by a clinician.8
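The triage step that follows image classification can be sketched as below; the severity grades and mapping are illustrative assumptions modeled loosely on the referral logic described above, not IDx-DR's actual decision rules.

```python
# Hypothetical sketch of a post-classification triage rule: map a
# model-assigned retinopathy grade to a recommendation. Grades and
# thresholds are illustrative only.

GRADES = ["none", "mild", "moderate", "severe", "proliferative"]

def triage(severity: str) -> str:
    if severity not in GRADES:
        raise ValueError(f"unknown grade: {severity}")
    if GRADES.index(severity) > GRADES.index("mild"):
        return "refer to eye care professional"   # "more than mild" disease
    return "reexamine in 12 months"
```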

AI has been applied to other modalities in ophthalmology such as optical coherence tomography (OCT) to diagnose retinal disease and to predict appropriate management of congenital cataracts.25,73,76 For example, an AI application using OCT has been demonstrated to match or exceed the accuracy of retinal experts in diagnosing and triaging patients with a variety of retinal pathologies, including patients needing urgent referrals.77

Dermatology

Multiple studies demonstrate AI performs at least equal to experienced dermatologists in differentiating selected skin lesions.78-81 For example, Esteva and colleagues demonstrated AI could differentiate keratinocyte carcinomas from benign seborrheic keratoses and malignant melanomas from benign nevi with accuracy equal to 21 board-certified dermatologists.78


AI is applicable to various imaging procedures common to dermatology, such as dermoscopy, very high-frequency ultrasound, and reflectance confocal microscopy.82 Several studies have demonstrated that AI interpretation compared favorably to dermatologists evaluating dermoscopy to assess melanocytic lesions.78-81,83

A limitation in these studies is that they differentiate only a few diagnoses.82 Furthermore, dermatologists have sensory input such as touch and visual examination under various conditions, something AI has yet to replicate.15,34,84 Also, most AI devices use no or limited clinical information.81 Dermatologists can recognize rarer conditions for which AI models may have had limited or no training.34 Nevertheless, a recent study assessed AI for the diagnosis of 134 separate skin disorders with promising results, including providing diagnoses with accuracy comparable to that of dermatologists and providing accurate treatment strategies.84 As Topol points out, most skin lesions are diagnosed in the primary care setting, where AI can have a greater impact when used in conjunction with the clinical impression, especially where specialists are in limited supply.48,78

Finally, dermatology lends itself to using portable or smartphone applications (apps) wherein the user can photograph a lesion for analysis by AI algorithms to assess the need for further evaluation or make treatment recommendations.34,84,85 Although results from currently available apps are not encouraging, they may play a greater role as the technology advances.34,85


Oncology

Applications of AI in oncology include predicting prognosis for patients with cancer based on histologic and/or genetic information.14,68,86 Programs can predict the risk of complications before surgery for malignancies and the risk of recurrence afterward.44,87-89 AI can also assist in treatment planning and predict treatment failure in radiation therapy.90,91

AI has great potential in processing the large volumes of patient data in cancer genomics. Next-generation sequencing has allowed for the identification of millions of DNA sequences in a single tumor to detect genetic anomalies.92 Thousands of mutations can be found in individual tumor samples, and processing this information and determining its significance can be beyond human capability.14 We know little about the effects of various mutation combinations, and most tumors have a heterogeneous molecular profile among different cell populations.14,93 The presence or absence of various mutations can have diagnostic, prognostic, and therapeutic implications.93 AI has great potential to sort through these complex data and identify actionable findings.

More than 200,000 cancer-related articles were published in 2019, and publications in the field of cancer genomics are increasing exponentially.14,92,93 Patel and colleagues assessed the utility of IBM Watson for Genomics against results from a molecular tumor board.93 Watson for Genomics identified potentially significant mutations not identified by the tumor board in 32% of patients. Most mutations were related to new clinical trials not yet added to the tumor board watch list, demonstrating the role AI will have in processing the large volume of genetic data required to deliver personalized medicine moving forward.

Gastroenterology

AI has shown promise in predicting risk or outcomes based on clinical parameters in various common gastroenterology problems, including gastric reflux, acute pancreatitis, gastrointestinal bleeding, celiac disease, and inflammatory bowel disease.94,95 AI endoscopic analysis has demonstrated potential in assessing Barrett’s esophagus, gastric Helicobacter pylori infections, gastric atrophy, and gastric intestinal metaplasia.95 Applications have been used to assess esophageal, gastric, and colonic malignancies, including depth of invasion based on endoscopic images.95 Finally, studies have evaluated AI to assess small colon polyps during colonoscopy, including differentiating benign and premalignant polyps with success comparable to gastroenterologists.94,95 AI has been shown to increase the speed and accuracy of gastroenterologists in detecting small polyps during colonoscopy.48 In a prospective randomized study, colonoscopies performed using an AI device identified significantly more small adenomatous polyps than colonoscopies without AI.96

Neurology

It has been suggested that AI technologies are well suited for application in neurology due to the subtle presentation of many neurologic diseases.16 Viz LVO, the AI platform for stroke diagnosis that received the first CMS-approved reimbursement, analyzes CTs to detect early ischemic strokes and alerts the medical team, thus shortening time to treatment.3,97 Many other AI platforms using CT and MRI for the early detection of strokes, as well as for treatment and prognosis, are in use or development.9,97

AI technologies have been applied to neurodegenerative diseases, such as Alzheimer and Parkinson diseases.16,98 For example, several studies have evaluated patient movements in Parkinson disease both for early diagnosis and to assess response to treatment.98 These evaluations used external cameras as well as wearable devices and smartphone apps.

AI has also been applied to seizure disorders, attempting to determine seizure type, localize the area of seizure onset, and address the challenges of identifying seizures in neonates.99,100 Other potential applications range from early detection and prognostication in multiple sclerosis to restoring movement in paralysis from a variety of conditions, such as spinal cord injury.9,101,102

Mental Health

Due to the interactive nature of mental health care, the field has been slower to develop AI applications.18 Because the field relies heavily on textual information (eg, clinic notes, mood rating scales, and documentation of conversations), successful AI applications will likely depend on natural language processing (NLP).18 However, studies investigating the application of AI to mental health have also incorporated data such as brain imaging, smartphone monitoring, and social media platforms, such as Facebook and Twitter.18,103,104
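
As a toy illustration of the kind of NLP featurization such applications build on, the sketch below converts a note fragment (invented for illustration, not from any real record) into bag-of-words counts; real clinical NLP pipelines are far more sophisticated:

```python
from collections import Counter
import re

def bag_of_words(text):
    """Minimal NLP featurization: lowercase, tokenize, and count terms."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

# Hypothetical note fragment, invented for illustration only.
note = "Patient reports low mood and poor sleep; denies suicidal ideation."
features = bag_of_words(note)
print(features["mood"], features["sleep"])
```

Counts like these become the numeric inputs on which downstream ML models are trained.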

The risk of suicide is higher in veterans than in nonveterans, and ML algorithms have had limited success in predicting suicide risk in both populations.104-106 While early models have low positive predictive values and low sensitivities, they still promise to be a useful adjunct to traditional risk assessments.106 Kessler and colleagues suggest that combining multiple ML algorithms, rather than relying on a single one, might lead to greater success.105,106
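
The low positive predictive values reported for these models follow largely from the rarity of the outcome. A minimal sketch of the arithmetic, applying Bayes' theorem with illustrative numbers (not figures from the cited studies):

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value via Bayes' theorem."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# The same classifier that looks strong on a common outcome yields a very
# low PPV when the outcome is rare, as suicide is in any observation window.
for prevalence in (0.30, 0.01, 0.001):
    print(f"prevalence={prevalence:.3f}  PPV={ppv(0.80, 0.90, prevalence):.3f}")
```

This base-rate effect is one reason such models are proposed as adjuncts to, rather than replacements for, traditional risk assessment.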

AI may assist in diagnosing other mental health disorders, including major depressive disorder, attention deficit hyperactivity disorder (ADHD), schizophrenia, posttraumatic stress disorder, and Alzheimer disease.103,104,107 These investigations are in the early stages with limited clinical applicability. However, 2 AI applications awaiting FDA approval relate to ADHD and opioid use.2 Furthermore, potential exists for AI to not only assist with prevention and diagnosis of ADHD, but also to identify optimal treatment options.2,103

General and Personalized Medicine

Additional AI applications include diagnosing patients with suspected sepsis, measuring liver iron concentrations, predicting hospital mortality at the time of admission, and more.2,108,109 AI can guide end-of-life decisions such as resuscitation status or whether to initiate mechanical ventilation.48

AI-driven smartphone apps can benefit both patients and clinicians. Examples include predicting nonadherence to anticoagulation therapy; monitoring heart rhythms for atrial fibrillation or signs of hyperkalemia in patients with renal failure; and improving outcomes for patients with diabetes mellitus by decreasing glycemic variability and reducing hypoglycemia.8,48,110,111 The potential for AI applications in health care and personalized medicine is almost limitless.

Discussion

With ever-increasing expectations for all health care sectors to deliver timely, fiscally responsible, high-quality care, AI has the potential for numerous impacts. AI can improve diagnostic accuracy, limit errors, and enhance patient safety, for example by assisting with prescription delivery.8,9,34 It can screen and triage patients, alerting clinicians to those needing more urgent evaluation.8,23,77,97 AI also may increase a clinician’s efficiency and speed in rendering a diagnosis.12,13,55,97 AI can provide a rapid second opinion, an ability especially beneficial in underserved areas with shortages of specialists.23,25,26,29,34 Similarly, AI may decrease the inter- and intraobserver variability common in many medical specialties.12,27,45 AI applications can also monitor disease progression, identify patients at greatest risk, and provide prognostic information.21,23,56,58 Finally, as described with applications using IBM Watson, AI can allow for an integrated approach to health care that is currently lacking.

We have described many reports suggesting AI can render diagnoses as well as or better than experienced clinicians, and speculation exists that AI will replace many roles currently performed by health care practitioners.9,26 However, most studies demonstrate that AI’s diagnostic benefits are best realized when it is used to supplement a clinician’s impression.8,22,30,33,52,54,56,69,84 AI is not likely to replace humans in health care in the foreseeable future. The technology’s impact can be likened to that of the CT scan, introduced to neurology in the 1970s. Before such detailed imaging, neurologists spent extensive time performing detailed physical examinations to render diagnoses and locate lesions before surgery. There was mistrust of the new technology and concern that CT scans would eliminate the need for neurologists.112 On the contrary, neurology is alive and well, frequently augmented by the technologies once predicted to replace it.

Commercial AI health care platforms represented a $2 billion industry in 2018 and are growing rapidly each year.13,32 Many AI products are offered ready for implementation for various tasks, including diagnostics, patient management, and improved efficiency. Others will likely be provided as templates suitable for modification to meet the specific needs of the facility, practice, or specialty for its patient population.

AI Risks and Limitations

AI has several risks and limitations. Although there is progress in explainable AI, at times we still do not fully understand how the output of a machine learning algorithm was produced.44,48 The many layers of a deep learning network self-determine the criteria used to reach a conclusion, and these criteria can continually evolve. The parameters of deep learning are not preprogrammed, and there are too many individual data points to be extrapolated or deconvoluted for evaluation at our current level of knowledge.26,51 This apparent lack of constraints raises concern for patient safety and suggests that greater validation and continued scrutiny are required.8,48 Efforts are underway to make AI processes more transparent, but such explainability remains limited at present.14,26,48,77
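
One family of explainability techniques treats the model as a black box and probes it from the outside. The sketch below (a toy, not any of the cited programs) applies permutation importance: shuffle one input feature at a time and measure how far a toy model's accuracy falls, revealing which features drive its output:

```python
import random

random.seed(1)

# Toy "black box": we can query predictions but treat the internals as unknown.
def model(x):
    return 1 if 2.0 * x[0] + 0.1 * x[1] > 1.0 else 0

data = [[random.random(), random.random()] for _ in range(2000)]
labels = [model(x) for x in data]  # ground truth taken as the model's own output

def accuracy(xs):
    return sum(model(x) == y for x, y in zip(xs, labels)) / len(xs)

# Permutation importance: shuffle one feature at a time; a large accuracy
# drop means the model leans heavily on that feature.
accs = []
for j in range(2):
    col = [x[j] for x in data]
    random.shuffle(col)
    shuffled = [x[:j] + [v] + x[j + 1:] for x, v in zip(data, col)]
    accs.append(accuracy(shuffled))
    print(f"feature {j}: accuracy after shuffling = {accs[j]:.2f}")
```

Shuffling the dominant feature drops accuracy to near chance, while shuffling the minor one barely matters; this kind of probe offers partial transparency without access to the model's internals.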

Another challenge is determining the amount of training data an AI application requires to perform optimally. Furthermore, if the output describes multiple variables or diagnoses, is each equally valid?113 Many AI applications also look for a single specific finding, such as a cancer diagnosis on chest X-rays (CXRs). However, it must be considered how coexisting conditions visible on CXRs, such as cardiomegaly, emphysema, and pneumonia, will affect the diagnosis.51,52 Zech and colleagues provide the example that diagnoses of pneumothorax are frequently rendered on CXRs with chest tubes already in place.51 They suggest that CNNs may therefore develop a bias toward diagnosing pneumothorax whenever chest tubes are present. Many current studies approach an issue in isolation, a situation that is unrealistic in clinical practice.26
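
The chest tube example can be reproduced with a few lines of simulation. In the sketch below, a shortcut "model" that keys only on the tube fits biased training data well but collapses on untreated cases; all numbers are invented for illustration:

```python
import random

random.seed(0)

def make_cases(n, p_tube_given_ptx):
    """Synthetic (tube_present, pneumothorax) pairs; pneumothorax prevalence 50%."""
    cases = []
    for _ in range(n):
        ptx = random.random() < 0.5
        tube = random.random() < (p_tube_given_ptx if ptx else 0.05)
        cases.append((tube, ptx))
    return cases

# Training data mirrors the bias Zech and colleagues describe: treated
# pneumothorax cases usually already show a chest tube.
train = make_cases(10_000, p_tube_given_ptx=0.90)

# Shortcut rule: predict pneumothorax if and only if a tube is present.
acc_biased = sum(tube == ptx for tube, ptx in train) / len(train)

# On untreated cases (no tube placed yet), the shortcut falls to near chance.
untreated = make_cases(10_000, p_tube_given_ptx=0.0)
acc_untreated = sum(tube == ptx for tube, ptx in untreated) / len(untreated)

print(f"accuracy on biased data:    {acc_biased:.2f}")
print(f"accuracy on untreated data: {acc_untreated:.2f}")
```

The confound makes a clinically useless rule look accurate, which is exactly the failure mode that isolated, retrospective evaluations can miss.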

Most studies of AI have been retrospective, and the data used to train the programs are frequently preselected.13,26 Algorithms are typically validated on available databases rather than on actual patients in the clinical setting, limiting confidence in the validity of AI output in real-world situations. Currently, fewer than 12 prospective trials have been published comparing AI with traditional clinical care.13,114 Randomized prospective clinical trials are even fewer, with none yet reported from the United States.13,114 The results of several studies have been shown to diminish when repeated prospectively.114

The FDA has created a new category known as Software as a Medical Device and has issued a Digital Health Innovation Action Plan to regulate AI platforms. Still, the process of AI regulation is of necessity different from traditional approval processes and is continually evolving.8 Traditional approval processes cannot readily account for programs whose parameters continue to evolve and adapt after approval.2

Guidelines for investigating and reporting AI research with its unique attributes are being developed. Examples include the TRIPOD-ML statement and others.49,115 In September 2020, 2 publications addressed the paucity of gold-standard randomized clinical trials in clinical AI applications.116,117 The SPIRIT-AI statement expands on the original SPIRIT statement published in 2013 to guide minimal reporting standards for AI clinical trial protocols to promote transparency of design and methodology.116 Similarly, the CONSORT-AI extension, stemming from the original CONSORT statement in 1996, aims to ensure quality reporting of randomized controlled trials in AI.117

Another risk is that while an individual physician’s mistake may adversely affect 1 patient, a single mistake in an AI algorithm could potentially affect thousands of patients.48 Also, AI programs developed for the patient population of one facility may not translate to another. Referred to as overfitting, this phenomenon relates to selection bias in the training data sets.15,34,49,51,52 Studies have shown that programs trained on data that underrepresent certain group characteristics, such as age, sex, or race, may be less effective when applied to a population in which these characteristics have differing representations.8,48,49 This problem of underrepresentation has been demonstrated in programs interpreting pathology slides, radiographs, and skin lesions.15,32,51
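
The effect of underrepresentation on headline accuracy can be seen with simple mixture arithmetic. Assuming hypothetical per-group accuracies for a model trained mostly on one group, overall accuracy falls when the deployment population contains more of the underrepresented group:

```python
def overall_accuracy(acc_by_group, population_mix):
    """Population-level accuracy as a weighted mixture of per-group accuracies."""
    return sum(acc_by_group[g] * share for g, share in population_mix.items())

# Hypothetical per-group accuracies: the model was trained mostly on group A.
acc = {"A": 0.95, "B": 0.70}

dev_site = {"A": 0.90, "B": 0.10}  # mix resembling the training facility
new_site = {"A": 0.30, "B": 0.70}  # facility with different demographics

print(f"development site: {overall_accuracy(acc, dev_site):.3f}")  # 0.925
print(f"new site:         {overall_accuracy(acc, new_site):.3f}")  # 0.775
```

A model validated only at its development site can therefore report an accuracy that silently overstates its performance elsewhere.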

Admittedly, most of these challenges are not specific to AI and existed in health care previously. Physicians make mistakes, treatments are sometimes used without adequate prospective studies, and medications are given without an understanding of their mechanism of action, much as AI-facilitated processes reach conclusions that cannot be fully explained.48

Conclusions

The view that AI will dramatically impact health care in the coming years will likely prove true. However, much work remains, especially given the paucity of the prospective clinical trials historically required in medical research. Any concern that AI will replace health care practitioners seems unwarranted. Early studies suggest that even AI programs that appear to exceed human interpretation perform best when working in cooperation with, and under the oversight of, clinicians. AI’s greatest potential appears to be its ability to augment the care provided by health professionals, improving efficiency and accuracy, and it should be anticipated with enthusiasm as the field moves forward at an exponential rate.

Acknowledgments

The authors thank Makenna G. Thomas for proofreading and review of the manuscript. This material is the result of work supported with resources and the use of facilities at the James A. Haley Veterans’ Hospital. This research has been approved by the James A. Haley Veterans’ Hospital Office of Communications and Media.

References

1. Bini SA. Artificial intelligence, machine learning, deep learning, and cognitive computing: what do these terms mean and how will they impact health care? J Arthroplasty. 2018;33(8):2358-2361. doi:10.1016/j.arth.2018.02.067

2. Benjamens S, Dhunnoo P, Meskó B. The state of artificial intelligence-based FDA-approved medical devices and algorithms: an online database. NPJ Digit Med. 2020;3:118. doi:10.1038/s41746-020-00324-0

3. Viz. AI powered synchronized stroke care. Accessed September 15, 2021. https://www.viz.ai/ischemic-stroke

4. Buchanan M. The law of accelerating returns. Nat Phys. 2008;4(7):507. doi:10.1038/nphys1010

5. IBM Watson Health computes a pair of new solutions to improve healthcare data and security. Published September 10, 2015. Accessed October 21, 2020. https://www.techrepublic.com/article/ibm-watson-health-computes-a-pair-of-new-solutions-to-improve-healthcare-data-and-security

6. Borkowski AA, Kardani A, Mastorides SM, Thomas LB. Warfarin pharmacogenomics: recommendations with available patented clinical technologies. Recent Pat Biotechnol. 2014;8(2):110-115. doi:10.2174/1872208309666140904112003

7. Washington University in St. Louis. Warfarin dosing. Accessed September 15, 2021. http://www.warfarindosing.org/Source/Home.aspx

8. He J, Baxter SL, Xu J, Xu J, Zhou X, Zhang K. The practical implementation of artificial intelligence technologies in medicine. Nat Med. 2019;25(1):30-36. doi:10.1038/s41591-018-0307-0

9. Jiang F, Jiang Y, Zhi H, et al. Artificial intelligence in healthcare: past, present and future. Stroke Vasc Neurol. 2017;2(4):230-243. Published 2017 Jun 21. doi:10.1136/svn-2017-000101

10. Johnson KW, Torres Soto J, Glicksberg BS, et al. Artificial intelligence in cardiology. J Am Coll Cardiol. 2018;71(23):2668-2679. doi:10.1016/j.jacc.2018.03.521

11. Borkowski AA, Wilson CP, Borkowski SA, et al. Comparing artificial intelligence platforms for histopathologic cancer diagnosis. Fed Pract. 2019;36(10):456-463.

12. Cruz-Roa A, Gilmore H, Basavanhally A, et al. High-throughput adaptive sampling for whole-slide histopathology image analysis (HASHI) via convolutional neural networks: application to invasive breast cancer detection. PLoS One. 2018;13(5):e0196828. Published 2018 May 24. doi:10.1371/journal.pone.0196828

13. Nagendran M, Chen Y, Lovejoy CA, et al. Artificial intelligence versus clinicians: systematic review of design, reporting standards, and claims of deep learning studies. BMJ. 2020;368:m689. Published 2020 Mar 25. doi:10.1136/bmj.m689

14. Shimizu H, Nakayama KI. Artificial intelligence in oncology. Cancer Sci. 2020;111(5):1452-1460. doi:10.1111/cas.14377

15. Talebi-Liasi F, Markowitz O. Is artificial intelligence going to replace dermatologists? Cutis. 2020;105(1):28-31.

16. Valliani AA, Ranti D, Oermann EK. Deep learning and neurology: a systematic review. Neurol Ther. 2019;8(2):351-365. doi:10.1007/s40120-019-00153-8

17. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521(7553):436-444. doi:10.1038/nature14539

18. Graham S, Depp C, Lee EE, et al. Artificial intelligence for mental health and mental illnesses: an overview. Curr Psychiatry Rep. 2019;21(11):116. Published 2019 Nov 7. doi:10.1007/s11920-019-1094-0

19. Golas SB, Shibahara T, Agboola S, et al. A machine learning model to predict the risk of 30-day readmissions in patients with heart failure: a retrospective analysis of electronic medical records data. BMC Med Inform Decis Mak. 2018;18(1):44. Published 2018 Jun 22. doi:10.1186/s12911-018-0620-z

20. Mortazavi BJ, Downing NS, Bucholz EM, et al. Analysis of machine learning techniques for heart failure readmissions. Circ Cardiovasc Qual Outcomes. 2016;9(6):629-640. doi:10.1161/CIRCOUTCOMES.116.003039

21. Meyer-Bäse A, Morra L, Meyer-Bäse U, Pinker K. Current status and future perspectives of artificial intelligence in magnetic resonance breast imaging. Contrast Media Mol Imaging. 2020;2020:6805710. Published 2020 Aug 28. doi:10.1155/2020/6805710

22. Ehteshami Bejnordi B, Veta M, Johannes van Diest P, et al. Diagnostic assessment of deep learning algorithms for detection of lymph node metastases in women with breast cancer. JAMA. 2017;318(22):2199-2210. doi:10.1001/jama.2017.14585

23. Borkowski AA, Viswanadhan NA, Thomas LB, Guzman RD, Deland LA, Mastorides SM. Using artificial intelligence for COVID-19 chest X-ray diagnosis. Fed Pract. 2020;37(9):398-404. doi:10.12788/fp.0045

24. Coudray N, Ocampo PS, Sakellaropoulos T, et al. Classification and mutation prediction from non-small cell lung cancer histopathology images using deep learning. Nat Med. 2018;24(10):1559-1567. doi:10.1038/s41591-018-0177-5

25. Kermany DS, Goldbaum M, Cai W, et al. Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell. 2018;172(5):1122-1131.e9. doi:10.1016/j.cell.2018.02.010

26. Liu X, Faes L, Kale AU, et al. A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis. Lancet Digit Health. 2019;1(6):e271-e297. doi:10.1016/S2589-7500(19)30123-2

27. Nagpal K, Foote D, Liu Y, et al. Development and validation of a deep learning algorithm for improving Gleason scoring of prostate cancer [published correction appears in NPJ Digit Med. 2019 Nov 19;2:113]. NPJ Digit Med. 2019;2:48. Published 2019 Jun 7. doi:10.1038/s41746-019-0112-2

28. Nam JG, Park S, Hwang EJ, et al. Development and validation of deep learning-based automatic detection algorithm for malignant pulmonary nodules on chest radiographs. Radiology. 2019;290(1):218-228. doi:10.1148/radiol.2018180237

29. Schwalbe N, Wahl B. Artificial intelligence and the future of global health. Lancet. 2020;395(10236):1579-1586. doi:10.1016/S0140-6736(20)30226-9

30. Bai HX, Wang R, Xiong Z, et al. Artificial intelligence augmentation of radiologist performance in distinguishing COVID-19 from pneumonia of other origin at chest CT [published correction appears in Radiology. 2021 Apr;299(1):E225]. Radiology. 2020;296(3):E156-E165. doi:10.1148/radiol.2020201491

31. Li L, Qin L, Xu Z, et al. Using artificial intelligence to detect COVID-19 and community-acquired pneumonia based on pulmonary CT: evaluation of the diagnostic accuracy. Radiology. 2020;296(2):E65-E71. doi:10.1148/radiol.2020200905

32. Serag A, Ion-Margineanu A, Qureshi H, et al. Translational AI and deep learning in diagnostic pathology. Front Med (Lausanne). 2019;6:185. Published 2019 Oct 1. doi:10.3389/fmed.2019.00185

33. Wang D, Khosla A, Gargeya R, Irshad H, Beck AH. Deep learning for identifying metastatic breast cancer. ArXiv. 2016 June 18:arXiv:1606.05718v1. Published online June 18, 2016. Accessed September 15, 2021. http://arxiv.org/abs/1606.05718

34. Alabdulkareem A. Artificial intelligence and dermatologists: friends or foes? J Dermatology Dermatol Surg. 2019;23(2):57-60. doi:10.4103/jdds.jdds_19_19

35. Mollalo A, Mao L, Rashidi P, Glass GE. A GIS-based artificial neural network model for spatial distribution of tuberculosis across the continental United States. Int J Environ Res Public Health. 2019;16(1):157. Published 2019 Jan 8. doi:10.3390/ijerph16010157

36. Haddawy P, Hasan AHMI, Kasantikul R, et al. Spatiotemporal Bayesian networks for malaria prediction. Artif Intell Med. 2018;84:127-138. doi:10.1016/j.artmed.2017.12.002

37. Laureano-Rosario AE, Duncan AP, Mendez-Lazaro PA, et al. Application of artificial neural networks for dengue fever outbreak predictions in the northwest coast of Yucatan, Mexico and San Juan, Puerto Rico. Trop Med Infect Dis. 2018;3(1):5. Published 2018 Jan 5. doi:10.3390/tropicalmed3010005

38. Buczak AL, Koshute PT, Babin SM, Feighner BH, Lewis SH. A data-driven epidemiological prediction method for dengue outbreaks using local and remote sensing data. BMC Med Inform Decis Mak. 2012;12:124. Published 2012 Nov 5. doi:10.1186/1472-6947-12-124

39. Scavuzzo JM, Trucco F, Espinosa M, et al. Modeling dengue vector population using remotely sensed data and machine learning. Acta Trop. 2018;185:167-175. doi:10.1016/j.actatropica.2018.05.003

40. Xue H, Bai Y, Hu H, Liang H. Influenza activity surveillance based on multiple regression model and artificial neural network. IEEE Access. 2018;6:563-575. doi:10.1109/ACCESS.2017.2771798

41. Jiang D, Hao M, Ding F, Fu J, Li M. Mapping the transmission risk of Zika virus using machine learning models. Acta Trop. 2018;185:391-399. doi:10.1016/j.actatropica.2018.06.021

42. Bragazzi NL, Dai H, Damiani G, Behzadifar M, Martini M, Wu J. How big data and artificial intelligence can help better manage the COVID-19 pandemic. Int J Environ Res Public Health. 2020;17(9):3176. Published 2020 May 2. doi:10.3390/ijerph17093176

43. Lake IR, Colón-González FJ, Barker GC, Morbey RA, Smith GE, Elliot AJ. Machine learning to refine decision making within a syndromic surveillance service. BMC Public Health. 2019;19(1):559. Published 2019 May 14. doi:10.1186/s12889-019-6916-9

44. Khan OF, Bebb G, Alimohamed NA. Artificial intelligence in medicine: what oncologists need to know about its potential-and its limitations. Oncol Exch. 2017;16(4):8-13. Accessed September 1, 2021. http://www.oncologyex.com/pdf/vol16_no4/feature_khan-ai.pdf

45. Badano LP, Keller DM, Muraru D, Torlasco C, Parati G. Artificial intelligence and cardiovascular imaging: A win-win combination. Anatol J Cardiol. 2020;24(4):214-223. doi:10.14744/AnatolJCardiol.2020.94491

46. Murdoch TB, Detsky AS. The inevitable application of big data to health care. JAMA. 2013;309(13):1351-1352. doi:10.1001/jama.2013.393

47. Greatbatch O, Garrett A, Snape K. The impact of artificial intelligence on the current and future practice of clinical cancer genomics. Genet Res (Camb). 2019;101:e9. Published 2019 Oct 31. doi:10.1017/S0016672319000089

48. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. 2019;25(1):44-56. doi:10.1038/s41591-018-0300-7

49. Vollmer S, Mateen BA, Bohner G, et al. Machine learning and artificial intelligence research for patient benefit: 20 critical questions on transparency, replicability, ethics, and effectiveness [published correction appears in BMJ. 2020 Apr 1;369:m1312]. BMJ. 2020;368:l6927. Published 2020 Mar 20. doi:10.1136/bmj.l6927

50. Lindsey R, Daluiski A, Chopra S, et al. Deep neural network improves fracture detection by clinicians. Proc Natl Acad Sci U S A. 2018;115(45):11591-11596. doi:10.1073/pnas.1806905115

51. Zech JR, Badgeley MA, Liu M, Costa AB, Titano JJ, Oermann EK. Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: a cross-sectional study. PLoS Med. 2018;15(11):e1002683. doi:10.1371/journal.pmed.1002683

52. Lakhani P, Sundaram B. Deep learning at chest radiography: automated classification of pulmonary tuberculosis by using convolutional neural networks. Radiology. 2017;284(2):574-582. doi:10.1148/radiol.2017162326

53. Rajpurkar P, Joshi A, Pareek A, et al. CheXpedition: investigating generalization challenges for translation of chest x-ray algorithms to the clinical setting. ArXiv. 2020 Feb 26:arXiv:2002.11379v2. Revised March 11, 2020. Accessed September 15, 2021. http://arxiv.org/abs/2002.11379

54. Salim M, Wåhlin E, Dembrower K, et al. External evaluation of 3 commercial artificial intelligence algorithms for independent assessment of screening mammograms. JAMA Oncol. 2020;6(10):1581-1588. doi:10.1001/jamaoncol.2020.3321

55. Arbabshirani MR, Fornwalt BK, Mongelluzzo GJ, et al. Advanced machine learning in action: identification of intracranial hemorrhage on computed tomography scans of the head with clinical workflow integration. NPJ Digit Med. 2018;1:9. doi:10.1038/s41746-017-0015-z

56. Sheth D, Giger ML. Artificial intelligence in the interpretation of breast cancer on MRI. J Magn Reson Imaging. 2020;51(5):1310-1324. doi:10.1002/jmri.26878

57. McKinney SM, Sieniek M, Godbole V, et al. International evaluation of an AI system for breast cancer screening. Nature. 2020;577(7788):89-94. doi:10.1038/s41586-019-1799-6

58. Booth AL, Abels E, McCaffrey P. Development of a prognostic model for mortality in COVID-19 infection using machine learning. Mod Pathol. 2021;34(3):522-531. doi:10.1038/s41379-020-00700-x

59. Xu B, Kocyigit D, Grimm R, Griffin BP, Cheng F. Applications of artificial intelligence in multimodality cardiovascular imaging: a state-of-the-art review. Prog Cardiovasc Dis. 2020;63(3):367-376. doi:10.1016/j.pcad.2020.03.003

60. Dey D, Slomka PJ, Leeson P, et al. Artificial intelligence in cardiovascular imaging: JACC state-of-the-art review. J Am Coll Cardiol. 2019;73(11):1317-1335. doi:10.1016/j.jacc.2018.12.054

61. Carewell Health. AI powered ECG diagnosis solutions. Accessed November 2, 2020. https://www.carewellhealth.com/products_aiecg.html

62. Strodthoff N, Strodthoff C. Detecting and interpreting myocardial infarction using fully convolutional neural networks. Physiol Meas. 2019;40(1):015001. doi:10.1088/1361-6579/aaf34d

63. Hannun AY, Rajpurkar P, Haghpanahi M, et al. Cardiologist-level arrhythmia detection and classification in ambulatory electrocardiograms using a deep neural network. Nat Med. 2019;25(1):65-69. doi:10.1038/s41591-018-0268-3

64. Kwon JM, Jeon KH, Kim HM, et al. Comparing the performance of artificial intelligence and conventional diagnosis criteria for detecting left ventricular hypertrophy using electrocardiography. Europace. 2020;22(3):412-419. doi:10.1093/europace/euz324

65. Eko. FDA clears Eko’s AFib and heart murmur detection algorithms, making it the first AI-powered stethoscope to screen for serious heart conditions [press release]. Published January 28, 2020. Accessed September 15, 2021. https://www.businesswire.com/news/home/20200128005232/en/FDA-Clears-Eko’s-AFib-and-Heart-Murmur-Detection-Algorithms-Making-It-the-First-AI-Powered-Stethoscope-to-Screen-for-Serious-Heart-Conditions

66. Cruz-Roa A, Gilmore H, Basavanhally A, et al. Accurate and reproducible invasive breast cancer detection in whole-slide images: a deep learning approach for quantifying tumor extent. Sci Rep. 2017;7:46450. doi:10.1038/srep46450

67. Acs B, Rantalainen M, Hartman J. Artificial intelligence as the next step towards precision pathology. J Intern Med. 2020;288(1):62-81. doi:10.1111/joim.13030

68. Mobadersany P, Yousefi S, Amgad M, et al. Predicting cancer outcomes from histology and genomics using convolutional networks. Proc Natl Acad Sci U S A. 2018;115(13):E2970-E2979. doi:10.1073/pnas.1717139115

69. Wang X, Janowczyk A, Zhou Y, et al. Prediction of recurrence in early stage non-small cell lung cancer using computer extracted nuclear features from digital H&E images. Sci Rep. 2017;7:13543. doi:10.1038/s41598-017-13773-7

70. Kulkarni PM, Robinson EJ, Pradhan JS, et al. Deep learning based on standard H&E images of primary melanoma tumors identifies patients at risk for visceral recurrence and death. Clin Cancer Res. 2020;26(5):1126-1134. doi:10.1158/1078-0432.CCR-19-1495

71. Du XL, Li WB, Hu BJ. Application of artificial intelligence in ophthalmology. Int J Ophthalmol. 2018;11(9):1555-1561. doi:10.18240/ijo.2018.09.21

72. Gunasekeran DV, Wong TY. Artificial intelligence in ophthalmology in 2020: a technology on the cusp for translation and implementation. Asia Pac J Ophthalmol (Phila). 2020;9(2):61-66. doi:10.1097/01.APO.0000656984.56467.2c

73. Ting DSW, Pasquale LR, Peng L, et al. Artificial intelligence and deep learning in ophthalmology. Br J Ophthalmol. 2019;103(2):167-175. doi:10.1136/bjophthalmol-2018-313173

74. Gulshan V, Peng L, Coram M, et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA. 2016;316(22):2402-2410. doi:10.1001/jama.2016.17216

75. US Food and Drug Administration. FDA permits marketing of artificial intelligence-based device to detect certain diabetes-related eye problems [press release]. Published April 11, 2018. Accessed September 15, 2021. https://www.fda.gov/news-events/press-announcements/fda-permits-marketing-artificial-intelligence-based-device-detect-certain-diabetes-related-eye

76. Long E, Chen J, Wu X, et al. Artificial intelligence manages congenital cataract with individualized prediction and telehealth computing. NPJ Digit Med. 2020;3:112. doi:10.1038/s41746-020-00319-x

77. De Fauw J, Ledsam JR, Romera-Paredes B, et al. Clinically applicable deep learning for diagnosis and referral in retinal disease. Nat Med. 2018;24(9):1342-1350. doi:10.1038/s41591-018-0107-6

78. Esteva A, Kuprel B, Novoa RA, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017;542(7639):115-118. doi:10.1038/nature21056

79. Brinker TJ, Hekler A, Enk AH, et al. Deep neural networks are superior to dermatologists in melanoma image classification. Eur J Cancer. 2019;119:11-17. doi:10.1016/j.ejca.2019.05.023

80. Brinker TJ, Hekler A, Enk AH, et al. A convolutional neural network trained with dermoscopic images performed on par with 145 dermatologists in a clinical melanoma image classification task. Eur J Cancer. 2019;111:148-154. doi:10.1016/j.ejca.2019.02.005

81. Haenssle HA, Fink C, Schneiderbauer R, et al. Man against machine: diagnostic performance of a deep learning convolutional neural network for dermoscopic melanoma recognition in comparison to 58 dermatologists. Ann Oncol. 2018;29(8):1836-1842. doi:10.1093/annonc/mdy166

82. Li CX, Shen CB, Xue K, et al. Artificial intelligence in dermatology: past, present, and future. Chin Med J (Engl). 2019;132(17):2017-2020. doi:10.1097/CM9.0000000000000372

83. Tschandl P, Codella N, Akay BN, et al. Comparison of the accuracy of human readers versus machine-learning algorithms for pigmented skin lesion classification: an open, web-based, international, diagnostic study. Lancet Oncol. 2019;20(7):938-947. doi:10.1016/S1470-2045(19)30333-X

84. Han SS, Park I, Eun Chang SE, et al. Augmented intelligence dermatology: deep neural networks empower medical professionals in diagnosing skin cancer and predicting treatment options for 134 skin disorders. J Invest Dermatol. 2020;140(9):1753-1761. doi:10.1016/j.jid.2020.01.019

85. Freeman K, Dinnes J, Chuchu N, et al. Algorithm based smartphone apps to assess risk of skin cancer in adults: systematic review of diagnostic accuracy studies [published correction appears in BMJ. 2020 Feb 25;368:m645]. BMJ. 2020;368:m127. Published 2020 Feb 10. doi:10.1136/bmj.m127

86. Chen YC, Ke WC, Chiu HW. Risk classification of cancer survival using ANN with gene expression data from multiple laboratories. Comput Biol Med. 2014;48:1-7. doi:10.1016/j.compbiomed.2014.02.006

87. Kim W, Kim KS, Lee JE, et al. Development of novel breast cancer recurrence prediction model using support vector machine. J Breast Cancer. 2012;15(2):230-238. doi:10.4048/jbc.2012.15.2.230

88. Merath K, Hyer JM, Mehta R, et al. Use of machine learning for prediction of patient risk of postoperative complications after liver, pancreatic, and colorectal surgery. J Gastrointest Surg. 2020;24(8):1843-1851. doi:10.1007/s11605-019-04338-2

89. Santos-García G, Varela G, Novoa N, Jiménez MF. Prediction of postoperative morbidity after lung resection using an artificial neural network ensemble. Artif Intell Med. 2004;30(1):61-69. doi:10.1016/S0933-3657(03)00059-9

90. Ibragimov B, Xing L. Segmentation of organs-at-risks in head and neck CT images using convolutional neural networks. Med Phys. 2017;44(2):547-557. doi:10.1002/mp.12045

91. Lou B, Doken S, Zhuang T, et al. An image-based deep learning framework for individualizing radiotherapy dose. Lancet Digit Health. 2019;1(3):e136-e147. doi:10.1016/S2589-7500(19)30058-5

92. Xu J, Yang P, Xue S, et al. Translating cancer genomics into precision medicine with artificial intelligence: applications, challenges and future perspectives. Hum Genet. 2019;138(2):109-124. doi:10.1007/s00439-019-01970-5

93. Patel NM, Michelini VV, Snell JM, et al. Enhancing next‐generation sequencing‐guided cancer care through cognitive computing. Oncologist. 2018;23(2):179-185. doi:10.1634/theoncologist.2017-0170

94. Le Berre C, Sandborn WJ, Aridhi S, et al. Application of artificial intelligence to gastroenterology and hepatology. Gastroenterology. 2020;158(1):76-94.e2. doi:10.1053/j.gastro.2019.08.058

95. Yang YJ, Bang CS. Application of artificial intelligence in gastroenterology. World J Gastroenterol. 2019;25(14):1666-1683. doi:10.3748/wjg.v25.i14.1666

96. Wang P, Berzin TM, Glissen Brown JR, et al. Real-time automatic detection system increases colonoscopic polyp and adenoma detection rates: a prospective randomised controlled study. Gut. 2019;68(10):1813-1819. doi:10.1136/gutjnl-2018-317500

97. Gupta R, Krishnam SP, Schaefer PW, Lev MH, Gonzalez RG. An East Coast perspective on artificial intelligence and machine learning: part 2: ischemic stroke imaging and triage. Neuroimaging Clin N Am. 2020;30(4):467-478. doi:10.1016/j.nic.2020.08.002

98. Belić M, Bobić V, Badža M, Šolaja N, Đurić-Jovičić M, Kostić VS. Artificial intelligence for assisting diagnostics and assessment of Parkinson’s disease—a review. Clin Neurol Neurosurg. 2019;184:105442. doi:10.1016/j.clineuro.2019.105442

99. An S, Kang C, Lee HW. Artificial intelligence and computational approaches for epilepsy. J Epilepsy Res. 2020;10(1):8-17. doi:10.14581/jer.20003

100. Pavel AM, Rennie JM, de Vries LS, et al. A machine-learning algorithm for neonatal seizure recognition: a multicentre, randomised, controlled trial. Lancet Child Adolesc Health. 2020;4(10):740-749. doi:10.1016/S2352-4642(20)30239-X

101. Afzal HMR, Luo S, Ramadan S, Lechner-Scott J. The emerging role of artificial intelligence in multiple sclerosis imaging [published online ahead of print, 2020 Oct 28]. Mult Scler. 2020;1352458520966298. doi:10.1177/1352458520966298

102. Bouton CE. Restoring movement in paralysis with a bioelectronic neural bypass approach: current state and future directions. Cold Spring Harb Perspect Med. 2019;9(11):a034306. doi:10.1101/cshperspect.a034306

103. Durstewitz D, Koppe G, Meyer-Lindenberg A. Deep neural networks in psychiatry. Mol Psychiatry. 2019;24(11):1583-1598. doi:10.1038/s41380-019-0365-9

104. Fonseka TM, Bhat V, Kennedy SH. The utility of artificial intelligence in suicide risk prediction and the management of suicidal behaviors. Aust N Z J Psychiatry. 2019;53(10):954-964. doi:10.1177/0004867419864428

105. Kessler RC, Hwang I, Hoffmire CA, et al. Developing a practical suicide risk prediction model for targeting high-risk patients in the Veterans Health Administration. Int J Methods Psychiatr Res. 2017;26(3):e1575. doi:10.1002/mpr.1575

106. Kessler RC, Bauer MS, Bishop TM, et al. Using administrative data to predict suicide after psychiatric hospitalization in the Veterans Health Administration System. Front Psychiatry. 2020;11:390. doi:10.3389/fpsyt.2020.00390

107. Kessler RC, van Loo HM, Wardenaar KJ, et al. Testing a machine-learning algorithm to predict the persistence and severity of major depressive disorder from baseline self-reports. Mol Psychiatry. 2016;21(10):1366-1371. doi:10.1038/mp.2015.198

108. Horng S, Sontag DA, Halpern Y, Jernite Y, Shapiro NI, Nathanson LA. Creating an automated trigger for sepsis clinical decision support at emergency department triage using machine learning. PLoS One. 2017;12(4):e0174708. doi:10.1371/journal.pone.0174708

109. Soffer S, Klang E, Barash Y, Grossman E, Zimlichman E. Predicting in-hospital mortality at admission to the medical ward: a big-data machine learning model. Am J Med. 2021;134(2):227-234.e4. doi:10.1016/j.amjmed.2020.07.014

110. Labovitz DL, Shafner L, Reyes Gil M, Virmani D, Hanina A. Using artificial intelligence to reduce the risk of nonadherence in patients on anticoagulation therapy. Stroke. 2017;48(5):1416-1419. doi:10.1161/STROKEAHA.116.016281

111. Forlenza GP. Use of artificial intelligence to improve diabetes outcomes in patients using multiple daily injections therapy. Diabetes Technol Ther. 2019;21(S2):S24-S28. doi:10.1089/dia.2019.0077

112. Poser CM. CT scan and the practice of neurology. Arch Neurol. 1977;34(2):132. doi:10.1001/archneur.1977.00500140086023

113. Angus DC. Randomized clinical trials of artificial intelligence. JAMA. 2020;323(11):1043-1045. doi:10.1001/jama.2020.1039

114. Topol EJ. Welcoming new guidelines for AI clinical research. Nat Med. 2020;26(9):1318-1320. doi:10.1038/s41591-020-1042-x

115. Collins GS, Moons KGM. Reporting of artificial intelligence prediction models. Lancet. 2019;393(10181):1577-1579. doi:10.1016/S0140-6736(19)30037-6

116. Cruz Rivera S, Liu X, Chan AW, et al. Guidelines for clinical trial protocols for interventions involving artificial intelligence: the SPIRIT-AI extension. Nat Med. 2020;26(9):1351-1363. doi:10.1038/s41591-020-1037-7

117. Liu X, Cruz Rivera S, Moher D, Calvert MJ, Denniston AK; SPIRIT-AI and CONSORT-AI Working Group. Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the CONSORT-AI extension. Nat Med. 2020;26(9):1364-1374. doi:10.1038/s41591-020-1034-x

118. McCulloch WS, Pitts W. A logical calculus of the ideas immanent in nervous activity. Bull Math Biophys. 1943;5(4):115-133. doi:10.1007/BF02478259

119. Samuel AL. Some studies in machine learning using the game of Checkers. IBM J Res Dev. 1959;3(3):535-554. Accessed September 15, 2021. https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.368.2254

120. Sonoda M, Takano M, Miyahara J, Kato H. Computed radiography utilizing scanning laser stimulated luminescence. Radiology. 1983;148(3):833-838. doi:10.1148/radiology.148.3.6878707

121. Dechter R. Learning while searching in constraint-satisfaction-problems. AAAI’86: proceedings of the fifth AAAI national conference on artificial intelligence. Published 1986. Accessed September 15, 2021. https://www.aaai.org/Papers/AAAI/1986/AAAI86-029.pdf

122. Le Cun Y, Jackel LD, Boser B, et al. Handwritten digit recognition: applications of neural network chips and automatic learning. IEEE Commun Mag. 1989;27(11):41-46. doi:10.1109/35.41400

123. US Food and Drug Administration. FDA allows marketing of first whole slide imaging system for digital pathology [press release]. Published April 12, 2017. Accessed September 15, 2021. https://www.fda.gov/news-events/press-announcements/fda-allows-marketing-first-whole-slide-imaging-system-digital-pathology

Author and Disclosure Information

L. Brannon Thomas is Chief of the Microbiology Laboratory, Stephen Mastorides is Chief of Pathology, Narayan Viswanadhan is Assistant Chief of Radiology, Colleen Jakey is Chief of Staff, and Andrew Borkowski is Chief of the Molecular Diagnostics Laboratory, all at James A. Haley Veterans’ Hospital in Tampa, Florida. Andrew Borkowski and Stephen Mastorides are Professors, Colleen Jakey is an Associate Professor, and L. Brannon Thomas is an Associate Professor, all at the University of South Florida, Morsani College of Medicine in Tampa.
Correspondence: L. Brannon Thomas ([email protected])

Author disclosures
The authors report no actual or potential conflicts of interest with regard to this article.

Disclaimer
The opinions expressed herein are those of the authors and do not necessarily reflect those of Federal Practitioner, Frontline Medical Communications Inc., the US Government, or any of its agencies.

Issue
Federal Practitioner - 38(11)a
Page Number
527-538

Artificial intelligence (AI) was first described in 1956 and refers to machines having the ability to learn as they receive and process information, resulting in the ability to “think” like humans.1 AI’s impact in medicine is increasing; currently, at least 29 AI medical devices and algorithms are approved by the US Food and Drug Administration (FDA) in a variety of areas, including interpreting radiographs, managing glucose levels in patients with diabetes mellitus, analyzing electrocardiograms (ECGs), and diagnosing sleep disorders, among others.2 Significantly, in 2020, the Centers for Medicare and Medicaid Services (CMS) announced the first reimbursement to hospitals for an AI platform, a model for early detection of strokes.3 AI is rapidly becoming an integral part of health care, and its role will only increase in the future (Table).

Table. Key Historical Events in Artificial Intelligence Development With a Focus on Health Care Applications

As knowledge in medicine expands exponentially, AI has great potential to assist with handling complex patient care data. The concept of exponential growth is not an intuitive one. As Bini described, with exponential growth the volume of knowledge amassed over the past 10 years will now accrue in perhaps only 1 year.1 Likewise, advances equivalent to those of the past year may take just a few months. This phenomenon is partly due to the law of accelerating returns, which states that advances feed on themselves, continually increasing the rate of further advances.4 The volume of medical data doubles every 2 to 5 years.5 Fortunately, the field of AI is growing exponentially as well and can help health care practitioners (HCPs) keep pace, allowing the continued delivery of effective health care.

In this report, we review common terminology, principles, and general applications of AI, followed by current and potential applications of AI for selected medical specialties. Finally, we discuss AI’s future in health care, along with potential risks and pitfalls.

AI Overview

AI refers to machine programs that can “learn” or think based on past experiences. This functionality contrasts with simple rules-based programming available to health care for years. An example of rules-based programming is the warfarindosing.org website developed by Barnes-Jewish Hospital at Washington University Medical Center, which guides initial warfarin dosing.6,7 The prescriber inputs detailed patient information, including age, sex, height, weight, tobacco history, medications, laboratory results, and genotype if available. The application then calculates recommended warfarin dosing regimens to avoid over- or underanticoagulation. While the dosing algorithm may be complex, it depends entirely on preprogrammed rules. The program does not learn to reach its conclusions and recommendations from patient data.
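The contrast with ML can be made concrete with a toy sketch. The function below illustrates what rules-based programming looks like: every threshold and adjustment is hand-coded by the programmer and nothing is learned from data. The function name, thresholds, and adjustments are entirely hypothetical illustrations, not the actual warfarindosing.org algorithm.

```python
def initial_warfarin_dose_mg(age: int, weight_kg: float, takes_amiodarone: bool) -> float:
    """Hypothetical rules-based starting dose: every rule is preprogrammed."""
    dose = 5.0  # nominal starting dose (illustrative only)
    if age >= 65:
        dose -= 1.0  # fixed rule: reduce for older patients
    if weight_kg < 60:
        dose -= 0.5  # fixed rule: reduce for low body weight
    if takes_amiodarone:
        dose *= 0.7  # fixed rule: interacting drug lowers the requirement
    return round(dose, 1)

print(initial_warfarin_dose_mg(age=70, weight_kg=55, takes_amiodarone=False))  # → 3.5
```

However complex such rules become, the program applies only what was preprogrammed; an ML system would instead derive its dosing relationships from outcome data.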

In contrast, one of the most common subsets of AI is machine learning (ML). ML describes a program that “learns from experience and improves its performance as it learns.”1 With ML, the computer is initially provided with a training data set—data with known outcomes or labels. Because the initial data are input from known samples, this type of AI is known as supervised learning.8-10 As an example, we recently reported using ML to diagnose various types of cancer from pathology slides.11 In one experiment, we captured images of colon adenocarcinoma and normal colon (these 2 groups represent the training data set). Unlike traditional programming, we did not define characteristics that would differentiate colon cancer from normal; rather, the machine learned these characteristics independently by assessing the labeled images provided. A second data set (the validation data set) was used to evaluate the program and fine-tune the model’s parameters. Finally, the program was presented with new images of cancer and normal cases for final assessment of accuracy (test data set). Our program learned to recognize differences from the images provided and was able to differentiate normal and cancer images with > 95% accuracy.
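The training/validation/test workflow described above can be sketched in miniature. This illustration substitutes synthetic two-feature data for pathology images and a simple nearest-centroid classifier for a full ML model; the numbers and class structure are invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for image features: class 0 ("normal") clusters near
# (0, 0) and class 1 ("cancer") near (3, 3). Labels are known, which is
# what makes this supervised learning.
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

# Shuffle, then split into training, validation, and test sets.
idx = rng.permutation(200)
X, y = X[idx], y[idx]
X_train, y_train = X[:120], y[:120]
X_val, y_val = X[120:160], y[120:160]
X_test, y_test = X[160:], y[160:]

# "Training": learn one centroid per class from the labeled examples.
centroids = np.array([X_train[y_train == c].mean(axis=0) for c in (0, 1)])

def predict(points):
    # Assign each point to the class of the nearest learned centroid.
    dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)

val_acc = (predict(X_val) == y_val).mean()    # used to tune the model
test_acc = (predict(X_test) == y_test).mean() # final held-out assessment
print(f"validation accuracy {val_acc:.2f}, test accuracy {test_acc:.2f}")
```

The key point is that the decision rule (the centroids) is learned from labeled examples rather than written by the programmer, and accuracy is judged only on data the model never saw.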

Advances in computer processing have allowed for the development of artificial neural networks (ANNs). While there are several types of ANNs, the most common types used for image classification and segmentation are known as convolutional neural networks (CNNs).9,12-14 The programs are designed to work similarly to the human brain, specifically the visual cortex.15,16 As data are acquired, they are processed by various layers in the program. Much like neurons in the brain, one layer decides whether to advance information to the next.13,14 CNNs can be many layers deep, leading to the term deep learning: “computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction.”1,13,17
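The convolution and pooling operations that a CNN’s hidden layers apply can be illustrated directly. The sketch below is a toy, untrained example with one hand-chosen edge-detecting filter; real CNNs learn many such filters from data.

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid-mode 2D convolution (cross-correlation, as in most CNN libraries)."""
    kh, kw = kernel.shape
    h, w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = (image[i:i + kh, j:j + kw] * kernel).sum()
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling: keep the strongest response per window."""
    h, w = fmap.shape[0] // size, fmap.shape[1] // size
    return fmap[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

# Tiny 6x6 "image": dark left half, bright right half (a vertical edge).
img = np.zeros((6, 6))
img[:, 3:] = 1.0

edge_kernel = np.array([[-1.0, 1.0]])  # responds to left-to-right brightening
features = max_pool(convolve2d(img, edge_kernel))
print(features)  # strong responses mark where the edge lies
```

The pooled feature map is smaller than the input but retains the most valuable information, here the location of the vertical edge.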

ANNs can process ever-larger volumes of data. This advance has led to the development of unstructured or unsupervised learning. With this type of learning, inputting defined features (ie, predetermined answers) of the training data set described above is no longer required.1,8,10,14 The advantage of unsupervised learning is that the program can be presented with raw data and extract meaningful interpretations without human input, often with less bias than may exist with supervised learning.1,18 If shown enough data, the program can extract relevant features to make conclusions independently without predefined definitions, potentially uncovering markers not previously known. For example, several studies have used unsupervised learning to search patient data to assess readmission risks of patients with congestive heart failure.10,19,20 AI compiled features independently that had not been previously defined, predicting patients at greater risk for readmission with accuracy superior to traditional methods.
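Unsupervised learning can be sketched the same way. The fragment below runs a basic k-means loop (one of the simplest unsupervised methods, standing in for the more elaborate approaches cited above) on unlabeled synthetic data; the two groups it discovers were never identified to the program.

```python
import numpy as np

rng = np.random.default_rng(1)

# Unlabeled synthetic data: two latent groups (eg, hypothetical low- vs
# high-risk patient profiles) that the program is never told about.
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(4, 0.5, (50, 2))])

# Basic k-means: alternate assigning points to the nearest center and
# recomputing each center until the grouping stabilizes.
centers = X[rng.choice(len(X), size=2, replace=False)]
for _ in range(20):
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    new_centers = []
    for k in range(2):
        members = X[labels == k]
        # Keep the old center if a cluster temporarily empties out.
        new_centers.append(members.mean(axis=0) if len(members) else centers[k])
    centers = np.array(new_centers)

# The discovered cluster centers should sit near (0, 0) and (4, 4).
print(np.sort(centers[:, 0]).round(1))
```

No outcome or label was ever supplied; the structure emerges from the data alone, which is what allows unsupervised methods to surface groupings not previously defined.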

Figure. Artificial Intelligence Health Care Applications


A more detailed description of the various terminologies and techniques of AI is beyond the scope of this review.9,10,17,21 However, in this basic overview, we describe 4 general areas in which AI impacts health care (Figure).

Health Care Applications

Image analysis has seen the most AI health care applications.8,15 AI has shown potential in interpreting many types of medical images, including pathology slides, radiographs of various types, retina and other eye scans, and photographs of skin lesions. Many studies have demonstrated that AI can interpret these images as accurately as or even better than experienced clinicians.9,13,22-29 Studies have suggested AI interpretation of radiographs may better distinguish COVID-19 pneumonia from pneumonia of other causes, and AI interpretation of pathology slides may detect specific genetic mutations not previously identified without additional molecular tests.11,14,23,24,30-32

The second area in which AI can impact health care is improving workflow and efficiency. AI has improved surgery scheduling, saving significant revenue and decreasing patient wait times for appointments.1 AI can screen and triage radiographs, allowing attention to be directed to critical patients. This use would be valuable in many busy clinical settings, such as during the recent COVID-19 pandemic.8,23 Similarly, AI can screen retina images to prioritize urgent conditions.25 AI has improved pathologists’ efficiency when used to detect breast metastases.33 Finally, AI may reduce medical errors, thereby ensuring patient safety.8,9,34

A third health care benefit of AI is in public health and epidemiology. AI can assist with clinical decision-making and diagnoses in low-income countries and areas with limited health care resources and personnel.25,29 AI can improve identification of infectious outbreaks, such as tuberculosis, malaria, dengue fever, and influenza.29,35-40 AI has been used to predict transmission patterns of the Zika virus and the current COVID-19 pandemic.41,42 Applications can stratify the risk of outbreaks based on multiple factors, including age, income, race, atypical geographic clusters, and seasonal factors like rainfall and temperature.35,36,38,43 AI has been used to assess morbidity and mortality, such as predicting disease severity with malaria and identifying treatment failures in tuberculosis.29

Finally, AI can dramatically impact health care through its ability to process large data sets or disconnected volumes of patient information—so-called big data.44-46 An example is the widespread use of electronic health records (EHRs) such as the Computerized Patient Record System used in Veterans Affairs medical centers (VAMCs). Much patient information exists as written text: HCP notes, laboratory and radiology reports, medication records, and more. Natural language processing (NLP) allows platforms to sort through extensive volumes of data on complex patients at rates much faster than human capability, which has great potential to assist with diagnosis and treatment decisions.9
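At its most basic, NLP starts by reducing free text to countable tokens. The fragment below, which uses only the standard library and invented note text, tallies terms from a small hand-built vocabulary across hypothetical clinical notes; real platforms use far richer language models, but the first step of structuring unstructured text is the same.

```python
import re
from collections import Counter

# Hypothetical free-text clinical notes, like those stored in an EHR.
notes = [
    "Patient reports chest pain and shortness of breath. ECG ordered.",
    "Follow-up for diabetes mellitus; glucose well controlled, no chest pain.",
    "Chest pain resolved. Continue aspirin and monitor glucose.",
]

# Tokenize each note into lowercase words.
tokens = [re.findall(r"[a-z]+", note.lower()) for note in notes]

# Tally mentions of terms from a small hand-built vocabulary.
vocabulary = {"chest", "pain", "glucose", "ecg", "diabetes"}
counts = Counter(word for words in tokens for word in words if word in vocabulary)

print(counts.most_common(2))  # → [('chest', 3), ('pain', 3)]
```

Scaled up across millions of notes and a full clinical vocabulary, this kind of tokenization and counting is what lets a platform surface relevant passages far faster than a human reader could.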

Medical literature is being produced at rates that exceed our ability to digest it. More than 200,000 cancer-related articles were published in 2019 alone.14 The NLP capabilities of AI have the potential to rapidly sort through this extensive medical literature and relate specific verbiage in patient records to guide therapy.46 IBM Watson, a supercomputer based on ML and NLP, demonstrates this concept with many potential applications, only some of which relate to health care.1,9 Watson has an oncology component to assimilate multiple aspects of patient care, including clinical notes, pathology results, radiograph findings, staging, and a tumor’s genetic profile. It coordinates these inputs from the EHR and mines medical literature and research databases to recommend treatment options.1,46 AI can assess and compile far greater patient data and therapeutic options than would be feasible for individual clinicians, thus providing customized patient care.47 Watson has partnered with numerous medical centers, including MD Anderson Cancer Center and Memorial Sloan Kettering Cancer Center, with variable success.44,47-49 While the full potential of Watson appears not yet realized, these AI-driven approaches will likely play an important role in leveraging the hidden value in the expanding volume of health care information.

Medical Specialty Applications

Radiology

Currently > 70% of FDA-approved AI medical devices are in the field of radiology.2 Most radiology departments have used AI-friendly digital imaging for years, such as the picture archiving and communication systems used by numerous health care systems, including VAMCs.2,15 Gray-scale images common in radiology lend themselves to standardization, although AI is not limited to black-and-white image interpretation.15

An abundance of literature describes plain radiograph interpretation using AI. One FDA-approved platform improved X-ray diagnosis of wrist fractures when used by emergency medicine clinicians.2,50 AI has been applied to chest X-ray (CXR) interpretation of many conditions, including pneumonia, tuberculosis, malignant lung lesions, and COVID-19.23,25,28,44,51-53 For example, Nam and colleagues suggested AI is better at diagnosing malignant pulmonary nodules from CXRs than are trained radiologists.28

In addition to plain radiographs, AI has been applied to many other imaging technologies, including ultrasounds, positron emission tomography, mammograms, computed tomography (CT), and magnetic resonance imaging (MRI).15,26,44,48,54-56 A large study demonstrated that ML platforms significantly reduced the time to diagnose intracranial hemorrhages on CT and identified subtle hemorrhages missed by radiologists.55 Other studies have claimed that AI programs may be better than radiologists in detecting cancer in screening mammograms, and 3 FDA-approved devices focus on mammogram interpretation.2,15,54,57 There is also great interest in MRI applications to detect and predict prognosis for breast cancer based on imaging findings.21,56

Aside from providing accurate diagnoses, other studies focus on AI radiograph interpretation to assist with patient screening, triage, improving time to final diagnosis, providing a rapid “second opinion,” and even monitoring disease progression and offering insights into prognosis.8,21,23,52,55,56,58 These features help in busy urban centers but may play an even greater role in areas with limited access to health care or trained specialists such as radiologists.52

Cardiology

Cardiology has the second highest number of FDA-approved AI applications.2 Many cardiology AI platforms involve image analysis, as described in several recent reviews.45,59,60 AI has been applied to echocardiography to measure ejection fractions, detect valvular disease, and assess heart failure from hypertrophic and restrictive cardiomyopathy and amyloidosis.45,48,59 Applications for cardiac CT scans and CT angiography have successfully quantified both calcified and noncalcified coronary artery plaques, assessed coronary lumens and myocardial perfusion, and performed coronary artery calcium scoring.45,59,60 Likewise, AI applications for cardiac MRI have been used to quantitate ejection fraction, assess large vessel flow, and measure cardiac scar burden.45,59

For years, ECG devices have provided interpretation with limited accuracy using preprogrammed parameters.48 However, the application of AI allows ECG interpretation on par with trained cardiologists. Numerous such AI applications exist, and 2 FDA-approved devices perform ECG interpretation.2,61-64 One of these devices incorporates an AI-powered stethoscope to detect atrial fibrillation and heart murmurs.65

Pathology

The advancement of whole slide imaging, wherein entire slides can be scanned and digitized at high speed and resolution, creates great potential for AI applications in pathology.12,24,32,33,66 A landmark study demonstrating the potential of AI for assessing whole slide imaging examined sentinel lymph node metastases in patients with breast cancer.22 Multiple algorithms in the study demonstrated that AI was equivalent to or better than pathologists in detecting metastases, especially when the pathologists were time-constrained, consistent with a normal working environment. Significantly, the most accurate and efficient diagnoses were achieved when the pathologist and AI interpretations were used together.22,33

AI has shown promise in diagnosing many other entities, including cancers of the prostate (including Gleason scoring), lung, colon, breast, and skin.11,12,24,27,32,67 In addition, AI has shown great potential in scoring biomarkers important for prognosis and treatment, such as immunohistochemistry (IHC) labeling of Ki-67 and PD-L1.32 Pathologists can have difficulty classifying certain tumors or determining the site of origin for metastases, often having to rely on IHC with limited success. The unique features of image analysis with AI have the potential to assist in classifying difficult tumors and identifying sites of origin for metastatic disease based on morphology alone.11

Oncology depends heavily on molecular pathology testing to dictate treatment options and determine prognosis. Preliminary studies suggest that AI interpretation alone has the potential to delineate whether certain molecular mutations are present in tumors from various sites.11,14,24,32 One study combined histology and genomic results for AI interpretation that improved prognostic predictions.68 In addition, AI analysis may have potential in predicting tumor recurrence or prognosis based on cellular features, as demonstrated for lung cancer and melanoma.67,69,70

Ophthalmology

AI applications for ophthalmology have focused on diabetic retinopathy, age-related macular degeneration, glaucoma, retinopathy of prematurity, age-related and congenital cataracts, and retinal vein occlusion.71-73 Diabetic retinopathy is a leading cause of blindness and has been studied by numerous platforms with good success, most having used color fundus photography.71,72 One study showed AI could diagnose diabetic retinopathy and diabetic macular edema with specificities similar to those of ophthalmologists.74 In 2018, the FDA approved the AI platform IDx-DR. This diagnostic system classifies retinal images and recommends referral for patients determined to have “more than mild diabetic retinopathy” and reexamination within a year for other patients.8,75 Significantly, the platform recommendations do not require confirmation by a clinician.8

AI has been applied to other modalities in ophthalmology such as optical coherence tomography (OCT) to diagnose retinal disease and to predict appropriate management of congenital cataracts.25,73,76 For example, an AI application using OCT has been demonstrated to match or exceed the accuracy of retinal experts in diagnosing and triaging patients with a variety of retinal pathologies, including patients needing urgent referrals.77

Dermatology

Multiple studies demonstrate AI performs at least equal to experienced dermatologists in differentiating selected skin lesions.78-81 For example, Esteva and colleagues demonstrated AI could differentiate keratinocyte carcinomas from benign seborrheic keratoses and malignant melanomas from benign nevi with accuracy equal to 21 board-certified dermatologists.78

AI is applicable to various imaging procedures common to dermatology, such as dermoscopy, very high-frequency ultrasound, and reflectance confocal microscopy.82 Several studies have demonstrated that AI interpretation compared favorably to dermatologists evaluating dermoscopy to assess melanocytic lesions.78-81,83

A limitation in these studies is that they differentiate only a few diagnoses.82 Furthermore, dermatologists have sensory input such as touch and visual examination under various conditions, something AI has yet to replicate.15,34,84 Also, most AI devices use no or limited clinical information.81 Dermatologists can recognize rarer conditions for which AI models may have had limited or no training.34 Nevertheless, a recent study assessed AI for the diagnosis of 134 separate skin disorders with promising results, including providing diagnoses with accuracy comparable to that of dermatologists and providing accurate treatment strategies.84 As Topol points out, most skin lesions are diagnosed in the primary care setting where AI can have a greater impact when used in conjunction with the clinical impression, especially where specialists are in limited supply.48,78

Finally, dermatology lends itself to using portable or smartphone applications (apps) wherein the user can photograph a lesion for analysis by AI algorithms to assess the need for further evaluation or make treatment recommendations.34,84,85 Although results from currently available apps are not encouraging, they may play a greater role as the technology advances.34,85

Oncology

Applications of AI in oncology include predicting prognosis for patients with cancer based on histologic and/or genetic information.14,68,86 Programs can predict the risk of complications before and recurrence risks after surgery for malignancies.44,87-89 AI can also assist in treatment planning and predict treatment failure with radiation therapy.90,91

AI has great potential in processing the large volumes of patient data in cancer genomics. Next-generation sequencing has allowed for the identification of millions of DNA sequences in a single tumor to detect genetic anomalies.92 Thousands of mutations can be found in individual tumor samples, and processing this information and determining its significance can be beyond human capability.14 We know little about the effects of various mutation combinations, and most tumors have a heterogeneous molecular profile among different cell populations.14,93 The presence or absence of various mutations can have diagnostic, prognostic, and therapeutic implications.93 AI has great potential to sort through these complex data and identify actionable findings.

More than 200,000 cancer-related articles were published in 2019, and publications in the field of cancer genomics are increasing exponentially.14,92,93 Patel and colleagues assessed the utility of IBM Watson for Genomics against results from a molecular tumor board.93 Watson for Genomics identified potentially significant mutations not identified by the tumor board in 32% of patients. Most mutations were related to new clinical trials not yet added to the tumor board watch list, demonstrating the role AI will have in processing the large volume of genetic data required to deliver personalized medicine moving forward.

Gastroenterology

AI has shown promise in predicting risk or outcomes based on clinical parameters in various common gastroenterology problems, including gastric reflux, acute pancreatitis, gastrointestinal bleeding, celiac disease, and inflammatory bowel disease.94,95 AI endoscopic analysis has demonstrated potential in assessing Barrett’s esophagus, gastric Helicobacter pylori infections, gastric atrophy, and gastric intestinal metaplasia.95 Applications have been used to assess esophageal, gastric, and colonic malignancies, including depth of invasion based on endoscopic images.95 Finally, studies have evaluated AI to assess small colon polyps during colonoscopy, including differentiating benign and premalignant polyps with success comparable to gastroenterologists.94,95 AI has been shown to increase the speed and accuracy of gastroenterologists in detecting small polyps during colonoscopy.48 In a prospective randomized study, colonoscopies performed using an AI device identified significantly more small adenomatous polyps than colonoscopies without AI.96

Neurology

It has been suggested that AI technologies are well suited for application in neurology due to the subtle presentation of many neurologic diseases.16 Viz LVO, the first AI platform approved for CMS reimbursement in the diagnosis of strokes, analyzes CTs to detect early ischemic strokes and alerts the medical team, thus shortening time to treatment.3,97 Many other AI platforms that use CT and MRI for the early detection of strokes, as well as for treatment and prognosis, are in use or development.9,97

AI technologies have been applied to neurodegenerative diseases, such as Alzheimer and Parkinson diseases.16,98 For example, several studies have evaluated patient movements in Parkinson disease both for early diagnosis and to assess response to treatment.98 These evaluations included assessment with external cameras as well as wearable devices and smartphone apps.

AI has also been applied to seizure disorders, attempting to determine seizure type, localize the area of seizure onset, and address the challenges of identifying seizures in neonates.99,100 Other potential applications range from early detection and prognosis predictions for cases of multiple sclerosis to restoring movement in paralysis from a variety of conditions such as spinal cord injury.9,101,102

Mental Health

Due to the interactive nature of mental health care, the field has been slower to develop AI applications.18 With heavy reliance on textual information (eg, clinic notes, mood rating scales, and documentation of conversations), successful AI applications in this field will likely rely heavily on NLP.18 However, studies investigating the application of AI to mental health have also incorporated data such as brain imaging, smartphone monitoring, and social media platforms, such as Facebook and Twitter.18,103,104

The risk of suicide is higher in veteran patients, and ML algorithms have had limited success in predicting suicide risk in both veteran and nonveteran populations.104-106 While early models have low positive predictive values and low sensitivities, they still promise to be a useful tool in conjunction with traditional risk assessments.106 Kessler and colleagues suggest that combining multiple rather than single ML algorithms might lead to greater success.105,106

AI may assist in diagnosing other mental health disorders, including major depressive disorder, attention deficit hyperactivity disorder (ADHD), schizophrenia, posttraumatic stress disorder, and Alzheimer disease.103,104,107 These investigations are in the early stages with limited clinical applicability. However, 2 AI applications awaiting FDA approval relate to ADHD and opioid use.2 Furthermore, potential exists for AI to not only assist with prevention and diagnosis of ADHD, but also to identify optimal treatment options.2,103

General and Personalized Medicine

Additional AI applications include diagnosing patients with suspected sepsis, measuring liver iron concentrations, predicting hospital mortality at the time of admission, and more.2,108,109 AI can guide end-of-life decisions such as resuscitation status or whether to initiate mechanical ventilation.48

AI-driven smartphone apps can be beneficial to both patients and clinicians. Examples include predicting nonadherence to anticoagulation therapy, monitoring heart rhythms for atrial fibrillation or signs of hyperkalemia in patients with renal failure, and improving outcomes for patients with diabetes mellitus by decreasing glycemic variability and reducing hypoglycemia.8,48,110,111 The potential for AI applications in health care and personalized medicine is almost limitless.

Discussion

With ever-increasing expectations for all health care sectors to deliver timely, fiscally responsible, high-quality care, AI has the potential for broad impact. AI can improve diagnostic accuracy, limit errors, and enhance patient safety, for example by assisting with prescription delivery.8,9,34 It can screen and triage patients, alerting clinicians to those needing more urgent evaluation.8,23,77,97 AI also may increase a clinician’s efficiency and speed in rendering a diagnosis.12,13,55,97 AI can provide a rapid second opinion, an ability especially beneficial in underserved areas with shortages of specialists.23,25,26,29,34 Similarly, AI may decrease the inter- and intraobserver variability common in many medical specialties.12,27,45 AI applications can also monitor disease progression, identify patients at greatest risk, and provide information for prognosis.21,23,56,58 Finally, as described with applications using IBM Watson, AI can allow for an integrated approach to health care that is currently lacking.

We have described many reports suggesting AI can render diagnoses as well as or better than experienced clinicians, and speculation exists that AI will replace many roles currently performed by health care practitioners.9,26 However, most studies demonstrate that AI’s diagnostic benefits are best realized when used to supplement a clinician’s impression.8,22,30,33,52,54,56,69,84 AI is not likely to replace humans in health care in the foreseeable future. The technology can be likened to the impact CT scanning, developed in the 1970s, had on neurology. Before such detailed imaging, neurologists spent extensive time performing detailed physical examinations to render diagnoses and locate lesions before surgery. There was mistrust of this new technology and concern that CT scans would eliminate the need for neurologists.112 On the contrary, neurology is alive and well, frequently augmented by the technologies once speculated to replace it.

Commercial AI health care platforms represented a $2 billion industry in 2018 and are growing rapidly each year.13,32 Many AI products are offered ready for implementation for various tasks, including diagnostics, patient management, and improved efficiency. Others will likely be provided as templates suitable for modification to meet the specific needs of the facility, practice, or specialty for its patient population.

AI Risks and Limitations

AI has several risks and limitations. Although there is progress in explainable AI, at times we still struggle to understand how the output provided by machine learning algorithms was created.44,48 The many layers associated with deep learning self-determine the criteria used to reach a conclusion, and these criteria can continually evolve. The parameters of deep learning are not preprogrammed, and there are too many individual data points to be extrapolated or deconvoluted for evaluation at our current level of knowledge.26,51 This apparent lack of constraints causes concern for patient safety and suggests that greater validation and continued scrutiny of validity are required.8,48 Efforts are underway to create explainable AI programs to make their processes more transparent, but such clarification is presently limited.14,26,48,77

Another challenge of AI is determining the amount of training data required for optimal function. Also, if the output describes multiple variables or diagnoses, is each equally valid?113 Furthermore, many AI applications look for a specific process, such as cancer diagnoses on CXRs, and how coexisting conditions seen on CXRs, such as cardiomegaly, emphysema, or pneumonia, will affect the diagnosis needs to be considered.51,52 Zech and colleagues provide the example that diagnoses of pneumothorax are frequently rendered on CXRs with chest tubes in place.51 They suggest that CNNs may develop a bias toward diagnosing pneumothorax when chest tubes are present. Many current studies approach an issue in isolation, a situation not reflective of real-world clinical practice.26

Most studies of AI have been retrospective, and frequently the data used to train the program are preselected.13,26 The data are typically validated on available databases rather than on actual patients in the clinical setting, limiting confidence in the validity of the AI output when applied to real-world situations. Currently, fewer than 12 prospective trials have been published comparing AI with traditional clinical care.13,114 Randomized prospective clinical trials are even fewer, with none currently reported from the United States.13,114 The results of several studies have been shown to diminish when repeated prospectively.114

The FDA has created a new category known as Software as a Medical Device and has a Digital Health Innovation Action Plan to regulate AI platforms. Still, the process of AI regulation is of necessity different from traditional approval processes and is continually evolving.8 The FDA approval process cannot account for the fact that a program’s parameters may continue to evolve or adapt after approval.2

Guidelines for investigating and reporting AI research with its unique attributes are being developed. Examples include the TRIPOD-ML statement and others.49,115 In September 2020, 2 publications addressed the paucity of gold-standard randomized clinical trials in clinical AI applications.116,117 The SPIRIT-AI statement expands on the original SPIRIT statement published in 2013 to guide minimal reporting standards for AI clinical trial protocols to promote transparency of design and methodology.116 Similarly, the CONSORT-AI extension, stemming from the original CONSORT statement in 1996, aims to ensure quality reporting of randomized controlled trials in AI.117

Another risk with AI is that while an individual physician making a mistake may adversely affect 1 patient, a single mistake in an AI algorithm could potentially affect thousands of patients.48 Also, AI programs developed for the patient population at one facility may not translate to another. Referred to as overfitting, this phenomenon relates to selection bias in training data sets.15,34,49,51,52 Studies have shown that programs that underrepresent certain group characteristics, such as age, sex, or race, may be less effective when applied to a population in which these characteristics have differing representations.8,48,49 This problem of underrepresentation has been demonstrated in programs interpreting pathology slides, radiographs, and skin lesions.15,32,51

Admittedly, most of these challenges are not specific to AI and existed in health care previously. Physicians make mistakes, treatments are sometimes used without adequate prospective studies, and medications are given without understanding their mechanism of action, much like AI-facilitated processes reach a conclusion that cannot be fully explained.48

Conclusions

The view that AI will dramatically impact health care in the coming years will likely prove true. However, much work is needed, especially given the paucity of the prospective clinical trials historically required in medical research. Any concern that AI will replace HCPs seems unwarranted. Early studies suggest that even AI programs that appear to exceed human interpretation perform best in cooperation with, and under the oversight of, clinicians. AI’s greatest potential appears to be its ability to augment care from health professionals, improving efficiency and accuracy; it should be anticipated with enthusiasm as the field moves forward at an exponential rate.

Acknowledgments

The authors thank Makenna G. Thomas for proofreading and review of the manuscript. This material is the result of work supported with resources and the use of facilities at the James A. Haley Veterans’ Hospital. This research has been approved by the James A. Haley Veterans’ Hospital Office of Communications and Media.

Artificial Intelligence (AI) was first described in 1956 and refers to machines having the ability to learn as they receive and process information, resulting in the ability to “think” like humans.1 AI’s impact in medicine is increasing; currently, at least 29 AI medical devices and algorithms are approved by the US Food and Drug Administration (FDA) in a variety of areas, including radiograph interpretation, managing glucose levels in patients with diabetes mellitus, analyzing electrocardiograms (ECGs), and diagnosing sleep disorders.2 Significantly, in 2020, the Centers for Medicare and Medicaid Services (CMS) announced the first reimbursement to hospitals for an AI platform, a model for early detection of strokes.3 AI is rapidly becoming an integral part of health care, and its role will only increase in the future (Table).

Table. Key Historical Events in Artificial Intelligence Development With a Focus on Health Care Applications

As knowledge in medicine is expanding exponentially, AI has great potential to assist with handling complex patient care data. The concept of exponential growth is not a natural one. As Bini described, with exponential growth the volume of knowledge amassed over the past 10 years will now occur in perhaps only 1 year.1 Likewise, equivalent advances over the past year may take just a few months. This phenomenon is partly due to the law of accelerating returns, which states that advances feed on themselves, continually increasing the rate of further advances.4 The volume of medical data doubles every 2 to 5 years.5 Fortunately, the field of AI is growing exponentially as well and can help health care practitioners (HCPs) keep pace, allowing the continued delivery of effective health care.

In this report, we review common terminology, principles, and general applications of AI, followed by current and potential applications of AI for selected medical specialties. Finally, we discuss AI’s future in health care, along with potential risks and pitfalls.

AI Overview

AI refers to machine programs that can “learn” or think based on past experiences. This functionality contrasts with simple rules-based programming available to health care for years. An example of rules-based programming is the warfarindosing.org website developed by Barnes-Jewish Hospital at Washington University Medical Center, which guides initial warfarin dosing.6,7 The prescriber inputs detailed patient information, including age, sex, height, weight, tobacco history, medications, laboratory results, and genotype if available. The application then calculates recommended warfarin dosing regimens to avoid over- or underanticoagulation. While the dosing algorithm may be complex, it depends entirely on preprogrammed rules. The program does not learn to reach its conclusions and recommendations from patient data.
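The distinction can be sketched in a few lines of Python. This is a toy illustration only: the function name, thresholds, and adjustments below are invented for this example and bear no relation to the actual warfarindosing.org algorithm, which is far more complex.

```python
def warfarin_initial_dose(age, weight_kg, takes_amiodarone):
    """Toy rules-based dose estimate (mg/day). All thresholds are
    invented for illustration; this is NOT a clinical algorithm."""
    dose = 5.0                      # nominal starting dose
    if age >= 65:
        dose -= 1.0                 # fixed reduction for older patients
    if weight_kg < 60:
        dose -= 0.5                 # fixed reduction for low body weight
    if takes_amiodarone:
        dose *= 0.7                 # fixed drug-interaction adjustment
    return round(dose, 1)

# The rules never change with experience: the same inputs always
# produce the same preprogrammed output.
print(warfarin_initial_dose(age=70, weight_kg=55, takes_amiodarone=False))  # 3.5
```

However complex the branching becomes, every rule is written by a human in advance; nothing in the program is learned from patient data.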

In contrast, one of the most common subsets of AI is machine learning (ML). ML describes a program that “learns from experience and improves its performance as it learns.”1 With ML, the computer is initially provided with a training data set—data with known outcomes or labels. Because the initial data are input from known samples, this type of AI is known as supervised learning.8-10 As an example, we recently reported using ML to diagnose various types of cancer from pathology slides.11 In one experiment, we captured images of colon adenocarcinoma and normal colon (these 2 groups represent the training data set). Unlike traditional programming, we did not define characteristics that would differentiate colon cancer from normal; rather, the machine learned these characteristics independently by assessing the labeled images provided. A second data set (the validation data set) was used to evaluate the program and fine-tune the ML training model’s parameters. Finally, the program was presented with new images of cancer and normal cases for final assessment of accuracy (test data set). Our program learned to recognize differences from the images provided and was able to differentiate normal and cancer images with > 95% accuracy.
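The training, validation, and test workflow described above can be sketched with a deliberately tiny example. Everything here is hypothetical: a single invented numeric feature stands in for an image, and a nearest-centroid rule stands in for the far richer models used in the study.

```python
import statistics

# Hypothetical labeled training data: one invented numeric feature per
# sample stands in for an image (real models learn thousands of features).
training = {"normal": [0.10, 0.20, 0.15, 0.12],
            "cancer": [0.80, 0.90, 0.85, 0.70]}

# "Learning": the decision rule is derived from the labeled examples
# rather than hand-programmed -- here, simply one centroid per class.
centroids = {label: statistics.mean(values) for label, values in training.items()}

def predict(feature):
    # Assign the label whose class centroid is nearest to the feature.
    return min(centroids, key=lambda label: abs(centroids[label] - feature))

# Validation data (known answers) checks performance before a final,
# held-out test set is used to report accuracy.
validation = [(0.18, "normal"), (0.75, "cancer")]
accuracy = sum(predict(x) == y for x, y in validation) / len(validation)
print(accuracy)  # 1.0
```

The key point is that the differentiating characteristics (here, the centroids) come from the labeled data themselves, not from preprogrammed rules.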

Advances in computer processing have allowed for the development of artificial neural networks (ANNs). While there are several types of ANNs, the most common types used for image classification and segmentation are known as convolutional neural networks (CNNs).9,12-14 These programs are designed to work similarly to the human brain, specifically the visual cortex.15,16 As data are acquired, they are processed by various layers in the program. Much like neurons in the brain, one layer decides whether to advance information to the next.13,14 CNNs can be many layers deep, leading to the term deep learning: “computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction.”1,13,17
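The convolution and pooling operations that a CNN's layers apply can be shown concretely on a tiny image. This is a bare-bones sketch, not how a DL framework implements them; the 4 x 4 "image" and the edge-detecting kernel are invented for illustration (and, as in most CNN libraries, "convolution" here is implemented as cross-correlation).

```python
def convolve2d(image, kernel):
    """'Valid' 2D convolution (no padding), as in a CNN's convolutional layer."""
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(len(image[0]) - kw + 1)]
            for i in range(len(image) - kh + 1)]

def max_pool2d(fmap, size=2):
    """Non-overlapping max pooling: keep the strongest response per window."""
    return [[max(fmap[i + a][j + b] for a in range(size) for b in range(size))
             for j in range(0, len(fmap[0]) - size + 1, size)]
            for i in range(0, len(fmap) - size + 1, size)]

image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
edge_kernel = [[-1, 1],
               [-1, 1]]          # responds strongly to vertical edges

feature_map = convolve2d(image, edge_kernel)   # 3x3 map of edge responses
print(max_pool2d(feature_map))                 # [[2]]
```

The kernel fires where the dark-to-bright boundary sits, and pooling condenses the feature map to its most informative response; stacking many such learned kernels, layer after layer, is what gives a CNN its depth.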

ANNs can process larger volumes of data than earlier approaches. This advance has led to the development of unstructured, or unsupervised, learning. With this type of learning, inputting the defined features (ie, predetermined answers) of the training data set described above is no longer required.1,8,10,14 The advantage of unsupervised learning is that the program can be presented raw data and extract meaningful interpretation without human input, often with less bias than may exist with supervised learning.1,18 If shown enough data, the program can extract relevant features to make conclusions independently without predefined definitions, potentially uncovering markers not previously known. For example, several studies have used unsupervised learning to search patient data to assess readmission risks of patients with congestive heart failure.10,19,20 AI compiled features independently, not previously defined, and predicted patients at greater risk for readmission better than traditional methods.
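Unsupervised learning's search for naturally occurring groupings can be illustrated with a minimal 1-dimensional k-means sketch. The data and the clinical framing are invented for illustration; the cited readmission studies used far more sophisticated methods over many more variables.

```python
import statistics

def kmeans_1d(data, k=2, iters=10):
    """Minimal 1-D k-means: finds natural groupings with no labels given."""
    centers = sorted(data)[::max(1, len(data) // k)][:k]  # crude initialization
    for _ in range(iters):
        # Assign each point to its nearest center...
        clusters = [[] for _ in range(k)]
        for x in data:
            clusters[min(range(k), key=lambda c: abs(centers[c] - x))].append(x)
        # ...then move each center to the mean of its cluster.
        centers = [statistics.mean(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Hypothetical unlabeled feature (eg, a risk score per patient); the two
# groupings below emerge from the data alone, with no predefined answers.
values = [1, 2, 1.5, 10, 11, 10.5]
centers, groups = kmeans_1d(values)
print(sorted(centers))  # [1.5, 10.5]
```

No outcome labels were supplied, yet the algorithm partitions the patients into low- and high-value groups by itself, which is the essence of finding previously undefined markers in raw data.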

Figure. Artificial Intelligence Health Care Applications


A more detailed description of the various terminologies and techniques of AI is beyond the scope of this review.9,10,17,21 However, in this basic overview, we describe 4 general areas in which AI impacts health care (Figure).

Health Care Applications

Image analysis has seen the most AI health care applications.8,15 AI has shown potential in interpreting many types of medical images, including pathology slides, radiographs of various types, retina and other eye scans, and photographs of skin lesions. Many studies have demonstrated that AI can interpret these images as accurately as or even better than experienced clinicians.9,13,22-29 Studies have suggested AI interpretation of radiographs may better distinguish patients infected with COVID-19 from other causes of pneumonia, and AI interpretation of pathology slides may detect specific genetic mutations not previously identified without additional molecular tests.11,14,23,24,30-32

The second area in which AI can impact health care is improving workflow and efficiency. AI has improved surgery scheduling, saving significant revenue, and decreased patient wait times for appointments.1 AI can screen and triage radiographs, allowing attention to be directed to critical patients. This use would be valuable in many busy clinical settings, such as during the recent COVID-19 pandemic.8,23 Similarly, AI can screen retina images to prioritize urgent conditions.25 AI has improved pathologists’ efficiency when used to detect breast metastases.33 Finally, AI may reduce medical errors, thereby ensuring patient safety.8,9,34

A third health care benefit of AI is in public health and epidemiology. AI can assist with clinical decision-making and diagnoses in low-income countries and areas with limited health care resources and personnel.25,29 AI can improve identification of infectious outbreaks, such as tuberculosis, malaria, dengue fever, and influenza.29,35-40 AI has been used to predict transmission patterns of the Zika virus and the current COVID-19 pandemic.41,42 Applications can stratify the risk of outbreaks based on multiple factors, including age, income, race, atypical geographic clusters, and seasonal factors like rainfall and temperature.35,36,38,43 AI has been used to assess morbidity and mortality, such as predicting disease severity with malaria and identifying treatment failures in tuberculosis.29

Finally, AI can dramatically impact health care through its capacity to process large data sets or disconnected volumes of patient information, so-called big data.44-46 An example is the widespread use of electronic health records (EHRs) such as the Computerized Patient Record System used in Veterans Affairs medical centers (VAMCs). Much patient information exists as written text: HCP notes, laboratory and radiology reports, medication records, etc. Natural language processing (NLP) allows platforms to sort through extensive volumes of data on complex patients at rates much faster than human capability, which has great potential to assist with diagnosis and treatment decisions.9
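A minimal illustration of the kind of text processing on which NLP builds: tokenizing free-text notes and flagging those that mention terms of interest. The notes and terms below are invented; production NLP systems use vastly richer language models than this keyword sketch.

```python
import re

# Hypothetical free-text clinic notes (invented for illustration).
notes = [
    "Patient reports chest pain; ECG shows atrial fibrillation.",
    "Follow-up for diabetes mellitus; glucose well controlled.",
    "CT head: acute intracranial hemorrhage identified.",
]

# A first NLP-style step: lowercase and tokenize each note, then flag
# notes containing terms of interest far faster than manual review.
terms = {"hemorrhage", "fibrillation"}
flagged = [note for note in notes
           if terms & set(re.findall(r"[a-z]+", note.lower()))]
print(len(flagged))  # 2
```

Even this crude scan runs over thousands of notes in milliseconds; the value of real NLP lies in going beyond exact keywords to abbreviations, negations, and context.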

Medical literature is being produced at rates that exceed our ability to digest it. More than 200,000 cancer-related articles were published in 2019 alone.14 NLP capabilities of AI have the potential to rapidly sort through this extensive medical literature and relate specific verbiage in patient records to guide therapy.46 IBM Watson, a supercomputer based on ML and NLP, demonstrates this concept with many potential applications, only some of which relate to health care.1,9 Watson has an oncology component to assimilate multiple aspects of patient care, including clinical notes, pathology results, radiograph findings, staging, and a tumor’s genetic profile. It coordinates these inputs from the EHR and mines medical literature and research databases to recommend treatment options.1,46 AI can assess and compile far greater patient data and therapeutic options than would be feasible for individual clinicians, thus providing customized patient care.47 Watson has partnered with numerous medical centers, including MD Anderson Cancer Center and Memorial Sloan Kettering Cancer Center, with variable success.44,47-49 While the full potential of Watson appears not yet realized, these AI-driven approaches will likely play an important role in leveraging the hidden value in the expanding volume of health care information.

Medical Specialty Applications

Radiology

Currently > 70% of FDA-approved AI medical devices are in the field of radiology.2 Most radiology departments have used AI-friendly digital imaging for years, such as the picture archiving and communication systems used by numerous health care systems, including VAMCs.2,15 Gray-scale images common in radiology lend themselves to standardization, although AI is not limited to black-and-white image interpretation.15

An abundance of literature describes plain radiograph interpretation using AI. One FDA-approved platform improved X-ray diagnosis of wrist fractures when used by emergency medicine clinicians.2,50 AI has been applied to chest X-ray (CXR) interpretation of many conditions, including pneumonia, tuberculosis, malignant lung lesions, and COVID-19.23,25,28,44,51-53 For example, Nam and colleagues suggested AI is better at diagnosing malignant pulmonary nodules from CXRs than are trained radiologists.28

In addition to plain radiographs, AI has been applied to many other imaging technologies, including ultrasounds, positron emission tomography, mammograms, computed tomography (CT), and magnetic resonance imaging (MRI).15,26,44,48,54-56 A large study demonstrated that ML platforms significantly reduced the time to diagnose intracranial hemorrhages on CT and identified subtle hemorrhages missed by radiologists.55 Other studies have claimed that AI programs may be better than radiologists in detecting cancer in screening mammograms, and 3 FDA-approved devices focus on mammogram interpretation.2,15,54,57 There is also great interest in MRI applications to detect and predict prognosis for breast cancer based on imaging findings.21,56

Aside from providing accurate diagnoses, other studies focus on AI radiograph interpretation to assist with patient screening and triage, shorten time to final diagnosis, provide a rapid “second opinion,” and even monitor disease progression and offer insights into prognosis.8,21,23,52,55,56,58 These features help in busy urban centers but may play an even greater role in areas with limited access to health care or trained specialists such as radiologists.52

Cardiology

Cardiology has the second highest number of FDA-approved AI applications.2 Many cardiology AI platforms involve image analysis, as described in several recent reviews.45,59,60 AI has been applied to echocardiography to measure ejection fractions, detect valvular disease, and assess heart failure from hypertrophic and restrictive cardiomyopathy and amyloidosis.45,48,59 Applications for cardiac CT and CT angiography have successfully quantified calcified and noncalcified coronary artery plaque, assessed the vessel lumen, assessed myocardial perfusion, and performed coronary artery calcium scoring.45,59,60 Likewise, AI applications for cardiac MRI have been used to quantitate ejection fraction, large-vessel flow, and cardiac scar burden.45,59

For years, ECG devices have provided interpretation of limited accuracy using preprogrammed parameters.48 However, the application of AI allows ECG interpretation on par with trained cardiologists. Numerous such AI applications exist, and 2 FDA-approved devices perform ECG interpretation.2,61-64 One of these devices incorporates an AI-powered stethoscope to detect atrial fibrillation and heart murmurs.65

Pathology

The advancement of whole slide imaging, wherein entire slides can be scanned and digitized at high speed and resolution, creates great potential for AI applications in pathology.12,24,32,33,66 A landmark study demonstrating the potential of AI for assessing whole slide imaging examined sentinel lymph node metastases in patients with breast cancer.22 Multiple algorithms in the study demonstrated that AI was equivalent to or better than pathologists in detecting metastases, especially when the pathologists were time constrained, consistent with a normal working environment. Significantly, the most accurate and efficient diagnoses were achieved when the pathologist and AI interpretations were used together.22,33

AI has shown promise in diagnosing many other entities, including cancers of the prostate (including Gleason scoring), lung, colon, breast, and skin.11,12,24,27,32,67 In addition, AI has shown great potential in scoring biomarkers important for prognosis and treatment, such as immunohistochemistry (IHC) labeling of Ki-67 and PD-L1.32 Pathologists can have difficulty classifying certain tumors or determining the site of origin for metastases, often having to rely on IHC with limited success. The unique features of image analysis with AI have the potential to assist in classifying difficult tumors and identifying sites of origin for metastatic disease based on morphology alone.11

Oncology depends heavily on molecular pathology testing to dictate treatment options and determine prognosis. Preliminary studies suggest that AI interpretation alone has the potential to delineate whether certain molecular mutations are present in tumors from various sites.11,14,24,32 One study combined histology and genomic results for AI interpretation that improved prognostic predictions.68 In addition, AI analysis may have potential in predicting tumor recurrence or prognosis based on cellular features, as demonstrated for lung cancer and melanoma.67,69,70

Ophthalmology

AI applications for ophthalmology have focused on diabetic retinopathy, age-related macular degeneration, glaucoma, retinopathy of prematurity, age-related and congenital cataracts, and retinal vein occlusion.71-73 Diabetic retinopathy is a leading cause of blindness and has been studied by numerous platforms with good success, most having used color fundus photography.71,72 One study showed AI could diagnose diabetic retinopathy and diabetic macular edema with specificities similar to ophthalmologists.74 In 2018, the FDA approved the AI platform IDx-DR. This diagnostic system classifies retinal images and recommends referral for patients determined to have “more than mild diabetic retinopathy” and reexamination within a year for other patients.8,75 Significantly, the platform recommendations do not require confirmation by a clinician.8

AI has been applied to other modalities in ophthalmology such as optical coherence tomography (OCT) to diagnose retinal disease and to predict appropriate management of congenital cataracts.25,73,76 For example, an AI application using OCT has been demonstrated to match or exceed the accuracy of retinal experts in diagnosing and triaging patients with a variety of retinal pathologies, including patients needing urgent referrals.77

Dermatology

Multiple studies demonstrate AI performs at least equal to experienced dermatologists in differentiating selected skin lesions.78-81 For example, Esteva and colleagues demonstrated AI could differentiate keratinocyte carcinomas from benign seborrheic keratoses and malignant melanomas from benign nevi with accuracy equal to 21 board-certified dermatologists.78

AI is applicable to various imaging procedures common to dermatology, such as dermoscopy, very high-frequency ultrasound, and reflectance confocal microscopy.82 Several studies have demonstrated that AI interpretation compared favorably to dermatologists evaluating dermoscopy to assess melanocytic lesions.78-81,83

A limitation of these studies is that they differentiate only a few diagnoses.82 Furthermore, dermatologists have sensory input, such as touch, and can examine lesions under various conditions, something AI has yet to replicate.15,34,84 Also, most AI devices use no or limited clinical information.81 Dermatologists can recognize rarer conditions for which AI models may have had limited or no training.34 Nevertheless, a recent study assessed AI for the diagnosis of 134 separate skin disorders with promising results, including providing diagnoses with accuracy comparable to that of dermatologists and providing accurate treatment strategies.84 As Topol points out, most skin lesions are diagnosed in the primary care setting, where AI can have a greater impact when used in conjunction with the clinical impression, especially where specialists are in limited supply.48,78

Finally, dermatology lends itself to using portable or smartphone applications (apps) wherein the user can photograph a lesion for analysis by AI algorithms to assess the need for further evaluation or make treatment recommendations.34,84,85 Although results from currently available apps are not encouraging, they may play a greater role as the technology advances.34,85

Oncology

Applications of AI in oncology include predicting prognosis for patients with cancer based on histologic and/or genetic information.14,68,86 Programs can predict the risk of complications before and recurrence risks after surgery for malignancies.44,87-89 AI can also assist in treatment planning and predict treatment failure with radiation therapy.90,91

AI has great potential in processing the large volumes of patient data in cancer genomics. Next-generation sequencing has allowed for the identification of millions of DNA sequences in a single tumor to detect genetic anomalies.92 Thousands of mutations can be found in individual tumor samples, and processing this information and determining its significance can be beyond human capability.14 We know little about the effects of various mutation combinations, and most tumors have a heterogeneous molecular profile among different cell populations.14,93 The presence or absence of various mutations can have diagnostic, prognostic, and therapeutic implications.93 AI has great potential to sort through these complex data and identify actionable findings.

More than 200,000 cancer-related articles were published in 2019, and publications in the field of cancer genomics are increasing exponentially.14,92,93 Patel and colleagues assessed the utility of IBM Watson for Genomics against results from a molecular tumor board.93 Watson for Genomics identified potentially significant mutations not identified by the tumor board in 32% of patients. Most mutations were related to new clinical trials not yet added to the tumor board watch list, demonstrating the role AI will have in processing the large volume of genetic data required to deliver personalized medicine moving forward.

Gastroenterology

AI has shown promise in predicting risk or outcomes based on clinical parameters in various common gastroenterology problems, including gastric reflux, acute pancreatitis, gastrointestinal bleeding, celiac disease, and inflammatory bowel disease.94,95 AI endoscopic analysis has demonstrated potential in assessing Barrett’s esophagus, gastric Helicobacter pylori infections, gastric atrophy, and gastric intestinal metaplasia.95 Applications have been used to assess esophageal, gastric, and colonic malignancies, including depth of invasion based on endoscopic images.95 Finally, studies have evaluated AI to assess small colon polyps during colonoscopy, including differentiating benign and premalignant polyps with success comparable to gastroenterologists.94,95 AI has been shown to increase the speed and accuracy of gastroenterologists in detecting small polyps during colonoscopy.48 In a prospective randomized study, colonoscopies performed using an AI device identified significantly more small adenomatous polyps than colonoscopies without AI.96

Neurology

It has been suggested that AI technologies are well suited for application in neurology due to the subtle presentation of many neurologic diseases.16 Viz LVO, the first CMS-approved AI reimbursement for the diagnosis of strokes, analyzes CTs to detect early ischemic strokes and alerts the medical team, thus shortening time to treatment.3,97 Many other AI platforms are in use or development that use CT and MRI for the early detection of strokes as well as for treatment and prognosis.9,97

AI technologies have been applied to neurodegenerative diseases, such as Alzheimer and Parkinson diseases.16,98 For example, several studies have evaluated patient movements in Parkinson disease for both early diagnosis and to assess response to treatment.98 These evaluations included assessment with both external cameras as well as wearable devices and smartphone apps.

 

 



AI has also been applied to seizure disorders, attempting to determine seizure type, localize the area of seizure onset, and address the challenges of identifying seizures in neonates.99,100 Other potential applications range from early detection and prognosis predictions for cases of multiple sclerosis to restoring movement in paralysis from a variety of conditions such as spinal cord injury.9,101,102
 

 

Mental Health

Due to the interactive nature of mental health care, the field has been slower to develop AI applications.18 With heavy reliance on textual information (eg, clinic notes, mood rating scales, and documentation of conversations), successful AI applications in this field will likely rely heavily on NLP.18 However, studies investigating the application of AI to mental health have also incorporated data such as brain imaging, smartphone monitoring, and social media platforms, such as Facebook and Twitter.18,103,104

The risk of suicide is higher among veterans than in the general population, and ML algorithms have had limited success in predicting suicide risk in both veteran and nonveteran populations.104-106 Although early models have low positive predictive values and low sensitivities, they still promise to be a useful adjunct to traditional risk assessments.106 Kessler and colleagues suggest that combining multiple ML algorithms, rather than relying on a single one, might lead to greater success.105,106
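The multialgorithm approach Kessler and colleagues allude to can be illustrated with a simple voting ensemble. The sketch below is purely illustrative: it uses scikit-learn on synthetic data and makes no claim about the actual models, variables, or populations in the cited studies.

```python
# Illustrative sketch only: combining several different ML classifiers into
# one ensemble. Data are synthetic, not clinical.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for a tabular risk-prediction data set
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Three dissimilar algorithms, combined by soft (probability-averaged) voting
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(random_state=0)),
        ("nb", GaussianNB()),
    ],
    voting="soft",
)
ensemble.fit(X_train, y_train)
print(f"ensemble accuracy: {ensemble.score(X_test, y_test):.2f}")
```

Soft voting averages the class probabilities of the member models, so an error made by one algorithm can be outvoted by the others.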

AI may assist in diagnosing other mental health disorders, including major depressive disorder, attention deficit hyperactivity disorder (ADHD), schizophrenia, posttraumatic stress disorder, and Alzheimer disease.103,104,107 These investigations are in the early stages and have limited clinical applicability. However, 2 AI applications awaiting FDA approval relate to ADHD and opioid use.2 Furthermore, AI has the potential not only to assist with the prevention and diagnosis of ADHD but also to identify optimal treatment options.2,103

General and Personalized Medicine

Additional AI applications include diagnosing patients with suspected sepsis, measuring liver iron concentrations, predicting hospital mortality at the time of admission, and more.2,108,109 AI can guide end-of-life decisions such as resuscitation status or whether to initiate mechanical ventilation.48

AI-driven smartphone apps can benefit both patients and clinicians. Examples include predicting nonadherence to anticoagulation therapy; monitoring heart rhythms for atrial fibrillation or signs of hyperkalemia in patients with renal failure; and improving outcomes for patients with diabetes mellitus by decreasing glycemic variability and reducing hypoglycemia.8,48,110,111 The potential of AI applications in health care and personalized medicine is almost limitless.

Discussion

With ever-increasing expectations for all health care sectors to deliver timely, fiscally responsible, high-quality care, AI has the potential for numerous impacts. AI can improve diagnostic accuracy, limit errors, and enhance patient safety, for example by assisting with prescription delivery.8,9,34 It can screen and triage patients, alerting clinicians to those needing more urgent evaluation.8,23,77,97 AI also may increase a clinician’s efficiency and speed in rendering a diagnosis.12,13,55,97 AI can provide a rapid second opinion, an ability especially beneficial in underserved areas with shortages of specialists.23,25,26,29,34 Similarly, AI may decrease the inter- and intraobserver variability common in many medical specialties.12,27,45 AI applications can also monitor disease progression, identify patients at greatest risk, and provide prognostic information.21,23,56,58 Finally, as described with applications using IBM Watson, AI can allow for an integrated approach to health care that is currently lacking.

We have described many reports suggesting AI can render diagnoses as well as or better than experienced clinicians, and speculation exists that AI will replace many roles currently performed by health care practitioners.9,26 However, most studies demonstrate that AI’s diagnostic benefits are best realized when it supplements a clinician’s impression.8,22,30,33,52,54,56,69,84 AI is not likely to replace humans in health care in the foreseeable future. The technology can be likened to the impact that CT scanning, developed in the 1970s, had on neurology. Before such detailed imaging was available, neurologists spent extensive time performing detailed physical examinations to render diagnoses and localize lesions before surgery. There was mistrust of the new technology and concern that CT scans would eliminate the need for neurologists.112 On the contrary, neurology is alive and well, frequently augmented by the technologies once speculated to replace it.

Commercial AI health care platforms represented a $2 billion industry in 2018 and are growing rapidly each year.13,32 Many AI products are offered ready for implementation for various tasks, including diagnostics, patient management, and improved efficiency. Others will likely be provided as templates suitable for modification to meet the specific needs of the facility, practice, or specialty for its patient population.

AI Risks and Limitations

AI has several risks and limitations. Although there is progress in explainable AI, at times we still struggle to understand how the output of a machine learning algorithm was created.44,48 The many hidden layers of deep learning determine their own criteria for reaching a conclusion, and these criteria can continually evolve. The parameters of deep learning are not preprogrammed, and there are too many individual data points to be extrapolated or deconvoluted for evaluation at our current level of knowledge.26,51 This apparent lack of constraints raises concerns for patient safety and suggests that greater validation and continued scrutiny of validity are required.8,48 Efforts are underway to create explainable AI programs that make their processes more transparent, but such transparency remains limited at present.14,26,48,77
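One way such opacity is probed in practice is post hoc model inspection. As a hedged illustration, not a technique attributed to any system cited here, the sketch below applies permutation importance to an opaque model trained on synthetic data: the features whose random shuffling most degrades held-out accuracy are the ones driving the model's predictions.

```python
# Hedged sketch of one generic explainability technique (permutation
# importance) applied to a "black box" model on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data in which only 2 of 6 features carry real signal
X, y = make_classification(
    n_samples=600, n_features=6, n_informative=2, n_redundant=0, random_state=0
)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in held-out accuracy;
# large drops identify the features the model actually relies on
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:+.3f}")
```

Techniques like this only approximate what the network has learned; they do not remove the underlying validation burden discussed above.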

Another challenge of AI is determining the amount of training data required for optimal performance. Also, if the output describes multiple variables or diagnoses, is each equally valid?113 Furthermore, many AI applications look for a specific finding, such as cancer on CXRs. However, it must be considered how coexisting conditions seen on CXRs, such as cardiomegaly, emphysema, or pneumonia, will affect the diagnosis.51,52 Zech and colleagues provide the example that diagnoses of pneumothorax are frequently rendered on CXRs with chest tubes already in place.51 They suggest that CNNs may therefore develop a bias toward diagnosing pneumothorax when chest tubes are present. Many current studies approach an issue in isolation, a situation that does not reflect real-world clinical practice.26

Most studies of AI have been retrospective, and the data used to train the program are frequently preselected.13,26 The data are typically validated on available databases rather than on actual patients in the clinical setting, limiting confidence in the validity of the AI output when applied to real-world situations. Currently, fewer than 12 prospective trials have been published comparing AI with traditional clinical care.13,114 Randomized prospective clinical trials are rarer still, with none yet reported from the United States.13,114 The results of several studies have been shown to diminish when repeated prospectively.114

The FDA has created a new category known as Software as a Medical Device and has a Digital Health Innovation Action Plan to regulate AI platforms. Still, the process of AI regulation is of necessity different from traditional approval processes and is continually evolving.8 The FDA approval process cannot account for the fact that the program’s parameters may continually evolve or adapt.2

Guidelines for investigating and reporting AI research with its unique attributes are being developed. Examples include the TRIPOD-ML statement and others.49,115 In September 2020, 2 publications addressed the paucity of gold-standard randomized clinical trials in clinical AI applications.116,117 The SPIRIT-AI statement expands on the original SPIRIT statement published in 2013 to guide minimal reporting standards for AI clinical trial protocols to promote transparency of design and methodology.116 Similarly, the CONSORT-AI extension, stemming from the original CONSORT statement in 1996, aims to ensure quality reporting of randomized controlled trials in AI.117

Another risk with AI is that while an individual physician’s mistake may adversely affect 1 patient, a single mistake in an AI algorithm could potentially affect thousands of patients.48 Also, AI programs developed for the patient population at one facility may not translate to another. Referred to as overfitting, this phenomenon relates to selection bias in training data sets.15,34,49,51,52 Studies have shown that programs trained on data that underrepresent certain group characteristics, such as age, sex, or race, may be less effective when applied to a population in which those characteristics are represented differently.8,48,49 This problem of underrepresentation has been demonstrated in programs interpreting pathology slides, radiographs, and skin lesions.15,32,51
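The underrepresentation failure described here can be reproduced on entirely artificial data. The sketch below assumes a toy one-rule "model" and two invented subgroups: training on a skewed subgroup mix yields high apparent accuracy that collapses when the deployment population has an even mix.

```python
# Synthetic demonstration of selection bias: a model trained where one
# subgroup dominates degrades when the subgroup mix shifts. All data and
# group labels are artificial.
import numpy as np

rng = np.random.default_rng(0)

def make_group(n, flipped):
    """Synthetic subgroup: one feature predicts the label, with the
    direction of the relationship reversed in the flipped subgroup."""
    x = rng.normal(size=n)
    y = (x > 0).astype(int)
    return x, (1 - y) if flipped else y

def fit_rule(x, y):
    """Learn 'predict 1 when x > 0' or its inverse, whichever fits the
    training data better (a stand-in for any trained model)."""
    acc_plain = np.mean((x > 0).astype(int) == y)
    if acc_plain >= 0.5:
        return lambda xs: (xs > 0).astype(int)
    return lambda xs: (xs <= 0).astype(int)

# Training facility: 95% group A, 5% group B
xa, ya = make_group(950, flipped=False)
xb, yb = make_group(50, flipped=True)
x_train, y_train = np.concatenate([xa, xb]), np.concatenate([ya, yb])
predict = fit_rule(x_train, y_train)

# Deployment facility: an even subgroup mix
xa2, ya2 = make_group(500, flipped=False)
xb2, yb2 = make_group(500, flipped=True)
x_new, y_new = np.concatenate([xa2, xb2]), np.concatenate([ya2, yb2])

print(f"accuracy, training-like mix: {np.mean(predict(x_train) == y_train):.2f}")  # 0.95
print(f"accuracy, shifted mix:       {np.mean(predict(x_new) == y_new):.2f}")      # 0.50
```

The model is accurate on the population it was trained on and no better than chance on the underrepresented group, which is exactly the behavior reported for the pathology, radiograph, and skin-lesion programs cited above.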

Admittedly, most of these challenges are not specific to AI and existed in health care previously. Physicians make mistakes, treatments are sometimes used without adequate prospective studies, and medications are given without understanding their mechanism of action, much like AI-facilitated processes reach a conclusion that cannot be fully explained.48

Conclusions

The view that AI will dramatically impact health care in the coming years will likely prove true. However, much work remains, especially given the paucity of the prospective clinical trials historically required in medical research. Any concern that AI will replace HCPs seems unwarranted. Early studies suggest that even AI programs that appear to exceed human interpretation perform best in cooperation with, and under the oversight of, clinicians. AI’s greatest potential appears to be its ability to augment care from health professionals, improving efficiency and accuracy; it should be anticipated with enthusiasm as the field moves forward at an exponential rate.

Acknowledgments

The authors thank Makenna G. Thomas for proofreading and review of the manuscript. This material is the result of work supported with resources and the use of facilities at the James A. Haley Veterans’ Hospital. This research has been approved by the James A. Haley Veterans’ Hospital Office of Communications and Media.

References

1. Bini SA. Artificial intelligence, machine learning, deep learning, and cognitive computing: what do these terms mean and how will they impact health care? J Arthroplasty. 2018;33(8):2358-2361. doi:10.1016/j.arth.2018.02.067

2. Benjamens S, Dhunnoo P, Meskó B. The state of artificial intelligence-based FDA-approved medical devices and algorithms: an online database. NPJ Digit Med. 2020;3:118. doi:10.1038/s41746-020-00324-0

3. Viz. AI powered synchronized stroke care. Accessed September 15, 2021. https://www.viz.ai/ischemic-stroke

4. Buchanan M. The law of accelerating returns. Nat Phys. 2008;4(7):507. doi:10.1038/nphys1010

5. IBM Watson Health computes a pair of new solutions to improve healthcare data and security. Published September 10, 2015. Accessed October 21, 2020. https://www.techrepublic.com/article/ibm-watson-health-computes-a-pair-of-new-solutions-to-improve-healthcare-data-and-security

6. Borkowski AA, Kardani A, Mastorides SM, Thomas LB. Warfarin pharmacogenomics: recommendations with available patented clinical technologies. Recent Pat Biotechnol. 2014;8(2):110-115. doi:10.2174/1872208309666140904112003

7. Washington University in St. Louis. Warfarin dosing. Accessed September 15, 2021. http://www.warfarindosing.org/Source/Home.aspx

8. He J, Baxter SL, Xu J, Xu J, Zhou X, Zhang K. The practical implementation of artificial intelligence technologies in medicine. Nat Med. 2019;25(1):30-36. doi:10.1038/s41591-018-0307-0

9. Jiang F, Jiang Y, Zhi H, et al. Artificial intelligence in healthcare: past, present and future. Stroke Vasc Neurol. 2017;2(4):230-243. Published 2017 Jun 21. doi:10.1136/svn-2017-000101

10. Johnson KW, Torres Soto J, Glicksberg BS, et al. Artificial intelligence in cardiology. J Am Coll Cardiol. 2018;71(23):2668-2679. doi:10.1016/j.jacc.2018.03.521

11. Borkowski AA, Wilson CP, Borkowski SA, et al. Comparing artificial intelligence platforms for histopathologic cancer diagnosis. Fed Pract. 2019;36(10):456-463.

12. Cruz-Roa A, Gilmore H, Basavanhally A, et al. High-throughput adaptive sampling for whole-slide histopathology image analysis (HASHI) via convolutional neural networks: application to invasive breast cancer detection. PLoS One. 2018;13(5):e0196828. Published 2018 May 24. doi:10.1371/journal.pone.0196828

13. Nagendran M, Chen Y, Lovejoy CA, et al. Artificial intelligence versus clinicians: systematic review of design, reporting standards, and claims of deep learning studies. BMJ. 2020;368:m689. Published 2020 Mar 25. doi:10.1136/bmj.m689

14. Shimizu H, Nakayama KI. Artificial intelligence in oncology. Cancer Sci. 2020;111(5):1452-1460. doi:10.1111/cas.14377

15. Talebi-Liasi F, Markowitz O. Is artificial intelligence going to replace dermatologists? Cutis. 2020;105(1):28-31.

16. Valliani AA, Ranti D, Oermann EK. Deep learning and neurology: a systematic review. Neurol Ther. 2019;8(2):351-365. doi:10.1007/s40120-019-00153-8

17. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521(7553):436-444. doi:10.1038/nature14539

18. Graham S, Depp C, Lee EE, et al. Artificial intelligence for mental health and mental illnesses: an overview. Curr Psychiatry Rep. 2019;21(11):116. Published 2019 Nov 7. doi:10.1007/s11920-019-1094-0

19. Golas SB, Shibahara T, Agboola S, et al. A machine learning model to predict the risk of 30-day readmissions in patients with heart failure: a retrospective analysis of electronic medical records data. BMC Med Inform Decis Mak. 2018;18(1):44. Published 2018 Jun 22. doi:10.1186/s12911-018-0620-z

20. Mortazavi BJ, Downing NS, Bucholz EM, et al. Analysis of machine learning techniques for heart failure readmissions. Circ Cardiovasc Qual Outcomes. 2016;9(6):629-640. doi:10.1161/CIRCOUTCOMES.116.003039

21. Meyer-Bäse A, Morra L, Meyer-Bäse U, Pinker K. Current status and future perspectives of artificial intelligence in magnetic resonance breast imaging. Contrast Media Mol Imaging. 2020;2020:6805710. Published 2020 Aug 28. doi:10.1155/2020/6805710

22. Ehteshami Bejnordi B, Veta M, Johannes van Diest P, et al. Diagnostic assessment of deep learning algorithms for detection of lymph node metastases in women with breast cancer. JAMA. 2017;318(22):2199-2210. doi:10.1001/jama.2017.14585

23. Borkowski AA, Viswanadhan NA, Thomas LB, Guzman RD, Deland LA, Mastorides SM. Using artificial intelligence for COVID-19 chest X-ray diagnosis. Fed Pract. 2020;37(9):398-404. doi:10.12788/fp.0045

24. Coudray N, Ocampo PS, Sakellaropoulos T, et al. Classification and mutation prediction from non-small cell lung cancer histopathology images using deep learning. Nat Med. 2018;24(10):1559-1567. doi:10.1038/s41591-018-0177-5

25. Kermany DS, Goldbaum M, Cai W, et al. Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell. 2018;172(5):1122-1131.e9. doi:10.1016/j.cell.2018.02.010

26. Liu X, Faes L, Kale AU, et al. A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis. Lancet Digit Health. 2019;1(6):e271-e297. doi:10.1016/S2589-7500(19)30123-2

27. Nagpal K, Foote D, Liu Y, et al. Development and validation of a deep learning algorithm for improving Gleason scoring of prostate cancer [published correction appears in NPJ Digit Med. 2019 Nov 19;2:113]. NPJ Digit Med. 2019;2:48. Published 2019 Jun 7. doi:10.1038/s41746-019-0112-2

28. Nam JG, Park S, Hwang EJ, et al. Development and validation of deep learning-based automatic detection algorithm for malignant pulmonary nodules on chest radiographs. Radiology. 2019;290(1):218-228. doi:10.1148/radiol.2018180237

29. Schwalbe N, Wahl B. Artificial intelligence and the future of global health. Lancet. 2020;395(10236):1579-1586. doi:10.1016/S0140-6736(20)30226-9

30. Bai HX, Wang R, Xiong Z, et al. Artificial intelligence augmentation of radiologist performance in distinguishing COVID-19 from pneumonia of other origin at chest CT [published correction appears in Radiology. 2021 Apr;299(1):E225]. Radiology. 2020;296(3):E156-E165. doi:10.1148/radiol.2020201491

31. Li L, Qin L, Xu Z, et al. Using artificial intelligence to detect COVID-19 and community-acquired pneumonia based on pulmonary CT: evaluation of the diagnostic accuracy. Radiology. 2020;296(2):E65-E71. doi:10.1148/radiol.2020200905

32. Serag A, Ion-Margineanu A, Qureshi H, et al. Translational AI and deep learning in diagnostic pathology. Front Med (Lausanne). 2019;6:185. Published 2019 Oct 1. doi:10.3389/fmed.2019.00185

33. Wang D, Khosla A, Gargeya R, Irshad H, Beck AH. Deep learning for identifying metastatic breast cancer. ArXiv. 2016 June 18:arXiv:1606.05718v1. Published online June 18, 2016. Accessed September 15, 2021. http://arxiv.org/abs/1606.05718

34. Alabdulkareem A. Artificial intelligence and dermatologists: friends or foes? J Dermatology Dermatol Surg. 2019;23(2):57-60. doi:10.4103/jdds.jdds_19_19

35. Mollalo A, Mao L, Rashidi P, Glass GE. A GIS-based artificial neural network model for spatial distribution of tuberculosis across the continental United States. Int J Environ Res Public Health. 2019;16(1):157. Published 2019 Jan 8. doi:10.3390/ijerph16010157

36. Haddawy P, Hasan AHMI, Kasantikul R, et al. Spatiotemporal Bayesian networks for malaria prediction. Artif Intell Med. 2018;84:127-138. doi:10.1016/j.artmed.2017.12.002

37. Laureano-Rosario AE, Duncan AP, Mendez-Lazaro PA, et al. Application of artificial neural networks for dengue fever outbreak predictions in the northwest coast of Yucatan, Mexico and San Juan, Puerto Rico. Trop Med Infect Dis. 2018;3(1):5. Published 2018 Jan 5. doi:10.3390/tropicalmed3010005

38. Buczak AL, Koshute PT, Babin SM, Feighner BH, Lewis SH. A data-driven epidemiological prediction method for dengue outbreaks using local and remote sensing data. BMC Med Inform Decis Mak. 2012;12:124. Published 2012 Nov 5. doi:10.1186/1472-6947-12-124

39. Scavuzzo JM, Trucco F, Espinosa M, et al. Modeling dengue vector population using remotely sensed data and machine learning. Acta Trop. 2018;185:167-175. doi:10.1016/j.actatropica.2018.05.003

40. Xue H, Bai Y, Hu H, Liang H. Influenza activity surveillance based on multiple regression model and artificial neural network. IEEE Access. 2018;6:563-575. doi:10.1109/ACCESS.2017.2771798

41. Jiang D, Hao M, Ding F, Fu J, Li M. Mapping the transmission risk of Zika virus using machine learning models. Acta Trop. 2018;185:391-399. doi:10.1016/j.actatropica.2018.06.021

42. Bragazzi NL, Dai H, Damiani G, Behzadifar M, Martini M, Wu J. How big data and artificial intelligence can help better manage the COVID-19 pandemic. Int J Environ Res Public Health. 2020;17(9):3176. Published 2020 May 2. doi:10.3390/ijerph17093176

43. Lake IR, Colón-González FJ, Barker GC, Morbey RA, Smith GE, Elliot AJ. Machine learning to refine decision making within a syndromic surveillance service. BMC Public Health. 2019;19(1):559. Published 2019 May 14. doi:10.1186/s12889-019-6916-9

44. Khan OF, Bebb G, Alimohamed NA. Artificial intelligence in medicine: what oncologists need to know about its potential-and its limitations. Oncol Exch. 2017;16(4):8-13. Accessed September 1, 2021. http://www.oncologyex.com/pdf/vol16_no4/feature_khan-ai.pdf

45. Badano LP, Keller DM, Muraru D, Torlasco C, Parati G. Artificial intelligence and cardiovascular imaging: A win-win combination. Anatol J Cardiol. 2020;24(4):214-223. doi:10.14744/AnatolJCardiol.2020.94491

46. Murdoch TB, Detsky AS. The inevitable application of big data to health care. JAMA. 2013;309(13):1351-1352. doi:10.1001/jama.2013.393

47. Greatbatch O, Garrett A, Snape K. The impact of artificial intelligence on the current and future practice of clinical cancer genomics. Genet Res (Camb). 2019;101:e9. Published 2019 Oct 31. doi:10.1017/S0016672319000089

48. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. 2019;25(1):44-56. doi:10.1038/s41591-018-0300-7

49. Vollmer S, Mateen BA, Bohner G, et al. Machine learning and artificial intelligence research for patient benefit: 20 critical questions on transparency, replicability, ethics, and effectiveness [published correction appears in BMJ. 2020 Apr 1;369:m1312]. BMJ. 2020;368:l6927. Published 2020 Mar 20. doi:10.1136/bmj.l6927

50. Lindsey R, Daluiski A, Chopra S, et al. Deep neural network improves fracture detection by clinicians. Proc Natl Acad Sci U S A. 2018;115(45):11591-11596. doi:10.1073/pnas.1806905115

51. Zech JR, Badgeley MA, Liu M, Costa AB, Titano JJ, Oermann EK. Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: a cross-sectional study. PLoS Med. 2018;15(11):e1002683. doi:10.1371/journal.pmed.1002683

52. Lakhani P, Sundaram B. Deep learning at chest radiography: automated classification of pulmonary tuberculosis by using convolutional neural networks. Radiology. 2017;284(2):574-582. doi:10.1148/radiol.2017162326

53. Rajpurkar P, Joshi A, Pareek A, et al. CheXpedition: investigating generalization challenges for translation of chest x-ray algorithms to the clinical setting. ArXiv. 2020 Feb 26:arXiv:2002.11379v2. Revised March 11, 2020. Accessed September 15, 2021. http://arxiv.org/abs/2002.11379

54. Salim M, Wåhlin E, Dembrower K, et al. External evaluation of 3 commercial artificial intelligence algorithms for independent assessment of screening mammograms. JAMA Oncol. 2020;6(10):1581-1588. doi:10.1001/jamaoncol.2020.3321

55. Arbabshirani MR, Fornwalt BK, Mongelluzzo GJ, et al. Advanced machine learning in action: identification of intracranial hemorrhage on computed tomography scans of the head with clinical workflow integration. NPJ Digit Med. 2018;1:9. doi:10.1038/s41746-017-0015-z

56. Sheth D, Giger ML. Artificial intelligence in the interpretation of breast cancer on MRI. J Magn Reson Imaging. 2020;51(5):1310-1324. doi:10.1002/jmri.26878

57. McKinney SM, Sieniek M, Godbole V, et al. International evaluation of an AI system for breast cancer screening. Nature. 2020;577(7788):89-94. doi:10.1038/s41586-019-1799-6

58. Booth AL, Abels E, McCaffrey P. Development of a prognostic model for mortality in COVID-19 infection using machine learning. Mod Pathol. 2021;34(3):522-531. doi:10.1038/s41379-020-00700-x

59. Xu B, Kocyigit D, Grimm R, Griffin BP, Cheng F. Applications of artificial intelligence in multimodality cardiovascular imaging: a state-of-the-art review. Prog Cardiovasc Dis. 2020;63(3):367-376. doi:10.1016/j.pcad.2020.03.003

60. Dey D, Slomka PJ, Leeson P, et al. Artificial intelligence in cardiovascular imaging: JACC state-of-the-art review. J Am Coll Cardiol. 2019;73(11):1317-1335. doi:10.1016/j.jacc.2018.12.054

61. Carewell Health. AI powered ECG diagnosis solutions. Accessed November 2, 2020. https://www.carewellhealth.com/products_aiecg.html

62. Strodthoff N, Strodthoff C. Detecting and interpreting myocardial infarction using fully convolutional neural networks. Physiol Meas. 2019;40(1):015001. doi:10.1088/1361-6579/aaf34d

63. Hannun AY, Rajpurkar P, Haghpanahi M, et al. Cardiologist-level arrhythmia detection and classification in ambulatory electrocardiograms using a deep neural network. Nat Med. 2019;25(1):65-69. doi:10.1038/s41591-018-0268-3

64. Kwon JM, Jeon KH, Kim HM, et al. Comparing the performance of artificial intelligence and conventional diagnosis criteria for detecting left ventricular hypertrophy using electrocardiography. Europace. 2020;22(3):412-419. doi:10.1093/europace/euz324

65. Eko. FDA clears Eko’s AFib and heart murmur detection algorithms, making it the first AI-powered stethoscope to screen for serious heart conditions [press release]. Published January 28, 2020. Accessed September 15, 2021. https://www.businesswire.com/news/home/20200128005232/en/FDA-Clears-Eko’s-AFib-and-Heart-Murmur-Detection-Algorithms-Making-It-the-First-AI-Powered-Stethoscope-to-Screen-for-Serious-Heart-Conditions

66. Cruz-Roa A, Gilmore H, Basavanhally A, et al. Accurate and reproducible invasive breast cancer detection in whole-slide images: a deep learning approach for quantifying tumor extent. Sci Rep. 2017;7:46450. doi:10.1038/srep46450

67. Acs B, Rantalainen M, Hartman J. Artificial intelligence as the next step towards precision pathology. J Intern Med. 2020;288(1):62-81. doi:10.1111/joim.13030

68. Mobadersany P, Yousefi S, Amgad M, et al. Predicting cancer outcomes from histology and genomics using convolutional networks. Proc Natl Acad Sci U S A. 2018;115(13):E2970-E2979. doi:10.1073/pnas.1717139115

69. Wang X, Janowczyk A, Zhou Y, et al. Prediction of recurrence in early stage non-small cell lung cancer using computer extracted nuclear features from digital H&E images. Sci Rep. 2017;7:13543. doi:10.1038/s41598-017-13773-7

70. Kulkarni PM, Robinson EJ, Pradhan JS, et al. Deep learning based on standard H&E images of primary melanoma tumors identifies patients at risk for visceral recurrence and death. Clin Cancer Res. 2020;26(5):1126-1134. doi:10.1158/1078-0432.CCR-19-1495

71. Du XL, Li WB, Hu BJ. Application of artificial intelligence in ophthalmology. Int J Ophthalmol. 2018;11(9):1555-1561. doi:10.18240/ijo.2018.09.21

72. Gunasekeran DV, Wong TY. Artificial intelligence in ophthalmology in 2020: a technology on the cusp for translation and implementation. Asia Pac J Ophthalmol (Phila). 2020;9(2):61-66. doi:10.1097/01.APO.0000656984.56467.2c

73. Ting DSW, Pasquale LR, Peng L, et al. Artificial intelligence and deep learning in ophthalmology. Br J Ophthalmol. 2019;103(2):167-175. doi:10.1136/bjophthalmol-2018-313173

74. Gulshan V, Peng L, Coram M, et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA. 2016;316(22):2402-2410. doi:10.1001/jama.2016.17216

75. US Food and Drug Administration. FDA permits marketing of artificial intelligence-based device to detect certain diabetes-related eye problems [press release]. Published April 11, 2018. Accessed September 15, 2021. https://www.fda.gov/news-events/press-announcements/fda-permits-marketing-artificial-intelligence-based-device-detect-certain-diabetes-related-eye

76. Long E, Chen J, Wu X, et al. Artificial intelligence manages congenital cataract with individualized prediction and telehealth computing. NPJ Digit Med. 2020;3:112. doi:10.1038/s41746-020-00319-x

77. De Fauw J, Ledsam JR, Romera-Paredes B, et al. Clinically applicable deep learning for diagnosis and referral in retinal disease. Nat Med. 2018;24(9):1342-1350. doi:10.1038/s41591-018-0107-6

78. Esteva A, Kuprel B, Novoa RA, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017;542(7639):115-118. doi:10.1038/nature21056

79. Brinker TJ, Hekler A, Enk AH, et al. Deep neural networks are superior to dermatologists in melanoma image classification. Eur J Cancer. 2019;119:11-17. doi:10.1016/j.ejca.2019.05.023

80. Brinker TJ, Hekler A, Enk AH, et al. A convolutional neural network trained with dermoscopic images performed on par with 145 dermatologists in a clinical melanoma image classification task. Eur J Cancer. 2019;111:148-154. doi:10.1016/j.ejca.2019.02.005

81. Haenssle HA, Fink C, Schneiderbauer R, et al. Man against machine: diagnostic performance of a deep learning convolutional neural network for dermoscopic melanoma recognition in comparison to 58 dermatologists. Ann Oncol. 2018;29(8):1836-1842. doi:10.1093/annonc/mdy166

82. Li CX, Shen CB, Xue K, et al. Artificial intelligence in dermatology: past, present, and future. Chin Med J (Engl). 2019;132(17):2017-2020. doi:10.1097/CM9.0000000000000372

83. Tschandl P, Codella N, Akay BN, et al. Comparison of the accuracy of human readers versus machine-learning algorithms for pigmented skin lesion classification: an open, web-based, international, diagnostic study. Lancet Oncol. 2019;20(7):938-947. doi:10.1016/S1470-2045(19)30333-X

84. Han SS, Park I, Eun Chang SE, et al. Augmented intelligence dermatology: deep neural networks empower medical professionals in diagnosing skin cancer and predicting treatment options for 134 skin disorders. J Invest Dermatol. 2020;140(9):1753-1761. doi:10.1016/j.jid.2020.01.019

85. Freeman K, Dinnes J, Chuchu N, et al. Algorithm based smartphone apps to assess risk of skin cancer in adults: systematic review of diagnostic accuracy studies [published correction appears in BMJ. 2020 Feb 25;368:m645]. BMJ. 2020;368:m127. Published 2020 Feb 10. doi:10.1136/bmj.m127

86. Chen YC, Ke WC, Chiu HW. Risk classification of cancer survival using ANN with gene expression data from multiple laboratories. Comput Biol Med. 2014;48:1-7. doi:10.1016/j.compbiomed.2014.02.006

87. Kim W, Kim KS, Lee JE, et al. Development of novel breast cancer recurrence prediction model using support vector machine. J Breast Cancer. 2012;15(2):230-238. doi:10.4048/jbc.2012.15.2.230

88. Merath K, Hyer JM, Mehta R, et al. Use of machine learning for prediction of patient risk of postoperative complications after liver, pancreatic, and colorectal surgery. J Gastrointest Surg. 2020;24(8):1843-1851. doi:10.1007/s11605-019-04338-2

89. Santos-García G, Varela G, Novoa N, Jiménez MF. Prediction of postoperative morbidity after lung resection using an artificial neural network ensemble. Artif Intell Med. 2004;30(1):61-69. doi:10.1016/S0933-3657(03)00059-9

90. Ibragimov B, Xing L. Segmentation of organs-at-risks in head and neck CT images using convolutional neural networks. Med Phys. 2017;44(2):547-557. doi:10.1002/mp.12045

91. Lou B, Doken S, Zhuang T, et al. An image-based deep learning framework for individualizing radiotherapy dose. Lancet Digit Health. 2019;1(3):e136-e147. doi:10.1016/S2589-7500(19)30058-5

92. Xu J, Yang P, Xue S, et al. Translating cancer genomics into precision medicine with artificial intelligence: applications, challenges and future perspectives. Hum Genet. 2019;138(2):109-124. doi:10.1007/s00439-019-01970-5

93. Patel NM, Michelini VV, Snell JM, et al. Enhancing next‐generation sequencing‐guided cancer care through cognitive computing. Oncologist. 2018;23(2):179-185. doi:10.1634/theoncologist.2017-0170

94. Le Berre C, Sandborn WJ, Aridhi S, et al. Application of artificial intelligence to gastroenterology and hepatology. Gastroenterology. 2020;158(1):76-94.e2. doi:10.1053/j.gastro.2019.08.058

95. Yang YJ, Bang CS. Application of artificial intelligence in gastroenterology. World J Gastroenterol. 2019;25(14):1666-1683. doi:10.3748/wjg.v25.i14.1666

96. Wang P, Berzin TM, Glissen Brown JR, et al. Real-time automatic detection system increases colonoscopic polyp and adenoma detection rates: a prospective randomised controlled study. Gut. 2019;68(10):1813-1819. doi:10.1136/gutjnl-2018-317500

97. Gupta R, Krishnam SP, Schaefer PW, Lev MH, Gonzalez RG. An East Coast perspective on artificial intelligence and machine learning: part 2: ischemic stroke imaging and triage. Neuroimaging Clin N Am. 2020;30(4):467-478. doi:10.1016/j.nic.2020.08.002

98. Belić M, Bobić V, Badža M, Šolaja N, Đurić-Jovičić M, Kostić VS. Artificial intelligence for assisting diagnostics and assessment of Parkinson’s disease—a review. Clin Neurol Neurosurg. 2019;184:105442. doi:10.1016/j.clineuro.2019.105442

99. An S, Kang C, Lee HW. Artificial intelligence and computational approaches for epilepsy. J Epilepsy Res. 2020;10(1):8-17. doi:10.14581/jer.20003

100. Pavel AM, Rennie JM, de Vries LS, et al. A machine-learning algorithm for neonatal seizure recognition: a multicentre, randomised, controlled trial. Lancet Child Adolesc Health. 2020;4(10):740-749. doi:10.1016/S2352-4642(20)30239-X

101. Afzal HMR, Luo S, Ramadan S, Lechner-Scott J. The emerging role of artificial intelligence in multiple sclerosis imaging [published online ahead of print, 2020 Oct 28]. Mult Scler. 2020;1352458520966298. doi:10.1177/1352458520966298

102. Bouton CE. Restoring movement in paralysis with a bioelectronic neural bypass approach: current state and future directions. Cold Spring Harb Perspect Med. 2019;9(11):a034306. doi:10.1101/cshperspect.a034306

103. Durstewitz D, Koppe G, Meyer-Lindenberg A. Deep neural networks in psychiatry. Mol Psychiatry. 2019;24(11):1583-1598. doi:10.1038/s41380-019-0365-9

104. Fonseka TM, Bhat V, Kennedy SH. The utility of artificial intelligence in suicide risk prediction and the management of suicidal behaviors. Aust N Z J Psychiatry. 2019;53(10):954-964. doi:10.1177/0004867419864428

105. Kessler RC, Hwang I, Hoffmire CA, et al. Developing a practical suicide risk prediction model for targeting high-risk patients in the Veterans Health Administration. Int J Methods Psychiatr Res. 2017;26(3):e1575. doi:10.1002/mpr.1575

106. Kessler RC, Bauer MS, Bishop TM, et al. Using administrative data to predict suicide after psychiatric hospitalization in the Veterans Health Administration System. Front Psychiatry. 2020;11:390. doi:10.3389/fpsyt.2020.00390

107. Kessler RC, van Loo HM, Wardenaar KJ, et al. Testing a machine-learning algorithm to predict the persistence and severity of major depressive disorder from baseline self-reports. Mol Psychiatry. 2016;21(10):1366-1371. doi:10.1038/mp.2015.198

108. Horng S, Sontag DA, Halpern Y, Jernite Y, Shapiro NI, Nathanson LA. Creating an automated trigger for sepsis clinical decision support at emergency department triage using machine learning. PLoS One. 2017;12(4):e0174708. doi:10.1371/journal.pone.0174708

109. Soffer S, Klang E, Barash Y, Grossman E, Zimlichman E. Predicting in-hospital mortality at admission to the medical ward: a big-data machine learning model. Am J Med. 2021;134(2):227-234.e4. doi:10.1016/j.amjmed.2020.07.014

110. Labovitz DL, Shafner L, Reyes Gil M, Virmani D, Hanina A. Using artificial intelligence to reduce the risk of nonadherence in patients on anticoagulation therapy. Stroke. 2017;48(5):1416-1419. doi:10.1161/STROKEAHA.116.016281

111. Forlenza GP. Use of artificial intelligence to improve diabetes outcomes in patients using multiple daily injections therapy. Diabetes Technol Ther. 2019;21(S2):S24-S28. doi:10.1089/dia.2019.0077

112. Poser CM. CT scan and the practice of neurology. Arch Neurol. 1977;34(2):132. doi:10.1001/archneur.1977.00500140086023

113. Angus DC. Randomized clinical trials of artificial intelligence. JAMA. 2020;323(11):1043-1045. doi:10.1001/jama.2020.1039

114. Topol EJ. Welcoming new guidelines for AI clinical research. Nat Med. 2020;26(9):1318-1320. doi:10.1038/s41591-020-1042-x

115. Collins GS, Moons KGM. Reporting of artificial intelligence prediction models. Lancet. 2019;393(10181):1577-1579. doi:10.1016/S0140-6736(19)30037-6

116. Cruz Rivera S, Liu X, Chan AW, et al. Guidelines for clinical trial protocols for interventions involving artificial intelligence: the SPIRIT-AI extension. Nat Med. 2020;26(9):1351-1363. doi:10.1038/s41591-020-1037-7

117. Liu X, Cruz Rivera S, Moher D, Calvert MJ, Denniston AK; SPIRIT-AI and CONSORT-AI Working Group. Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the CONSORT-AI extension. Nat Med. 2020;26(9):1364-1374. doi:10.1038/s41591-020-1034-x

118. McCulloch WS, Pitts W. A logical calculus of the ideas immanent in nervous activity. Bull Math Biophys. 1943;5(4):115-133. doi:10.1007/BF02478259

119. Samuel AL. Some studies in machine learning using the game of Checkers. IBM J Res Dev. 1959;3(3):535-554. Accessed September 15, 2021. https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.368.2254

120. Sonoda M, Takano M, Miyahara J, Kato H. Computed radiography utilizing scanning laser stimulated luminescence. Radiology. 1983;148(3):833-838. doi:10.1148/radiology.148.3.6878707

121. Dechter R. Learning while searching in constraint-satisfaction-problems. AAAI’86: proceedings of the fifth AAAI national conference on artificial intelligence. Published 1986. Accessed September 15, 2021. https://www.aaai.org/Papers/AAAI/1986/AAAI86-029.pdf

122. Le Cun Y, Jackel LD, Boser B, et al. Handwritten digit recognition: applications of neural network chips and automatic learning. IEEE Commun Mag. 1989;27(11):41-46. doi:10.1109/35.41400

123. US Food and Drug Administration. FDA allows marketing of first whole slide imaging system for digital pathology [press release]. Published April 12, 2017. Accessed September 15, 2021. https://www.fda.gov/news-events/press-announcements/fda-allows-marketing-first-whole-slide-imaging-system-digital-pathology

References

1. Bini SA. Artificial intelligence, machine learning, deep learning, and cognitive computing: what do these terms mean and how will they impact health care? J Arthroplasty. 2018;33(8):2358-2361. doi:10.1016/j.arth.2018.02.067

2. Benjamens S, Dhunnoo P, Meskó B. The state of artificial intelligence-based FDA-approved medical devices and algorithms: an online database. NPJ Digit Med. 2020;3:118. doi:10.1038/s41746-020-00324-0

3. Viz. AI powered synchronized stroke care. Accessed September 15, 2021. https://www.viz.ai/ischemic-stroke

4. Buchanan M. The law of accelerating returns. Nat Phys. 2008;4(7):507. doi:10.1038/nphys1010

5. IBM Watson Health computes a pair of new solutions to improve healthcare data and security. Published September 10, 2015. Accessed October 21, 2020. https://www.techrepublic.com/article/ibm-watson-health-computes-a-pair-of-new-solutions-to-improve-healthcare-data-and-security

6. Borkowski AA, Kardani A, Mastorides SM, Thomas LB. Warfarin pharmacogenomics: recommendations with available patented clinical technologies. Recent Pat Biotechnol. 2014;8(2):110-115. doi:10.2174/1872208309666140904112003

7. Washington University in St. Louis. Warfarin dosing. Accessed September 15, 2021. http://www.warfarindosing.org/Source/Home.aspx

8. He J, Baxter SL, Xu J, Xu J, Zhou X, Zhang K. The practical implementation of artificial intelligence technologies in medicine. Nat Med. 2019;25(1):30-36. doi:10.1038/s41591-018-0307-0

9. Jiang F, Jiang Y, Zhi H, et al. Artificial intelligence in healthcare: past, present and future. Stroke Vasc Neurol. 2017;2(4):230-243. Published 2017 Jun 21. doi:10.1136/svn-2017-000101

10. Johnson KW, Torres Soto J, Glicksberg BS, et al. Artificial intelligence in cardiology. J Am Coll Cardiol. 2018;71(23):2668-2679. doi:10.1016/j.jacc.2018.03.521

11. Borkowski AA, Wilson CP, Borkowski SA, et al. Comparing artificial intelligence platforms for histopathologic cancer diagnosis. Fed Pract. 2019;36(10):456-463.

12. Cruz-Roa A, Gilmore H, Basavanhally A, et al. High-throughput adaptive sampling for whole-slide histopathology image analysis (HASHI) via convolutional neural networks: application to invasive breast cancer detection. PLoS One. 2018;13(5):e0196828. Published 2018 May 24. doi:10.1371/journal.pone.0196828

13. Nagendran M, Chen Y, Lovejoy CA, et al. Artificial intelligence versus clinicians: systematic review of design, reporting standards, and claims of deep learning studies. BMJ. 2020;368:m689. Published 2020 Mar 25. doi:10.1136/bmj.m689

14. Shimizu H, Nakayama KI. Artificial intelligence in oncology. Cancer Sci. 2020;111(5):1452-1460. doi:10.1111/cas.14377

15. Talebi-Liasi F, Markowitz O. Is artificial intelligence going to replace dermatologists? Cutis. 2020;105(1):28-31.

16. Valliani AA, Ranti D, Oermann EK. Deep learning and neurology: a systematic review. Neurol Ther. 2019;8(2):351-365. doi:10.1007/s40120-019-00153-8

17. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521(7553):436-444. doi:10.1038/nature14539

18. Graham S, Depp C, Lee EE, et al. Artificial intelligence for mental health and mental illnesses: an overview. Curr Psychiatry Rep. 2019;21(11):116. Published 2019 Nov 7. doi:10.1007/s11920-019-1094-0

19. Golas SB, Shibahara T, Agboola S, et al. A machine learning model to predict the risk of 30-day readmissions in patients with heart failure: a retrospective analysis of electronic medical records data. BMC Med Inform Decis Mak. 2018;18(1):44. Published 2018 Jun 22. doi:10.1186/s12911-018-0620-z

20. Mortazavi BJ, Downing NS, Bucholz EM, et al. Analysis of machine learning techniques for heart failure readmissions. Circ Cardiovasc Qual Outcomes. 2016;9(6):629-640. doi:10.1161/CIRCOUTCOMES.116.003039

21. Meyer-Bäse A, Morra L, Meyer-Bäse U, Pinker K. Current status and future perspectives of artificial intelligence in magnetic resonance breast imaging. Contrast Media Mol Imaging. 2020;2020:6805710. Published 2020 Aug 28. doi:10.1155/2020/6805710

22. Ehteshami Bejnordi B, Veta M, Johannes van Diest P, et al. Diagnostic assessment of deep learning algorithms for detection of lymph node metastases in women with breast cancer. JAMA. 2017;318(22):2199-2210. doi:10.1001/jama.2017.14585

23. Borkowski AA, Viswanadhan NA, Thomas LB, Guzman RD, Deland LA, Mastorides SM. Using artificial intelligence for COVID-19 chest X-ray diagnosis. Fed Pract. 2020;37(9):398-404. doi:10.12788/fp.0045

24. Coudray N, Ocampo PS, Sakellaropoulos T, et al. Classification and mutation prediction from non-small cell lung cancer histopathology images using deep learning. Nat Med. 2018;24(10):1559-1567. doi:10.1038/s41591-018-0177-5

25. Kermany DS, Goldbaum M, Cai W, et al. Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell. 2018;172(5):1122-1131.e9. doi:10.1016/j.cell.2018.02.010

26. Liu X, Faes L, Kale AU, et al. A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis. Lancet Digit Health. 2019;1(6):e271-e297. doi:10.1016/S2589-7500(19)30123-2

27. Nagpal K, Foote D, Liu Y, et al. Development and validation of a deep learning algorithm for improving Gleason scoring of prostate cancer [published correction appears in NPJ Digit Med. 2019 Nov 19;2:113]. NPJ Digit Med. 2019;2:48. Published 2019 Jun 7. doi:10.1038/s41746-019-0112-2

28. Nam JG, Park S, Hwang EJ, et al. Development and validation of deep learning-based automatic detection algorithm for malignant pulmonary nodules on chest radiographs. Radiology. 2019;290(1):218-228. doi:10.1148/radiol.2018180237

29. Schwalbe N, Wahl B. Artificial intelligence and the future of global health. Lancet. 2020;395(10236):1579-1586. doi:10.1016/S0140-6736(20)30226-9

30. Bai HX, Wang R, Xiong Z, et al. Artificial intelligence augmentation of radiologist performance in distinguishing COVID-19 from pneumonia of other origin at chest CT [published correction appears in Radiology. 2021 Apr;299(1):E225]. Radiology. 2020;296(3):E156-E165. doi:10.1148/radiol.2020201491

31. Li L, Qin L, Xu Z, et al. Using artificial intelligence to detect COVID-19 and community-acquired pneumonia based on pulmonary CT: evaluation of the diagnostic accuracy. Radiology. 2020;296(2):E65-E71. doi:10.1148/radiol.2020200905

32. Serag A, Ion-Margineanu A, Qureshi H, et al. Translational AI and deep learning in diagnostic pathology. Front Med (Lausanne). 2019;6:185. Published 2019 Oct 1. doi:10.3389/fmed.2019.00185

33. Wang D, Khosla A, Gargeya R, Irshad H, Beck AH. Deep learning for identifying metastatic breast cancer. ArXiv. 2016 June 18:arXiv:1606.05718v1. Published online June 18, 2016. Accessed September 15, 2021. http://arxiv.org/abs/1606.05718

34. Alabdulkareem A. Artificial intelligence and dermatologists: friends or foes? J Dermatology Dermatol Surg. 2019;23(2):57-60. doi:10.4103/jdds.jdds_19_19

35. Mollalo A, Mao L, Rashidi P, Glass GE. A GIS-based artificial neural network model for spatial distribution of tuberculosis across the continental United States. Int J Environ Res Public Health. 2019;16(1):157. Published 2019 Jan 8. doi:10.3390/ijerph16010157

36. Haddawy P, Hasan AHMI, Kasantikul R, et al. Spatiotemporal Bayesian networks for malaria prediction. Artif Intell Med. 2018;84:127-138. doi:10.1016/j.artmed.2017.12.002

37. Laureano-Rosario AE, Duncan AP, Mendez-Lazaro PA, et al. Application of artificial neural networks for dengue fever outbreak predictions in the northwest coast of Yucatan, Mexico and San Juan, Puerto Rico. Trop Med Infect Dis. 2018;3(1):5. Published 2018 Jan 5. doi:10.3390/tropicalmed3010005

38. Buczak AL, Koshute PT, Babin SM, Feighner BH, Lewis SH. A data-driven epidemiological prediction method for dengue outbreaks using local and remote sensing data. BMC Med Inform Decis Mak. 2012;12:124. Published 2012 Nov 5. doi:10.1186/1472-6947-12-124

39. Scavuzzo JM, Trucco F, Espinosa M, et al. Modeling dengue vector population using remotely sensed data and machine learning. Acta Trop. 2018;185:167-175. doi:10.1016/j.actatropica.2018.05.003

40. Xue H, Bai Y, Hu H, Liang H. Influenza activity surveillance based on multiple regression model and artificial neural network. IEEE Access. 2018;6:563-575. doi:10.1109/ACCESS.2017.2771798

41. Jiang D, Hao M, Ding F, Fu J, Li M. Mapping the transmission risk of Zika virus using machine learning models. Acta Trop. 2018;185:391-399. doi:10.1016/j.actatropica.2018.06.021

42. Bragazzi NL, Dai H, Damiani G, Behzadifar M, Martini M, Wu J. How big data and artificial intelligence can help better manage the COVID-19 pandemic. Int J Environ Res Public Health. 2020;17(9):3176. Published 2020 May 2. doi:10.3390/ijerph17093176

43. Lake IR, Colón-González FJ, Barker GC, Morbey RA, Smith GE, Elliot AJ. Machine learning to refine decision making within a syndromic surveillance service. BMC Public Health. 2019;19(1):559. Published 2019 May 14. doi:10.1186/s12889-019-6916-9

44. Khan OF, Bebb G, Alimohamed NA. Artificial intelligence in medicine: what oncologists need to know about its potential-and its limitations. Oncol Exch. 2017;16(4):8-13. Accessed September 1, 2021. http://www.oncologyex.com/pdf/vol16_no4/feature_khan-ai.pdf

45. Badano LP, Keller DM, Muraru D, Torlasco C, Parati G. Artificial intelligence and cardiovascular imaging: A win-win combination. Anatol J Cardiol. 2020;24(4):214-223. doi:10.14744/AnatolJCardiol.2020.94491

46. Murdoch TB, Detsky AS. The inevitable application of big data to health care. JAMA. 2013;309(13):1351-1352. doi:10.1001/jama.2013.393

47. Greatbatch O, Garrett A, Snape K. The impact of artificial intelligence on the current and future practice of clinical cancer genomics. Genet Res (Camb). 2019;101:e9. Published 2019 Oct 31. doi:10.1017/S0016672319000089

48. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. 2019;25(1):44-56. doi:10.1038/s41591-018-0300-7

49. Vollmer S, Mateen BA, Bohner G, et al. Machine learning and artificial intelligence research for patient benefit: 20 critical questions on transparency, replicability, ethics, and effectiveness [published correction appears in BMJ. 2020 Apr 1;369:m1312]. BMJ. 2020;368:l6927. Published 2020 Mar 20. doi:10.1136/bmj.l6927

50. Lindsey R, Daluiski A, Chopra S, et al. Deep neural network improves fracture detection by clinicians. Proc Natl Acad Sci U S A. 2018;115(45):11591-11596. doi:10.1073/pnas.1806905115

51. Zech JR, Badgeley MA, Liu M, Costa AB, Titano JJ, Oermann EK. Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: a cross-sectional study. PLoS Med. 2018;15(11):e1002683. doi:10.1371/journal.pmed.1002683

52. Lakhani P, Sundaram B. Deep learning at chest radiography: automated classification of pulmonary tuberculosis by using convolutional neural networks. Radiology. 2017;284(2):574-582. doi:10.1148/radiol.2017162326

53. Rajpurkar P, Joshi A, Pareek A, et al. CheXpedition: investigating generalization challenges for translation of chest x-ray algorithms to the clinical setting. ArXiv. 2020 Feb 26:arXiv:2002.11379v2. Revised March 11, 2020. Accessed September 15, 2021. http://arxiv.org/abs/2002.11379

54. Salim M, Wåhlin E, Dembrower K, et al. External evaluation of 3 commercial artificial intelligence algorithms for independent assessment of screening mammograms. JAMA Oncol. 2020;6(10):1581-1588. doi:10.1001/jamaoncol.2020.3321

55. Arbabshirani MR, Fornwalt BK, Mongelluzzo GJ, et al. Advanced machine learning in action: identification of intracranial hemorrhage on computed tomography scans of the head with clinical workflow integration. NPJ Digit Med. 2018;1:9. doi:10.1038/s41746-017-0015-z

56. Sheth D, Giger ML. Artificial intelligence in the interpretation of breast cancer on MRI. J Magn Reson Imaging. 2020;51(5):1310-1324. doi:10.1002/jmri.26878

57. McKinney SM, Sieniek M, Godbole V, et al. International evaluation of an AI system for breast cancer screening. Nature. 2020;577(7788):89-94. doi:10.1038/s41586-019-1799-6

58. Booth AL, Abels E, McCaffrey P. Development of a prognostic model for mortality in COVID-19 infection using machine learning. Mod Pathol. 2021;34(3):522-531. doi:10.1038/s41379-020-00700-x

59. Xu B, Kocyigit D, Grimm R, Griffin BP, Cheng F. Applications of artificial intelligence in multimodality cardiovascular imaging: a state-of-the-art review. Prog Cardiovasc Dis. 2020;63(3):367-376. doi:10.1016/j.pcad.2020.03.003

60. Dey D, Slomka PJ, Leeson P, et al. Artificial intelligence in cardiovascular imaging: JACC state-of-the-art review. J Am Coll Cardiol. 2019;73(11):1317-1335. doi:10.1016/j.jacc.2018.12.054

61. Carewell Health. AI powered ECG diagnosis solutions. Accessed November 2, 2020. https://www.carewellhealth.com/products_aiecg.html

62. Strodthoff N, Strodthoff C. Detecting and interpreting myocardial infarction using fully convolutional neural networks. Physiol Meas. 2019;40(1):015001. doi:10.1088/1361-6579/aaf34d

63. Hannun AY, Rajpurkar P, Haghpanahi M, et al. Cardiologist-level arrhythmia detection and classification in ambulatory electrocardiograms using a deep neural network. Nat Med. 2019;25(1):65-69. doi:10.1038/s41591-018-0268-3

64. Kwon JM, Jeon KH, Kim HM, et al. Comparing the performance of artificial intelligence and conventional diagnosis criteria for detecting left ventricular hypertrophy using electrocardiography. Europace. 2020;22(3):412-419. doi:10.1093/europace/euz324

65. Eko. FDA clears Eko’s AFib and heart murmur detection algorithms, making it the first AI-powered stethoscope to screen for serious heart conditions [press release]. Published January 28, 2020. Accessed September 15, 2021. https://www.businesswire.com/news/home/20200128005232/en/FDA-Clears-Eko’s-AFib-and-Heart-Murmur-Detection-Algorithms-Making-It-the-First-AI-Powered-Stethoscope-to-Screen-for-Serious-Heart-Conditions

66. Cruz-Roa A, Gilmore H, Basavanhally A, et al. Accurate and reproducible invasive breast cancer detection in whole-slide images: a deep learning approach for quantifying tumor extent. Sci Rep. 2017;7:46450. doi:10.1038/srep46450

67. Acs B, Rantalainen M, Hartman J. Artificial intelligence as the next step towards precision pathology. J Intern Med. 2020;288(1):62-81. doi:10.1111/joim.13030

68. Mobadersany P, Yousefi S, Amgad M, et al. Predicting cancer outcomes from histology and genomics using convolutional networks. Proc Natl Acad Sci U S A. 2018;115(13):E2970-E2979. doi:10.1073/pnas.1717139115

69. Wang X, Janowczyk A, Zhou Y, et al. Prediction of recurrence in early stage non-small cell lung cancer using computer extracted nuclear features from digital H&E images. Sci Rep. 2017;7:13543. doi:10.1038/s41598-017-13773-7

70. Kulkarni PM, Robinson EJ, Pradhan JS, et al. Deep learning based on standard H&E images of primary melanoma tumors identifies patients at risk for visceral recurrence and death. Clin Cancer Res. 2020;26(5):1126-1134. doi:10.1158/1078-0432.CCR-19-1495

71. Du XL, Li WB, Hu BJ. Application of artificial intelligence in ophthalmology. Int J Ophthalmol. 2018;11(9):1555-1561. doi:10.18240/ijo.2018.09.21

72. Gunasekeran DV, Wong TY. Artificial intelligence in ophthalmology in 2020: a technology on the cusp for translation and implementation. Asia Pac J Ophthalmol (Phila). 2020;9(2):61-66. doi:10.1097/01.APO.0000656984.56467.2c

73. Ting DSW, Pasquale LR, Peng L, et al. Artificial intelligence and deep learning in ophthalmology. Br J Ophthalmol. 2019;103(2):167-175. doi:10.1136/bjophthalmol-2018-313173

74. Gulshan V, Peng L, Coram M, et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA. 2016;316(22):2402-2410. doi:10.1001/jama.2016.17216

75. US Food and Drug Administration. FDA permits marketing of artificial intelligence-based device to detect certain diabetes-related eye problems [press release]. Published April 11, 2018. Accessed September 15, 2021. https://www.fda.gov/news-events/press-announcements/fda-permits-marketing-artificial-intelligence-based-device-detect-certain-diabetes-related-eye

76. Long E, Chen J, Wu X, et al. Artificial intelligence manages congenital cataract with individualized prediction and telehealth computing. NPJ Digit Med. 2020;3:112. doi:10.1038/s41746-020-00319-x

77. De Fauw J, Ledsam JR, Romera-Paredes B, et al. Clinically applicable deep learning for diagnosis and referral in retinal disease. Nat Med. 2018;24(9):1342-1350. doi:10.1038/s41591-018-0107-6

78. Esteva A, Kuprel B, Novoa RA, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017;542(7639):115-118. doi:10.1038/nature21056

79. Brinker TJ, Hekler A, Enk AH, et al. Deep neural networks are superior to dermatologists in melanoma image classification. Eur J Cancer. 2019;119:11-17. doi:10.1016/j.ejca.2019.05.023

80. Brinker TJ, Hekler A, Enk AH, et al. A convolutional neural network trained with dermoscopic images performed on par with 145 dermatologists in a clinical melanoma image classification task. Eur J Cancer. 2019;111:148-154. doi:10.1016/j.ejca.2019.02.005

81. Haenssle HA, Fink C, Schneiderbauer R, et al. Man against machine: diagnostic performance of a deep learning convolutional neural network for dermoscopic melanoma recognition in comparison to 58 dermatologists. Ann Oncol. 2018;29(8):1836-1842. doi:10.1093/annonc/mdy166

82. Li CX, Shen CB, Xue K, et al. Artificial intelligence in dermatology: past, present, and future. Chin Med J (Engl). 2019;132(17):2017-2020. doi:10.1097/CM9.0000000000000372

83. Tschandl P, Codella N, Akay BN, et al. Comparison of the accuracy of human readers versus machine-learning algorithms for pigmented skin lesion classification: an open, web-based, international, diagnostic study. Lancet Oncol. 2019;20(7):938-947. doi:10.1016/S1470-2045(19)30333-X

84. Han SS, Park I, Eun Chang SE, et al. Augmented intelligence dermatology: deep neural networks empower medical professionals in diagnosing skin cancer and predicting treatment options for 134 skin disorders. J Invest Dermatol. 2020;140(9):1753-1761. doi:10.1016/j.jid.2020.01.019

85. Freeman K, Dinnes J, Chuchu N, et al. Algorithm based smartphone apps to assess risk of skin cancer in adults: systematic review of diagnostic accuracy studies [published correction appears in BMJ. 2020 Feb 25;368:m645]. BMJ. 2020;368:m127. Published 2020 Feb 10. doi:10.1136/bmj.m127

86. Chen YC, Ke WC, Chiu HW. Risk classification of cancer survival using ANN with gene expression data from multiple laboratories. Comput Biol Med. 2014;48:1-7. doi:10.1016/j.compbiomed.2014.02.006

87. Kim W, Kim KS, Lee JE, et al. Development of novel breast cancer recurrence prediction model using support vector machine. J Breast Cancer. 2012;15(2):230-238. doi:10.4048/jbc.2012.15.2.230

88. Merath K, Hyer JM, Mehta R, et al. Use of machine learning for prediction of patient risk of postoperative complications after liver, pancreatic, and colorectal surgery. J Gastrointest Surg. 2020;24(8):1843-1851. doi:10.1007/s11605-019-04338-2

89. Santos-García G, Varela G, Novoa N, Jiménez MF. Prediction of postoperative morbidity after lung resection using an artificial neural network ensemble. Artif Intell Med. 2004;30(1):61-69. doi:10.1016/S0933-3657(03)00059-9

90. Ibragimov B, Xing L. Segmentation of organs-at-risks in head and neck CT images using convolutional neural networks. Med Phys. 2017;44(2):547-557. doi:10.1002/mp.12045

91. Lou B, Doken S, Zhuang T, et al. An image-based deep learning framework for individualizing radiotherapy dose. Lancet Digit Health. 2019;1(3):e136-e147. doi:10.1016/S2589-7500(19)30058-5

92. Xu J, Yang P, Xue S, et al. Translating cancer genomics into precision medicine with artificial intelligence: applications, challenges and future perspectives. Hum Genet. 2019;138(2):109-124. doi:10.1007/s00439-019-01970-5

93. Patel NM, Michelini VV, Snell JM, et al. Enhancing next‐generation sequencing‐guided cancer care through cognitive computing. Oncologist. 2018;23(2):179-185. doi:10.1634/theoncologist.2017-0170

94. Le Berre C, Sandborn WJ, Aridhi S, et al. Application of artificial intelligence to gastroenterology and hepatology. Gastroenterology. 2020;158(1):76-94.e2. doi:10.1053/j.gastro.2019.08.058

95. Yang YJ, Bang CS. Application of artificial intelligence in gastroenterology. World J Gastroenterol. 2019;25(14):1666-1683. doi:10.3748/wjg.v25.i14.1666

96. Wang P, Berzin TM, Glissen Brown JR, et al. Real-time automatic detection system increases colonoscopic polyp and adenoma detection rates: a prospective randomised controlled study. Gut. 2019;68(10):1813-1819. doi:10.1136/gutjnl-2018-317500

97. Gupta R, Krishnam SP, Schaefer PW, Lev MH, Gonzalez RG. An East Coast perspective on artificial intelligence and machine learning: part 2: ischemic stroke imaging and triage. Neuroimaging Clin N Am. 2020;30(4):467-478. doi:10.1016/j.nic.2020.08.002

98. Belić M, Bobić V, Badža M, Šolaja N, Đurić-Jovičić M, Kostić VS. Artificial intelligence for assisting diagnostics and assessment of Parkinson’s disease—a review. Clin Neurol Neurosurg. 2019;184:105442. doi:10.1016/j.clineuro.2019.105442

99. An S, Kang C, Lee HW. Artificial intelligence and computational approaches for epilepsy. J Epilepsy Res. 2020;10(1):8-17. doi:10.14581/jer.20003

100. Pavel AM, Rennie JM, de Vries LS, et al. A machine-learning algorithm for neonatal seizure recognition: a multicentre, randomised, controlled trial. Lancet Child Adolesc Health. 2020;4(10):740-749. doi:10.1016/S2352-4642(20)30239-X

101. Afzal HMR, Luo S, Ramadan S, Lechner-Scott J. The emerging role of artificial intelligence in multiple sclerosis imaging [published online ahead of print, 2020 Oct 28]. Mult Scler. 2020;1352458520966298. doi:10.1177/1352458520966298

102. Bouton CE. Restoring movement in paralysis with a bioelectronic neural bypass approach: current state and future directions. Cold Spring Harb Perspect Med. 2019;9(11):a034306. doi:10.1101/cshperspect.a034306

103. Durstewitz D, Koppe G, Meyer-Lindenberg A. Deep neural networks in psychiatry. Mol Psychiatry. 2019;24(11):1583-1598. doi:10.1038/s41380-019-0365-9

104. Fonseka TM, Bhat V, Kennedy SH. The utility of artificial intelligence in suicide risk prediction and the management of suicidal behaviors. Aust N Z J Psychiatry. 2019;53(10):954-964. doi:10.1177/0004867419864428

105. Kessler RC, Hwang I, Hoffmire CA, et al. Developing a practical suicide risk prediction model for targeting high-risk patients in the Veterans Health Administration. Int J Methods Psychiatr Res. 2017;26(3):e1575. doi:10.1002/mpr.1575

106. Kessler RC, Bauer MS, Bishop TM, et al. Using administrative data to predict suicide after psychiatric hospitalization in the Veterans Health Administration System. Front Psychiatry. 2020;11:390. doi:10.3389/fpsyt.2020.00390

107. Kessler RC, van Loo HM, Wardenaar KJ, et al. Testing a machine-learning algorithm to predict the persistence and severity of major depressive disorder from baseline self-reports. Mol Psychiatry. 2016;21(10):1366-1371. doi:10.1038/mp.2015.198

108. Horng S, Sontag DA, Halpern Y, Jernite Y, Shapiro NI, Nathanson LA. Creating an automated trigger for sepsis clinical decision support at emergency department triage using machine learning. PLoS One. 2017;12(4):e0174708. doi:10.1371/journal.pone.0174708

109. Soffer S, Klang E, Barash Y, Grossman E, Zimlichman E. Predicting in-hospital mortality at admission to the medical ward: a big-data machine learning model. Am J Med. 2021;134(2):227-234.e4. doi:10.1016/j.amjmed.2020.07.014

110. Labovitz DL, Shafner L, Reyes Gil M, Virmani D, Hanina A. Using artificial intelligence to reduce the risk of nonadherence in patients on anticoagulation therapy. Stroke. 2017;48(5):1416-1419. doi:10.1161/STROKEAHA.116.016281

111. Forlenza GP. Use of artificial intelligence to improve diabetes outcomes in patients using multiple daily injections therapy. Diabetes Technol Ther. 2019;21(S2):S24-S28. doi:10.1089/dia.2019.0077

112. Poser CM. CT scan and the practice of neurology. Arch Neurol. 1977;34(2):132. doi:10.1001/archneur.1977.00500140086023

113. Angus DC. Randomized clinical trials of artificial intelligence. JAMA. 2020;323(11):1043-1045. doi:10.1001/jama.2020.1039

114. Topol EJ. Welcoming new guidelines for AI clinical research. Nat Med. 2020;26(9):1318-1320. doi:10.1038/s41591-020-1042-x

115. Collins GS, Moons KGM. Reporting of artificial intelligence prediction models. Lancet. 2019;393(10181):1577-1579. doi:10.1016/S0140-6736(19)30037-6

116. Cruz Rivera S, Liu X, Chan AW, et al. Guidelines for clinical trial protocols for interventions involving artificial intelligence: the SPIRIT-AI extension. Nat Med. 2020;26(9):1351-1363. doi:10.1038/s41591-020-1037-7

117. Liu X, Cruz Rivera S, Moher D, Calvert MJ, Denniston AK; SPIRIT-AI and CONSORT-AI Working Group. Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the CONSORT-AI extension. Nat Med. 2020;26(9):1364-1374. doi:10.1038/s41591-020-1034-x

118. McCulloch WS, Pitts W. A logical calculus of the ideas immanent in nervous activity. Bull Math Biophys. 1943;5(4):115-133. doi:10.1007/BF02478259

119. Samuel AL. Some studies in machine learning using the game of Checkers. IBM J Res Dev. 1959;3(3):535-554. Accessed September 15, 2021. https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.368.2254

120. Sonoda M, Takano M, Miyahara J, Kato H. Computed radiography utilizing scanning laser stimulated luminescence. Radiology. 1983;148(3):833-838. doi:10.1148/radiology.148.3.6878707

121. Dechter R. Learning while searching in constraint-satisfaction-problems. AAAI’86: proceedings of the fifth AAAI national conference on artificial intelligence. Published 1986. Accessed September 15, 2021. https://www.aaai.org/Papers/AAAI/1986/AAAI86-029.pdf

122. Le Cun Y, Jackel LD, Boser B, et al. Handwritten digit recognition: applications of neural network chips and automatic learning. IEEE Commun Mag. 1989;27(11):41-46. doi:10.1109/35.41400

123. US Food and Drug Administration. FDA allows marketing of first whole slide imaging system for digital pathology [press release]. Published April 12, 2017. Accessed September 15, 2021. https://www.fda.gov/news-events/press-announcements/fda-allows-marketing-first-whole-slide-imaging-system-digital-pathology

Issue
Federal Practitioner - 38(11)a
Page Number
527-538

Role of 3D Printing and Modeling to Aid in Neuroradiology Education for Medical Trainees

Applications of 3-dimensional (3D) printing in medical imaging and health care are expanding. 3D printing serves a variety of roles and is used increasingly in presurgical planning, as patient-specific medical models can be created from an individual patient’s imaging data.1 These patient-specific models may assist in medical trainee education, decrease operating room time, improve patient education before planned surgery, and guide clinicians in optimizing therapy.1,2 This article discusses how 3D printing has been used at a single institution to enhance neuroradiology education.

Background

As digital imaging and 3D printing have grown in popularity, the use of imaging data to guide patient therapy has shown significant promise. Computed tomography (CT) is well suited to creating 3D anatomical models: it is widely used in the medical setting, offers excellent resolution on the millimeter scale, and can readily pinpoint pathology on imaging.

Image Acquisition

CT scans can be obtained rapidly, which adds significant value, particularly in the context of point-of-care 3D printing. Magnetic resonance imaging (MRI) is another modality commonly used for 3D printing and, unlike CT, does not expose the patient to ionizing radiation. The 3D printing process begins with patient-specific CT or MRI data stored in the Digital Imaging and Communications in Medicine (DICOM) format, the international standard for the communication and management of medical imaging information and related data. DICOM allows faster, more robust collaboration among imaging professionals.3


Image Processing 

To print 3D anatomical models, patient-specific data must be converted from DICOM into standard tessellation language (STL) format, which can be created and edited with a variety of software packages.3 At James A. Haley Veterans’ Hospital in Tampa, Florida, we use an image processing package that includes Materialise 3-matic and the interactive medical image control system (Mimics). Image quality is essential; therefore, details such as pixel dimensions, slice thickness, and slice increment must be considered carefully.3,4
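To illustrate why these acquisition details matter, the sketch below (our own illustration, not part of any vendor workflow; the function name and parameter values are hypothetical) summarizes a scan's voxel geometry before segmentation. Strongly anisotropic voxels coarsen the model along one axis, and a slice increment larger than the slice thickness leaves unscanned gaps between slices.

```python
# Hypothetical helper: flags voxel-geometry problems that degrade a 3D print.
# All values below are illustrative, not clinical recommendations.

def voxel_report(pixel_spacing_mm, slice_thickness_mm, slice_increment_mm):
    """Summarize voxel geometry from basic DICOM-style acquisition values."""
    row_mm, col_mm = pixel_spacing_mm
    return {
        # Effective voxel size: in-plane spacing x distance between slices.
        "voxel_mm": (row_mm, col_mm, slice_increment_mm),
        # Isotropic voxels reproduce equally well in every print direction.
        "isotropic": abs(row_mm - col_mm) < 1e-6
                     and abs(row_mm - slice_increment_mm) < 1e-6,
        # An increment beyond the thickness leaves unscanned gaps (voids).
        "gap_between_slices": slice_increment_mm > slice_thickness_mm,
        # Overlapping slices are safe for printing but oversample the data.
        "overlapping_slices": slice_increment_mm < slice_thickness_mm,
    }

# A 0.5 x 0.5 mm in-plane grid with contiguous 3 mm slices:
# no gaps, but strongly anisotropic voxels.
print(voxel_report((0.5, 0.5), 3.0, 3.0))
```

A quick check like this, run before segmentation, can prompt a thin-section reconstruction request rather than a post hoc fix of a stair-stepped model.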

An STL file represents a 3D shape as a mesh of triangle approximations. The entire surface is made of many triangles, larger or smaller depending on the slice thickness and, therefore, on the quality of the original radiologic image. The size and position of the triangles can be varied to approximate the object's shape: the smaller the triangles, the better the fidelity, and vice versa. The concept is analogous to approximating a circle using straight lines of equal length, where more, shorter lines yield a better approximation of the circle (Figure 1).5,6 Similarly, smaller triangles approximate the imaged surface more closely. Because the human body is a complex, nongeometric structure, reproducing it requires a format able to describe arbitrary shapes, which the triangle mesh of a 3D STL file makes possible.
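The circle analogy can be made concrete in a few lines of Python. The perimeter of a regular polygon inscribed in a unit circle is n x 2 x sin(pi/n), so the gap to the true circumference 2*pi shrinks as the number of segments grows, just as finer triangles track a curved surface more closely:

```python
import math

def inscribed_perimeter(n, r=1.0):
    """Perimeter of a regular n-gon inscribed in a circle of radius r:
    n chords, each of length 2*r*sin(pi/n)."""
    return n * 2.0 * r * math.sin(math.pi / n)

# As n grows, the polygon's perimeter approaches the circumference 2*pi*r,
# mirroring how smaller STL triangles approach the true anatomy.
true_circumference = 2.0 * math.pi
for n in (6, 24, 96, 384):
    err = true_circumference - inscribed_perimeter(n)
    print(f"{n:4d} segments -> error {err:.6f}")
```

The error falls off roughly with the square of the segment count, which is why halving triangle size sharply improves model fidelity while multiplying file size.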

The creation of an STL file from DICOM data starts with a threshold-based segmentation process followed by additional fine-tuning and edits, and ends in the creation of a 3D part. The initial segmentation can be created with the threshold tool, using a Hounsfield unit range based on the area of interest desired (eg, bone, blood, fat). This is used to create an initial mask, which can be further optimized. The region grow tool allows the user to focus the segmentation by discarding areas that are not directly connected to the region of interest. In contrast, the split mask tool divides areas that are connected. Next, fine-tuning the segmentation using tools such as multiple slice edit helps to optimize the model. After all edits are made, the calculate part tool converts the mask into a 3D component that can be used in downstream applications. For the purposes of demonstration and proof of concept, the models provided in this article were created via open-source hardware designs under free or open licenses.7-9
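The threshold-and-grow sequence above can be sketched on a synthetic volume. This is a minimal illustration with NumPy and SciPy, not the commercial tools named in the text; `threshold_mask` and `region_grow` are our own stand-ins, and the Hounsfield range and array contents are illustrative only.

```python
import numpy as np
from scipy import ndimage

def threshold_mask(volume_hu, lo, hi):
    """'Threshold tool' analog: keep voxels within a Hounsfield unit range."""
    return (volume_hu >= lo) & (volume_hu <= hi)

def region_grow(mask):
    """'Region grow tool' analog: keep only the largest connected component,
    discarding islands not attached to the region of interest."""
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    return labels == (int(np.argmax(sizes)) + 1)

# Synthetic "scan": air background, a bone-density block, and a stray speck.
vol = np.full((20, 20, 20), -1000.0)   # air
vol[5:15, 5:15, 5:15] = 700.0          # target "bone"
vol[0, 0, 0] = 700.0                   # disconnected bright voxel

mask = region_grow(threshold_mask(vol, 300, 1900))
print(mask.sum())   # only the connected block survives the region grow
```

In a real workflow this binary mask would then be refined slice by slice and converted to a mesh (the "calculate part" step) before export to STL.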

3D Printing in Neuroradiology Education

Neuroradiologists focus on diagnosing pathology of the brain, head and neck, and spine, with CT and MRI as the primary diagnostic modalities. 3D printing is a useful tool for the trainee who wishes to understand neuroanatomy fully and to better appreciate imaging pathology as it relates to 3D anatomy. Head and neck imaging is a particularly complex subdiscipline of neuroradiology; mastering it often requires training beyond radiology residency, such as a neuroradiology fellowship focused on head and neck imaging.


3D printing has the potential to improve the understanding of various imaging pathologies by providing the trainee with a more in-depth appreciation of the anterior, middle, and posterior cranial fossae; the skull base foramina (eg, foramen ovale, spinosum, rotundum); and complex 3D regions, such as the pterygopalatine fossa, all of which are critical areas to investigate on imaging. Figure 2 highlights how a complex anatomical structure, such as the sphenoid bone, when printed in 3D can be correlated with cross-sectional CT images to supplement the educational experience.

Correlation of the Sphenoid Bone Between Computed Tomography and 3-Dimensional Model


Furthermore, the various lobes, sulci, and gyri of the brain and cerebellum, and how they interrelate with nearby vasculature and bony structures, can be difficult for early trainees to conceptualize. A 3D-printed cerebellum and its relation to the brainstem are illustrated in Figure 3A. Additional complex head and neck structures, including the membranous and bony labyrinth and ossicles of the middle ear and multiple views of the mandible, are shown in Figures 3B through 3E.

Models of Complex Structures of the Head and Neck


3D printing in the context of neurovascular pathology holds great promise, particularly as these models can provide the trainee, patient, and proceduralist with essential details, such as the appearance and morphology of an intracranial aneurysm, the size of the aneurysm neck and its relationship to surrounding vessels, incorporation of vessels emanating from the aneurysmal sac, and details of the aneurysm dome. For example, the normal circle of Willis in Figure 4A is juxtaposed with an example of a saccular internal carotid artery aneurysm (Figure 4B).

Normal Intracranial Vasculature vs a Pathologic Aneurysm Model


A variety of conditions can affect the bony spine, including degenerative, traumatic, neoplastic, and inflammatory etiologies. CT of the spine is readily used to detect these conditions and often is used in the initial evaluation of trauma, as indicated in the American College of Radiology appropriateness criteria.10 In addition, MRI is used to evaluate the spinal cord, further define spinal stenosis, and evaluate radiculopathy. An appreciation of the bony and soft tissue structures of the spine can be gained with the use of 3D models (Figure 5).

Trainees can further their understanding of approaches in spinal procedures, including lumbar puncture, myelography, and facet injections. A variety of approaches to access the spinal canal have been documented, such as interspinous, paraspinous, and interlaminar oblique; 3D-printed models can aid in practicing these procedures.11 For example, a water-filled tube can be inserted into the vertebral canal to provide realistic tactile feedback for simulation of a lumbar puncture. An appreciation of the 3D anatomy can guide the clinician on the optimal approach, which can help limit time and potentially improve outcomes.

Lumbar Spine 3-Dimensional Model

Future Directions

Artificial intelligence (AI) offers the ability to teach computers to perform tasks that ordinarily require human intelligence. In the context of 3D printing, using AI to convert and process DICOM data into printable STL models holds significant promise. Currently, manual conversion of a DICOM file into a segmented 3D model may take several days and demands many productive hours, even from an experienced imaging and engineering champion. If machines could aid in this process, clinical 3D printing could scale readily and achieve widespread adoption. Several studies are already exploring how deep learning networks might automatically recognize lesions on medical imaging to assist a human operator, potentially cutting hours from the clinical 3D printing workflow.12,13

Furthermore, there are several applications for AI upstream of 3D model creation. A number of AI tools are already in use at the CT and MRI scanner: current strategies leverage deep learning and advances in neural networks to improve image quality and to create thin-section DICOM data that can be converted into printable 3D files. AI-driven automation also can improve production capacity by assessing material costs and ensuring cost efficiency, which will be critical as point-of-care 3D printing gains widespread adoption. Finally, AI can reduce printing errors through automated adaptive feedback, with machine learning searching for possible print errors and feeding results back to the printer to maintain appropriate settings (eg, temperature and environmental conditions).

Conclusions

Based on this single-institution experience, 3D printing of complex neuroanatomical structures seems feasible and may enhance resident education and patient safety. Interested trainees may have the opportunity to learn and be involved in the printing of new and innovative designs. Further studies may involve printing various pathologic processes and applying these same steps and principles to other subspecialties of radiology. Finally, AI has the potential to advance the 3D printing process in the future.

References

1. Rengier F, Mehndiratta A, von Tengg-Kobligk H, et al. 3D printing based on imaging data: review of medical applications. Int J Comput Assist Radiol Surg. 2010;5(4):335-341. doi:10.1007/s11548-010-0476-x

2. Perica E, Sun Z. Patient-specific three-dimensional printing for pre-surgical planning in hepatocellular carcinoma treatment. Quant Imaging Med Surg. 2017;7(6):668-677. doi:10.21037/qims.2017.11.02

3. Hwang JJ, Jung Y-H, Cho B-H. The need for DICOM encapsulation of 3D scanning STL data. Imaging Sci Dent. 2018;48(4):301-302. doi:10.5624/isd.2018.48.4.301

4. Whyms BJ, Vorperian HK, Gentry LR, Schimek EM, Bersu ET, Chung MK. The effect of computed tomographic scanner parameters and 3-dimensional volume rendering techniques on the accuracy of linear, angular, and volumetric measurements of the mandible. Oral Surg Oral Med Oral Pathol Oral Radiol. 2013;115(5):682-691. doi:10.1016/j.oooo.2013.02.008

5. Materialise Cloud. Triangle reduction. Accessed May 20, 2021. https://cloud.materialise.com/tools/triangle-reduction

6. Comaneanu RM, Tarcolea M, Vlasceanu D, Cotrut MC. Virtual 3D reconstruction, diagnosis and surgical planning with Mimics software. Int J Nano Biomaterials. 2012;4(1):69-77.

7. Thingiverse: Digital designs for physical objects. Accessed May 20, 2021. https://www.thingiverse.com

8. Cults. Download for free 3D models for 3D printers. Accessed May 20, 2021. https://cults3d.com/en

9. yeggi. Search engine for 3D printer models. Accessed May 20, 2021. https://www.yeggi.com

10. Expert Panel on Neurological Imaging and Musculoskeletal Imaging; Beckmann NM, West OC, Nunez D, et al. ACR appropriateness criteria suspected spine trauma. J Am Coll Radiol. 2019;16(5):S264-S285. doi:10.1016/j.jacr.2019.02.002

11. McKinney AM. Normal variants of the lumbar and sacral spine. In: Atlas of Head/Neck and Spine Normal Imaging Variants. Springer; 2018:263-321.

12. Sollini M, Bartoli F, Marciano A, et al. Artificial intelligence and hybrid imaging: the best match for personalized medicine in oncology. Eur J Hybrid Imaging. 2020;4(1):24. doi:10.1186/s41824-020-00094-8

13. Küstner T, Hepp T, Fischer M, et al. Fully automated and standardized segmentation of adipose tissue compartments via deep learning in 3D whole-body MRI of epidemiologic cohort studies. Radiol Artif Intell. 2020;2(6):e200010. doi:10.1148/ryai.2020200010

Author and Disclosure Information

Michael Markovitz and Sen Lu are Radiology Resident Physicians at the University of South Florida in Tampa. Narayan Viswanadhan is Assistant Chief of Radiology at James A. Haley Veterans’ Hospital in Tampa.
Correspondence: Michael Markovitz ([email protected])

Author disclosures
The authors report no actual or potential conflicts of interest with regard to this article.

Disclaimer
The opinions expressed herein are those of the authors and do not necessarily reflect those of Federal Practitioner, Frontline Medical Communications Inc., the US Government, or any of its agencies.

Issue
Federal Practitioner - 38(6)a
Page Number
256-260

Applications of 3-dimensional (3D) printing in medical imaging and health care are expanding. 3D printing may serve a variety of roles and is used increasingly in the context of presurgical planning, as specific medical models may be created using individual patient imaging data.1 These patient-specific models may assist in medical trainee education, decrease operating room time, improve patient education for potential planned surgery, and guide clinicians for optimizing therapy.1,2 This article discusses the utility of 3D printing at a single institution to serve in enhancing specifically neuroradiology education.

Background

As digital imaging and 3D printing have increased in popularity, the potential application of using imaging data to guide patient therapy has shown significant promise. Computed tomography (CT) is a commonly used modality that can be used to create 3D anatomical models, as it is frequently used in the medical setting, demonstrates excellent resolution to the millimeter scale, and can readily pinpoint pathology on imaging.

Image Acquisition

CT scans can be rapidly obtained, which adds significant value, particularly in the context of point-of-care 3D printing. Another modality commonly used for 3D printing is magnetic resonance imaging (MRI), which unlike CT, does not expose the patient to ionizing radiation. The 3D printing process is initiated with patient-specific CT or MRI data stored in the digital imaging and communications in medicine (DICOM) format, which is the international standard for communication and management of medical imaging information and related data. DICOM allows for faster and robust collaboration among imaging professionals.3

 

Image Processing 

To print 3D anatomical models, patient-specific data must be converted from DICOM into standard tessellation language (STL) format, which can be created and edited with a variety of softwares.3 At James A. Haley Veterans’ Hospital in Tampa, Florida, we use an image processing package that includes the Materialise 3-matic and interactive medical image control system. Image quality is essential; therefore, careful attention to details such as pixel dimensions, slice thickness, and slice increments must be considered.3,4

An STL file creates a 3D image from triangle approximations. The entire 3D shape will be made of numerous large or small triangles, depending on the slice thickness, therefore, quality of the original radiologic image. The size and position of the triangles used to make the model can be varied to approximate the object’s shape. The smaller the triangles, the better the image quality and vice versa. This concept is analogous to approximating a circle using straight lines of equal length—more, smaller lines will result in better approximation of a circle (Figure 1).5,6 Similarly, using smaller triangles allows for better approximation of the image. As the human body is a complex structure, mimicking the body requires a system able to create nongeometrical shapes, which is made possible via these triangle approximations in a 3D STL file.

The creation of an STL file from DICOM data starts with a threshold-based segmentation process followed by additional fine-tuning and edits, and ends in the creation of a 3D part. The initial segmentation can be created with the threshold tool, using a Hounsfield unit range based on the area of interest desired (eg, bone, blood, fat). This is used to create an initial mask, which can be further optimized. The region grow tool allows the user to focus the segmentation by discarding areas that are not directly connected to the region of interest. In contrast, the split mask tool divides areas that are connected. Next, fine-tuning the segmentation using tools such as multiple slice edit helps to optimize the model. After all edits are made, the calculate part tool converts the mask into a 3D component that can be used in downstream applications. For the purposes of demonstration and proof of concept, the models provided in this article were created via open-source hardware designs under free or open licenses.7-9

3D Printing in Neuroradiology Education

Neuroradiologists focus on diagnosing pathology related to the brain, head and neck, and spine. CT and MRI scans are the primary modalities used to diagnose these conditions. 3D printing is a useful tool for the trainee who wishes to fully understand neuroanatomy and obtain further appreciation of imaging pathology as it relates to 3D anatomy. Head and neck imaging are a complex subdiscipline of neuroradiology that often require further training beyond radiology residency. A neuroradiology fellowship that focuses on head and neck imaging extends the training.

 

 

3D printing has the potential to improve the understanding of various imaging pathologies by providing the trainee with a more in-depth appreciation of the anterior, middle, and posterior cranial fossa, the skull base foramina (ie, foramen ovale, spinosum, rotundum), and complex 3D areas, such as the pterygopalatine fossa, which are all critical areas to investigate on imaging. Figure 2 highlights how a complex anatomical structure, such as the sphenoid bone when printed in 3D, can be correlated with CT cross-sectional images to supplement the educational experience.

Correlation of the Sphenoid Bone Between Computed Tomography and 3-Dimmensional Model


Furthermore, the various lobes, sulci, and gyri of the brain and cerebellum and how they interrelate to nearby vasculature and bony structures can be difficult to conceptualize for early trainees. A 3D-printed cerebellum and its relation to the brainstem is illustrated in Figure 3A. Additional complex head and neck structures of the middle ear membranous and bony labyrinth and ossicles and multiple views of the mandible are shown in Figures 3B through 3E.

Models of Complex Structures of the Head and Neck


3D printing in the context of neurovascular pathology holds great promise, particularly as these models may provide the trainee, patient, and proceduralist essential details such as appearance and morphology of an intracranial aneurysm, relationship and size of the neck of aneurysm, incorporation of vessels emanating from the aneurysmal sac, and details of the dome of the aneurysm. For example, the normal circle of Willis in Figure 4A is juxtaposed with an example of a saccular internal carotid artery aneurysm (Figure 4B).

Normal Intracranial Vasculature vs a Pathologic Aneurysm Models


A variety of conditions can affect the bony spine from degenerative, trauma, neoplastic, and inflammatory etiologies. A CT scan of the spine is readily used to detect these different conditions and often is used in the initial evaluation of trauma as indicated in the American College of Radiology appropriateness criteria.10 In addition, MRI is used to evaluate the spinal cord and to further define spinal stenosis as well as evaluate radiculopathy. An appreciation of the bony and soft tissue structures within the spine can be garnered with the use of 3D models (Figure 5). 

Trainees can further their understanding of approaches in spinal procedures, including lumbar puncture, myelography, and facet injections. A variety of approaches to access the spinal canal have been documented, such as interspinous, paraspinous, and interlaminar oblique; 3D-printed models can aid in practicing these procedures.11 For example, a water-filled tube can be inserted into the vertebral canal to provide realistic tactile feedback for simulation of a lumbar puncture. An appreciation of the 3D anatomy can guide the clinician on the optimal approach, which can help limit time and potentially improve outcomes.

Lumbar Spine 3-Dimensional Model

Future Directions

Artificial Intelligence (AI) offers the ability to teach computers to perform tasks that ordinarily require human intelligence. In the context of 3D printing, the ability to use AI to readily convert and process DICOM data into printable STL models holds significant promise. Currently, the manual conversion of a DICOM file into a segmented 3D model may take several days, necessitating a number of productive hours even from the imaging and engineering champion. If machines could aid in this process, the ability to readily scale clinical 3D printing and promote widespread adoption would be feasible. Several studies already are looking into this concept to determine how deep learning networks may automatically recognize lesions on medical imaging to assist a human operator, potentially cutting hours from the clinical 3D printing workflow.12,13

Furthermore, there are several applications for AI in the context of 3D printing upstream or before the creation of a 3D model. A number of AI tools are already in use at the CT and MRI scanner. Current strategies leverage deep learning and advances in neural networks to improve image quality and create thin section DICOM data, which can be converted into printable 3D files. Additionally, the ability to automate tasks using AI can improve production capacity by assessing material costs and ensuring cost efficiency, which will be critical as point-of-care 3D printing develops widespread adoption. AI also can reduce printing errors by using automated adaptive feedback, using machine learning to search for possible print errors, and sending feedback to the computer to ensure appropriate settings (eg, temperature settings/environmental conditions).

Conclusions

Based on this single-institution experience, 3D-printed complex neuroanatomical structures seems feasible and may enhance resident education and patient safety. Interested trainees may have the opportunity to learn and be involved in the printing process of new and innovative ideas. Further studies may involve printing various pathologic processes and applying these same steps and principles to other subspecialties of radiology. Finally, AI has the potential to advance the 3D printing process in the future.

Applications of 3-dimensional (3D) printing in medical imaging and health care are expanding. 3D printing may serve a variety of roles and is used increasingly in the context of presurgical planning, as specific medical models may be created using individual patient imaging data.1 These patient-specific models may assist in medical trainee education, decrease operating room time, improve patient education for potential planned surgery, and guide clinicians for optimizing therapy.1,2 This article discusses the utility of 3D printing at a single institution to serve in enhancing specifically neuroradiology education.

Background

As digital imaging and 3D printing have increased in popularity, the potential application of using imaging data to guide patient therapy has shown significant promise. Computed tomography (CT) is a commonly used modality that can be used to create 3D anatomical models, as it is frequently used in the medical setting, demonstrates excellent resolution to the millimeter scale, and can readily pinpoint pathology on imaging.

Image Acquisition

CT scans can be rapidly obtained, which adds significant value, particularly in the context of point-of-care 3D printing. Another modality commonly used for 3D printing is magnetic resonance imaging (MRI), which unlike CT, does not expose the patient to ionizing radiation. The 3D printing process is initiated with patient-specific CT or MRI data stored in the digital imaging and communications in medicine (DICOM) format, which is the international standard for communication and management of medical imaging information and related data. DICOM allows for faster and robust collaboration among imaging professionals.3

 

Image Processing 

To print 3D anatomical models, patient-specific data must be converted from DICOM into standard tessellation language (STL) format, which can be created and edited with a variety of softwares.3 At James A. Haley Veterans’ Hospital in Tampa, Florida, we use an image processing package that includes the Materialise 3-matic and interactive medical image control system. Image quality is essential; therefore, careful attention to details such as pixel dimensions, slice thickness, and slice increments must be considered.3,4

An STL file creates a 3D image from triangle approximations. The entire 3D shape will be made of numerous large or small triangles, depending on the slice thickness, therefore, quality of the original radiologic image. The size and position of the triangles used to make the model can be varied to approximate the object’s shape. The smaller the triangles, the better the image quality and vice versa. This concept is analogous to approximating a circle using straight lines of equal length—more, smaller lines will result in better approximation of a circle (Figure 1).5,6 Similarly, using smaller triangles allows for better approximation of the image. As the human body is a complex structure, mimicking the body requires a system able to create nongeometrical shapes, which is made possible via these triangle approximations in a 3D STL file.

The creation of an STL file from DICOM data starts with a threshold-based segmentation process followed by additional fine-tuning and edits, and ends in the creation of a 3D part. The initial segmentation can be created with the threshold tool, using a Hounsfield unit range based on the area of interest desired (eg, bone, blood, fat). This is used to create an initial mask, which can be further optimized. The region grow tool allows the user to focus the segmentation by discarding areas that are not directly connected to the region of interest. In contrast, the split mask tool divides areas that are connected. Next, fine-tuning the segmentation using tools such as multiple slice edit helps to optimize the model. After all edits are made, the calculate part tool converts the mask into a 3D component that can be used in downstream applications. For the purposes of demonstration and proof of concept, the models provided in this article were created via open-source hardware designs under free or open licenses.7-9

3D Printing in Neuroradiology Education

Neuroradiologists focus on diagnosing pathology related to the brain, head and neck, and spine. CT and MRI scans are the primary modalities used to diagnose these conditions. 3D printing is a useful tool for the trainee who wishes to fully understand neuroanatomy and obtain further appreciation of imaging pathology as it relates to 3D anatomy. Head and neck imaging are a complex subdiscipline of neuroradiology that often require further training beyond radiology residency. A neuroradiology fellowship that focuses on head and neck imaging extends the training.

 

 

3D printing has the potential to improve the understanding of various imaging pathologies by providing the trainee with a more in-depth appreciation of the anterior, middle, and posterior cranial fossa, the skull base foramina (ie, foramen ovale, spinosum, rotundum), and complex 3D areas, such as the pterygopalatine fossa, which are all critical areas to investigate on imaging. Figure 2 highlights how a complex anatomical structure, such as the sphenoid bone when printed in 3D, can be correlated with CT cross-sectional images to supplement the educational experience.

Correlation of the Sphenoid Bone Between Computed Tomography and 3-Dimmensional Model


Furthermore, the various lobes, sulci, and gyri of the brain and cerebellum and how they interrelate to nearby vasculature and bony structures can be difficult to conceptualize for early trainees. A 3D-printed cerebellum and its relation to the brainstem is illustrated in Figure 3A. Additional complex head and neck structures of the middle ear membranous and bony labyrinth and ossicles and multiple views of the mandible are shown in Figures 3B through 3E.

Models of Complex Structures of the Head and Neck


3D printing in the context of neurovascular pathology holds great promise, particularly as these models may provide the trainee, patient, and proceduralist essential details such as appearance and morphology of an intracranial aneurysm, relationship and size of the neck of aneurysm, incorporation of vessels emanating from the aneurysmal sac, and details of the dome of the aneurysm. For example, the normal circle of Willis in Figure 4A is juxtaposed with an example of a saccular internal carotid artery aneurysm (Figure 4B).

Normal Intracranial Vasculature vs a Pathologic Aneurysm Models


A variety of conditions can affect the bony spine from degenerative, trauma, neoplastic, and inflammatory etiologies. A CT scan of the spine is readily used to detect these different conditions and often is used in the initial evaluation of trauma as indicated in the American College of Radiology appropriateness criteria.10 In addition, MRI is used to evaluate the spinal cord and to further define spinal stenosis as well as evaluate radiculopathy. An appreciation of the bony and soft tissue structures within the spine can be garnered with the use of 3D models (Figure 5). 

Trainees can further their understanding of approaches in spinal procedures, including lumbar puncture, myelography, and facet injections. A variety of approaches to access the spinal canal have been documented, such as interspinous, paraspinous, and interlaminar oblique; 3D-printed models can aid in practicing these procedures.11 For example, a water-filled tube can be inserted into the vertebral canal to provide realistic tactile feedback for simulation of a lumbar puncture. An appreciation of the 3D anatomy can guide the clinician on the optimal approach, which can help limit time and potentially improve outcomes.

Lumbar Spine 3-Dimensional Model

Future Directions

Artificial Intelligence (AI) offers the ability to teach computers to perform tasks that ordinarily require human intelligence. In the context of 3D printing, the ability to use AI to readily convert and process DICOM data into printable STL models holds significant promise. Currently, the manual conversion of a DICOM file into a segmented 3D model may take several days, necessitating a number of productive hours even from the imaging and engineering champion. If machines could aid in this process, the ability to readily scale clinical 3D printing and promote widespread adoption would be feasible. Several studies already are looking into this concept to determine how deep learning networks may automatically recognize lesions on medical imaging to assist a human operator, potentially cutting hours from the clinical 3D printing workflow.12,13

Furthermore, there are several applications for AI upstream of 3D model creation. A number of AI tools are already in use at the CT and MRI scanner. Current strategies leverage deep learning and advances in neural networks to improve image quality and create thin-section DICOM data, which can be converted into printable 3D files. Additionally, automating tasks with AI can improve production capacity by assessing material costs and ensuring cost efficiency, which will be critical as point-of-care 3D printing gains widespread adoption. AI also can reduce printing errors through automated adaptive feedback: machine learning searches for possible print errors and sends feedback to the computer to ensure appropriate settings (eg, temperature and environmental conditions).

Conclusions

Based on this single-institution experience, 3D printing of complex neuroanatomical structures seems feasible and may enhance resident education and patient safety. Interested trainees may have the opportunity to learn and be involved in the printing process of new and innovative ideas. Further studies may involve printing various pathologic processes and applying these same steps and principles to other subspecialties of radiology. Finally, AI has the potential to advance the 3D printing process in the future.

References

1. Rengier F, Mehndiratta A, von Tengg-Kobligk H, et al. 3D printing based on imaging data: review of medical applications. Int J Comput Assist Radiol Surg. 2010;5(4):335-341. doi:10.1007/s11548-010-0476-x

2. Perica E, Sun Z. Patient-specific three-dimensional printing for pre-surgical planning in hepatocellular carcinoma treatment. Quant Imaging Med Surg. 2017;7(6):668-677. doi:10.21037/qims.2017.11.02

3. Hwang JJ, Jung Y-H, Cho B-H. The need for DICOM encapsulation of 3D scanning STL data. Imaging Sci Dent. 2018;48(4):301-302. doi:10.5624/isd.2018.48.4.301

4. Whyms BJ, Vorperian HK, Gentry LR, Schimek EM, Bersu ET, Chung MK. The effect of computed tomographic scanner parameters and 3-dimensional volume rendering techniques on the accuracy of linear, angular, and volumetric measurements of the mandible. Oral Surg Oral Med Oral Pathol Oral Radiol. 2013;115(5):682-691. doi:10.1016/j.oooo.2013.02.008

5. Materialise Cloud. Triangle reduction. Accessed May 20, 2021. https://cloud.materialise.com/tools/triangle-reduction

6. Comaneanu RM, Tarcolea M, Vlasceanu D, Cotrut MC. Virtual 3D reconstruction, diagnosis and surgical planning with Mimics software. Int J Nano Biomaterials. 2012;4(1);69-77.

7. Thingiverse: Digital designs for physical objects. Accessed May 20, 2021. https://www.thingiverse.com

8. Cults. Download for free 3D models for 3D printers. Accessed May 20, 2021. https://cults3d.com/en

9. yeggi. Search engine for 3D printer models. Accessed May 20, 2021. https://www.yeggi.com

10. Expert Panel on Neurological Imaging and Musculoskeletal Imaging; Beckmann NM, West OC, Nunez D, et al. ACR appropriateness criteria suspected spine trauma. J Am Coll Radiol. 2019;16(5):S264-S285. doi:10.1016/j.jacr.2019.02.002

11. McKinney AM. Normal variants of the lumbar and sacral spine. In: Atlas of Head/Neck and Spine Normal Imaging Variants. Springer; 2018:263-321.

12. Sollini M, Bartoli F, Marciano A, et al. Artificial intelligence and hybrid imaging: the best match for personalized medicine in oncology. Eur J Hybrid Imaging. 2020;4(1):24. doi:10.1186/s41824-020-00094-8

13. Küstner T, Hepp T, Fischer M, et al. Fully automated and standardized segmentation of adipose tissue compartments via deep learning in 3D whole-body MRI of epidemiologic cohort studies. Radiol Artif Intell. 2020;2(6):e200010. doi:10.1148/ryai.2020200010


Issue
Federal Practitioner - 38(6)a
Page Number
256-260

Using Artificial Intelligence for COVID-19 Chest X-ray Diagnosis


The novel coronavirus severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), which causes the respiratory disease coronavirus disease 2019 (COVID-19), was first identified as a cluster of cases of pneumonia in Wuhan, Hubei Province of China on December 31, 2019.1 Within a month, the disease had spread significantly, leading the World Health Organization (WHO) to designate COVID-19 a public health emergency of international concern. On March 11, 2020, the WHO declared COVID-19 a global pandemic.2 As of August 18, 2020, the virus has infected > 21 million people, with > 750,000 deaths worldwide.3 The spread of COVID-19 has had a dramatic impact on social, economic, and health care issues throughout the world, which has been discussed elsewhere.4

Prior to this century, members of the coronavirus family had minimal impact on human health.5 However, in the past 20 years, outbreaks have highlighted an emerging importance of coronaviruses in morbidity and mortality on a global scale. Although less prevalent than COVID-19, severe acute respiratory syndrome (SARS) in 2002 to 2003 and Middle East respiratory syndrome (MERS) in 2012 likely had higher mortality rates than the current pandemic.5 Based on this recent history, it is reasonable to assume that we will continue to see novel diseases with similar significant health and societal implications. The challenges presented to health care providers (HCPs) by such novel viral pathogens are numerous, including methods for rapid diagnosis, prevention, and treatment. In the current study, we focus on diagnosis, a challenge made evident with COVID-19 by the time required to develop rapid and effective diagnostic modalities.

We have previously reported the utility of using artificial intelligence (AI) in the histopathologic diagnosis of cancer.6-8 AI was first described in 1956 and involves the field of computer science in which machines are trained to learn from experience.9 Machine learning (ML) is a subset of AI and is achieved by using mathematic models to compute sample datasets.10 Current ML employs deep learning with neural network algorithms, which can recognize patterns and achieve complex computational tasks often far more quickly and precisely than humans can.11-13 In addition to applications in pathology, ML algorithms have both prognostic and diagnostic applications in multiple medical specialties, such as radiology, dermatology, ophthalmology, and cardiology.6 It is predicted that AI will impact almost every aspect of health care in the future.14

In this article, we examine the potential for AI to diagnose patients with COVID-19 pneumonia using chest radiographs (CXR) alone. This is done using Microsoft CustomVision (www.customvision.ai), a readily available, automated ML platform. Employing AI to both screen and diagnose emerging health emergencies such as COVID-19 has the potential to dramatically change how we approach medical care in the future. In addition, we describe the creation of a publicly available website (interknowlogy-covid-19.azurewebsites.net) that could augment COVID-19 pneumonia CXR diagnosis.

Methods

For the training dataset, 103 CXR images of COVID-19 were downloaded from the GitHub covid-chest-xray dataset.15 Five hundred images of non-COVID-19 pneumonia and 500 images of normal lungs were downloaded from the Kaggle RSNA Pneumonia Detection Challenge dataset.16 To balance the dataset, we expanded the COVID-19 dataset to 500 images by slight rotation (probability = 1, max rotation = 5) and zooming (probability = 0.5, percentage area = 0.9) of the original images using the Augmentor Python package.17
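The balancing step above can be sketched in plain numpy/scipy using the same parameters the text reports (rotation always applied with a maximum of 5 degrees; zoom applied with probability 0.5 over 90% of the image area). This is an illustrative stand-in, not the Augmentor package's API; the function and variable names are ours.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(42)

def augment(img, max_rotation=5.0, zoom_prob=0.5, percentage_area=0.9):
    """One augmented copy: a slight random rotation (probability = 1 in
    the paper), then, with probability 0.5, a center crop covering
    `percentage_area` of the image rescaled back to the original size."""
    out = ndimage.rotate(img, rng.uniform(-max_rotation, max_rotation),
                         reshape=False, order=1, mode="nearest")
    if rng.random() < zoom_prob:
        h, w = out.shape
        ch = int(h * np.sqrt(percentage_area))  # crop keeps percentage_area
        cw = int(w * np.sqrt(percentage_area))  # of the total pixel area
        y0, x0 = (h - ch) // 2, (w - cw) // 2
        crop = out[y0:y0 + ch, x0:x0 + cw]
        out = ndimage.zoom(crop, (h / ch, w / cw), order=1)
    return out

# Expanding 103 originals to 500 images means sampling 397 augmented copies,
# cycling through the originals (toy 64x64 arrays stand in for CXR images).
originals = [rng.random((64, 64)) for _ in range(103)]
augmented = [augment(originals[i % 103]) for i in range(500 - 103)]
```

Small rotations and modest zooms preserve the radiographic appearance while giving the classifier enough COVID-19 examples to train against balanced classes.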

Validation Dataset

For the validation dataset 30 random CXR images were obtained from the US Department of Veterans Affairs (VA) PACS (picture archiving and communication system). This dataset included 10 CXR images from hospitalized patients with COVID-19, 10 CXR pneumonia images from patients without COVID-19, and 10 normal CXRs. COVID-19 diagnoses were confirmed with a positive test result from the Xpert Xpress SARS-CoV-2 polymerase chain reaction (PCR) platform.18

Microsoft CustomVision

Microsoft CustomVision is an automated image classification and object detection system that is a part of Microsoft Azure Cognitive Services (azure.microsoft.com). It has a pay-as-you-go model with fees depending on the computing needs and usage. It offers a free trial to users for 2 initial projects. The service is online with an easy-to-follow graphical user interface. No coding skills are necessary.

We created a new classification project in CustomVision and chose a compact general domain for small size and easy export to TensorFlow.js model format. TensorFlow.js is a JavaScript library that enables dynamic download and execution of ML models. After the project was created, we proceeded to upload our image dataset. Each class was uploaded separately and tagged with the appropriate label (covid pneumonia, non-covid pneumonia, or normal lung). The system rejected 16 COVID-19 images as duplicates. The final CustomVision training dataset consisted of 484 images of COVID-19 pneumonia, 500 images of non-COVID-19 pneumonia, and 500 images of normal lungs. Once uploaded, CustomVision self-trains using the dataset upon initiating the program (Figure 1).

Website Creation

CustomVision was used to train the model. It can be used to execute the model continuously, or the model can be compacted and decoupled from CustomVision. In this case, the model was compacted and decoupled for use in an online application. An Angular online application was created with TensorFlow.js. Within a user’s web browser, the model is executed when an image of a CXR is submitted. Confidence values for each classification are returned. In this design, after the initial webpage and model is downloaded, the webpage no longer needs to access any server components and performs all operations in the browser. Although the solution works well on mobile phone browsers and in low bandwidth situations, the quality of predictions may depend on the browser and device used. At no time does an image get submitted to the cloud.

Results

Overall, our trained model showed 92.9% precision and recall. Precision and recall results for each label were 98.9% and 94.8%, respectively, for COVID-19 pneumonia; 91.8% and 89%, respectively, for non-COVID-19 pneumonia; and 88.8% and 95%, respectively, for normal lung (Figure 2). Next, we validated the trained model by making individual predictions on the 30 images from the VA dataset. Our model performed well, with 100% sensitivity (recall), 95% specificity, 97% accuracy, 91% positive predictive value (precision), and 100% negative predictive value (Table).

Discussion

We successfully demonstrated the potential of using AI algorithms in assessing CXRs for COVID-19. We first trained the CustomVision automated image classification and object detection system to differentiate cases of COVID-19 from pneumonia from other etiologies as well as normal lung CXRs. We then tested our model against known patients from the James A. Haley Veterans’ Hospital in Tampa, Florida. The program achieved 100% sensitivity (recall), 95% specificity, 97% accuracy, 91% positive predictive value (precision), and 100% negative predictive value in differentiating the 3 scenarios. Using the trained ML model, we proceeded to create a website that could augment COVID-19 CXR diagnosis.19 The website works on mobile as well as desktop platforms. A health care provider can take a CXR photo with a mobile phone or upload the image file. The ML algorithm would provide the probability of COVID-19 pneumonia, non-COVID-19 pneumonia, or normal lung diagnosis (Figure 3).
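The validation figures above can be reproduced from a standard 2 x 2 confusion matrix. The article reports only the summary percentages, so the counts below are our reconstruction, consistent with the 30-image VA set (10 COVID-19, 20 non-COVID): the model would have flagged all 10 COVID-19 cases and mislabeled one non-COVID image as COVID-19.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard screening metrics from confusion-matrix counts."""
    total = tp + fp + tn + fn
    return {
        "sensitivity": tp / (tp + fn),  # recall for the positive class
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / total,
        "ppv": tp / (tp + fp),          # precision for the positive class
        "npv": tn / (tn + fn),
    }

# Counts inferred from the reported validation results (not published
# directly): 10 true positives, 0 false negatives, 1 false positive,
# 19 true negatives on the 30-image VA set.
m = diagnostic_metrics(tp=10, fp=1, tn=19, fn=0)
# sensitivity 1.00, specificity 0.95, accuracy ~0.97, PPV ~0.91, NPV 1.00
```

Working backward this way makes the reported 91% positive predictive value concrete: with only 10 true positives, a single false positive is enough to pull precision down to 10/11.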

Emerging diseases such as COVID-19 present numerous challenges to HCPs, governments, and businesses, as well as to individual members of society. As evidenced with COVID-19, the time from first recognition of an emerging pathogen to the development of methods for reliable diagnosis and treatment can be months, even with a concerted international effort. The gold standard for diagnosis of COVID-19 is by reverse transcriptase PCR (RT-PCR) technologies; however, early RT-PCR testing produced less than optimal results.20-22 Even after the development of reliable tests for detection, making test kits readily available to health care providers on an adequate scale presents an additional challenge as evident with COVID-19.

Use of X-ray vs Computed Tomography

The lack of availability of diagnostic RT-PCR for COVID-19 initially placed increased reliance on presumptive diagnoses via imaging in some situations.23 Most of the literature evaluating radiographs of patients with COVID-19 focuses on chest computed tomography (CT) findings, with initial results suggesting CT was more accurate than early RT-PCR methodologies.21,22,24 The Radiological Society of North America expert consensus statement on chest CT for COVID-19 states that CT findings can even precede positivity on RT-PCR in some cases.22 However, it currently does not recommend the use of CT scanning as a screening tool. Furthermore, the actual sensitivity and specificity of CT interpretation by radiologists for COVID-19 are unknown.22

Characteristic CT findings include ground-glass opacities (GGOs) and consolidation most commonly in the lung periphery, though a diffuse distribution was found in a minority of patients.21,23,25-27 Lomoro and colleagues recently summarized the CT findings from several reports that described abnormalities as most often bilateral and peripheral, subpleural, and affecting the lower lobes.26 Not surprisingly, CT appears more sensitive at detecting changes with COVID-19 than does CXR, with reports that a minority of patients exhibited CT changes before changes were visible on CXR.23,26

We focused our study on the potential of AI in the examination of CXRs in patients with COVID-19, as there are several limitations to the routine use of CT scans with conditions such as COVID-19. Aside from the considerably greater time required to obtain CTs, there are issues with contamination of CT suites, sometimes requiring a dedicated COVID-19 CT scanner.23,28 The time constraints of decontamination or limited utilization of CT suites can delay or disrupt services for patients with and without COVID-19. Because of these factors, CXR may be a better resource to minimize the risk of infection to other patients. Also, accurate assessment of abnormalities on CXR for COVID-19 may identify disease in patients whose CXR was performed for other purposes.23 CXR is more readily available than CT, especially in more remote or underdeveloped areas.28 Finally, as with CT, CXR abnormalities are reported to have appeared before RT-PCR tests became positive in a minority of patients.23

CXR findings described in patients with COVID-19 are similar to those of CT and include GGOs, consolidation, and hazy increased opacities.23,25,26,28,29 As on CT, the majority of patients who received CXR demonstrated greater involvement in the lower zones and peripherally.23,25,26,28,29 Most patients showed bilateral involvement. However, while these findings are common in patients with COVID-19, they are not specific and can be seen in other conditions, such as other viral pneumonias, bacterial pneumonia, injury from drug toxicity, inhalation injury, connective tissue disease, and idiopathic conditions.

Application of AI for COVID-19

Applications of AI in interpreting radiographs of various types are numerous, and extensive literature has been written on the topic.30 Using deep learning algorithms, AI has multiple possible roles to augment traditional radiograph interpretation. These include the potential for screening, triaging, and increasing the speed to render diagnoses. It also can provide a rapid “second opinion” to the radiologist to support the final interpretation. In areas with critical shortages of radiologists, AI potentially can be used to render the definitive diagnosis. In COVID-19, imaging studies have been shown to correlate with disease severity and mortality, and AI could assist in monitoring the course of the disease as it progresses and potentially identify patients at greatest risk.27 Furthermore, early results from PCR have been considered suboptimal, and it is known that patients with COVID-19 can test negative initially even by reliable testing methodologies. As AI technology progresses, interpretation can detect and guide triage and treatment of patients with high suspicion of COVID-19 but negative initial PCR results, or in situations where test availability is limited or results are delayed. There are numerous potential benefits should a rapid diagnostic test as simple as a CXR reliably aid containment and prevention of the spread of contagions such as COVID-19 early in their course.

Few studies have assessed using AI in the radiologic diagnosis of COVID-19, most of which use CT scanning. Bai and colleagues demonstrated increased accuracy, sensitivity, and specificity in distinguishing chest CTs of COVID-19 patients from other types of pneumonia.21,31 A separate study demonstrated the utility of using AI to differentiate COVID-19 from community-acquired pneumonia with CT.32 However, the effective utility of AI for CXR interpretation also has been demonstrated.14,33 Implementation of convolutional neural network layers has allowed for reliable differentiation of viral and bacterial pneumonia with CXR imaging.34 Evidence suggests that there is great potential in the application of AI in the interpretation of radiographs of all types.

Finally, we have developed a publicly available website based on our studies.19 This website is for research use only, as it is based on data from our preliminary investigation. Images must have protected health information removed before uploading. The information on the website, including text, graphics, images, or other material, is for research and may not be appropriate for all circumstances. The website does not provide medical, professional, or licensed advice and is not a substitute for consultation with an HCP. Medical advice should be sought from a qualified HCP for any questions, and the website should not be used for medical diagnosis or treatment.

Limitations

In our preliminary study, we have demonstrated the potential impact AI can have in multiple aspects of patient care for emerging pathogens such as COVID-19 using a test as readily available as a CXR. However, several limitations to this investigation should be mentioned. The study is retrospective in nature with limited sample size and with X-rays from patients with various stages of COVID-19 pneumonia. Also, cases of non-COVID-19 pneumonia are not stratified into different types or etiologies. We intend to demonstrate the potential of AI in differentiating COVID-19 pneumonia from non-COVID-19 pneumonia of any etiology, though future studies should address comparison of COVID-19 cases to more specific types of pneumonias, such as of bacterial or viral origin. Furthermore, the present study does not address any potential effects of additional radiographic findings from coexistent conditions, such as pulmonary edema as seen in congestive heart failure, pleural effusions (which can be seen with COVID-19 pneumonia, though rarely), interstitial lung disease, etc. Future studies are required to address these issues. Ultimately, prospective studies to assess AI-assisted radiographic interpretation in conditions such as COVID-19 are required to demonstrate the impact on diagnosis, treatment, outcome, and patient safety as these technologies are implemented.

Conclusions

We have used a readily available, commercial platform to demonstrate the potential of AI to assist in the successful diagnosis of COVID-19 pneumonia on CXR images. While this technology has numerous applications in radiology, we have focused on the potential impact on future world health crises such as COVID-19. The findings have implications for screening and triage, initial diagnosis, monitoring disease progression, and identifying patients at increased risk of morbidity and mortality. Based on the data, a website was created to demonstrate how such technologies could be shared and distributed to others to combat entities such as COVID-19 moving forward. Our study offers a small window into the potential for how AI will likely dramatically change the practice of medicine in the future.

References

1. World Health Organization. Coronavirus disease (COVID-19) pandemic. https://www.who.int/emergencies/diseases/novel-coronavirus2019. Updated August 23, 2020. Accessed August 24, 2020.

2. World Health Organization. WHO Director-General’s opening remarks at the media briefing on COVID-19 - 11 March 2020. https://www.who.int/dg/speeches/detail/who-director-general-sopening-remarks-at-the-media-briefing-on-covid-19---11-march2020. Published March 11, 2020. Accessed August 24, 2020.

3. World Health Organization. Coronavirus disease (COVID-19): situation report--209. https://www.who.int/docs/default-source/coronaviruse/situation-reports/20200816-covid-19-sitrep-209.pdf. Updated August 16, 2020. Accessed August 24, 2020.

4. Nicola M, Alsafi Z, Sohrabi C, et al. The socio-economic implications of the coronavirus pandemic (COVID-19): a review. Int J Surg. 2020;78:185-193. doi:10.1016/j.ijsu.2020.04.018

5. da Costa VG, Moreli ML, Saivish MV. The emergence of SARS, MERS and novel SARS-2 coronaviruses in the 21st century. Arch Virol. 2020;165(7):1517-1526. doi:10.1007/s00705-020-04628-0

6. Borkowski AA, Wilson CP, Borkowski SA, et al. Comparing artificial intelligence platforms for histopathologic cancer diagnosis. Fed Pract. 2019;36(10):456-463.

7. Borkowski AA, Wilson CP, Borkowski SA, Thomas LB, Deland LA, Mastorides SM. Apple machine learning algorithms successfully detect colon cancer but fail to predict KRAS mutation status. http://arxiv.org/abs/1812.04660. Updated January 15, 2019. Accessed August 24, 2020.

8. Borkowski AA, Wilson CP, Borkowski SA, Deland LA, Mastorides SM. Using Apple machine learning algorithms to detect and subclassify non-small cell lung cancer. http://arxiv.org/abs/1808.08230. Updated January 15, 2019. Accessed August 24, 2020.

9. Moor J. The Dartmouth College artificial intelligence conference: the next fifty years. AI Mag. 2006;27(4):87. doi:10.1609/AIMAG.V27I4.1911

10. Samuel AL. Some studies in machine learning using the game of checkers. IBM J Res Dev. 1959;3(3):210-229. doi:10.1147/rd.33.0210

11. Sarle WS. Neural networks and statistical models. https://people.orie.cornell.edu/davidr/or474/nn_sas.pdf. Published April 1994. Accessed August 24, 2020.

12. Schmidhuber J. Deep learning in neural networks: an overview. Neural Netw. 2015;61:85-117. doi:10.1016/j.neunet.2014.09.003

13. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521(7553):436-444. doi:10.1038/nature14539

14. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. 2019;25(1):44- 56. doi:10.1038/s41591-018-0300-7

15. Cohen JP, Morrison P, Dao L. COVID-19 Image Data Collection. Published online March 25, 2020. Accessed May 13, 2020. http://arxiv.org/abs/2003.11597

16. Radiological Society of North America. RSNA pneumonia detection challenge. https://www.kaggle.com/c/rsna-pneumonia-detection-challenge. Accessed August 24, 2020.

17. Bloice MD, Roth PM, Holzinger A. Biomedical image augmentation using Augmentor. Bioinformatics. 2019;35(21):4522-4524. doi:10.1093/bioinformatics/btz259

18. Cepheid. Xpert Xpress SARS-CoV-2. https://www.cepheid.com/coronavirus. Accessed August 24, 2020.

19. Interknowlogy. COVID-19 detection in chest X-rays. https://interknowlogy-covid-19.azurewebsites.net. Accessed August 27, 2020.

20. Bernheim A, Mei X, Huang M, et al. Chest CT Findings in Coronavirus Disease-19 (COVID-19): Relationship to Duration of Infection. Radiology. 2020;295(3):200463. doi:10.1148/radiol.2020200463

21. Ai T, Yang Z, Hou H, et al. Correlation of Chest CT and RTPCR Testing for Coronavirus Disease 2019 (COVID-19) in China: a report of 1014 cases. Radiology. 2020;296(2):E32- E40. doi:10.1148/radiol.2020200642

22. Simpson S, Kay FU, Abbara S, et al. Radiological Society of North America Expert Consensus Statement on Reporting Chest CT Findings Related to COVID-19. Endorsed by the Society of Thoracic Radiology, the American College of Radiology, and RSNA - Secondary Publication. J Thorac Imaging. 2020;35(4):219-227. doi:10.1097/RTI.0000000000000524

23. Wong HYF, Lam HYS, Fong AH, et al. Frequency and distribution of chest radiographic findings in patients positive for COVID-19. Radiology. 2020;296(2):E72-E78. doi:10.1148/radiol.2020201160

24. Fang Y, Zhang H, Xie J, et al. Sensitivity of chest CT for COVID-19: comparison to RT-PCR. Radiology. 2020;296(2):E115-E117. doi:10.1148/radiol.2020200432

25. Chen N, Zhou M, Dong X, et al. Epidemiological and clinical characteristics of 99 cases of 2019 novel coronavirus pneumonia in Wuhan, China: a descriptive study. Lancet. 2020;395(10223):507-513. doi:10.1016/S0140-6736(20)30211-7

26. Lomoro P, Verde F, Zerboni F, et al. COVID-19 pneumonia manifestations at the admission on chest ultrasound, radiographs, and CT: single-center study and comprehensive radiologic literature review. Eur J Radiol Open. 2020;7:100231. doi:10.1016/j.ejro.2020.100231

27. Salehi S, Abedi A, Balakrishnan S, Gholamrezanezhad A. Coronavirus disease 2019 (COVID-19) imaging reporting and data system (COVID-RADS) and common lexicon: a proposal based on the imaging data of 37 studies. Eur Radiol. 2020;30(9):4930-4942. doi:10.1007/s00330-020-06863-0

28. Jacobi A, Chung M, Bernheim A, Eber C. Portable chest X-ray in coronavirus disease-19 (COVID- 19): a pictorial review. Clin Imaging. 2020;64:35-42. doi:10.1016/j.clinimag.2020.04.001

29. Bhat R, Hamid A, Kunin JR, et al. Chest imaging in patients hospitalized With COVID-19 infection - a case series. Curr Probl Diagn Radiol. 2020;49(4):294-301. doi:10.1067/j.cpradiol.2020.04.001

30. Liu X, Faes L, Kale AU, et al. A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis. Lancet Digit Heal. 2019;1(6):E271- E297. doi:10.1016/S2589-7500(19)30123-2

31. Bai HX, Wang R, Xiong Z, et al. Artificial intelligence augmentation of radiologist performance in distinguishing COVID-19 from pneumonia of other origin at chest CT. Radiology. 2020;296(3):E156-E165. doi:10.1148/radiol.2020201491

32. Li L, Qin L, Xu Z, et al. Using artificial intelligence to detect COVID-19 and community-acquired pneumonia based on pulmonary CT: evaluation of the diagnostic accuracy. Radiology. 2020;296(2):E65-E71. doi:10.1148/radiol.2020200905

33. Rajpurkar P, Joshi A, Pareek A, et al. CheXpedition: investigating generalization challenges for translation of chest x-ray algorithms to the clinical setting. http://arxiv.org/abs/2002.11379. Updated March 11, 2020. Accessed August 24, 2020.

34. Kermany DS, Goldbaum M, Cai W, et al. Identifying medical diagnoses and treatable diseases by imagebased deep learning. Cell. 2018;172(5):1122-1131.e9. doi:10.1016/j.cell.2018.02.010

Author and Disclosure Information

Andrew Borkowski is Chief of the Molecular Diagnostics Laboratory, L. Brannon Thomas is Chief of the Microbiology Laboratory, Lauren Deland is a Research Coordinator, and Stephen Mastorides is Chief of Pathology; Narayan Viswanadhan is Assistant Chief of Radiology; all at the James A. Haley Veterans’ Hospital in Tampa, Florida. Rodney Guzman is a Cofounder of InterKnowlogy, LLC in Carlsbad, California. Andrew Borkowski and Stephen Mastorides are Professors and L. Brannon Thomas is an Assistant Professor, all in the Department of Pathology and Cell Biology, University of South Florida, Morsani College of Medicine in Tampa, Florida
Correspondence: Andrew Borkowski ([email protected])

Author disclosures
The authors report no actual or potential conflicts of interest with regard to this article.

Disclaimer
The opinions expressed herein are those of the authors and do not necessarily reflect those of Federal Practitioner, Frontline Medical Communications Inc., the US Government, or any of its agencies.

Issue
Federal Practitioner - 37(9)a
Page Number
398-404

The novel coronavirus severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), which causes the respiratory disease coronavirus disease 2019 (COVID-19), was first identified as a cluster of cases of pneumonia in Wuhan, Hubei Province, China, on December 31, 2019.1 Within a month, the disease had spread significantly, leading the World Health Organization (WHO) to designate COVID-19 a public health emergency of international concern. On March 11, 2020, the WHO declared COVID-19 a global pandemic.2 As of August 18, 2020, the virus had infected > 21 million people and caused > 750,000 deaths worldwide.3 The spread of COVID-19 has had a dramatic impact on social, economic, and health care issues throughout the world, which has been discussed elsewhere.4

Prior to this century, members of the coronavirus family had minimal impact on human health.5 In the past 20 years, however, outbreaks have highlighted the emerging importance of coronaviruses as causes of morbidity and mortality on a global scale. Although less prevalent than COVID-19, severe acute respiratory syndrome (SARS) in 2002 to 2003 and Middle East respiratory syndrome (MERS) in 2012 likely had higher mortality rates than the current pandemic.5 Based on this recent history, it is reasonable to assume that we will continue to see novel diseases with similarly significant health and societal implications. Such novel viral pathogens present numerous challenges to health care providers (HCPs), including the need for rapid diagnosis, prevention, and treatment. In the current study, we focus on diagnosis, a challenge made evident by the time required to develop rapid and effective diagnostic modalities for COVID-19.

We have previously reported the utility of artificial intelligence (AI) in the histopathologic diagnosis of cancer.6-8 AI was first described in 1956 and involves the field of computer science in which machines are trained to learn from experience.9 Machine learning (ML) is a subset of AI in which mathematical models learn from sample datasets.10 Current ML employs deep learning with neural network algorithms, which can recognize patterns and perform complex computational tasks often far faster, and with greater precision, than humans.11-13 In addition to applications in pathology, ML algorithms have both prognostic and diagnostic applications in multiple medical specialties, such as radiology, dermatology, ophthalmology, and cardiology.6 It is predicted that AI will impact almost every aspect of health care in the future.14

In this article, we examine the potential for AI to diagnose patients with COVID-19 pneumonia using chest radiographs (CXR) alone. This was done using Microsoft CustomVision (www.customvision.ai), a readily available, automated ML platform. Employing AI to both screen for and diagnose emerging health emergencies such as COVID-19 has the potential to dramatically change how we approach medical care in the future. In addition, we describe the creation of a publicly available website (interknowlogy-covid-19.azurewebsites.net) that could augment COVID-19 pneumonia CXR diagnosis.

Methods

For the training dataset, 103 CXR images of COVID-19 pneumonia were downloaded from the GitHub covid-chest-xray dataset.15 Five hundred images of non-COVID-19 pneumonia and 500 images of normal lungs were downloaded from the Kaggle RSNA Pneumonia Detection Challenge dataset.16 To balance the dataset, we expanded the COVID-19 set to 500 images by slight rotation (probability = 1, max rotation = 5) and zooming (probability = 0.5, percentage area = 0.9) of the original images using the Augmentor Python package.17
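The augmentation step can be sketched as follows, assuming the third-party Augmentor package (pip install Augmentor) and a local folder holding the source images. The folder name and the split between originals and generated samples are assumptions; the article does not state them.

```python
def samples_to_generate(n_source, target_total=500):
    """Augmented images needed so originals + samples reach target_total."""
    return max(target_total - n_source, 0)

def augment_covid_class(source_dir="covid_cxr", target_total=500):
    # Deferred import: Augmentor is a third-party package (pip install Augmentor)
    import Augmentor

    p = Augmentor.Pipeline(source_dir)
    # Slight rotation: probability = 1, max rotation = 5 (degrees each way)
    p.rotate(probability=1.0, max_left_rotation=5, max_right_rotation=5)
    # Zooming: probability = 0.5, percentage area = 0.9
    p.zoom_random(probability=0.5, percentage_area=0.9)
    # Writes the augmented images to <source_dir>/output
    p.sample(samples_to_generate(103, target_total))
```

With 103 source images, this generates 397 augmented samples, which together with the originals yields the 500-image class; one could equally sample 500 augmented images outright, since the article's exact split is not stated.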

Validation Dataset

For the validation dataset, 30 random CXR images were obtained from the US Department of Veterans Affairs (VA) picture archiving and communication system (PACS). This dataset included 10 CXR images from hospitalized patients with COVID-19, 10 pneumonia CXRs from patients without COVID-19, and 10 normal CXRs. COVID-19 diagnoses were confirmed with a positive result on the Xpert Xpress SARS-CoV-2 polymerase chain reaction (PCR) platform.18

Microsoft CustomVision

Microsoft CustomVision is an automated image classification and object detection system that is part of Microsoft Azure Cognitive Services (azure.microsoft.com). It uses a pay-as-you-go pricing model, with fees depending on computing needs and usage, and offers a free trial for 2 initial projects. The service is accessed online through an easy-to-follow graphical user interface; no coding skills are necessary.

We created a new classification project in CustomVision and chose the compact General domain for its small size and easy export to the TensorFlow.js model format. TensorFlow.js is a JavaScript library that enables dynamic download and execution of ML models. After the project was created, we uploaded our image dataset. Each class was uploaded separately and tagged with the appropriate label (covid pneumonia, non-covid pneumonia, or normal lung). The system rejected 16 COVID-19 images as duplicates, so the final CustomVision training dataset consisted of 484 images of COVID-19 pneumonia, 500 images of non-COVID-19 pneumonia, and 500 images of normal lungs. Once the images were uploaded, CustomVision trained itself on the dataset (Figure 1).
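For readers who prefer scripting to the web interface, roughly the same upload-and-train workflow can be driven through the Custom Vision training SDK (pip install azure-cognitiveservices-vision-customvision). This is an illustrative sketch, not the authors' method: the article used the GUI, and the endpoint, key, and file layout below are placeholders.

```python
LABELS = ("covid pneumonia", "non-covid pneumonia", "normal lung")

def train_classifier(endpoint, training_key, image_paths_by_label):
    # Deferred imports: third-party Azure SDK packages
    from azure.cognitiveservices.vision.customvision.training import (
        CustomVisionTrainingClient,
    )
    from azure.cognitiveservices.vision.customvision.training.models import (
        ImageFileCreateBatch,
        ImageFileCreateEntry,
    )
    from msrest.authentication import ApiKeyCredentials

    trainer = CustomVisionTrainingClient(
        endpoint, ApiKeyCredentials(in_headers={"Training-key": training_key})
    )
    # Pick a compact classification domain so the model can be exported
    # (e.g., to TensorFlow.js), mirroring the choice described in the text.
    compact = next(
        d for d in trainer.get_domains()
        if d.type == "Classification" and "compact" in d.name.lower()
    )
    project = trainer.create_project("covid-cxr", domain_id=compact.id)

    for label in LABELS:
        tag = trainer.create_tag(project.id, label)
        entries = [
            ImageFileCreateEntry(
                name=path, contents=open(path, "rb").read(), tag_ids=[tag.id]
            )
            for path in image_paths_by_label[label]
        ]
        # Uploads are limited to 64 images per batch
        for i in range(0, len(entries), 64):
            trainer.create_images_from_files(
                project.id, ImageFileCreateBatch(images=entries[i : i + 64])
            )
    return trainer.train_project(project.id)
```

The service deduplicates on upload, which is consistent with the 16 duplicate COVID-19 images the authors report being rejected.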

Website Creation

CustomVision was used to train the model. The model can be executed within CustomVision continuously, or it can be compacted and decoupled from the service; in this case, it was compacted and decoupled for use in an online application. An Angular web application was created with TensorFlow.js. When a CXR image is submitted, the model is executed within the user’s web browser and confidence values for each classification are returned. In this design, after the initial webpage and model are downloaded, the webpage no longer needs to access any server components and performs all operations in the browser. Although the solution works well on mobile phone browsers and in low-bandwidth situations, the quality of predictions may depend on the browser and device used. At no time is an image submitted to the cloud.

Results

Overall, our trained model showed 92.9% precision and 92.9% recall. Precision and recall for each label were 98.9% and 94.8%, respectively, for COVID-19 pneumonia; 91.8% and 89%, respectively, for non-COVID-19 pneumonia; and 88.8% and 95%, respectively, for normal lung (Figure 2). Next, we validated the trained model by making individual predictions on the 30 images in the VA dataset. Our model performed well, with 100% sensitivity (recall), 95% specificity, 97% accuracy, 91% positive predictive value (precision), and 100% negative predictive value (Table).
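The validation metrics can be reproduced from the implied confusion matrix. The raw counts are inferred, not stated in the article: 10 COVID-19-positive images with 100% sensitivity give TP = 10, FN = 0, and 20 negative images with 95% specificity give TN = 19, FP = 1.

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard binary classification metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),              # recall
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "ppv": tp / (tp + fp),                      # positive predictive value (precision)
        "npv": tn / (tn + fn),                      # negative predictive value
    }

m = binary_metrics(tp=10, fp=1, tn=19, fn=0)
print({k: round(v, 2) for k, v in m.items()})
# prints {'sensitivity': 1.0, 'specificity': 0.95, 'accuracy': 0.97, 'ppv': 0.91, 'npv': 1.0}
```

Rounded to the article's precision, these reproduce the reported 100% sensitivity, 95% specificity, 97% accuracy, 91% positive predictive value, and 100% negative predictive value.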

Discussion

We successfully demonstrated the potential of AI algorithms for assessing CXRs for COVID-19. We first trained the CustomVision automated image classification and object detection system to differentiate cases of COVID-19 pneumonia from pneumonia of other etiologies as well as from normal lung CXRs. We then tested our model against images from patients with known diagnoses from the James A. Haley Veterans’ Hospital in Tampa, Florida. The program achieved 100% sensitivity (recall), 95% specificity, 97% accuracy, 91% positive predictive value (precision), and 100% negative predictive value in differentiating the 3 scenarios. Using the trained ML model, we created a website that could augment COVID-19 CXR diagnosis.19 The website works on mobile as well as desktop platforms. An HCP can take a photograph of a CXR with a mobile phone or upload the image file, and the ML algorithm provides the probability of COVID-19 pneumonia, non-COVID-19 pneumonia, or normal lung (Figure 3).

Emerging diseases such as COVID-19 present numerous challenges to HCPs, governments, and businesses, as well as to individual members of society. As evidenced with COVID-19, the time from first recognition of an emerging pathogen to the development of methods for reliable diagnosis and treatment can be months, even with a concerted international effort. The gold standard for diagnosis of COVID-19 is reverse transcriptase PCR (RT-PCR); however, early RT-PCR testing produced less than optimal results.20-22 Even after the development of reliable tests, making test kits available to HCPs on an adequate scale presents an additional challenge, as was evident with COVID-19.

Use of X-ray vs Computed Tomography

The initial lack of available diagnostic RT-PCR for COVID-19 placed increased reliance on presumptive diagnosis via imaging in some situations.23 Most of the literature evaluating imaging of patients with COVID-19 focuses on chest computed tomography (CT) findings, with initial results suggesting CT was more accurate than early RT-PCR methodologies.21,22,24 The Radiological Society of North America expert consensus statement on chest CT for COVID-19 states that CT findings can even precede positivity on RT-PCR in some cases.22 However, it does not currently recommend CT as a screening tool, and the actual sensitivity and specificity of radiologists’ CT interpretation for COVID-19 are unknown.22

Characteristic CT findings include ground-glass opacities (GGOs) and consolidation, most commonly in the lung periphery, though a diffuse distribution was found in a minority of patients.21,23,25-27 Lomoro and colleagues recently summarized the CT findings from several reports, which described abnormalities as most often bilateral, peripheral, subpleural, and affecting the lower lobes.26 Not surprisingly, CT appears more sensitive than CXR at detecting changes with COVID-19; a minority of patients have been reported to exhibit CT changes before changes were visible on CXR.23,26

We focused our study on the potential of AI in the examination of CXRs of patients with COVID-19, as there are several limitations to the routine use of CT for conditions such as COVID-19. Aside from the considerably longer time required to obtain CTs, there are issues with contamination of CT suites, sometimes requiring a dedicated COVID-19 CT scanner.23,28 The time required for decontamination or the limited availability of CT suites can delay or disrupt services for patients with and without COVID-19. Because of these factors, CXR may be a better resource for minimizing the risk of infection to other patients. Accurate assessment of CXR abnormalities also may identify COVID-19 in patients whose CXR was performed for other purposes.23 CXR is more readily available than CT, especially in remote or underdeveloped areas.28 Finally, as with CT, CXR abnormalities are reported to have appeared before RT-PCR tests became positive in a minority of patients.23

CXR findings described in patients with COVID-19 are similar to those on CT and include GGOs, consolidation, and hazy increased opacities.23,25,26,28,29 As with CT, the majority of patients demonstrated greater involvement in the lower zones and periphery, and most showed bilateral involvement.23,25,26,28,29 However, while these findings are common in patients with COVID-19, they are not specific and can be seen in other conditions, such as other viral pneumonias, bacterial pneumonia, injury from drug toxicity, inhalation injury, connective tissue disease, and idiopathic conditions.

Application of AI for COVID-19

Applications of AI in interpreting radiographs of various types are numerous, and extensive literature has been written on the topic.30 Using deep learning algorithms, AI has multiple possible roles to augment traditional radiograph interpretation, including the potential for screening, triaging, and increasing the speed of diagnosis. It also can provide a rapid “second opinion” to support the radiologist’s final interpretation. In areas with critical shortages of radiologists, AI potentially can be used to render the definitive diagnosis. In COVID-19, imaging studies have been shown to correlate with disease severity and mortality, and AI could assist in monitoring the course of the disease as it progresses and potentially identify patients at greatest risk.27 Furthermore, early results from PCR have been considered suboptimal, and it is known that patients with COVID-19 can initially test negative even with reliable testing methodologies. As AI technology progresses, it could help detect and guide the triage and treatment of patients with high suspicion of COVID-19 but negative initial PCR results, or in situations where test availability is limited or results are delayed. There are numerous potential benefits should a rapid diagnostic test as simple as a CXR be able to reliably aid containment and prevention of the spread of contagions such as COVID-19 early in their course.

Few studies have assessed the use of AI in the radiologic diagnosis of COVID-19, and most of those used CT scanning. Bai and colleagues demonstrated increased accuracy, sensitivity, and specificity in distinguishing chest CTs of patients with COVID-19 from those with other types of pneumonia.21,31 A separate study demonstrated the utility of AI in differentiating COVID-19 from community-acquired pneumonia on CT.32 However, the utility of AI for CXR interpretation also has been demonstrated.14,33 Implementation of convolutional neural network layers has allowed reliable differentiation of viral and bacterial pneumonia on CXR imaging.34 Evidence suggests that there is great potential in the application of AI to the interpretation of radiographs of all types.

Finally, we have developed a publicly available website based on our studies.18 This website is for research use only, as it is based on data from our preliminary investigation. Images must have protected health information removed before uploading. The information on the website, including text, graphics, and images, is for research and may not be appropriate for all circumstances. The website does not provide medical, professional, or licensed advice and is not a substitute for consultation with an HCP. Medical advice should be sought from a qualified HCP for any questions, and the website should not be used for medical diagnosis or treatment.

Limitations

In this preliminary study, we have demonstrated the potential impact of AI on multiple aspects of patient care for emerging pathogens such as COVID-19, using a test as readily available as a CXR. However, several limitations of this investigation should be mentioned. The study is retrospective, with a limited sample size and with CXRs from patients at various stages of COVID-19 pneumonia. Also, cases of non-COVID-19 pneumonia were not stratified by type or etiology. We intended to demonstrate the potential of AI to differentiate COVID-19 pneumonia from non-COVID-19 pneumonia of any etiology, though future studies should compare COVID-19 cases with more specific types of pneumonia, such as those of bacterial or viral origin. Furthermore, the present study does not address the potential effects of additional radiographic findings from coexistent conditions, such as pulmonary edema as seen in congestive heart failure, pleural effusions (which can be seen with COVID-19 pneumonia, though rarely), and interstitial lung disease. Future studies are required to address these issues. Ultimately, prospective studies of AI-assisted radiographic interpretation in conditions such as COVID-19 are required to demonstrate the impact on diagnosis, treatment, outcome, and patient safety as these technologies are implemented.

Conclusions

We used a readily available commercial platform to demonstrate the potential of AI to assist in the diagnosis of COVID-19 pneumonia on CXR images. While this technology has numerous applications in radiology, we have focused on its potential impact on future world health crises such as COVID-19. The findings have implications for screening and triage, initial diagnosis, monitoring disease progression, and identifying patients at increased risk of morbidity and mortality. Based on these data, a website was created to demonstrate how such technologies could be shared and distributed to combat entities such as COVID-19 going forward. Our study offers a small window into how AI will likely dramatically change the practice of medicine in the future.

The novel coronavirus severe acute respiratory syndrome coronavirus 2 (SARSCoV- 2), which causes the respiratory disease coronavirus disease-19 (COVID- 19), was first identified as a cluster of cases of pneumonia in Wuhan, Hubei Province of China on December 31, 2019.1 Within a month, the disease had spread significantly, leading the World Health Organization (WHO) to designate COVID-19 a public health emergency of international concern. On March 11, 2020, the WHO declared COVID-19 a global pandemic.2 As of August 18, 2020, the virus has infected > 21 million people, with > 750,000 deaths worldwide.3 The spread of COVID-19 has had a dramatic impact on social, economic, and health care issues throughout the world, which has been discussed elsewhere.4

Prior to the this century, members of the coronavirus family had minimal impact on human health.5 However, in the past 20 years, outbreaks have highlighted an emerging importance of coronaviruses in morbidity and mortality on a global scale. Although less prevalent than COVID-19, severe acute respiratory syndrome (SARS) in 2002 to 2003 and Middle East respiratory syndrome (MERS) in 2012 likely had higher mortality rates than the current pandemic.5 Based on this recent history, it is reasonable to assume that we will continue to see novel diseases with similar significant health and societal implications. The challenges presented to health care providers (HCPs) by such novel viral pathogens are numerous, including methods for rapid diagnosis, prevention, and treatment. In the current study, we focus on diagnosis issues, which were evident with COVID-19 with the time required to develop rapid and effective diagnostic modalities.

We have previously reported the utility of using artificial intelligence (AI) in the histopathologic diagnosis of cancer.6-8 AI was first described in 1956 and involves the field of computer science in which machines are trained to learn from experience.9 Machine learning (ML) is a subset of AI and is achieved by using mathematic models to compute sample datasets.10 Current ML employs deep learning with neural network algorithms, which can recognize patterns and achieve complex computational tasks often far quicker and with increased precision than can humans.11-13 In addition to applications in pathology, ML algorithms have both prognostic and diagnostic applications in multiple medical specialties, such as radiology, dermatology, ophthalmology, and cardiology.6 It is predicted that AI will impact almost every aspect of health care in the future.14

In this article, we examine the potential for AI to diagnose patients with COVID-19 pneumonia using chest radiographs (CXR) alone. This is done using Microsoft CustomVision (www.customvision.ai), a readily available, automated ML platform. Employing AI to both screen and diagnose emerging health emergencies such as COVID-19 has the potential to dramatically change how we approach medical care in the future. In addition, we describe the creation of a publicly available website (interknowlogy-covid-19 .azurewebsites.net) that could augment COVID-19 pneumonia CXR diagnosis.

Methods

For the training dataset, 103 CXR images of COVID-19 were downloaded from GitHub covid-chest-xray dataset.15 Five hundred images of non-COVID-19 pneumonia and 500 images of the normal lung were downloaded from the Kaggle RSNA Pneumonia Detection Challenge dataset.16 To balance the dataset, we expanded the COVID-19 dataset to 500 images by slight rotation (probability = 1, max rotation = 5) and zooming (probability = 0.5, percentage area = 0.9) of the original images using the Augmentor Python package.17

Validation Dataset

For the validation dataset 30 random CXR images were obtained from the US Department of Veterans Affairs (VA) PACS (picture archiving and communication system). This dataset included 10 CXR images from hospitalized patients with COVID-19, 10 CXR pneumonia images from patients without COVID-19, and 10 normal CXRs. COVID-19 diagnoses were confirmed with a positive test result from the Xpert Xpress SARS-CoV-2 polymerase chain reaction (PCR) platform.18

 

 

Microsoft Custom

Vision Microsoft CustomVision is an automated image classification and object detection system that is a part of Microsoft Azure Cognitive Services (azure.microsoft.com). It has a pay-as-you-go model with fees depending on the computing needs and usage. It offers a free trial to users for 2 initial projects. The service is online with an easy-to-follow graphical user interface. No coding skills are necessary.

We created a new classification project in CustomVision and chose a compact general domain for small size and easy export to TensorFlow. js model format. TensorFlow.js is a JavaScript library that enables dynamic download and execution of ML models. After the project was created, we proceeded to upload our image dataset. Each class was uploaded separately and tagged with the appropriate label (covid pneumonia, non-covid pneumonia, or normal lung). The system rejected 16 COVID-19 images as duplicates. The final CustomVision training dataset consisted of 484 images of COVID-19 pneumonia, 500 images of non-COVID-19 pneumonia, and 500 images of normal lungs. Once uploaded, CustomVision self-trains using the dataset upon initiating the program (Figure 1).

 

Website Creation

CustomVision was used to train the model. It can be used to execute the model continuously, or the model can be compacted and decoupled from CustomVision. In this case, the model was compacted and decoupled for use in an online application. An Angular online application was created with TensorFlow.js. Within a user’s web browser, the model is executed when an image of a CXR is submitted. Confidence values for each classification are returned. In this design, after the initial webpage and model is downloaded, the webpage no longer needs to access any server components and performs all operations in the browser. Although the solution works well on mobile phone browsers and in low bandwidth situations, the quality of predictions may depend on the browser and device used. At no time does an image get submitted to the cloud.

Result

Overall, our trained model showed 92.9% precision and recall. Precision and recall results for each label were 98.9% and 94.8%, respectively for COVID-19 pneumonia; 91.8% and 89%, respectively, for non- COVID-19 pneumonia; and 88.8% and 95%, respectively, for normal lung (Figure 2). Next, we proceeded to validate the training model on the VA data by making individual predictions on 30 images from the VA dataset. Our model performed well with 100% sensitivity (recall), 95% specificity, 97% accuracy, 91% positive predictive value (precision), and 100% negative predictive value (Table).

 

Discussion

We successfully demonstrated the potential of using AI algorithms in assessing CXRs for COVID-19. We first trained the CustomVision automated image classification and object detection system to differentiate cases of COVID-19 from pneumonia from other etiologies as well as normal lung CXRs. We then tested our model against known patients from the James A. Haley Veterans’ Hospital in Tampa, Florida. The program achieved 100% sensitivity (recall), 95% specificity, 97% accuracy, 91% positive predictive value (precision), and 100% negative predictive value in differentiating the 3 scenarios. Using the trained ML model, we proceeded to create a website that could augment COVID-19 CXR diagnosis.19 The website works on mobile as well as desktop platforms. A health care provider can take a CXR photo with a mobile phone or upload the image file. The ML algorithm would provide the probability of COVID-19 pneumonia, non-COVID-19 pneumonia, or normal lung diagnosis (Figure 3).

Emerging diseases such as COVID-19 present numerous challenges to HCPs, governments, and businesses, as well as to individual members of society. As evidenced with COVID-19, the time from first recognition of an emerging pathogen to the development of methods for reliable diagnosis and treatment can be months, even with a concerted international effort. The gold standard for diagnosis of COVID-19 is by reverse transcriptase PCR (RT-PCR) technologies; however, early RT-PCR testing produced less than optimal results.20-22 Even after the development of reliable tests for detection, making test kits readily available to health care providers on an adequate scale presents an additional challenge as evident with COVID-19.

Use of X-ray vs Computed Tomography

The lack of availability of diagnostic RTPCR with COVID-19 initially placed increased reliability on presumptive diagnoses via imaging in some situations.23 Most of the literature evaluating radiographs of patients with COVID-19 focuses on chest computed tomography (CT) findings, with initial results suggesting CT was more accurate than early RT-PCR methodologies.21,22,24 The Radiological Society of North America Expert consensus statement on chest CT for COVID-19 states that CT findings can even precede positivity on RT-PCR in some cases.22 However, currently it does not recommend the use of CT scanning as a screening tool. Furthermore, the actual sensitivity and specificity of CT interpretation by radiologists for COVID-19 are unknown.22

 

 

Characteristic CT findings include ground-glass opacities (GGOs) and consolidation most commonly in the lung periphery, though a diffuse distribution was found in a minority of patients.21,23,25-27 Lomoro and colleagues recently summarized the CT findings from several reports that described abnormalities as most often bilateral and peripheral, subpleural, and affecting the lower lobes.26 Not surprisingly, CT appears more sensitive at detecting changes with COVID-19 than does CXR, with reports that a minority of patients exhibited CT changes before changes were visible on CXR.23,26

We focused our study on the potential of AI in the examination of CXRs in patients with COVID-19, as there are several limitations to the routine use of CT scans with conditions such as COVID-19. Aside from the more considerable time required to obtain CTs, there are issues with contamination of CT suites, sometimes requiring a dedicated COVID-19 CT scanner.23,28 The time constraints of decontamination or limited utilization of CT suites can delay or disrupt services for patients with and without COVID-19. Because of these factors, CXR may be a better resource to minimize the risk of infection to other patients. Also, accurate assessment of abnormalities on CXR for COVID-19 may identify patients in whom the CXR was performed for other purposes.23 CXR is more readily available than CT, especially in more remote or underdeveloped areas.28 Finally, as with CT, CXR abnormalities are reported to have appeared before RT-PCR tests became positive for a minority of patients.23

CXR findings described in patients with COVID-19 are similar to those of CT and include GGOs, consolidation, and hazy increased opacities.23,25,26,28,29 Like CT, the majority of patients who received CXR demonstrated greater involvement in the lower zones and peripherally.23,25,26,28,29 Most patients showed bilateral involvement. However, while these findings are common in patients with COVID-19, they are not specific and can be seen in other conditions, such as other viral pneumonia, bacterial pneumonia, injury from drug toxicity, inhalation injury, connective tissue disease, and idiopathic conditions.

Application of AI for COVID-19

Applications of AI in interpreting radiographs of various types are numerous, and extensive literature has been written on the topic.30 Using deep learning algorithms, AI has multiple possible roles to augment traditional radiograph interpretation. These include the potential for screening, triaging, and increasing the speed to render diagnoses. It also can provide a rapid “second opinion” to the radiologist to support the final interpretation. In areas with critical shortages of radiologists, AI potentially can be used to render the definitive diagnosis. In COVID- 19, imaging studies have been shown to correlate with disease severity and mortality, and AI could assist in monitoring the course of the disease as it progresses and potentially identify patients at greatest risk.27 Furthermore, early results from PCR have been considered suboptimal, and it is known that patients with COVID-19 can test negative initially even by reliable testing methodologies. As AI technology progresses, interpretation can detect and guide triage and treatment of patients with high suspicions of COVID-19 but negative initial PCR results, or in situations where test availability is limited or results are delayed. There are numerous potential benefits should a rapid diagnostic test as simple as a CXR be able to reliably impact containment and prevention of the spread of contagions such as COVID- 19 early in its course.

Few studies have assessed using AI in the radiologic diagnosis of COVID-19, most of which use CT scanning. Bai and colleagues demonstrated increased accuracy, sensitivity, and specificity in distinguishing chest CTs of COVID-19 patients from other types of pneumonia.21,31 A separate study demonstrated the utility of using AI to differentiate COVID-19 from community-acquired pneumonia with CT.32 However, the effective utility of AI for CXR interpretation also has been demonstrated.14,33 Implementation of convolutional neural network layers has allowed for reliable differentiation of viral and bacterial pneumonia with CXR imaging.34 Evidence suggests that there is great potential in the application of AI in the interpretation of radiographs of all types.

Finally, we have developed a publicly available website based on our studies.18 The website is for research use only, as it is based on data from our preliminary investigation, and images must have protected health information removed before being uploaded. The information on the website, including text, graphics, and images, is for research purposes and may not be appropriate for all circumstances. The website does not provide medical, professional, or licensed advice and is not a substitute for consultation with a health care professional (HCP). Medical advice should be sought from a qualified HCP for any questions, and the website should not be used for medical diagnosis or treatment.

Limitations

In our preliminary study, we demonstrated the potential impact AI can have on multiple aspects of patient care for emerging pathogens such as COVID-19 using a test as readily available as a CXR. However, several limitations of this investigation should be mentioned. The study is retrospective, with a limited sample size and with radiographs from patients at various stages of COVID-19 pneumonia. Also, cases of non-COVID-19 pneumonia were not stratified into different types or etiologies. We intended to demonstrate the potential of AI to differentiate COVID-19 pneumonia from non-COVID-19 pneumonia of any etiology, though future studies should compare COVID-19 cases with more specific types of pneumonia, such as those of bacterial or viral origin. Furthermore, the present study does not address the potential effects of additional radiographic findings from coexistent conditions, such as pulmonary edema in congestive heart failure, pleural effusions (which can be seen with COVID-19 pneumonia, though rarely), and interstitial lung disease. Future studies are required to address these issues. Ultimately, prospective studies of AI-assisted radiographic interpretation in conditions such as COVID-19 are required to demonstrate the impact on diagnosis, treatment, outcomes, and patient safety as these technologies are implemented.

Conclusions

We have used a readily available commercial platform to demonstrate the potential of AI to assist in the diagnosis of COVID-19 pneumonia on CXR images. While this technology has numerous applications in radiology, we have focused on its potential impact on future world health crises such as COVID-19. The findings have implications for screening and triage, initial diagnosis, monitoring of disease progression, and identification of patients at increased risk of morbidity and mortality. Based on these data, we created a website to demonstrate how such technologies could be shared and distributed to combat entities such as COVID-19 moving forward. Our study offers a small window into how AI will likely dramatically change the practice of medicine in the future.

References

1. World Health Organization. Coronavirus disease (COVID-19) pandemic. https://www.who.int/emergencies/diseases/novel-coronavirus-2019. Updated August 23, 2020. Accessed August 24, 2020.

2. World Health Organization. WHO Director-General’s opening remarks at the media briefing on COVID-19 - 11 March 2020. https://www.who.int/dg/speeches/detail/who-director-general-s-opening-remarks-at-the-media-briefing-on-covid-19---11-march-2020. Published March 11, 2020. Accessed August 24, 2020.

3. World Health Organization. Coronavirus disease (COVID-19): situation report--209. https://www.who.int/docs/default-source/coronaviruse/situation-reports/20200816-covid-19-sitrep-209.pdf. Updated August 16, 2020. Accessed August 24, 2020.

4. Nicola M, Alsafi Z, Sohrabi C, et al. The socio-economic implications of the coronavirus pandemic (COVID-19): a review. Int J Surg. 2020;78:185-193. doi:10.1016/j.ijsu.2020.04.018

5. da Costa VG, Moreli ML, Saivish MV. The emergence of SARS, MERS and novel SARS-2 coronaviruses in the 21st century. Arch Virol. 2020;165(7):1517-1526. doi:10.1007/s00705-020-04628-0

6. Borkowski AA, Wilson CP, Borkowski SA, et al. Comparing artificial intelligence platforms for histopathologic cancer diagnosis. Fed Pract. 2019;36(10):456-463.

7. Borkowski AA, Wilson CP, Borkowski SA, Thomas LB, Deland LA, Mastorides SM. Apple machine learning algorithms successfully detect colon cancer but fail to predict KRAS mutation status. http://arxiv.org/abs/1812.04660. Updated January 15, 2019. Accessed August 24, 2020.

8. Borkowski AA, Wilson CP, Borkowski SA, Deland LA, Mastorides SM. Using Apple machine learning algorithms to detect and subclassify non-small cell lung cancer. http://arxiv.org/abs/1808.08230. Updated January 15, 2019. Accessed August 24, 2020.

9. Moor J. The Dartmouth College artificial intelligence conference: the next fifty years. AI Mag. 2006;27(4):87. doi:10.1609/AIMAG.V27I4.1911

10. Samuel AL. Some studies in machine learning using the game of checkers. IBM J Res Dev. 1959;3(3):210-229. doi:10.1147/rd.33.0210

11. Sarle WS. Neural networks and statistical models. https://people.orie.cornell.edu/davidr/or474/nn_sas.pdf. Published April 1994. Accessed August 24, 2020.

12. Schmidhuber J. Deep learning in neural networks: an overview. Neural Netw. 2015;61:85-117. doi:10.1016/j.neunet.2014.09.003

13. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521(7553):436-444. doi:10.1038/nature14539

14. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. 2019;25(1):44-56. doi:10.1038/s41591-018-0300-7

15. Cohen JP, Morrison P, Dao L. COVID-19 Image Data Collection. Published online March 25, 2020. Accessed May 13, 2020. http://arxiv.org/abs/2003.11597

16. Radiological Society of North America. RSNA pneumonia detection challenge. https://www.kaggle.com/c/rsna-pneumonia-detection-challenge. Accessed August 24, 2020.

17. Bloice MD, Roth PM, Holzinger A. Biomedical image augmentation using Augmentor. Bioinformatics. 2019;35(21):4522-4524. doi:10.1093/bioinformatics/btz259

18. Cepheid. Xpert Xpress SARS-CoV-2. https://www.cepheid.com/coronavirus. Accessed August 24, 2020.

19. Interknowlogy. COVID-19 detection in chest X-rays. https://interknowlogy-covid-19.azurewebsites.net. Accessed August 27, 2020.

20. Bernheim A, Mei X, Huang M, et al. Chest CT Findings in Coronavirus Disease-19 (COVID-19): Relationship to Duration of Infection. Radiology. 2020;295(3):200463. doi:10.1148/radiol.2020200463

21. Ai T, Yang Z, Hou H, et al. Correlation of chest CT and RT-PCR testing for coronavirus disease 2019 (COVID-19) in China: a report of 1014 cases. Radiology. 2020;296(2):E32-E40. doi:10.1148/radiol.2020200642

22. Simpson S, Kay FU, Abbara S, et al. Radiological Society of North America Expert Consensus Statement on Reporting Chest CT Findings Related to COVID-19. Endorsed by the Society of Thoracic Radiology, the American College of Radiology, and RSNA - Secondary Publication. J Thorac Imaging. 2020;35(4):219-227. doi:10.1097/RTI.0000000000000524

23. Wong HYF, Lam HYS, Fong AH, et al. Frequency and distribution of chest radiographic findings in patients positive for COVID-19. Radiology. 2020;296(2):E72-E78. doi:10.1148/radiol.2020201160

24. Fang Y, Zhang H, Xie J, et al. Sensitivity of chest CT for COVID-19: comparison to RT-PCR. Radiology. 2020;296(2):E115-E117. doi:10.1148/radiol.2020200432

25. Chen N, Zhou M, Dong X, et al. Epidemiological and clinical characteristics of 99 cases of 2019 novel coronavirus pneumonia in Wuhan, China: a descriptive study. Lancet. 2020;395(10223):507-513. doi:10.1016/S0140-6736(20)30211-7

26. Lomoro P, Verde F, Zerboni F, et al. COVID-19 pneumonia manifestations at the admission on chest ultrasound, radiographs, and CT: single-center study and comprehensive radiologic literature review. Eur J Radiol Open. 2020;7:100231. doi:10.1016/j.ejro.2020.100231

27. Salehi S, Abedi A, Balakrishnan S, Gholamrezanezhad A. Coronavirus disease 2019 (COVID-19) imaging reporting and data system (COVID-RADS) and common lexicon: a proposal based on the imaging data of 37 studies. Eur Radiol. 2020;30(9):4930-4942. doi:10.1007/s00330-020-06863-0

28. Jacobi A, Chung M, Bernheim A, Eber C. Portable chest X-ray in coronavirus disease-19 (COVID-19): a pictorial review. Clin Imaging. 2020;64:35-42. doi:10.1016/j.clinimag.2020.04.001

29. Bhat R, Hamid A, Kunin JR, et al. Chest imaging in patients hospitalized with COVID-19 infection - a case series. Curr Probl Diagn Radiol. 2020;49(4):294-301. doi:10.1067/j.cpradiol.2020.04.001

30. Liu X, Faes L, Kale AU, et al. A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis. Lancet Digit Health. 2019;1(6):E271-E297. doi:10.1016/S2589-7500(19)30123-2

31. Bai HX, Wang R, Xiong Z, et al. Artificial intelligence augmentation of radiologist performance in distinguishing COVID-19 from pneumonia of other origin at chest CT. Radiology. 2020;296(3):E156-E165. doi:10.1148/radiol.2020201491

32. Li L, Qin L, Xu Z, et al. Using artificial intelligence to detect COVID-19 and community-acquired pneumonia based on pulmonary CT: evaluation of the diagnostic accuracy. Radiology. 2020;296(2):E65-E71. doi:10.1148/radiol.2020200905

33. Rajpurkar P, Joshi A, Pareek A, et al. CheXpedition: investigating generalization challenges for translation of chest x-ray algorithms to the clinical setting. http://arxiv.org/abs/2002.11379. Updated March 11, 2020. Accessed August 24, 2020.

34. Kermany DS, Goldbaum M, Cai W, et al. Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell. 2018;172(5):1122-1131.e9. doi:10.1016/j.cell.2018.02.010

Issue
Federal Practitioner - 37(9)a
Page Number
398-404