Artificial intelligence (AI) has drawn interest in ophthalmology for its potential to track disease trends in huge populations, such as the 38.4 million people in the United States with diabetes who are at risk for diabetic eye disease. However, a recent study using AI to detect diabetic retinopathy from retinal photo screenings has found wide disparities in the quality of data being fed into the algorithm.
Screening photos captured in nine primary care settings were three times more likely to be unusable than those obtained in two ophthalmology clinics, a study at Temple University in Philadelphia found. The results of the new research were reported at the Association for Research in Vision and Ophthalmology (ARVO) 2024 annual meeting.
“AI-assisted diabetic retinopathy screenings were more successful when completed in the ophthalmology clinic setting compared to the primary care setting,” study leader Madelyn Class, a medical student at Temple, told this news organization. One key difference, Ms. Class said, was that the specialty clinics used a photographer trained in capturing ophthalmic images, while the primary care sites had medical assistants taking the photos.
Challenges of Screening in Primary Care
The American Diabetes Association acknowledged in a 2017 position statement that retinal photography has the potential to bring screening into settings where optometrists or ophthalmologists are unavailable. This study showed the potential may not yet be realized.
In the primary care setting, 42.5% of retinal photos were ungradable compared with 14.5% in the specialty settings.
The number of patients diagnosed with more-than-mild diabetic retinopathy also varied significantly between the two settings — 13% in primary care and 24% in ophthalmology — as did the rates of follow-up appointments: 58% and 80%, respectively.
“It seems user error played a role in the quality of photographs that were taken,” Ms. Class said. “Some of the images we received from the primary care settings were actually of the eyelid, or even the curtains on the wall, rather than the fundus.
“All the camera operators in the study received training on the imaging device,” Ms. Class added. “This suggests that some of the photographers were rushed, out of practice, or simply no longer interested in taking photos,” she said. “Apparently, we will have to continuously monitor the performance of each photographer to ensure that quality photos are being taken.”
The findings may also point to the need for using different equipment for screening in primary care, Ms. Class added. “Robotic as opposed to manual cameras may help eliminate some of the user error that was experienced with primary care screenings,” she said.
Need for Training ‘Fixable’
These findings demonstrate the challenges of capturing usable retinal images outside of an eye care professional’s office, according to Jennifer Lim, MD, director of the retina service at the University of Illinois Chicago.
“This study illustrates that implementation is the rub of AI,” Dr. Lim told this news organization. “Getting primary care doctors and clinics to want to adopt and figure out how to implement AI screening [for diabetic retinopathy] in a healthcare system is difficult, so I applaud the Temple University system for trying to integrate retinal photography-based AI screening into the primary care outpatient centers and comparing outcomes to the ophthalmology clinics.”
The study showed that photographers need not only initial training but also monitoring to avoid ungradable images, Dr. Lim added, a problem that is “fixable.”
“It’s going to take a lot of work to get the message out to the primary care practices that these autonomous, cloud-based systems are available and effective for detecting retinopathy,” she said.
But the effort is worth it, she added: “It doesn’t take much time to take these photos for diabetic retinopathy screening, and the potential benefits are huge because the earlier you diagnose diabetic retinopathy that’s more than mild, the more likely the patient can be sent for eye care in a timely fashion and thus prevent visual loss from diabetic retinopathy.”
Ms. Class had no relevant disclosures. Dr. Lim disclosed a past relationship with Eyenuk, the maker of retinal screening cameras.
A version of this article appeared on Medscape.com.