At the Envision Summit 2025 in San Juan, Puerto Rico, Sharon Fekrat, MD, FACS, FASRS talked about multi-modal retinal and choroidal imaging to diagnose and identify neurodegenerative diseases using machine learning.
Editor's note: The transcript below has been lightly edited for clarity.
My name is Sharon Fekrat, and I am a retina specialist at the Duke University School of Medicine. I am vice chair of faculty affairs and professor of ophthalmology and neurology. I'm also proud to say that I am director of the iMIND Research Group.
Today I had a chance to talk about our research, multi-modal retinal and choroidal imaging to diagnose and identify neurodegenerative diseases using machine learning. And so we have a very exciting group of students and residents, and we started our research in 2017 when we had these 96-year-old identical twins in my clinic. One of them had very advanced Alzheimer's disease, and the other one was cognitively normal—using a smartphone and driving. And I knew that this was an opportunity to take pictures of the retina and look for differences. And boy, did we find some differences.
The twin with advanced Alzheimer's disease had markedly decreased vessel density in her retina, and so we knew we were on to something. We have since developed a convolutional neural network that can distinguish Alzheimer's disease from normal cognition. We also have a convolutional neural network that can distinguish mild cognitive impairment from normal cognition. And very excitingly, we now have one that can identify Parkinson's disease in comparison with those who are cognitively normal.
So a lot of exciting work, and I was really glad to share it with everyone. You know, our group is very excited to use machine learning, because traditional statistics alone, looking at the quantitative metrics from retinal images such as OCT and OCT angiography, show us differences between the two groups, but we're not exactly clear: should we be looking at this part of the retina or a different part? What metrics should we be looking at? And so machine learning sort of levels that playing field. We're looking at attention maps and trying to figure out what exactly the machine learning models are looking at on the image inputs. But yes, attention maps are still something that is very novel to many of us.