Video
Author(s):
Andrew Lee, MD, and Andrew Carey, MD, sit down on another episode of the NeuroOp Guru to discuss whether artificial intelligence is ready for the clinic and the emergency room.
Editor's note - This transcript has been edited for clarity.
Hello and welcome to yet another edition of the NeuroOp Guru. I'm here with my good friend Drew Carey from Johns Hopkins and the Wilmer Eye Institute. Hi Drew.
Hi Andy, happy to be here.
And today we're going to be talking about the question "Is artificial intelligence for optic disc photos ready for the clinic and the ER?" So Drew, maybe you could just give us a little background on why this is even a question.
Well, I think for a long time, we've come to realize that colleagues without ophthalmologic training, like ER doctors, primary care doctors, and neurologists, are not so good at fundoscopy, looking at the back of the eye, and specifically the optic nerve. You know, they don't do it through dilated pupils like ophthalmologists do, and they don't do it a lot. So it's not a skill set that they've kept up, if they ever really refined it in medical school. But there are some conditions where it is really important to look in the back of the eye and at the optic nerve, especially if you have a patient coming in with headaches and vision changes. We want to know: is the optic nerve swollen? Could this be ischemic optic neuropathy, an emergency like giant cell arteritis, or papilledema, where they could have some kind of burgeoning intracranial CNS process going on? And we can't get an ophthalmologist in every emergency room in America, but it would be feasible to put in a camera. And then the question is, well, who's going to look at the picture? It should be somebody who knows what an optic nerve is supposed to look like. Or could it be an artificial intelligence that's been trained? And so I think that was the major motivation for this project: trying to improve the diagnostic value of fundoscopy in conditions where it would be desired.
And so this AI was trained on thousands of photos that they just loaded in there, teaching it what it's supposed to look for?
Yeah, so there's been, you know, a lot of work in AI and subtypes of AI, including deep learning systems and machine learning. And it does take thousands and thousands of images that have been carefully combed through and labeled with what we call ground truth, where we know exactly what that picture represents, to train the system.
Kind of like a resident has to see thousands of cases during their training in order to, you know, develop good clinical intuition and an understanding of what's going on. So for this, this group, the BONSAI consortium, based out of Singapore, asked for pictures from neuro-ophthalmologists all across the world to try and develop a diverse training set, with patients where they knew what the diagnosis was, they knew what that optic nerve was showing. And that's what they trained it on.
And so maybe you could just walk us through these results from BONSAI. You can see that it was already 168 times faster, but let's see if it's better. We know it can be faster. But is it better? Maybe you could walk us through A and B here in terms of error rate?
Yeah, absolutely. So what they did is they took 800 new photos that the machine had never seen before, which is really important. You don't want to ask the artificial intelligence to answer a question that it already knows the answer to. And so they showed those to BONSAI, and then they showed them to 30 different clinicians: six were general ophthalmologists, six were optometrists, six neurologists, six internal medicine doctors, and six emergency medicine doctors.
And they asked them to classify these optic nerve photos as normal, papilledema, or other. And they split the doctors into two different groups. They said these are folks with ophthalmic expertise, the ophthalmologists and optometrists, and these are the other folks, the neurologists, internal medicine, and emergency medicine doctors. And so in A, they looked at the error rate for doctors looking at one photo of one eye, so they didn't get the benefit of two eyes. They said the error rate was about 25% for doctors with ophthalmic expertise. And for doctors without ophthalmic expertise it was close to 45%.
And the deep learning system was about 16%, compared to what we knew the actual photo showed. We know that the machine is really good; you know, that's what it was trained to do. And then in B they broke it down: these are the ophthalmologists, optometrists, neurologists, internists, and the emergency medicine doctors. The ophthalmologists and optometrists were both very similar at about 25%, which is what we saw when they were grouped together. And then the neurologists were, you know, not quite as good, running around 38%. While the internal medicine doctors, and I don't remember the last time my eye was looked at in a primary care office visit, were at 43%.
And then the emergency medicine doctors were about 45%. And they didn't even have to look inside the eye; this is a good quality fundus photo that we were able to just give the doctor and say, you know, this is what the optic nerve looks like. And again, the deep learning system was running around 16% across, you know, all the pictures. So that's the comparison, which I think is really good. And we know that, right, it's a machine, it doesn't have to stop and think about it. It doesn't have to go through, okay, what's this blood vessel doing? What's that blood vessel doing? It just looks at it, runs it through the algorithm, and takes about 25 seconds to look at all 800 photos.
Versus 70 minutes for the doctors, and I think 70 minutes to look at 800 photos is still pretty good. So that's what we found out. The BONSAI system had significantly higher accuracies in 100% of the papilledema cases, 87% of the normal cases, and 93% of the other cases, compared to the clinicians. So it's really good. I don't think it's ready to replace doctors, you know, neuro-ophthalmologists, because it's not perfect. And there's a lot of other clinical information.
We all know how important the history is in neuro-ophthalmology, and it can't take one. But it could really help to risk stratify: you know, this is somebody who really needs to see neuro-ophthalmology, or we need to get an ophthalmologist in here in person to look at the patient; or no, this is normal; or this is somebody who needs to proceed to neuroimaging and lumbar puncture, you know, even if we can't get an ophthalmologist in here.
So do you think it's more like decision support right now, like it helps you make a decision? Or do you think it's not even that?
I think that's where it would be, you know, if this could be clinically implemented into emergency rooms and neurologists' offices. You know, every patient who comes in with a headache gets their blood pressure checked to make sure it's not a hypertensive emergency. They should also get a photo of their optic nerve to make sure it's not elevated intracranial pressure or a hypertensive emergency.
And then, you know, the doctor can look at it. And the other thing that we know about the AI, compared to a doctor, is it's not just a yes or no; it also gives probabilities. It'll say I'm 100% certain this is normal. Or it'll say I'm 100% certain this is papilledema. Or it might say it's probably papilledema, but I'm only 65% certain.
You say, okay, well, let's get some more data. Let's get an ophthalmologist in here to look at both eyes and ask some important questions, like: Do you have headaches? Do you have whooshing sounds in your ears? Are you having transient visual obscurations when you're bending over or coughing?
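The kind of probability-driven decision support Drew describes could be sketched as a simple rule layered on top of the classifier's output. The three labels mirror the study's categories, but the confidence threshold and the suggested next steps below are purely illustrative assumptions, not values from the BONSAI paper:

```python
# Hypothetical triage rule for an optic disc photo classifier.
# The labels ("normal", "papilledema", "other") follow the study's
# three categories; the 0.90 threshold and the next-step strings are
# illustrative assumptions only.

def triage(label: str, probability: float) -> str:
    """Map a disc-photo classification and its confidence to a suggested next step."""
    if label == "normal" and probability >= 0.90:
        return "routine follow-up"
    if label == "papilledema" and probability >= 0.90:
        return "urgent workup: neuroimaging and possible lumbar puncture"
    # Low-confidence calls and "other" findings go to a human expert,
    # who can examine both eyes and take a history.
    return "in-person ophthalmology evaluation"
```

So a "probably papilledema, but only 65% certain" output would route to an in-person exam rather than straight to a workup, which is the scenario described above.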
Well, so maybe the answer is stay tuned to this channel. But it certainly sounds like the machine is faster. And maybe even better than the doctors. The question is, is it cheaper?
Well, you know, a lot of emergency rooms don't have an ophthalmologist on call. And if you're asking how much it's going to cost to pay somebody to cover call, they're not going to do it for free. And, you know, we could bill for photos, so that might be revenue generating as opposed to a revenue loss. I think the big question is regulatory.
You know, I think in the United States we have one FDA-approved AI system that I'm aware of, which is for retinal screening for diabetic retinopathy. They're also looking at implementing AI in neuroimaging for CT scans, to help triage: this is a CT scan we need the neuroradiologist to look at right now, or put this one at the end of the pile to finish by the end of their shift. Yeah, cost is a big question. And it still has to go through FDA approval. And then, you know, it's still wrong 16% of the time. Who's liable when it's wrong? What's the safety mechanism for the patient?
But compared to the safety mechanism without it, where either nobody's looking or somebody's looking who's going to be wrong, like, half the time, I'd say, you know, if I was in the emergency room, have the machine take a picture and tell me how I'm doing.
Well, Drew, as always a pleasure to chat with you. And that concludes yet another edition of the NeuroOp Guru. We'll see you guys next time.