Artificial intelligence will bring ‘reality’ to glaucoma diagnosis

Robert Chang, MD, outlines how artificial intelligence might work in glaucoma.

Could a computer one day beat the best ophthalmologists in diagnosing glaucoma? Not yet, but that day may be just around the corner.

Computers using artificial intelligence (AI) have already equaled dermatologists in diagnosing skin cancer and nearly matched retina specialists in recognizing diabetic retinopathy from fundus images.

In his presentation, “New Innovations in Hacking Glaucoma,” during the Glaucoma Symposium at the 2017 Glaucoma 360 meeting, Robert Chang, MD, an assistant professor of ophthalmology, Stanford University, outlined how the technology might work. Dr. Chang is developing AI for glaucoma.

The technology has made startling strides in recent years, Dr. Chang said. He pointed to the recent success of a computer program in beating champion poker players.

“We thought with missing information and bluffing it would be very hard for a machine to beat the world’s best poker players,” Dr. Chang said. “AI didn’t just beat the players, it crushed them. This is happening in every gaming field.”

Power and data grow

The success of these programs stems from rapid growth in computing power and the availability of massive amounts of data, said Dr. Chang. In the past, most AI programs used a pattern recognition technique in which programmers specified the key features of the object to be recognized and instructed computers to look for them. But this approach proved cumbersome.

“As deep learning came about, it was decided that you didn’t have to identify what were the key features, you just needed enough training examples,” Dr. Chang explained. “Then, the machine would be able to identify statistically what were those key features.

“It doesn’t stop there,” he added. “You can keep feeding the algorithm more and more data so it can get more accurate.”

For example, in the past, a programmer might have taught a computer to recognize an Audi A7 by identifying a combination of vertical and horizontal lines unique to that model of car. Now programmers give computers thousands of photographs and label the Audi A7s among them, Dr. Chang said.

The computer itself picks out distinguishing characteristics and tests them to see how many of the Audi A7s have those features. It may find features in this way that a human would not have noticed. This is more similar to the way human beings think, he said.
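To make the contrast concrete, the following is a minimal sketch of what "just feeding labeled examples" looks like, written in PyTorch (an assumption; the talk named no framework). The tiny network, the random placeholder images, and the Audi-or-not labels are all hypothetical stand-ins for a real labeled photo collection.

```python
# Minimal sketch: deep learning replaces hand-coded rules with labeled examples.
# Instead of telling the computer which lines define an Audi A7, we show it
# labeled photos and let it learn the distinguishing features itself.
import torch
import torch.nn as nn

# A tiny convolutional network; real systems use much deeper architectures.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # learns low-level visual features
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # combines them into higher-level features
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 2),                             # two classes: "Audi A7" vs. "not"
)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Placeholder batch standing in for a real labeled photo dataset.
images = torch.randn(8, 3, 64, 64)   # 8 RGB photos, 64x64 pixels
labels = torch.randint(0, 2, (8,))   # 1 = Audi A7, 0 = anything else

# One training step: the network adjusts its own feature detectors to reduce
# its classification error. Feeding more labeled data repeats this step and
# keeps improving accuracy.
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```

Nowhere in this sketch does the programmer describe what an Audi A7 looks like; the training step is what adjusts the network's own feature detectors, which is the shift Dr. Chang describes.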

First attempts in 1950s

The first attempts at this approach to AI date back to the 1950s, when researchers created the perceptron, a mathematical model inspired by biological neural networks.

Its inputs resemble a neuron’s dendrites and its output resembles the axon. However, these early programs did not work well because they lacked sufficient data and computing power.
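As a concrete illustration of that 1950s model, here is a minimal sketch of a perceptron and its classic learning rule in plain Python; the toy AND dataset and the training settings are illustrative choices, not details from the talk.

```python
# Minimal sketch of a 1950s-style perceptron: weighted inputs (the "dendrites")
# are summed and thresholded to produce a single output (the "axon").
import random

def perceptron_output(weights, bias, inputs):
    # Weighted sum of inputs, thresholded at zero.
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

def train(examples, n_inputs, epochs=50, lr=0.1):
    # Classic perceptron learning rule: nudge the weights toward each mistake.
    weights = [random.uniform(-1, 1) for _ in range(n_inputs)]
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            error = target - perceptron_output(weights, bias, inputs)
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Toy linearly separable data: output 1 only when both inputs are 1 (logical AND).
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(data, n_inputs=2)
print([perceptron_output(w, b, x) for x, _ in data])  # expect [0, 0, 0, 1] once converged
```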

Advances in computing power, especially graphics processing units, and the large number of digital photographs now available have given a boost to artificial neural networks, said Dr. Chang. “We have many more training photos that can be analyzed by a computer and, thus, come up with a way to classify them,” he said.

Likewise, computing power has been growing exponentially, and if it follows its present curve, computers will match the human brain in 50 years or so, he said.

“It’s an opportunity to train algorithms to be just as powerful as how we go through 12 years of training to become ophthalmologists,” Dr. Chang said. Now, programmers are creating multilayered neural networks that search for subtle patterns, a type of AI called deep learning.

A few years ago, a programmer trying to make a computer diagnose diabetic retinopathy might have told it to look for hemorrhage and exudation, and to locate the macula and the optic disc. In a recent paper, researchers at Google trained a computer to recognize diabetic retinopathy and validated it on 9,963 images from 4,997 patients. The computer achieved 97.5% sensitivity and 93.4% specificity.
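For readers unfamiliar with those two figures, here is a minimal sketch of how sensitivity and specificity would be computed from a graded validation set; the toy predictions and ground-truth labels are invented for illustration.

```python
# Minimal sketch of the reported metrics, assuming a held-out set of graded
# fundus images and binary model predictions.
def sensitivity_specificity(predictions, truths):
    # predictions/truths: 1 = diabetic retinopathy present, 0 = absent
    tp = sum(1 for p, t in zip(predictions, truths) if p == 1 and t == 1)
    fn = sum(1 for p, t in zip(predictions, truths) if p == 0 and t == 1)
    tn = sum(1 for p, t in zip(predictions, truths) if p == 0 and t == 0)
    fp = sum(1 for p, t in zip(predictions, truths) if p == 1 and t == 0)
    sensitivity = tp / (tp + fn)  # share of true disease cases caught
    specificity = tn / (tn + fp)  # share of healthy eyes correctly cleared
    return sensitivity, specificity

# Toy example: 10 graded images.
preds  = [1, 1, 1, 0, 0, 1, 0, 0, 1, 0]
truths = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]
print(sensitivity_specificity(preds, truths))  # (0.8, 0.8)
```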

“You’re using the power of mathematics and statistics to allow the computer to see what stands out between what you classify as normal and what you classify as diabetic retinopathy,” said Dr. Chang. “The algorithm performed remarkably well and this is happening in every industry for computer vision.”

Place in glaucoma

The same approach could be used in glaucoma, he said. “You would need a lot of training cases, so it would take our people working together to say, ‘What is your definition of glaucoma?’”

A supercomputer like IBM’s Watson or an Nvidia DGX-1 would be required. Google’s DeepMind Health is already taking this approach using National Health Service data from the United Kingdom.

Stanford University researchers recently reported using deep learning to create an algorithm capable of recognizing skin cancer, said Dr. Chang. “It’s a matter of physicians becoming more familiar with this,” he added.

In the near term, Dr. Chang expects physicians to use AI as an adjunct. “AI is more like a support tool,” he said. “It helps to bring up people who are not trained as experts.”

A computer paired with a camera could do the first read of glaucoma patients. Then, an ophthalmologist could look at cases the computer flagged as abnormal. “This is lowering the cost and increasing access to care,” said Dr. Chang. “You may tie this to telemedicine so you’re able to reach out to more areas.”
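A minimal sketch of that triage workflow in Python might look like the following; the risk scores, the ScreeningResult record, and the review threshold are all hypothetical stand-ins for a real screening system.

```python
# Minimal sketch of the triage workflow Dr. Chang describes: the algorithm does
# the first read, and only the cases it flags are routed to an ophthalmologist.
from dataclasses import dataclass

@dataclass
class ScreeningResult:
    patient_id: str
    glaucoma_risk: float  # model's estimated probability, 0.0-1.0 (hypothetical)

REVIEW_THRESHOLD = 0.3  # set low so borderline cases still reach a human

def triage(results):
    # Split a screening batch into cases needing expert review and the rest.
    flagged, cleared = [], []
    for r in results:
        (flagged if r.glaucoma_risk >= REVIEW_THRESHOLD else cleared).append(r)
    return flagged, cleared

batch = [
    ScreeningResult("pt-001", 0.05),
    ScreeningResult("pt-002", 0.72),
    ScreeningResult("pt-003", 0.31),
]
to_ophthalmologist, routine_followup = triage(batch)
print([r.patient_id for r in to_ophthalmologist])  # ['pt-002', 'pt-003']
```

Setting the threshold low deliberately trades specificity for sensitivity, so that borderline cases still reach a human reader rather than being cleared by the machine alone.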

As AI becomes more available, physicians will figure out how to work it into their practices, said Dr. Chang. “Who knows, maybe 10 to 12 years from now people will be searching on the internet and AI will be answering their questions.”
