NEW YORK, Jan 26: Even though the phrase “image recognition technologies” conjures visions of high-tech surveillance, these tools may soon be used in medicine more than in spycraft.
A team of Stanford researchers trained a computer to identify skin cancer in images of moles and lesions as accurately as a dermatologist can, according to a new paper published in the journal Nature.
In the future, this new research suggests, a simple cell phone app may help patients diagnose a skin cancer — the most common of all cancers in the United States — for themselves.
“Our objective is to bring the expertise of top-level dermatologists to places where the dermatologist is not available,” said Sebastian Thrun, senior author of the new study, founder of research and development lab Google X and an adjunct professor at Stanford University. He added that those who live in developing countries do not have the same level of care as can be found in the US and other industrialized nations.
Melanomas represent fewer than 5% of all skin malignancies diagnosed in the US, yet they account for nearly three-quarters of all skin cancer deaths. When detected early, melanoma has a five-year survival rate of 99%; when detected in its latest stage, that rate plummets to just 14%.
Generally, dermatologists identify whether a mole or other abnormality is cancerous by looking at it. They can confirm their diagnosis with follow-up biopsies and tests.
With a team of researchers, Thrun developed a deep learning computer system to perform the first task in detecting skin cancer: identifying it at a glance.
Essentially, the team created an automated dermatologist.
How it works
Thrun and his colleagues began by coaching a computer to develop pattern recognition skills. The method they used is an algorithm-based technique known as “deep learning.”
Specifically, the research team employed a convolutional neural network.
Carl Vondrick, a Ph.D. candidate at MIT’s Computer Science and Artificial Intelligence Lab, who was not involved in this study, explained the process.
“A convolutional neural net is a type of computer software that is very good at learning to recognize different concepts,” he said. By feeding it digital images, researchers can “tell” the computer which ones show skin cancer and which do not. The machine will then “basically try to learn some rules that can predict whether it’s cancer.”
“An algorithm is just a fancy name for a sequence of steps that the computer takes. So in this case, the algorithm refers to the whole process that they did to train the system,” Vondrick said.
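Vondrick’s description can be made concrete with a toy example. The sketch below is not the Stanford team’s code; it only illustrates the core operation a convolutional network repeats many times over: sliding a small filter across an image and recording where the filter’s pattern matches. In a real network the filters are learned from data; here the image and the edge-detecting filter are hand-made for illustration.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a small filter over an image; large outputs mark where it matched."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A toy two-tone "image": dark on the left, bright on the right.
image = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)

# A filter that responds where brightness jumps from left to right.
edge_filter = np.array([[-1.0, 1.0]])

response = conv2d(image, edge_filter)
print(response)  # the middle column lights up, exactly where the edge sits
```

A network stacks many such filters in layers, so that early layers respond to edges and textures while later layers respond to whole shapes, which is what lets the same machinery learn chairs, cats, or lesions.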
Andre Esteva, first author of the new paper and an electrical engineering Ph.D. student at Stanford, said he, Thrun and their colleagues began by “basically teaching the algorithm what the world looks like.”
“We taught it what cats and dogs and tables and chairs and all sorts of normal everyday objects look like,” Esteva said. “We used a massive data set of well over a million images.” This phase of learning took about a week.
Then, Esteva trained the algorithm on different skin conditions. Here, the team addressed a complex problem: Cancerous and noncancerous skin aberrations vary greatly in appearance from patient to patient.
To overcome this difficulty, the researchers presented the now-trained — or “artificially intelligent” — computer with an extensive dataset of 129,450 images representing more than 2,000 skin diseases. The images came from 18 doctor-curated online repositories as well as the Stanford University Medical Center.
Since each image of a mole or lesion had already been diagnosed, the computer was fed that diagnosis as well.
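The two-phase training the researchers describe, first learning general visual features from everyday images and then specializing on labeled medical images, is commonly called transfer learning. The sketch below is a deliberately simplified stand-in, not the study’s actual model: a fixed random projection plays the role of the “pretrained” network, and a small classifier head is trained on synthetic labeled data so the two phases are visible in a few lines.

```python
import numpy as np

rng = np.random.default_rng(0)

# Phase 1 stand-in: a "pretrained" feature extractor. In the real study this
# is a deep convolutional network trained on over a million everyday images;
# here a fixed random projection with a ReLU plays that role for illustration.
W_pretrained = rng.normal(size=(64, 16))

def extract_features(images):
    return np.maximum(images @ W_pretrained, 0.0)

# Phase 2: train only a small classifier head on labeled examples.
# Synthetic 64-pixel "images"; labels 1 = malignant, 0 = benign (a toy rule
# standing in for the doctor-supplied diagnoses described in the article).
X = rng.normal(size=(200, 64))
y = (X[:, 0] > 0).astype(float)

feats = extract_features(X)
w = np.zeros(16)
b = 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))  # predicted probability of cancer
    w -= lr * (feats.T @ (p - y)) / len(y)      # gradient step on the head only;
    b -= lr * np.mean(p - y)                    # the extractor stays frozen

accuracy = np.mean((p > 0.5) == (y == 1))
print(f"training accuracy: {accuracy:.2f}")
```

The design choice mirrors the article’s account: the expensive, week-long general training happens once, and only the lightweight final stage needs the scarcer, diagnosis-labeled skin images.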