2016 : WHAT DO YOU CONSIDER THE MOST INTERESTING RECENT [SCIENTIFIC] NEWS? WHAT MAKES IT IMPORTANT?

Jamshed Bharucha
Psychologist; President Emeritus, Cooper Union
The Neural Net Reloaded

The neural network has been resurrected. After a troubled sixty-year history, it has crept into the daily lives of hundreds of millions of people in the span of just three years.

In May 2015, Sundar Pichai announced that Google had cut the word error rate of its speech recognition to 8 percent, from 23 percent only two years earlier. The key? Neural networks, rebranded as "deep learning." Google reported dramatic improvements in image recognition just six months after acquiring DNNresearch, a startup founded by Geoffrey Hinton and two of his students. Backpropagation is back, with a big data bang. And it’s suddenly worth a fortune.

The news wasn’t on the front pages. There was no scientific breakthrough. Nor was there a novel application.

Why is it news? The scale of the impact is astonishing, as is the pace at which it was achieved. Making sense of noisy, infinitely variable visual and auditory patterns has been a holy grail of artificial intelligence. Raw computing power has caught up with decades-old algorithms. In just a few years, the technology has leapt from laboratory simulations of oversimplified problems to cell phone apps that recognize speech and images in the real world.

Theoretical developments in neural networks have been mostly incremental since the pioneering work on self-organization in the 1970s and backpropagation in the 1980s. The tipping point was reached recently not by fundamentally new insights, but by processing speeds that make possible larger networks, bigger datasets, and more iterations.

This is the second resurrection of neural networks. The first was the demonstration by Geoffrey Hinton and Yann LeCun that multilayered networks trained with backpropagation can learn nonlinear classifications. Before this breakthrough, Marvin Minsky had all but decimated the field with Perceptrons, the 1969 book he wrote with Seymour Papert. Among other things, he proved that Frank Rosenblatt’s perceptron could not learn classifications that are not linearly separable, such as the exclusive-or (XOR) function.
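The nonlinearity at the heart of Minsky's argument is easy to see in code. The following is a minimal sketch of my own, not anything from the essay; it assumes only NumPy, and the network size, learning rate, and iteration counts are arbitrary choices. Rosenblatt's learning rule never settles on XOR, the textbook linearly nonseparable problem, while a small two-layer network trained with backpropagation learns it.

```python
import numpy as np

# XOR truth table: the classic classification that is not linearly separable.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

# --- Single-layer perceptron (Rosenblatt's rule): cannot solve XOR ---
w, b = np.zeros(2), 0.0
for _ in range(1000):
    for xi, yi in zip(X, y):
        pred = float(w @ xi + b > 0)
        w += (yi - pred) * xi          # Rosenblatt's update rule
        b += (yi - pred)
print("perceptron:", [float(w @ xi + b > 0) for xi in X])  # never equals [0, 1, 1, 0]

# --- Two-layer network trained with backpropagation: solves XOR ---
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # 4 hidden units (arbitrary)
W2, b2 = rng.normal(size=4), 0.0
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    h = sigmoid(X @ W1 + b1)                 # hidden activations
    out = sigmoid(h @ W2 + b2)               # network outputs
    d_out = (out - y) * out * (1 - out)      # error signal at the output
    d_h = np.outer(d_out, W2) * h * (1 - h)  # error propagated back to the hidden layer
    W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum()
    W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(axis=0)

print("backprop:", np.round(out, 2))  # typically converges to approximately [0, 1, 1, 0]
```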

Rosenblatt developed the perceptron in the 1950s. He built on foundational work in the 1940s by Warren McCulloch and Walter Pitts, who showed how networks of idealized neurons could compute logical functions, and by Donald Hebb, who hypothesized that the connection between two neurons is strengthened when both are active together. The buzz created by the perceptron can be relived by reading "Electronic ‘Brain’ Teaches Itself," published by the New York Times on July 13, 1958. The Times quoted Rosenblatt as saying that the Perceptron "will grow wiser as it gains experience," adding that "the Navy said it would use the principle to build the first Perceptron ‘thinking machines’ that will be able to read or write."
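Hebb's postulate reduces to a one-line update. The toy sketch below is my own illustration, not from Hebb or Rosenblatt; the learning rate and the setup of two inputs feeding one output are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
eta = 0.1          # learning rate (arbitrary)
w = np.zeros(2)    # weights from two input neurons onto one output neuron

for _ in range(100):
    x = rng.integers(0, 2, size=2).astype(float)  # presynaptic activities
    post = x[0]                                   # output fires only with input 0
    w += eta * post * x                           # Hebb: strengthen co-active connections

print(w)  # the weight from input 0, which always fires with the output, dominates
```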

Minsky’s critique was a major setback, if not a fatal one, for Rosenblatt and neural networks. But a few people persisted quietly, among them Stephen Grossberg, who began working on these problems while an undergraduate at Dartmouth in the 1950s. By the 1970s, Grossberg had developed an unsupervised (self-organizing) learning algorithm that balanced the stability of acquired categories with the plasticity necessary to learn new ones.
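To make the stability-plasticity idea concrete, here is a heavily simplified, ART-1-flavored sketch of my own; the `vigilance` threshold, the overlap test, and the AND-based prototype update are illustrative stand-ins, not Grossberg's actual equations. A new input either refines a category it matches closely enough or founds a new one, so learning something new never erases what was learned before.

```python
import numpy as np

def art_sketch(inputs, vigilance=0.6):
    """Assign each binary pattern to a category, creating categories on demand."""
    prototypes, labels = [], []
    for x in inputs:
        for k, w in enumerate(prototypes):
            overlap = np.logical_and(x, w).sum() / x.sum()
            if overlap >= vigilance:                  # close enough: refine this category
                prototypes[k] = np.logical_and(x, w)  # prototype only shrinks (stability)
                labels.append(k)
                break
        else:                                         # nothing matches: add a category (plasticity)
            prototypes.append(x.copy())
            labels.append(len(prototypes) - 1)
    return labels

patterns = np.array([[1, 1, 0, 0],
                     [1, 1, 1, 0],
                     [0, 0, 1, 1]], dtype=bool)
print(art_sketch(patterns))  # [0, 0, 1]: the two similar patterns share a category
```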

Hinton and LeCun addressed Minsky’s challenge and brought neural nets back from obscurity. The excitement about backpropagation drew attention to Grossberg’s model, as well as to the models of Kunihiko Fukushima and Teuvo Kohonen. But in 1988, Steven Pinker and Alan Prince did to neural nets what Minsky had done two decades earlier, with a withering attack on their adequacy for explaining the acquisition of language. Neural networks faded into the background again.

After Geoffrey Hinton and his students won the ImageNet challenge in 2012 with a dramatic leap in image-recognition performance, Google seized the moment, and neural networks came alive again.

The opposition to deep learning is already gearing up. All methods benefit from powerful computing, and traditional symbolic approaches have also demonstrated gains. Time will tell which approaches prevail, and for which problems. Regardless, 2012–2015 will be remembered as the time when neural networks placed artificial intelligence at our fingertips.