2015: WHAT DO YOU THINK ABOUT MACHINES THAT THINK?

Professor of Psychology; Director, NYU Center for Language and Music; Author, Guitar Zero
Machines Won't Be Thinking Anytime Soon

What I think about machines thinking is that it won't happen anytime soon. I don't imagine that there is any in-principle limitation; carbon isn't magical, and I suspect silicon will do just fine. But lately the hype has gotten way ahead of reality. Learning to detect a cat in full frontal position after 10 million frames drawn from Internet videos is a long way from understanding what a cat is, and anybody who thinks that we have "solved" AI doesn't realize the limitations of the current technology.

To be sure, there have been exponential advances in narrow engineering applications of artificial intelligence, such as playing chess, calculating travel routes, or translating texts in rough fashion, but there has been scarcely more than linear progress in five decades of work toward strong AI. For example, the different flavors of "intelligent personal assistants" available on your smartphone are only modestly better than ELIZA, an early example of primitive natural language processing from the mid-1960s.
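To make the comparison concrete, here is a minimal sketch of the ELIZA trick in Python. The rules below are invented for illustration; Weizenbaum's original 1966 program used a richer script, but the underlying mechanism was the same kind of surface pattern matching, with no representation of meaning at all:

```python
import re

# ELIZA-style rules: each pairs a surface pattern with a canned template
# that reflects a fragment of the user's words back at them.
# (These particular rules are hypothetical, for illustration only.)
RULES = [
    (re.compile(r"\bI need (.*)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"\bI am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]
FALLBACK = "Please go on."

def respond(utterance: str) -> str:
    """Return the first matching template; no understanding is involved."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return FALLBACK

if __name__ == "__main__":
    print(respond("I need a vacation"))        # Why do you need a vacation?
    print(respond("I am worried about my job"))  # How long have you been worried about my job?
```

The program never builds any model of what a vacation or a job is; it shuffles the user's own words into slots. Today's assistants wrap this sort of shallow mapping in better speech recognition and web lookups, which is why the improvement over ELIZA is modest rather than fundamental.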

We still have no machine that can, say, read all that the Web has to say about war and plot a decent campaign, nor do we even have an open-ended AI system that can figure out how to write an essay that would pass a freshman composition class, or how to pass an eighth-grade science exam.

Why so little progress, despite the spectacular increases in memory and CPU power? When Marvin Minsky and Gerald Sussman attempted the construction of a visual system in 1966, did they envision supercomputing clusters, or gigabytes of memory that would sit in your pocket? Why haven't advances of this nature led us straight to machines with the flexibility of human minds?

Consider three possibilities:

(a) We will solve AI (and this will finally produce machines that can think) as soon as our machines get bigger and faster.
(b) We will solve AI when our learning algorithms get better. Or when we have even Bigger Data.
(c) We will solve AI when we finally understand what it is that evolution did in the construction of the human brain.

Ray Kurzweil and many others seem to put their weight on option (a), sufficient CPU power. But how many doublings in CPU power would be enough? Have all the doublings so far gotten us closer to true intelligence? Or just to narrow agents that can give us movie times?

Option (b), big data and better learning algorithms, has so far gotten us only to innovations such as machine translation, which provides fast but mediocre translations by piggybacking on the prior work of human translators, without any semblance of thinking. The machine translation engines available today cannot, for example, answer basic queries about what they have just translated. Think of them more as idiot savants than fluent thinkers.

My bet is on option (c). Evolution seems to have endowed us with a very powerful set of priors (or what Noam Chomsky or Steven Pinker might call innate constraints) that allow us to make sense of the world based on very limited data. Big Efforts with Big Data aren't really getting us closer to understanding those priors, so while we are getting better and better at the sort of problem that can be narrowly engineered (like driving on extremely well-mapped roads), we are not getting appreciably closer to machines with commonsense understanding, or the ability to process natural language. Or, more to the point of this year's Edge Question, to machines that actually think.