2015: WHAT DO YOU THINK ABOUT MACHINES THAT THINK?

Ziyad Marar
President of Global Publishing, SAGE; Author, Judged: The Value of Being Misunderstood
Are We Thinking More Like Machines That Think?

There is something old-fashioned about visions of the future. Most predictions, like three-day weeks, personal jet packs, and the paperless office, tell us more about the times in which they were proposed than about contemporary experience. When people point to the future, we would do well to run an eye back up the arm to see who is doing the pointing.

The possibility of artificial general intelligence has long invited such crystal-ball gazing, whether utopian or dystopian in tone. Yet speculation on this theme seems to have reached such a pitch and intensity in the last few months alone (enough to trigger an Edge question, no less) that it may reveal something about ourselves and our culture today.

We've known for some time that machines can out-think humans in a narrow sense. The question is whether they do so in any way that could or should ever resemble the baggier mode of human thought. Even when dealing with as "tame" a domain as chess, the computer and the human diverge widely.

"Tame" problems (like establishing the height of a mountain), which are well formulated and have clear solutions, are good grist to the mill of narrow, brute force, thinking. Sometimes even narrower thinking is called for when huge data sets can be mined for correlations, leaving aside the distraction of thinking about underlying causes.

But many of the problems we face (from challenging inequality to choosing the right school for your child) are "wicked" in that they don't have right or wrong answers (though hopefully they do have better or worse ones). They are uniquely contextual, with complex, overlapping causes that change depending on the level of explanation being used. These problems don't suit narrow computational thinking well. In blurring facts with values, they resemble the messy, emotion-riddled thinking of the human minds that conjured them up.

Tackling wicked problems requires peculiarly human judgement, even where that judgement is illogical in some sense, especially in the moral sphere. Notwithstanding Joshua Greene's and Peter Singer's logical urging of a consequentialist frame of mind, one that a computer could reproduce, the human tendency to distinguish acts from omissions and to blur intentions with outcomes (as in the principle of double effect) means we need solutions that satisfy the instincts of human judges if they are to be stable over time.

And that very feature of human thinking (shaped by evolutionary pressures) points to the widest gulf of all between machine and human thinking. Thinking is not motivated (literally has no point) without preferences, and machines don't have preferences of their own. Only affect-addled minds conjure up motives. So if goals, wants, and values are features of human minds, why predict that artificial super-intelligences will become more than tools in the hands of those who program in those preferences?

If the welter of prognostications about AI and machine learning tells us anything, I don't think it is that a machine will emulate a human mind any time soon. We can do that easily enough just by having more children and educating them. Rather, it tells us that our appetites are shifting.

We are understandably awed by what sheer computation has achieved and will achieve (I'm happy to jump on the driverless, virtual-reality bandwagon that careens off into that over-predicted future). But this awe is leading to a tilt in our culture. The digital republic of letters is yielding up engineering as the thinking metaphor of our time. In its wake lies the once complacent, now anxious, figure with a more literary, less literal, cast of mind.

It is not that thinking machines will be emulating human minds any time soon: quite the reverse. We are cleaning up our acts, embarrassed by the fumbling inconclusiveness of messy thinking. It is little surprise that the UK's Education Secretary recently advised teenagers to steer away from the arts and humanities in favour of STEM disciplines if they are to flourish in the future. The sheer obviousness of a certain kind of progress has made narrow thinking gleam with a new and addictive lustre.

But something is lost when whole fields of enquiry succeed or fail by the standard of narrow thinking, and a new impediment is created. Alongside the true, we need to think well about the good and the beautiful, and indeed the wicked. This requires opening up vocabularies that better reflect our crooked timber (whether thought of, by turns, as bug or feature). Meanwhile, the understandable desire to upgrade wicked problems into merely tame ones is leading us to tame ourselves.