2015 : WHAT DO YOU THINK ABOUT MACHINES THAT THINK?

Molly Crockett
Assistant Professor of Psychology, Yale University; Distinguished Research Fellow, Oxford Centre for Neuroethics
Could Thinking Machines Bridge The Empathy Gap?

We humans are sentenced to spend our lives trapped in our own heads. Try as we might, we can never truly know what it is like to be someone else. Even the most empathetic among us will inevitably encounter an unbridgeable gap between self and other. We may feel pangs of distress upon seeing someone else stub their toe, or when learning of another's heartbreak. But these are mere simulations; others' experiences can never be felt directly, and so can never be directly compared with our own. The empathy gap is responsible for most interpersonal conflicts, from prosaic quibbles over who should wash the dishes to violent disputes over sacred land.

This problem is especially acute in moral dilemmas. Utilitarian ethics stipulates that the basic criterion of morality is maximizing the greatest good for the greatest number, a calculus that requires comparing welfare, or "utility," across individuals. But the empathy gap makes such "interpersonal utility comparisons" difficult, if not impossible. You and I may both claim to enjoy champagne, but we can never know who enjoys it more, because we lack a common scale for comparing such inherently subjective values. As a result, we have no empirical basis for determining which of us more deserves the last glass. Jeremy Bentham, the father of utilitarianism, recognized this problem: "One man's happiness will never be another man's happiness; a gain to one man is no gain to another. You might as well pretend to add 20 apples to 20 pears."
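
To make the difficulty concrete, here is the utilitarian criterion in standard textbook notation (a gloss added for illustration, not the essay's own formalism):

```latex
% Utilitarian choice: pick the action a that maximizes total welfare.
% The sum is meaningful only if every u_i sits on a common scale:
% rescaling one person's utility by a positive factor leaves that
% person's own choices unchanged but can flip the social ranking.
W(a) = \sum_{i} u_i(a), \qquad a^{*} = \arg\max_{a} W(a)
```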

Human brains are incapable of solving the interpersonal utility comparison problem. Nobel laureate John Harsanyi worked on it for a couple of decades in the middle of the 20th century, and his theory is recognized as one of the best attempts so far. But it falls short precisely because it fails to account for the empathy gap: Harsanyi's theory assumes perfect empathy, in which my simulation of your utility is identical to your utility itself. Yet the fallibility of human empathy is indisputable in the face of psychology research and our own personal experience.
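
One way to see the assumption, in standard welfare-economics notation (a summary of Harsanyi's 1955 aggregation result, not the essay's own formalism):

```latex
% Harsanyi: if individuals and society both satisfy the
% expected-utility axioms plus a Pareto condition, social welfare
% must be a weighted sum of individual utilities:
%     W = \sum_i a_i u_i,   a_i > 0.
% The impartial observer evaluates this with estimates \hat{u}_i of
% each person's utility; the theory works only if
%     \hat{u}_i = u_i  for every i,
% i.e., perfect empathy, which is exactly what the empathy gap denies.
W = \sum_{i} a_i\, u_i, \qquad \hat{u}_i = u_i \;\; \forall i
```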

Could thinking machines be up to the job? Bridging the empathy gap would require a way to quantify preferences and translate them into a common currency comparable across individuals. Such an algorithm could provide an uncontroversial set of standards for building better social contracts. Imagine a machine that computes an optimal solution for wealth redistribution by accounting for the preferences of everyone subject to taxation, weighing them equally and comparing them accurately. Although the shape of the solution is far from clear, its potential benefits are self-evident.
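
A toy sketch of the computation such a machine would face, in Python (all names and numbers here are invented for illustration; the 0-to-1 range normalization is just one contestable convention for the "common currency," and choosing it is exactly the empathy-gap problem):

```python
def normalize(utilities):
    """Rescale one person's utilities over the options to [0, 1]."""
    lo, hi = min(utilities), max(utilities)
    if hi == lo:
        return [0.5] * len(utilities)  # indifferent across all options
    return [(u - lo) / (hi - lo) for u in utilities]

def best_option(options, reported_utilities):
    """Pick the option maximizing the equal-weighted sum of
    normalized utilities across individuals."""
    normalized = [normalize(u) for u in reported_utilities]
    totals = [sum(person[i] for person in normalized)
              for i in range(len(options))]
    return options[totals.index(max(totals))]

# Three tax schemes, two citizens with opposed raw preferences.
options = ["flat", "progressive", "regressive"]
preferences = [
    [5, 9, 1],   # citizen A's raw ratings of each scheme
    [6, 2, 4],   # citizen B's raw ratings
]
print(best_option(options, preferences))  # -> "flat"
```

Note that a different normalization can pick a different winner from the same reported ratings, which is one reason the shape of the solution remains far from clear.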

Machines that can bridge the empathy gap could also help us with self-control. In addition to the empathy gap between self and other, a similar gap separates our present and future selves. Self-control problems stem from the never-ending tug-of-war between current and future desires. Perhaps AI will one day end this stalemate by learning the preferences of our present and future selves, comparing and integrating them, and making behavioral recommendations based on these integrated utilities. Think of a diet healthy enough to foster weight loss but just tasty enough that you're not tempted to cheat, or an exercise plan challenging enough to improve your fitness but just easy enough that you can stick with it.
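
The same integration idea, sketched with invented numbers: treat the present and future self as two individuals and weigh their utilities on a common scale (the equal weights below are an assumption; a real system would have to learn both the utilities and the weights):

```python
plans = {
    # (present-self utility: tastiness, future-self utility: health)
    "strict diet":   (2, 9),
    "balanced diet": (6, 7),
    "junk food":     (9, 1),
}

def recommend(plans, w_present=0.5, w_future=0.5):
    """Return the plan maximizing the weighted sum of the two selves'
    utilities, the 'integrated utility' the essay imagines."""
    return max(plans, key=lambda p: w_present * plans[p][0]
                                    + w_future * plans[p][1])

print(recommend(plans))  # -> "balanced diet"
```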

Neuroscientists are now uncovering how the human brain represents preferences. We should keep in mind that AI preferences need not resemble human ones, and indeed may require a different code altogether if they are to tackle problems that human brains can't solve. Ultimately, though, the code will be up to us, and what it should look like is as much of an ethical question as it is a scientific one. We've already built computers that can see, hear, and calculate better than we can. Creating machines that are better empathizers is a knottier problem—but achieving this feat could be essential to our survival.