2006: WHAT IS YOUR DANGEROUS IDEA?

Sherry Turkle
Abby Rockefeller Mauzé Professor of the Social Studies of Science and Technology, MIT; Internet Culture Researcher; Author, The Empathy Diaries
After several generations of living in the computer culture, simulation will become fully naturalized. Authenticity in the traditional sense will lose its value, becoming a vestige of another time.

Consider this moment from 2005: I take my fourteen-year-old daughter to the Darwin exhibit at the American Museum of Natural History. The exhibit documents Darwin's life and thought, and with a somewhat defensive tone (in light of current challenges to evolution by proponents of intelligent design), presents the theory of evolution as the central truth that underpins contemporary biology. The Darwin exhibit wants to convince and it wants to please. At the entrance to the exhibit is a turtle from the Galapagos Islands, a seminal object in the development of evolutionary theory. The turtle rests in its cage, utterly still. "They could have used a robot," comments my daughter. It was a shame, she thinks, to bring the turtle all this way and put it in a cage for a performance that draws so little on the turtle's "aliveness." I am startled by her comments, both solicitous of the imprisoned turtle because it is alive and unconcerned by its authenticity. The museum has been advertising these turtles as wonders, curiosities, marvels — among the plastic models of life at the museum, here is the life that Darwin saw. I begin to talk with others at the exhibit, parents and children. It is Thanksgiving weekend. The line is long, the crowd frozen in place. My question, "Do you care that the turtle is alive?" is a welcome diversion. A ten-year-old girl would prefer a robot turtle because aliveness comes with aesthetic inconvenience: "Its water looks dirty. Gross." More usually, the votes for the robots echo my daughter's sentiment that in this setting, aliveness doesn't seem worth the trouble. A twelve-year-old girl opines: "For what the turtles do, you didn't have to have the live ones." Her father looks at her, uncomprehending: "But the point is that they are real, that's the whole point."

The Darwin exhibit is about authenticity: on display are the actual magnifying glass that Darwin used, the actual notebooks in which he recorded his observations, indeed, the very notebook in which he wrote the famous sentences that first described his theory of evolution. But in the children's reactions to the inert but alive Galapagos turtle, the idea of the "original" is in crisis.

I have long believed that in the culture of simulation, the notion of authenticity is for us what sex was to the Victorians — "threat and obsession, taboo and fascination." I have lived with this idea for many years, yet at the museum, I find the children's position startling, strangely unsettling. For these children, in this context, aliveness seems to have no intrinsic value. Rather, it is useful only if needed for a specific purpose. "If you put in a robot instead of the live turtle, do you think people should be told that the turtle is not alive?" I ask. Not really, say several of the children. Data on "aliveness" can be shared on a "need to know" basis, for a purpose. But what are the purposes of living things? When do we need to know if something is alive?

Consider another vignette from 2005: an elderly woman in a nursing home outside of Boston is sad. Her son has broken off his relationship with her. Her nursing home is part of a study I am conducting on robotics for the elderly. I am recording her reactions as she sits with the robot Paro, a seal-like creature, advertised as the first "therapeutic robot" for its ostensibly positive effects on the ill, the elderly, and the emotionally troubled. Paro is able to make eye contact by sensing the direction of a human voice, is sensitive to touch, and has "states of mind" that are affected by how it is treated, for example, whether it is stroked gently or roughly. In this session with Paro, the woman, depressed because of her son's abandonment, comes to believe that the robot is depressed as well. She turns to Paro, strokes him and says: "Yes, you're sad, aren't you? It's tough out there. Yes, it's hard." And then she pets the robot once again, attempting to provide it with comfort. And in so doing, she tries to comfort herself.

The woman's sense of being understood is based on the ability of computational objects like Paro to convince their users that they are in a relationship. I call these creatures (some virtual, some physical robots) "relational artifacts." Their ability to inspire relationship is not based on their intelligence or consciousness, but on their ability to push certain "Darwinian" buttons in people (making eye contact, for example) that make people respond as though they were in relationship. For me, relational artifacts are the new uncanny in our computer culture — as Freud once put it, the long familiar taking a form that is strangely unfamiliar. As such, they confront us with new questions.

What does this deployment of "nurturing technology" at the two most dependent moments of the life cycle say about us? What will it do to us? Do plans to provide relational robots to attend to children and the elderly make us less likely to look for other solutions for their care? People come to feel love for their robots, but if our experience with relational artifacts is based on a fundamentally deceitful interchange, can it be good for us? Or might it be good for us in the "feel good" sense, but bad for us in our lives as moral beings?

Relationships with robots bring us back to Darwin and his dangerous idea: the challenge to human uniqueness. When we see children and the elderly exchanging tendernesses with robotic pets, the most important question is not whether children will love their robotic pets more than their real-life pets or even their parents, but rather, what will loving come to mean?