Seminars
Event Date: [ 11.12.14 4:45 PM ]
Location:
United States

"To arrive at the edge of the world's knowledge, seek out the most complex and sophisticated minds, put them in a room together, and have them ask each other the questions they are asking themselves." 

HEADCON '14

In September a group of social scientists gathered for HEADCON '14, an Edge Conference at Eastover Farm. Speakers addressed a range of topics concerning the social (or moral, or emotional) brain: Sarah-Jayne Blakemore: "The Teenager's Sense Of Social Self"; Lawrence Ian Reed: "The Face Of Emotion"; Molly Crockett: "The Neuroscience of Moral Decision Making"; Hugo Mercier: "Toward The Seamless Integration Of The Sciences"; Jennifer Jacquet: "Shaming At Scale"; Simone Schnall: "Moral Intuitions, Replication, and the Scientific Study of Human Nature"; David Rand: "How Do You Change People's Minds About What Is Right And Wrong?"; L.A. Paul: "The Transformative Experience"; Michael McCullough: "Two Cheers For Falsification". Also participating as "kibitzers" were four speakers from HEADCON '13, [2] the previous year's event: Fiery Cushman, Joshua Knobe, David Pizarro, and Laurie Santos.

We are now pleased to present the program in its entirety, nearly six hours of Edge Video and a downloadable PDF of the 55,000-word transcript.

[6 hours] 

John Brockman [3], Editor
Russell Weinberger [4], Associate Publisher

 Download PDF of Manuscript [5]  

Copyright (c) 2014 by Edge Foundation, Inc. All Rights Reserved. Please feel free to use for personal, noncommercial use (only).

_____
 

Related on Edge:

HeadCon '13 [2]
Edge Meetings & Seminars [6]
Edge Master Classes [7]
 


CONTENTS
 

Sarah-Jayne Blakemore: "The Teenager's Sense Of Social Self" [8]

The reason why that letter is nice is because it illustrates what's important to that girl at that particular moment in her life. Less important that man landed on the moon than things like what she was wearing, what clothes she was into, who she liked, who she didn't like. This is the period of life where that sense of self, and particularly sense of social self, undergoes profound transition. Just think back to when you were a teenager. It's not that before then you don't have a sense of self, of course you do. A sense of self develops very early. What happens during the teenage years is that your sense of who you are—your moral beliefs, your political beliefs, what music you're into, fashion, what social group you're into—that's what undergoes profound change.

[8]


[36:22]

SARAH-JAYNE BLAKEMORE is a Royal Society University Research Fellow and Professor of Cognitive Neuroscience, Institute of Cognitive Neuroscience, University College London. Sarah-Jayne Blakemore's Edge Bio Page [9]


Lawrence Ian Reed: "The Face Of Emotion" [10]

What can we tell from the face? There's some mixed data, but the data out there show a pretty strong coherence between what is felt and what's expressed on the face. Happiness, sadness, disgust, contempt, fear, anger, all have prototypic or characteristic facial expressions. In addition to that, you can tell whether two emotions are blended together. You can tell the difference between surprise and happiness, and surprise and anger, or surprise and sadness. You can also tell the strength of an emotion. There seems to be a relationship between the strength of the emotion and the strength of the contraction of the associated facial muscles. 

[10]

[26:27]

LAWRENCE IAN REED is a Visiting Assistant Professor of Psychology, Skidmore College. Lawrence Ian Reed's Edge Bio Page [11]


Molly Crockett: "The Neuroscience of Moral Decision Making" [12]

Imagine we could develop a precise drug that amplifies people’s aversion to harming others; you won’t hurt a fly, everyone becomes Buddhist monks or something. Who should take this drug? Only convicted criminals—people who have committed violent crimes? Should we put it in the water supply? These are normative questions. These are questions about what should be done. I feel grossly unprepared to answer these questions with the training that I have, but these are important conversations to have between disciplines. Psychologists and neuroscientists need to be talking to philosophers about this and these are conversations that we need to have because we don’t want to get to the point where we have the technology and then we haven’t had this conversation because then terrible things could happen. 

[12]

[44:00]

MOLLY CROCKETT is Associate Professor, Department of Experimental Psychology, University of Oxford; Wellcome Trust Postdoctoral Fellow, Wellcome Trust Centre for Neuroimaging. Molly Crockett's Edge Bio Page [13]


Hugo Mercier: "Toward The Seamless Integration Of The Sciences" [14]

One of the great things about cognitive science is that it allowed us to continue that seamless integration of the sciences, from physics, to chemistry, to biology, and then to the mind sciences, and it's been quite successful at doing this in a relatively short time. But on the whole, I feel there's still a failure to continue this thing towards some of the social sciences, such as anthropology, to some extent, and sociology or history, which still remain very much shut off from what some would see as progress, and as further integration. 

[14]

[39:34]

HUGO MERCIER, a Cognitive Scientist, is an Ambizione Fellow at the Cognitive Science Center at the University of Neuchâtel. Hugo Mercier's Edge Bio Page [15]


Jennifer Jacquet: "Shaming At Scale" [16]

Shaming, in this case, was a fairly low-cost form of punishment that had high reputational impact on the U.S. government, and led to a change in behavior. It worked at scale—one group of people using it against another group of people at the group level. This is the kind of scale that interests me. And the other thing that it points to, which is interesting, is the question of when shaming works. In part, it's when there's an absence of any other option. Shaming is a little bit like antibiotics. We can overuse it and actually dilute its effectiveness, because it's linked to attention, and attention is finite. With punishment, in general, using it sparingly is best. But in the international arena, and in cases in which there is no other option, there is no formalized institution, or no formal legislation, shaming might be the only tool that we have, and that's why it interests me. 

[16]

[31:58]

JENNIFER JACQUET is Assistant Professor of Environmental Studies, NYU; Researching cooperation and the tragedy of the commons; Author, Is Shame Necessary? Jennifer Jacquet's Edge Bio Page [17]


Simone Schnall: "Moral Intuitions, Replication, and the Scientific Study of Human Nature" [18]

In the end, it's about admissible evidence and ultimately, we need to hold all scientific evidence to the same high standard. Right now we're using a lower standard for the replications involving negative findings when in fact this standard needs to be higher. To establish the absence of an effect is much more difficult than the presence of an effect. 

[18]

[42:15]

SIMONE SCHNALL is a University Senior Lecturer and Director of the Cambridge Embodied Cognition and Emotion Laboratory at Cambridge University. Simone Schnall's Edge Bio Page [19] 


David Rand: "How Do You Change People's Minds About What Is Right And Wrong?" [20] 

What all these different things boil down to is the idea that there are future consequences for your current behavior. You can't just do whatever you want because if you are selfish now, it'll come back to bite you. I should say that there are lots of theoretical models, math models, computational models, lab experiments, and also real world field data from field experiments showing the power of these reputation observability effects for getting people to cooperate.

[20]

[34:37]

DAVID RAND is Assistant Professor of Psychology, Economics, and Management at Yale University, and the Director of Yale University's Human Cooperation Laboratory. David Rand's Edge Bio page [21]


L.A. Paul: "The Transformative Experience" [22]

We're going to pretend that modern-day vampires don't drink the blood of humans; they're vegetarian vampires, which means they only drink the blood of humanely-farmed animals. You have a one-time-only chance to become a modern-day vampire. You think, "This is a pretty amazing opportunity, but do I want to gain immortality, amazing speed, strength, and power? Do I want to become undead, become an immortal monster and have to drink blood? It's a tough call." Then you go around asking people for their advice and you discover that all of your friends and family members have already become vampires. They tell you, "It is amazing. It is the best thing ever. It's absolutely fabulous. It's incredible. You get these new sensory capacities. You should definitely become a vampire." Then you say, " Can you tell me a little more about it?" And they say, "You have to become a vampire to know what it's like. You can't, as a mere human, understand what it's like to become a vampire just by hearing me talk about it. Until you're a vampire, you're just not going to know what it's going to be like."

[22]

[48:42]

L.A. PAUL is Professor of Philosophy at the University of North Carolina at Chapel Hill, and Professorial Fellow in the Arché Research Centre at the University of St. Andrews.  L.A. Paul's Edge Bio page [23]


Michael McCullough: "Two Cheers For Falsification" [24]

What I want to do today is raise one cheer for falsification, maybe two cheers for falsification. Maybe it’s not philosophical falsificationism I’m calling for, but maybe something more like methodological falsificationism. It has an important role to play in theory development that maybe we have turned our backs on in some areas of this racket we’re in, particularly the part of it that I do—Ev Psych—more than we should have.

edge.org/conversation/michael_mccullough [24]

[43:37]

MICHAEL MCCULLOUGH is Director, Evolution and Human Behavior Laboratory, Professor of Psychology, Cooper Fellow, University of Miami; Author, Beyond Revenge. Michael McCullough's Edge Bio page [25]


Also Participating

FIERY CUSHMAN [26] is Assistant Professor, Department of Psychology, Harvard University. JOSHUA KNOBE [27] is an Experimental Philosopher; Associate Professor of Philosophy and Cognitive Science, Yale University. DAVID PIZARRO [28] is Associate Professor of Psychology, Cornell University, specializing in moral judgment. LAURIE SANTOS [29] is Associate Professor, Department of Psychology; Director, Comparative Cognition Laboratory, Yale University. 

 

Seminars
Event Date: [ 10.20.13 11:45 AM ]
Location:
United States

In July 2013, Edge invited a group of social scientists to participate in an Edge Seminar at Eastover Farm [6] focusing on the state of the art of what the social sciences have to tell us about human nature. The ten speakers were Sendhil Mullainathan [33], June Gruber [34], Fiery Cushman [26], Rob Kurzban [35], Nicholas Christakis [36], Joshua Greene [37], Laurie Santos [29], Joshua Knobe [38], David Pizarro [39], and Daniel C. Dennett [40]. Also participating were Daniel Kahneman [41], Anne Treisman [42], and Jennifer Jacquet [43].

HeadCon '13: WHAT'S NEW IN SOCIAL SCIENCE?
 


[2]

We asked the participants to consider the following questions: 

"What's new in your field of social science in the last year or two, and why should we care?" "Why do we want or need to know about it?" "How does it change our view of human nature?"

And in so doing we also asked them to focus broadly and address the major developments in their field (including but not limited to their own research agenda). The goal: to get new, fresh, and original up-to-date field reports on different areas of social science.

What Big Data Means For Social Science [44] (Sendhil Mullainathan) | The Scientific Study of Positive Emotion [45] (June Gruber) | The Paradox of Automatic Planning [46] (Fiery Cushman) | P-Hacking and the Replication Crisis [47] (Rob Kurzban) | The Science of Social Connections [48] (Nicholas Christakis) | The Role of Brain Imaging in Social Science [49] (Joshua Greene) | What Makes Humans Unique [50] (Laurie Santos) | Experimental Philosophy and the Notion of the Self [51]  (Joshua Knobe) | The Failure of Social and Moral Intuitions [52] (David Pizarro) | The De-Darwinizing of Cultural Change [53] (Daniel C. Dennett)

HeadCon '13: WHAT'S NEW IN SOCIAL SCIENCE was also an experiment in online video designed to capture the dynamic of an Edge seminar, focusing on the interaction of ideas, and of people. The documentary film-maker Jason Wishnow [54], the pioneer of "TED Talks" during his tenure as director of film and video at TED (2006-2012), helped us develop this new iteration of Edge Video, filming the ten sessions in split-screen with five cameras, presenting each speaker and the surrounding participants from multiple simultaneous camera perspectives.  

We are now pleased to present the program in its entirety, nearly six hours of Edge Video and a downloadable PDF of the 58,000-word transcript.

The great biologist Ernst Mayr (the "Darwin of the 20th Century") once said to me: "Edge is a conversation." And like any conversation, it is evolving. And what a conversation it is! 

[2]
(6 hours of video; 58,000 words) 

John Brockman [3], Editor
Russell Weinberger [4], Associate Publisher


Download PDF of Manuscript [55] | Continue to Video and Online Text [2]

 


Sendhil Mullainathan: What Big Data Means For Social Science [44] (Part I)

[44]

We've known big data has had big impacts in business, and in lots of prediction tasks. I want to understand, what does big data mean for what we do for science? Specifically, I want to think about the following context:  You have a scientist who has a hypothesis that they would like to test, and I want to think about how the testing of that hypothesis might change as data gets bigger and bigger. So that's going to be the rule of the game. Scientists start with a hypothesis and they want to test it; what's going to happen?

Sendhil Mullainathan [33] is Professor of Economics, Harvard; Assistant Director for Research, The Consumer Financial Protection Bureau (CFPB), U.S. Treasury Department (2011-2013); Coauthor, Scarcity: Why Having Too Little Means So Much.
 

June Gruber: The Scientific Study of Positive Emotion [45] (Part II)

[56]

What I'm really interested in is the science of human emotion. In particular, what's captivated my field and my interest the most is trying to understand positive emotions. Not only the ways in which perhaps we think they're beneficial for us or confer some sort of adaptive value, but actually the ways in which they may signal dysfunction and may not actually, in all circumstances and in all intensities, be good for us.

June Gruber [34] is Assistant Professor of Psychology; Director, Positive Emotion & Psychopathology Lab, Yale University. 


Fiery Cushman: The Paradox of Automatic Planning [46] (Part III)

[57]

I want to tell you about a problem that I have because it highlights a deep problem for the field of psychology. The problem is that every time I sit down to try to write a manuscript I end up eating Ben and Jerry's instead. I sit down and then a voice comes into my head and it says, "How about Ben and Jerry's? You deserve it. You've been working hard for almost ten minutes now." Before I know it, I'm on the way out the door.

Fiery Cushman [26] is Assistant Professor, Cognitive, Linguistic, Social Science, Brown University.


Rob Kurzban: P-Hacking and the Replication Crisis [47] (Part IV)

[47]

The first three talks this morning I think have been optimistic. We've heard about the promise of big data, we've heard about advances in emotions, and we've just heard from Fiery, who very cleverly managed to find a way to leave before I gave my remarks about how we're understanding something deep about human nature. I think there's a risk that my remarks are going to be understood as pessimistic but they're really not. My optimism is embodied in the notion that what we're doing here is important and we can do it better.

Rob Kurzban [35] is an Associate Professor, University of Pennsylvania, specializing in evolutionary psychology; Author, Why Everyone (Else) Is A Hypocrite.


Nicholas Christakis: The Science of Social Connections [48] (Part V)

[48]

If you think about it, humans are extremely unusual as a species in that we form long-term, non-reproductive unions to other members of our species, namely, we have friends. Why do we do this? Why do we have friends? It's not hard to construct an argument as to why we would have sex with other people but it's rather more difficult to construct an argument as to why we would befriend other people. Yet we and very few other species do this thing. So I'd like to problematize that, I'd like to problematize friendship first.

Nicholas Christakis [36] is a Physician and Social Scientist; Director, The Human Nature Lab, Yale University; Coauthor, Connected: The Surprising Power Of Our Social Networks And How They Shape Our Lives.


Joshua Greene: The Role of Brain Imaging in Social Science [49] (Part VI) 

[58]

We're here in early September 2013 and the topic that's on everybody's minds, (not just here but everywhere) is Syria. Will the U.S. bomb Syria? Should the U.S. bomb Syria? Why do some people think that the U.S. should? Why do other people think that the U.S. shouldn't? These are the kinds of questions that occupy us every day. This is a big national and global issue, sometimes it's personal issues, and these are the kinds of questions that social science tries to answer.

Joshua Greene [37] is John and Ruth Hazel Associate Professor of the Social Sciences and the director of the Moral Cognition Laboratory in the Department of Psychology, Harvard University. Author, Moral Tribes: Emotion, Reason, And The Gap Between Us And Them.


Laurie Santos: What Makes Humans Unique [50] (Part VII)

[50]

The findings in comparative cognition I'm going to talk about are often different than the ones you hear comparative cognitive researchers typically talking about. Usually when somebody up here is talking about how animals are redefining human nature, it's cases where we're seeing animals being really similar to humans—elephants who do mirror self-recognition; rodents who have empathy; capuchin monkeys who obey prospect theory—all these cases where we see animals doing something really similar.

Laurie Santos [29] is Associate Professor, Department of Psychology; Director, Comparative Cognition Laboratory, Yale University.


Joshua Knobe: Experimental Philosophy and the Notion of the Self [51] (Part VIII)  

What is the field of experimental philosophy? Experimental philosophy is a relatively new field—one that just cropped up around the past ten years or so, and it's an interdisciplinary field, uniting ideas from philosophy and psychology. In particular, what experimental philosophers tend to do is to go after questions that are traditionally associated with philosophy but to go after them using the methods that have been traditionally associated with psychology. 

 Joshua Knobe [38] is an Experimental Philosopher; Associate Professor of Philosophy and Cognitive Science, Yale University.

David Pizarro: The Failure of Social and Moral Intuitions [52] (Part IX)

[52]

We had people interact—strangers interact in the lab—and we filmed them, and we got the cues that seemed to indicate that somebody's going to be either more cooperative or less cooperative. But the fun part of this study was that for the second part we got those cues and we programmed a robot—Nexi the robot, from the lab of Cynthia Breazeal at MIT—to emulate, in one condition, those non-verbal gestures. So what I'm talking about today is not about the results of that study, but rather what was interesting about looking at people interacting with the robot.

David Pizarro [39] is Associate Professor of Psychology, Cornell University, specializing in moral judgment.


Daniel C. Dennett: The De-Darwinizing of Cultural Change [53] (Part X)

[53]

Think for a moment about a termite colony or an ant colony—amazingly competent in many ways, we can do all sorts of things, treat the whole entity as a sort of cognitive agent and it accomplishes all sorts of quite impressive behavior. But if I ask you, "What is it like to be a termite colony?" most people would say, "It's not like anything." Well, now let's look at a brain, let's look at a human brain—100 billion neurons, roughly speaking, and each one of them is dumber than a termite and they're all sort of semi-independent. If you stop and think about it, they're all direct descendants of free-swimming unicellular organisms that fended for themselves for a billion years on their own. There's a lot of competence, a lot of can-do in their background, in their ancestry. Now they're trapped in the skull and they may well have agendas of their own; they have competences of their own, no two are alike. Now the question is, how is a brain inside a head any more integrated, any more capable of there being something that it's like to be that than a termite colony? What can we do with our brains that the termite colony couldn't do or maybe that many animals couldn't do? 

Daniel C. Dennett [40] is a Philosopher; Austin B. Fletcher Professor of Philosophy, Co-Director, Center for Cognitive Studies, Tufts University; Author, Intuition Pumps.


ALSO PARTICIPATING

Daniel Kahneman [41] is Recipient, Nobel Prize in Economics, 2002; Presidential Medal of Freedom, 2013; Eugene Higgins Professor of Psychology, Princeton University; Author, Thinking Fast And Slow. Anne Treisman [42] is Professor Emeritus of Psychology, Princeton University; Recipient, National Medal of Science, 2013.

[43]
Jennifer Jacquet [43] is Clinical Assistant Professor of Environmental Studies, NYU; Researching cooperation and the tragedy of the commons.


[59]

 (click for image gallery) [59]



Out-take from the trailer I made for the 1968 movie "Head" (Columbia Pictures; Directed by Bob Rafelson; Written by Jack Nicholson) 


Seminars
Event Date: [ 7.20.10 ]
Location:
United States

 

Something radically new is in the air: new ways of understanding physical systems, new ways of thinking about thinking that call into question many of our basic assumptions. A realistic biology of the mind, advances in evolutionary biology, physics, information technology, genetics, neurobiology, psychology, engineering, the chemistry of materials: all are questions of critical importance with respect to what it means to be human. For the first time, we have the tools and the will to undertake the scientific study of human nature.

This began in the early seventies, when, as a graduate student at Harvard, evolutionary biologist Robert Trivers wrote five papers that set forth an agenda for a new field: the scientific study of human nature. In the past thirty-five years this work has spawned thousands of scientific experiments, new and important evidence, and exciting new ideas about who and what we are, presented in books by scientists such as Richard Dawkins, Daniel C. Dennett, Steven Pinker, and Edward O. Wilson, among many others.

In 1975, Wilson, a colleague of Trivers at Harvard, predicted that ethics would someday be taken out of the hands of philosophers and incorporated into the "new synthesis" of evolutionary and biological thinking. He was right.

Scientists engaged in the scientific study of human nature are gaining sway over scientists in disciplines that study social actions and human cultures independently of their biological foundation.

Nowhere is this more apparent than in the field of moral psychology. Using babies, psychopaths, chimpanzees, fMRI scanners, web surveys, agent-based modeling, and ultimatum games, moral psychology has become a major convergence zone for research in the behavioral sciences.

So what do we have to say? Are we moving toward consensus on some points? What are the most pressing questions for the next five years? And what do we have to offer a world in which so many global and national crises are caused or exacerbated by moral failures and moral conflicts? It seems like everyone is studying morality these days, reaching findings that complement each other more often than they clash.

Culture is humankind’s biological strategy, according to Roy F. Baumeister, and so human nature was shaped by an evolutionary process that selected in favor of traits conducive to this new, advanced kind of social life (culture). To him, therefore, studies of brain processes will augment rather than replace other approaches to studying human behavior, and he fears that the widespread neglect of the interpersonal dimension will compromise our understanding of human nature. Morality is ultimately a system of rules that enables groups of people to live together in reasonable harmony. Among other things, culture seeks to replace aggression with morals and laws as the primary means to solve the conflicts that inevitably arise in social life. Baumeister’s work has explored such morally relevant topics as evil, self-control, choice, and free will. [More] [78]

According to Yale psychologist Paul Bloom, humans are born with a hard-wired morality. A deep sense of good and evil is bred in the bone. His research shows that babies and toddlers can judge the goodness and badness of others' actions; they want to reward the good and punish the bad; they act to help those in distress; they feel guilt, shame, pride, and righteous anger. [More] [79]

Harvard cognitive neuroscientist and philosopher Joshua D. Greene sees our biggest social problems — war, terrorism, the destruction of the environment, etc. — arising from our unwitting tendency to apply paleolithic moral thinking (also known as "common sense") to the complex problems of modern life. Our brains trick us into thinking that we have Moral Truth on our side when in fact we don't, and blind us to important truths that our brains were not designed to appreciate. [More] [80]

University of Virginia psychologist Jonathan Haidt's research indicates that morality is a social construction which has evolved out of raw materials provided by five (or more) innate "psychological" foundations: Harm, Fairness, Ingroup, Authority, and Purity. Highly educated liberals generally rely upon and endorse only the first two foundations, whereas people who are more conservative, more religious, or of lower social class usually rely upon and endorse all five foundations. [More] [81]

The failure of science to address questions of meaning, morality, and values, notes neuroscientist Sam Harris, has become the primary justification for religious faith. In doubting our ability to address questions of meaning and morality through rational argument and scientific inquiry, we offer a mandate to religious dogmatism, superstition, and sectarian conflict. The greater the doubt, the greater the impetus to nurture divisive delusions. [More] [82]

A lot of Yale experimental philosopher Joshua Knobe's recent research has been concerned with the impact of people's moral judgments on their intuitions about questions that might initially appear to be entirely independent of morality (questions about intention, causation, etc.). It has often been suggested that people's basic approach to thinking about such questions is best understood as being something like a scientific theory. He has offered a somewhat different view, according to which people's ordinary way of understanding the world is actually infused through and through with moral considerations. He is arguably most widely known for what has come to be called "the Knobe effect" or the "Side-Effect Effect." [More] [83]

NYU psychologist Elizabeth Phelps investigates the brain activity underlying memory and emotion. Much of Phelps' research has focused on the phenomenon of "learned fear," a tendency of animals to fear situations associated with frightening events. Her primary focus has been to understand how human learning and memory are changed by emotion and to investigate the neural systems mediating their interactions. A recent study published in Nature by Phelps and her colleagues, shows how fearful memories can be wiped out for at least a year using a drug-free technique that exploits the way that human brains store and recall memories. [More] [84]

Disgust has been keeping Cornell psychologist David Pizarro particularly busy, as it has been implicated by many as an emotion that plays a large role in many moral judgments. His lab results have shown that an increased tendency to experience disgust (as measured using the Disgust Sensitivity Scale, developed by Jon Haidt and colleagues), is related to political orientation. [More] [85]

Each of the above participants led a 45-minute session on Day One that consisted of a 25-minute talk. Day Two consisted of two 90-minute open discussions on "The Science of Morality", intended as a starting point to begin work on a consensus document on the state of moral psychology to be published on Edge in the near future.

Among the members of the press in attendance were: Sharon Begley [86], Newsweek; Drake Bennett [87], Ideas, Boston Globe; David Brooks [88], OpEd Columnist, New York Times; Daniel Engber [89], Slate; Amanda Gefter, Opinion Editor, New Scientist; Jordan Mejias [90], Frankfurter Allgemeine Zeitung; Gary Stix [91], Scientific American; Pamela Weintraub [92], Discover Magazine.


The New Science of Morality, Part 1 [81]

[JONATHAN HAIDT [72]:] As the first speaker, I'd like to thank the Edge Foundation for bringing us all together, and bringing us all together in this beautiful place. I'm looking forward to having these conversations with all of you.

I was recently at a conference on moral development, and a prominent Kohlbergian moral psychologist stood up and said, "Moral psychology is dying."  And I thought, well, maybe in your neighborhood property values are plummeting, but in the rest of the city, we are going through a renaissance. We are in a golden age. 


The New Science of Morality, Part 2 [80]

[JOSHUA D. GREENE [93]:] Now, it's true that, as scientists, our basic job is to describe the world as it is. But I don't think that that's the only thing that matters. In fact, I think the reason why we're here, the reason why we think this is such an exciting topic, is not that we think that the new moral psychology is going to cure cancer. Rather, we think that understanding this aspect of human nature is going to perhaps change the way we think and change the way we respond to important problems and issues in the real world. If all we were going to do is just describe how people think and never do anything with it, never use our knowledge to change the way we relate to our problems, then I don't think there would be much of a payoff. I think that applying our scientific knowledge to real problems is the payoff. 


The New Science of Morality, Part 3 [82]

[SAM HARRIS: [74]] ...I think we should differentiate three projects that seem to me to be easily conflated, but which are distinct and independently worthy endeavors. The first project is to understand what people do in the name of "morality." We can look at the world, witnessing all of the diverse behaviors, rules, cultural artifacts, and morally salient emotions like empathy and disgust, and we can study how these things play out in human communities, both in our time and throughout history. We can examine all these phenomena in as nonjudgmental a way as possible and seek to understand them. We can understand them in evolutionary terms, and we can understand them in psychological and neurobiological terms, as they arise in the present. And we can call the resulting data and the entire effort a "science of morality". This would be a purely descriptive science of the sort that I hear Jonathan Haidt advocating. 


The New Science of Morality, Part 4 [78]

[ROY BAUMEISTER [75]:] And so, that said, in terms of trying to understand human nature, and morality too, nature and culture certainly combine in some ways to do this, and I'd put these together in a slightly different way: it's not that nature's over here and culture's over there and they're both pulling us in different directions. Rather, nature made us for culture. I'm convinced that the distinctively human aspects of psychology, the human aspects of evolution, were adaptations to enable us to have this new and better kind of social life, namely culture.

Culture is our biological strategy. It's a new and better way of relating to each other, based on shared information and division of labor, interlocking roles and things like that. And it's worked. It's how we solve the problems of survival and reproduction, and it's worked pretty well for us in that regard. And so the distinctively human traits are often ones that are there to make this new kind of social life work.

Now, where does this leave us with morality?  


The New Science of Morality, Part 5 [79]

[PAUL BLOOM [71]:] What I want to do today is talk about some ideas I've been exploring concerning the origin of human kindness. And I'll begin with a story that Sarah Hrdy tells at the beginning of her excellent new book, "Mothers And Others." She describes herself flying on an airplane. It's a crowded airplane, and she's flying coach. She waits in line to get to her seat; later in the flight, food is going around, but she's not the first person to be served; other people are getting their meals ahead of her. And there's a crying baby. The mother's soothing the baby, the person next to them is trying to hide his annoyance, other people are coo-cooing the baby, and so on.

As Hrdy points out, this is entirely unexceptional. Billions of people fly each year, and this is how most flights are. But she then imagines what would happen if every individual on the plane was transformed into a chimp. Chaos would reign. By the time the plane landed, there'd be body parts all over the aisles, and the baby would be lucky to make it out alive.            

The point here is that people are nicer than chimps.


The New Science of Morality, Part 6 [85]

[DAVID PIZARRO [39]:] What I want to talk about is piggybacking off of the end of Paul's talk, where he started to speak a little bit about the debate that we've had in moral psychology and in philosophy, on the role of reason and emotion in moral judgment. I'm going to keep my claim simple, but I want to argue against a view that probably nobody here has (because we're all very sophisticated), but emotion and reason are often spoken of as being at odds with each other — in the sense that to the extent that emotion is active, reason is not, and to the extent that reason is active, emotion is not. (By emotion here I mean, broadly speaking, affective influences.)

I think that this view is mistaken (although it is certainly the case sometimes). The interaction between these two is much more interesting.  So I'm going to talk a bit about some studies that we've done. Some of them have been published, and a couple of them haven't (because they're probably too inappropriate to publish anywhere, but not too inappropriate to speak to this audience). They are on the role of emotive forces in shaping our moral judgment. I use the term "emotive," because they are about motivation and how motivation affects the reasoning process when it comes to moral judgment.


The New Science of Morality, Part 7 [84]

[ELIZABETH PHELPS [77]:] In spite of these beliefs, I do think about decisions as reasoned or instinctual when I'm thinking about them for myself. And this has obviously been a very powerful way of thinking about how we do things, because it goes back to our earliest written thoughts. We have reason, we have emotion, and these two things can compete. And some are unique to humans and others are shared with other species.

And economists, when thinking about decisions, have also adopted what we call a dual-system approach. This is obviously a different dual-system approach, and here I'm focusing mostly on Kahneman's System 1 and System 2. As probably everybody in this room knows, Kahneman and Tversky showed that there were a number of ways in which we make decisions that didn't seem to be completely consistent with classical economic theory and weren't easy to explain. And they proposed Prospect Theory and suggested that we actually have two systems we use when making decisions, one of which we call reason, one of which we call intuition.

Kahneman didn't say emotion. He didn't equate emotion with intuition.


The New Science of Morality, Part 8 [83]

[JOSHUA KNOBE [27]:] ...what's really exciting about this new work is not so much just the very idea of philosophers doing experiments but rather the particular things that these people ended up showing. When these people went out and started doing these experimental studies, they didn't end up finding results that conformed to the traditional picture. They didn't find that there was a kind of initial stage in which people just figured out, on a factual level, what was going on in a situation, followed by a subsequent stage in which they used that information in order to make a moral judgment. Rather they really seemed to be finding exactly the opposite.

What they seemed to be finding is that people's moral judgments were influencing the process from the very beginning, so that people's whole way of making sense of their world seemed to be suffused through and through with moral considerations. In this sense, our ordinary way of making sense of the world really seems to look very, very deeply different from a kind of scientific perspective on the world. It seems to be value-laden in this really fundamental sense. 


EDGE IN THE NEWS


BOSTON GLOBE

August 15, 2010

IDEAS

EWWWWWWWW! [94]

The surprising moral force of disgust
By Drake Bennett

...Psychologists like Haidt are leading a wave of research into the so-called moral emotions — not just disgust, but others like anger and compassion — and the role those feelings play in how we form moral codes and apply them in our daily lives. A few, like Haidt, go so far as to claim that all the world's moral systems can best be characterized not by what their adherents believe, but what emotions they rely on.

There is deep skepticism in parts of the psychology world about claims like these. And even within the movement there is a lively debate over how much power moral reasoning has — whether our behavior is driven by thinking and reasoning, or whether thinking and reasoning are nothing more than ornate rationalizations of what our emotions ineluctably drive us to do. Some argue that morality is simply how human beings and societies explain the peculiar tendencies and biases that evolved to help our ancestors survive in a world very different from ours.

A few of the leading researchers in the new field met late last month at a small conference in western Connecticut, hosted by the Edge Foundation, to present their work and discuss the implications. Among the points they debated was whether their work should be seen as merely descriptive, or whether it should also be a tool for evaluating religions and moral systems and deciding which were more and less legitimate — an idea that would be deeply offensive to religious believers around the world.

But even doing the research in the first place is a radical step. The agnosticism central to scientific inquiry is part of what feels so dangerous to philosophers and theologians. By telling a story in which morality grows out of the vagaries of human evolution, the new moral psychologists threaten the claim of universality on which most moral systems depend — the idea that certain things are simply right, others simply wrong. If the evolutionary story about the moral emotions is correct, then human beings, had they been a less social species or had a significantly different prehistoric diet, might have ended up today with an entirely different set of religions and ethical codes. Or we might never have evolved the concept of morals at all. ...


THE ATLANTIC

July 29, 2010

THE FIVE MORAL SENSES? [95]

Alexis Madrigal

University of Virginia moral psychologist Jonathan Haidt [96] delivered an absolutely dynamite talk on new advances in his field last week. The video and a transcript have been posted by Edge.org [97], a loose consortium of very smart people run by John Brockman. Haidt whips us through centuries of moral thought, recent evolutionary psychology, and discloses which two papers every single psychology student should have to read. Through it all, he's funny, erudite, and understandable. Here, we excerpt a few paragraphs from his conclusion, in which Haidt tells us how to think about our moral minds: ...


FRANKFURTER ALLGEMEINE ZEITUNG

July 28, 2010

FEUILLETON

Moral reasoning
SOLEMN HIGH MASS IN THE TEMPLE OF REASON [98]

How do you train a moral muscle? American researchers take their first steps on the path toward a science of morality without the God hypothesis. Reason should have the last word.
By Jordan Mejias

[Google translation:]

July 28, 2010. One guest was missing, and had he turned up, the illustrious company would have had nothing left to discuss and think about. Even John Brockman, literary agent and guru of the third culture, could not persuade him to stop by his salon, which every summer he moves from the virtuality of the Internet (a click on edge.org [97]) to a New England idyll. There, in the green countryside of Washington, Connecticut, the subject was morality as a new science. It could be announced as new because its devotees were not philosophers and theologians but psychologists, biologists, and neurologists, and at most such philosophers as build on experiments and the insights of brain research. All of them had to admit that they were still searching; yet the one they did not miss was the one who lacks no authority in matters of morality: God.

Secular science dominated the conference. But when, toward its end, a first consensus was supposed to emerge, the conclusions diverged considerably. Even the question of whether religion should be regarded as part of evolution received no clear answer. The participants agreed, at least, on renouncing God: to Him, according to the unanimous result of their certainly not yet completed, perhaps never completable investigations, mankind does not owe its morality. But that morality is innate in us, no one would categorically claim either. Only on the finding that morality is a natural phenomenon was there agreement, and even that only to a certain degree, for what counts as natural here is not yet fully understood. Besides, both nature and culture make themselves felt in morality, and where the effect of the one ends and that of the other begins is anything but settled.

Better be nice

In a baby science, as Elizabeth Phelps, a neuroscientist at New York University, called moral psychology, there is unsurprisingly a good deal of groping in the dark. How things stand with free will, for example, will remain a mystery for the foreseeable future. Moral instincts, Roy Baumeister, a social psychologist at Florida State University, asserted with some certainty, are not built into us; we are given only the ability to acquire systems of morality. Being altruistic benefits us, selfish though we are by nature. Morality can be compared to a muscle that fatigues but can also be strengthened through regular training. Which sounds easier than it is, since it is not clear what exactly should be trained: a moral center that we could selectively target does not occur in our brain.

It is amazing, for all that, how nice we are to each other, as Paul Bloom, a psychologist at Yale, has noticed. Obviously we have realized that our lives are more comfortable when others do not fight us. Bloom also recognizes factors in this growth of niceness ("Nettigkeitswachstum") in capitalism, which works better with nice people, and in the world religions, which with the dynamics of their large groups accustomed us to meeting strangers favorably. That we have developed morally to our advantage over the millennia, he is not alone in holding proved. Even the neuroscientist Sam Harris, author of "The Moral Landscape: How Science Can Determine Human Values" (Free Press), will not let immoral monsters like Hitler and Stalin spoil this progress for him. ...

[...Continue: German language original [99] | Google translation [98]]


ANDREW SULLIVAN — THE DAILY DISH

25 JUL 2010

FACTS INFUSED WITH MORALITY [100]

Edge held a seminar on morality. Here's [38] Joshua Knobe:

Over the past few years, a series of recent experimental studies have reexamined the ways in which people answer seemingly ordinary questions about human behavior. Did this person act intentionally? What did her actions cause? Did she make people happy or unhappy? It had long been assumed that people's answers to these questions somehow preceded all moral thinking, but the latest research has been moving in a radically different direction. It is beginning to appear that people's whole way of making sense of the world might be suffused with moral judgment, so that people's moral beliefs can actually transform their most basic understanding of what is happening in a situation.

David Brooks' illuminating column [101] on this topic covered the same ground:

...

...Advantage Locke over Hobbes.

[...Continue [100]]


THE NEW YORK TIMES

July 23, 2010
OP-ED COLUMNIST

THE MORAL NATURALISTS [102]

Scientific research is showing that we are born with an innate moral sense.
By DAVID BROOKS

Washington, Conn.

Where does our sense of right and wrong come from? Most people think it is a gift from God, who revealed His laws and elevates us with His love. A smaller number think that we figure the rules out for ourselves, using our capacity to reason and choosing a philosophical system to live by.

Moral naturalists, on the other hand, believe that we have moral sentiments that have emerged from a long history of relationships. To learn about morality, you don't rely upon revelation or metaphysics; you observe people as they live.

This week a group of moral naturalists gathered in Connecticut at a conference organized by the Edge Foundation. ...

By the time humans came around, evolution had forged a pretty firm foundation for a moral sense. Jonathan Haidt of the University of Virginia argues that this moral sense is like our sense of taste. We have natural receptors that help us pick up sweetness and saltiness. In the same way, we have natural receptors that help us recognize fairness and cruelty. Just as a few universal tastes can grow into many different cuisines, a few moral senses can grow into many different moral cultures.

Paul Bloom of Yale noted that this moral sense can be observed early in life. Bloom and his colleagues conducted an experiment in which they showed babies a scene featuring one figure struggling to climb a hill, another figure trying to help it, and a third trying to hinder it. ...

[...Continue [102]]


THE REALITY CLUB

QUESTIONS FOR "THE MORAL NINE" FROM THE EDGE COMMUNITY

Howard Gardner, Geoffrey Miller, Brian Eno, James Fowler, Rebecca MacKinnon, Jaron Lanier, Eva Wisten, Brian Knutson, Andrian Kreye, Anonymous, Alison Gopnik, Robert Trivers, Randolph Nesse, M.D.


HOWARD GARDNER [103]
Psychologist, Harvard University; Author, Changing Minds

Enlightenment ideas were the product of white male Christians living in the 18th century. They form the basis of the Universal Declaration of Human Rights and other Western-inflected documents. But in our global world, Confucian societies and Islamic societies have their own guidelines about progress, individuality, democratic processes, and human obligations. In numbers they represent more of humanity and are likely to become even more numerous in this century. What do the human sciences have to contribute to an understanding of these 'multiple voices'? Can they be combined harmoniously, or are there unbridgeable gaps?

GEOFFREY MILLER [104]
Evolutionary Psychologist, University of New Mexico; Author, Spent: Sex, Evolution, and Consumer Behavior

1) Many people become vegans, protect animal rights, and care about the long-term future of the environment. It seems hard to explain these 'green virtues' in terms of the usual evolutionary-psychology selection pressures (reciprocity, kin selection, group selection), so how can we explain their popularity (or unpopularity)?

2) What are the main sex differences in human morality, and why?

3) What role did costly signaling play in the evolution of human morality (i.e., 'showing off' certain moral virtues to attract mates, friends, or allies, or to intimidate rival individuals or competing groups)?

4) Given the utility of 'adaptive self-deception' in human evolution -- one part of the mind not knowing what adaptive strategies another part is pursuing -- what could it mean to have the moral virtue of 'integrity' for an evolved being?

5) Why do all 'mental illnesses' (depression, mania, schizophrenia, borderline, psychopathy, narcissism, mental retardation, etc.) reduce altruism, compassion, and loving-kindness? Is this partly why they are recognized as mental illnesses?

BRIAN ENO [105]
Artist; Composer; Recording Producer: U2, Coldplay, Talking Heads, Paul Simon; Recording Artist

Is morality a human invention - a way of trying to stabilise human societies and make them coherent - or is there evidence of a more fundamental sense of morality in creatures other than humans?

Another way of asking this question is: are there moral concepts that are not specifically human?

Yet another way of asking this is: are moral concepts specifically the province of human brains? And, if they are, is there any basis for suggesting that there are any 'absolute' moral precepts?

Or: do any other creatures exhibit signs of 'honour' or 'shame'?

JAMES FOWLER [106]
Political Scientist, University of California, San Diego; Coauthor, Connected

Given recent evidence about the power of social networks, what is our personal responsibility to our friends' friends?

REBECCA MACKINNON [107]
Blogger & Cofounder, Global Voices Online; Former CNN journalist and head of CNN bureaus in Beijing & Tokyo; Visiting Fellow, Princeton University's Center for Information Technology Policy

Does the human race require a major moral evolution in order to survive? Isn't part of the problem that our intelligence has vastly out-evolved our morality, which is still stuck back in the paleolithic age? Is there anything we can do? Or is this the tragic flaw that dooms us? Might technology help to facilitate or speed up our moral evolution, as some say technology is already doing for human intelligence? We have artificial intelligence and augmented reality. What about artificial or augmented morality?

JARON LANIER [108]
Musician, Computer Scientist; Pioneer of Virtual Reality; Author, You Are Not A Gadget: A Manifesto

A crucial topic is how group interactions change moral perception. To what degree are there clan-oriented processes inherent in the human brain? In particular, how can well-informed software designs for network-mediated social experience play a role in changing behavior and values? Is there anything specific that can be done to reduce mob-like phenomena, such as those spawned in online forums like 4chan's /b/, without resorting to degrees of imposed control? This is where a science of moral psychology could inform engineering.

EVA WISTEN [109]
Journalist; Author, Single in Manhattan

What would be a good definition - a few examples - of common moral sense? How does an averagely moral human think and behave? (It's easy to paint a picture of the actions of an immoral person...) Now, how can this be expanded?

Could an understanding/acceptance of the idea that we all have unconscious instincts for what's right and wrong replace the idea of religion as necessary for moral behavior?

What tends to be the hierarchy of "blinders" - the arguments we, consciously or unconsciously, use to relabel exploitative acts as good? (I did it for God, I did it for the German People, I did it for Jodie Foster...) What evolutionary purpose have they filled?

BRIAN KNUTSON [110]
Psychologist & Neuroscientist, Stanford

What is the difference between morality and emotion? How can scientists distinguish between the two (or should they)? Why has Western culture been so historically reluctant to recognize emotion as a major influence on moral judgments?

ANDRIAN KREYE [111]
Feuilleton Editor, Süddeutsche Zeitung

Is there a fine line or a wide gap between morality and ideology?

ANONYMOUS

1. Some of the new literature on moral psychology feels like traditional discussions of ethics with a few numbers attached from surveys; almost like old ideas in a new can. As an outsider I'd be curious to know what's really new here. Specifically, if William James were resurrected what might be the new findings we could explain to him that would astound him or fundamentally change his way of thinking?

2. Is there a reason to believe there is such a thing as moral psychology that transcends upbringing and culture? Are we really studying a fundamental feature of the mind or simply the outcome of a social process?

ALISON GOPNIK [112]
Psychologist, UC, Berkeley; Author, The Philosophical Baby

Many people have proposed an evolutionary-psychology/nativist view of moral capacities. But surely one of the most dramatic and obvious features of our moral capacities is their capacity for change, and even radical transformation, with new experiences. At the same time, this transformation isn't just random but seems to have a progressive quality. It's analogous to science, which presents similar challenges to a nativist view. And even young children are, empirically, capable of this kind of change in both domains. How do we get to new and better conceptions of the world, cognitive or moral, if the nativists are right?

ROBERT TRIVERS [113]
Evolutionary Biologist, Rutgers University; Coauthor, Genes In Conflict: The Biology of Selfish Genetic Elements

Shame.

What is it? When does it occur? What function does it serve? How is it related, if at all, to guilt? Is it related to "morality" and if so how?

Key point, John, is that shame is a complex mixture of self and other: Tiger Woods SHAMES his wife in public — he may likewise be ashamed.

If I fuck a goat I may feel ashamed if someone saw it, but absent harm to the goat, it is not clear how I should respond if I alone witness it.


Seminars
Event Date: [ 8.25.07 ]
Location:
United States

"Life consists of propositions about life."
— Wallace Stevens ("Men Made Out Of Words")

"I just read the Life transcript book and it is fantastic. One of the better books I've read in a while. Super rich, high signal to noise, great subject."
— Kevin Kelly [117], Editor-At-Large, Wired

"The more I think about it the more I'm convinced that Life: What A Concept! was one of those memorable events that people in years to come will see as a crucial moment in history. After all, it's where the dawning of the age of biology was officially announced."
— Andrian Kreye [118], Süddeutsche Zeitung

EDGE PUBLISHES "LIFE: WHAT A CONCEPT!" TRANSCRIPT AS DOWNLOADABLE PDF BOOK [1.14.08]

Edge is pleased to announce the online publication of the complete transcript of this summer's Edge event, Life: What a Concept!, as a 43,000-word downloadable PDF Edgebook.

The event took place at Eastover Farm in Bethlehem, CT on Monday, August 27th (see below). Invited to address the topic "Life: What a Concept!" were Freeman Dyson [119], J. Craig Venter [120], George Church [121], Robert Shapiro [122], Dimitar Sasselov [123], and Seth Lloyd [124], who focused on their new, and in more than a few cases startling, research and ideas in the biological sciences.


PDF download (click here) [125]

 

Reporting on the August event, Andrian Kreye [118], Feuilleton (Arts & Ideas) Editor of Süddeutsche Zeitung wrote:

Soon genetic engineering will shape our daily life to the same extent that computers do today. This sounds like science fiction, but it is already reality in science. Thus genetic engineer George Church talks about the biological building blocks that he is able to synthetically manufacture. It is only a matter of time until we will be able to manufacture organisms that can self-reproduce, he claims. Most notably J. Craig Venter succeeded in introducing a copy of a DNA-based chromosome into a cell, which from then on was controlled by that strand of DNA.

Jordan Mejias [126], Arts Correspondent of Frankfurter Allgemeine Zeitung, noted that:

These are thoughts to make jaws drop...Nobody at Eastover Farm seemed afraid of a eugenic revival. What in German circles would have released violent controversies, here drifts by unopposed under mighty maple trees that gently whisper in the breeze.

The following Edge feature on the "Life: What a Concept!" August event includes a photo album; streaming video; and html files of each of the individual talks.


In April, Dennis Overbye, writing in the New York Times "Science Times," broke the story of the discovery by Dimitar Sasselov and his colleagues of five earth-like exo-planets, one of which "might be the first habitable planet outside the solar system."

At the end of June, Craig Venter announced the results of his lab's work on genome transplantation methods that allow for the transformation of one type of bacteria into another, dictated by the transplanted chromosome. In other words, one species becomes another. In talking to Edge about the research, Venter noted the following:

Now we know we can boot up a chromosome system. It doesn't matter if the DNA is chemically made in a cell or made in a test tube. Until this development, if you made a synthetic chromosome you had the question of what do you do with it. Replacing the chromosome in existing cells, if it works, seems the most effective way to replace one already in an existing cell system. We didn't know if it would work or not. Now we do. This is a major advance in the field of synthetic genomics. We now know we can create a synthetic organism. It's not a question of 'if', or 'how', but 'when', and in this regard, think weeks and months, not years.

In July, in an interesting and provocative essay in New York Review of Books entitled "Our Biotech Future," [127] Freeman Dyson wrote:

The Darwinian interlude has lasted for two or three billion years. It probably slowed down the pace of evolution considerably. The basic biochemical machinery of life had evolved rapidly during the few hundreds of millions of years of the pre-Darwinian era, and changed very little in the next two billion years of microbial evolution. Darwinian evolution is slow because individual species, once established, evolve very little. With rare exceptions, Darwinian evolution requires established species to become extinct so that new species can replace them.

Now, after three billion years, the Darwinian interlude is over. It was an interlude between two periods of horizontal gene transfer. The epoch of Darwinian evolution based on competition between species ended about ten thousand years ago, when a single species, Homo sapiens, began to dominate and reorganize the biosphere. Since that time, cultural evolution has replaced biological evolution as the main driving force of change. Cultural evolution is not Darwinian. Cultures spread by horizontal transfer of ideas more than by genetic inheritance. Cultural evolution is running a thousand times faster than Darwinian evolution, taking us into a new era of cultural interdependence which we call globalization. And now, as Homo sapiens domesticates the new biotechnology, we are reviving the ancient pre-Darwinian practice of horizontal gene transfer, moving genes easily from microbes to plants and animals, blurring the boundaries between species. We are moving rapidly into the post-Darwinian era, when species other than our own will no longer exist, and the rules of Open Source sharing will be extended from the exchange of software to the exchange of genes. Then the evolution of life will once again be communal, as it was in the good old days before separate species and intellectual property were invented.

It's clear from these developments as well as others, that we are at the end of one empirical road and ready for adventures that will lead us into new realms.

This year's Annual Edge Event took place at Eastover Farm in Bethlehem, CT on Monday, August 27th. Invited to address the topic "Life: What a Concept!" were Freeman Dyson [128], J. Craig Venter [129], George Church [130], Robert Shapiro [131], Dimitar Sasselov [132], and Seth Lloyd [133], who focused on their new, and in more than a few cases startling, research and ideas in the biological sciences.

Physicist Freeman Dyson envisions a biotech future which supplants physics and notes that after three billion years, the Darwinian interlude is over. He refers to an interlude between two periods of horizontal gene transfer, a subject explored in his abovementioned essay.

Craig Venter, who decoded the human genome, surprised the world in late June by announcing the results of his lab's work on genome transplantation methods that allow for the transformation of one type of bacteria into another, dictated by the transplanted chromosome. In other words, one species becomes another.

George Church, the pioneer of the synthetic biology revolution, thinks of the cell as an operating system, with engineers taking the place of traditional biologists in retooling stripped-down components of cells (bio-bricks), in much the same vein as the late '70s, when electrical engineers worked their way to the first personal computer by assembling circuit boards, hard drives, monitors, etc.

Biologist Robert Shapiro disagrees with scientists who believe that an extreme stroke of luck was needed to get life started in a non-living environment. He favors the idea that life arose through the normal operation of the laws of physics and chemistry. If he is right, then life may be widespread in the cosmos.

Dimitar Sasselov, Planetary Astrophysicist, and Director of the Harvard Origins of Life Initiative, has made recent discoveries of exo-planets ("Super-Earths"). He looks at new evidence to explore the question of how chemical systems become living systems.

Quantum engineer Seth Lloyd sees the universe as an information-processing system in which simple systems such as atoms and molecules must necessarily give rise to complex structures such as life, and life itself must give rise to even greater complexity, such as human beings, societies, and whatever comes next.

A small group of journalists interested in the kind of issues that are explored on Edge were present: Corey Powell [134], Discover; Jordan Mejias [135], Frankfurter Allgemeine Zeitung; Heidi Ledford [136], Nature; Greg Huang [137], New Scientist; Deborah Treisman [138], New Yorker; Edward Rothstein [139], The New York Times; Andrian Kreye [140], Süddeutsche Zeitung; and Antonio Regalado [141], Wall Street Journal. Guests included Heather Kowalski [142], The J. Craig Venter Institute; Ting Wu [143], The Wu Lab, Harvard Medical School; and the artist Stephanie Rudloe [144]. Attending for Edge: Katinka Matson [145], Russell Weinberger [146], Max Brockman [147], and Karla Taylor [148].

We are witnessing a point in which the empirical has intersected with the epistemological: everything becomes new, everything is up for grabs. Big questions are being asked, questions that affect the lives of everyone on the planet. And don't even try to talk about religion: the gods are gone.

Following the theme of new technologies=new perceptions, I asked the speakers to take a third culture slant in the proceedings and explore not only the science but the potential for changes in the intellectual landscape as well.

We are pleased to present the transcripts of the talks and conversation along with streaming video clips (links below).

— JB




FREEMAN DYSON

 

The essential idea is that you separate metabolism from replication. We know modern life has both metabolism and replication, but they're carried out by separate groups of molecules. Metabolism is carried out by proteins and all kinds of other molecules, and replication is carried out by DNA and RNA. That maybe is a clue to the fact that they started out separate rather than together. So my version of the origin of life is that it started with metabolism only.

FREEMAN DYSON [119]

 

FREEMAN DYSON: First of all I wanted to talk a bit about origin of life. To me the most interesting question in biology has always been how it all got started. That has been a hobby of mine. We're all equally ignorant, as far as I can see. That's why somebody like me can pretend to be an expert.

I was struck by the picture of early life that appeared in Carl Woese's article three years ago. He had this picture of the pre-Darwinian epoch when genetic information was open source and everything was shared between different organisms. That picture fits very nicely with my speculative version of origin of life.

The essential idea is that you separate metabolism from replication. We know modern life has both metabolism and replication, but they're carried out by separate groups of molecules. Metabolism is carried out by proteins and all kinds of small molecules, and replication is carried out by DNA and RNA. That maybe is a clue to the fact that they started out separate rather than together. So my version of the origin of life is it started with metabolism only. ...

[Continue... [149]]

___

FREEMAN DYSON is professor of physics at the Institute for Advanced Study, in Princeton. His professional interests are in mathematics and astronomy. Among his many books are Disturbing the Universe, Infinite in All Directions, Origins of Life, From Eros to Gaia, Imagined Worlds, The Sun, the Genome, and the Internet, and most recently A Many Colored Glass: Reflections on the Place of Life in the Universe.

Freeman Dyson's Edge Bio Page [119]


CRAIG VENTER

 

I have come to think of life in much more a gene-centric view than even a genome-centric view, although it kind of oscillates. And when we talk about the transplant work, genome-centric becomes more important than gene-centric. From the first third of the Sorcerer II expedition we discovered roughly 6 million new genes, which doubled the number in the public databases when we put them in a few months ago, and in 2008 we are likely to double that entire number again. We're just at the tip of the iceberg of what the divergence is on this planet. We are in a linear phase of gene discovery, and maybe in a linear phase of discovery of unique biological entities, if you call those species, and I think eventually we can have databases that represent the gene repertoire of our planet.

One question is, can we extrapolate back from this data set to describe the most recent common ancestor? I don't necessarily buy that there is a single ancestor. It's counterintuitive to me. I think we may have thousands of recent common ancestors and they are not necessarily so common.

J. CRAIG VENTER [120]

J. CRAIG VENTER: Seth's statement about digitization is basically what I've spent the last fifteen years of my career doing, digitizing biology. That's what DNA sequencing has been about. I view biology as an analog world that DNA sequencing has taken into the digital world. I'll talk about some of the observations that we have made for a few minutes, and then I will talk about how, now that we can read the genetic code, we've started the phase where we can write it, and how that is going to be the end of Darwinism.

On the reading side, some of you have heard of our Sorcerer II expedition for the last few years where we've been just shotgun sequencing the ocean. We've just applied the same tools we developed for sequencing the human genome to the environment, and we could apply it to any environment; we could dig up some soil here, or take water from the pond, and discover biology at a scale that people really have not even imagined.

The world of microbiology as we've come to know it is based on the over-a-hundred-year-old technology of seeing what will grow in culture. Only about a tenth of a percent of microbiological organisms will grow in the lab using traditional techniques. We decided to go straight to the DNA world to shotgun sequence what's there, using very simple techniques of filtering seawater into different size fractions and sequencing everything at once that's in the fractions. ...

[Continue... [150]]

___

J. CRAIG VENTER is one of the leading scientists of the 21st century for his visionary contributions in genomic research. He is founder and president of the J. Craig Venter Institute. The Venter Institute conducts basic research that advances the science of genomics; specializes in human genome-based medicine, infectious disease, environmental genomics, synthetic genomics, and synthetic life; and explores the ethical and policy implications of genomic discoveries and advances. The Venter Institute employs more than 400 scientists and staff in Rockville, MD, and La Jolla, CA. He is the author of A Life Decoded: My Genome: My Life.

Craig Venter's Edge Bio Page [120]


GEORGE CHURCH

Many of the people here worry about what life is, but maybe in a slightly more general way, not just ribosomes, but inorganic life. Would we know it if we saw it? It's important as we go and discover other worlds, as we start creating more complicated robots, and so forth, to know, where do we draw the line?

GEORGE CHURCH [121]

GEORGE CHURCH: We've heard a little bit about the ancient past of biology, and possible futures, and I'd like to frame what I'm talking about in terms of four subjects that elaborate on that. In terms of past and future, what have we learned from the past, how does that help us design the future, what would we like it to do in the future, how do we know what we should be doing? This sounds like a moral or ethical issue, but it's actually a very practical one too.

One of the things we've learned from the past is that diversity and dispersion are good. How do we inject that into a technological context? That brings up the second topic, which is, if we're going to do something, if we have some idea what direction we want to go in, what sort of useful constructions we would like to make, say with biology, what would those useful constructs be? By useful we might mean that the benefits outweigh the costs — and the risks. Not simply costs; you have to have risks, and humans as a species have trouble estimating the long tails of some of the risks, which have big consequences and unintended consequences. So that's utility. 1) What we learn from the future and the past; 2) the utility; 3) a kind of generalization of life.

Many of the people here worry about what life is, but maybe in a slightly more general way, not just ribosomes, but inorganic life. Would we know it if we saw it? It's important as we go and discover other worlds, as we start creating more complicated robots, and so forth, to know, where do we draw the line? I think that's interesting. And then finally — that's kind of generalizational life, at a basic level — but 4) the kind of life that we are particularly enamored of — partly because of egocentricity, but also for very philosophical reasons — is intelligent life. But how do we talk about that? ...

[Continue... [151]]

___

GEORGE CHURCH is Professor of Genetics at Harvard Medical School and Director of the Center for Computational Genetics. He invented the broadly applied concepts of molecular multiplexing and tags, homologous recombination methods, and array DNA synthesizers. Technology transfer of automated sequencing & software to Genome Therapeutics Corp. resulted in the first commercial genome sequence (the human pathogen, H. pylori,1994). He has served in advisory roles for 12 journals, 5 granting agencies and 22 biotech companies. Current research focuses on integrating biosystems-modeling with personal genomics & synthetic biology.

George Church's Edge Bio Page [121]


ROBERT SHAPIRO

 

I looked at the papers published on the origin of life and decided that the thought of nature, of its own volition, putting together a DNA or an RNA molecule was unbelievable.

I'm always running out of metaphors to try and explain what the difficulty is. But suppose you took Scrabble sets, or any word game sets, blocks with letters, containing every language on Earth, and you heap them together and you then took a scoop and you scooped into that heap, and you flung it out on the lawn there, and the letters fell into a line which contained the words “To be or not to be, that is the question,” that is roughly the odds of an RNA molecule, given no feedback — and there would be no feedback, because it wouldn't be functional until it attained a certain length and could copy itself — appearing on the Earth.

ROBERT SHAPIRO [122]

ROBERT SHAPIRO: I was originally an organic chemist — perhaps the only one of the six of us — and worked in the field of organic synthesis, and then I got my PhD, which was in 1959, believe it or not. I had realized that there was a lot of action in Cambridge, England, which was basically organic chemistry, and I went to work with a gentleman named Alexander Todd, promoted eventually to Lord Todd, and I published one paper with him, which was the closest I ever got to the Lord. I then spent decades running a laboratory in DNA chemistry, and so many people were working on DNA synthesis — which has been put to good use as you can see — that I decided to do the opposite, and studied the chemistry of how DNA could be kicked to Hell by environmental agents. Among the most lethal environmental agents I discovered for DNA — pardon me, I'm about to imbibe it — was water. Because water does nasty things to DNA. For example, there's a process I heard you mention called DNA deamination, where it kicks off part of the coding part of DNA from the units — that was discovered in my laboratory.

Another thing water does is help the information units fall off of DNA, which is called depurination and ought to apply to only one class of the subunits — but it works under physiological conditions for the pyrimidines as well, and I helped elaborate the mechanism by which water helped destroy that part of DNA structure. I realized what a fragile and vulnerable molecule it was, even if it was the center of Earth life. After water, or competing with water, the other thing that really does damage to DNA, and that is very much the center of hot research now — again I can't tell you to stop using it — is oxygen. If you don't drink the water and don't breathe the air, as Tom Lehrer used to say, you should be perfectly safe. ...
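Shapiro's Scrabble metaphor can be put in rough numbers. The sketch below is an editor's illustration, not part of the talk; it assumes a 26-letter alphabet, equal letter frequencies, and independent draws per position, all deliberate simplifications of the chemistry he is gesturing at:

```python
# Rough odds that a blind scoop of letter tiles spells out a target
# phrase, assuming each of the 26 letters is equally likely per slot
# and draws are independent (a deliberate simplification).
from math import log10

phrase = "tobeornottobethatisthequestion"  # 30 letters, spaces dropped
p = (1 / 26) ** len(phrase)

print(len(phrase))      # 30
print(round(log10(p)))  # about -42: odds of roughly 1 in 10^42
```

Even for this 30-letter phrase the odds are about 1 in 10^42, and a self-copying RNA would need to be far longer before it became functional, which is the thrust of Shapiro's objection to chance assembly without feedback.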

[Continue... [152]]

___

ROBERT SHAPIRO is professor emeritus of chemistry and senior research scientist at New York University. He has written four books for the general public: Life Beyond Earth (with Gerald Feinberg); Origins, a Skeptic's Guide to the Creation of Life on Earth; The Human Blueprint (on the effort to read the human genome); and Planetary Dreams (on the search for life in our Solar System).

Robert Shapiro Edge Bio Page [122]


DIMITAR SASSELOV

Is Earth the ideal planet for life? What is the future of life in our universe? We often imagine our place in the universe in the same way we experience our lives and the places we inhabit. We imagine a practically static eternal universe where we, and life in general, are born, grow up, and mature; we are merely one of numerous generations.

This is so untrue! We now know that the universe is 14 billion and Earth life is 4 billion years old: life and the universe are almost peers. If the universe were a 55-year-old, life would be a 16-year-old teenager. The universe is nowhere close to being static and unchanging either.

Together with this realization of our changing universe, we are now facing a second, seemingly unrelated realization: there is a new kind of planet out there, named super-Earths, that can provide to life all that our little Earth does. And more.

DIMITAR SASSELOV [123]

DIMITAR SASSELOV: I will start the same way, by introducing my background. I am a physicist, just like Freeman and Seth, in background, but my expertise is astrophysics, and more particularly planetary astrophysics. So that means I'm here to try to tell you a little bit of what's new in the big picture, and also to warn you that my background basically means that I'm looking for general relationships — for generalities rather than specific answers to the questions that we are discussing here today.

So, for example, I am personally more interested in the question of the origins of life, rather than the origin of life. What I mean by that is I'm trying to understand what we could learn about pathways to life, or pathways to the complex chemistry that we recognize as life. As opposed to narrowly answering the question of what is the origin of life on this planet. And that's not to say there is more value in one or the other; it's just the approach that somebody with my background would naturally try to take. And also the approach, which — I would agree to some extent with what was said already — is in need of more research and has some promise.

One of the reasons why I think there are a lot of interesting new things coming from that perspective, that is from the cosmic perspective, or planetary perspective, is because we have a lot more evidence for what is out there in the universe than we did even a few years ago. So to some extent, what I want to tell you here is some of this new evidence and why is it so exciting, in being able to actually inform what we are discussing here. ...

[Continue... [153]]

___

DIMITAR SASSELOV is Professor of Astronomy at Harvard University and Director, Harvard Origins of Life Initiative. Most recently his research has led him to explore the nature of planets orbiting other stars. Using novel techniques, he has discovered a few such planets, and his hope is to use these techniques to find planets like Earth. He is the founder and director of the new Harvard Origins of Life Initiative, a multidisciplinary center bridging scientists in the physical and in the life sciences, intent to study the transition from chemistry to life and its place in the context of the Universe.

Dimitar Sasselov's Edge Bio Page [123]


SETH LLOYD

If you program a computer at random, it will start producing other computers, other ways of computing, other more complicated, composite ways of computing. And here is where life shows up. Because the universe is already computing from the very beginning when it starts, starting from the Big Bang, as soon as elementary particles show up. Then it starts exploring — I'm sorry to have to use anthropomorphic language about this, I'm not imputing any kind of actual intent to the universe as a whole, but I have to use it for this to describe it — it starts to explore other ways of computing.

SETH LLOYD [124]

SETH LLOYD: I'd like to step back from talking about life itself. Instead I'd like to talk about what information processing in the universe can tell us about things like life. There's something rather mysterious about the universe. Not just rather mysterious, extremely mysterious. At bottom, the laws of physics are very simple. You can write them down on the back of a T-shirt: I see them written on the backs of T-shirts at MIT all the time, even in size petite. In addition to that, the initial state of the universe, from what we can tell from observation, was also extremely simple. It can be described by a very few bits of information.

So we have simple laws and simple initial conditions. Yet if you look around you right now you see a huge amount of complexity. I see a bunch of human beings, each of whom is at least as complex as I am. I see trees and plants, I see cars, and as a mechanical engineer, I have to pay attention to cars. The world is extremely complex.

If you look up at the heavens, the heavens are no longer very uniform. There are clusters of galaxies and galaxies and stars and all sorts of different kinds of planets and super-earths and sub-earths, and super-humans and sub-humans, no doubt. The question is, what in the heck happened? Who ordered that? Where did this come from? Why is the universe complex? Because normally you would think, okay, I start off with very simple initial conditions and very simple laws, and then I should get something that's simple. In fact, mathematical definitions of complexity like algorithmic information say: simple laws, simple initial conditions imply the state is always simple. It's kind of bizarre. So what is it about the universe that makes it complex, that makes it spontaneously generate complexity? I'm not going to talk about supernatural explanations. What are natural explanations — scientific explanations of our universe and why it generates complexity, including complex things like life? ...
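Lloyd's puzzle, that simple laws plus simple initial conditions nonetheless yield rich structure, has a classic toy illustration in one-dimensional cellular automata. The sketch below is an editor's aside, not part of the talk; it runs Rule 110, a standard example in which an eight-entry update table and a single live cell generate intricate, non-repeating patterns:

```python
# Rule 110: a one-dimensional cellular automaton. The rule table and
# the initial condition are both trivially simple, yet the evolved
# rows show complex, non-repeating structure.

RULE = 110  # the rule number encodes the 8-entry update table in its bits

def step(cells):
    n = len(cells)
    # Each cell's next state is the rule-table bit indexed by its
    # (left, self, right) neighborhood, read as a 3-bit number.
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

width, steps = 64, 20
row = [0] * width
row[width // 2] = 1  # simplest possible seed: one live cell

for _ in range(steps):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Note that the algorithmic information here stays tiny (the program above fits on a T-shirt), which is exactly why the apparent complexity of the output is the interesting part of Lloyd's question.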

[Continue... [154]]

___

SETH LLOYD is Professor of Mechanical Engineering at MIT and Director of the W.M. Keck Center for Extreme Quantum Information Theory (xQIT). He works on problems having to do with information and complex systems from the very small — how do atoms process information, how can you make them compute — to the very large: how does society process information? And how can we understand society in terms of its ability to process information? He is the author of Programming the Universe: A Quantum Computer Scientist Takes On the Cosmos.

Seth Lloyd's Edge Bio Page [124]



FRANKFURTER ALLGEMEINE ZEITUNG
August 31, 2007

FEUILLETON — Front Page

 

Let's Play God! (Lasst uns Gott spielen!)
Life's questions: J. Craig Venter programs the future
By Jordan Mejias [90]

Was Evolution only an interlude?  At the invitation of John Brockman, science luminaries such as J. Craig Venter, Freeman Dyson, Seth Lloyd, Robert Shapiro and others discussed the question: What is Life? 

EASTOVER FARM, August 30th

It sounds like seaman's yarn that the scientist with the look of an experienced seafarer has in store for us. The suntanned adventurer with the close-clipped grey beard vaunts the ocean as a sea of bacteria and viruses, unimaginable in their varieties. And in their lifestyle, as we might call it. But what do organisms live off? Like man, not off air or love alone. There can be no life without nutrients, it is said. Not true, says the sea dog. Sometimes a source of energy is enough, for instance, when energy is abundantly provided by sunlight. Could that teach us anything about our very special form of life?

J. Craig Venter, the ingenious decoder of the genome, who takes time off to sail around the world on expeditions, balances his flip-flops on his bare feet as he tells us about such astounding phenomena of life. Us, that means a few hand-picked journalists and half a dozen stars of science, invited by John Brockman, the guru of the all-encompassing "Third Culture", to his farm in Connecticut.

Relaxed, always open for a witty remark, but nevertheless with the indispensable seriousness, the scientific luminaries go to work under Brockman's direction. He, the master of the easy, direct question that unfailingly draws out the most complicated answers, the hottest speculations and debates, has for today transferred his virtual salon, always accessible on the Internet under the name Edge, [97] to a very real and idyllic summer's day. This time the subject matter is nothing other than life itself.

When Venter speaks of life, it's almost as if he were reading from the script of a highly elaborate science-fiction film. We are told to imagine organisms that not only can survive dangerous radiation, but that remain hale and hearty as they journey through the Universe. Still, he of all people, the revolutionary geneticist, warns against setting off in an overly gene-centric direction when trying to track down life. For the way in which a gene makes itself known will depend to a large degree upon the aid of overlooked transporter genes. In spite of this he considers the genetic code a better instrument to organize living organisms than the conventional system of classification by species.

Many colleagues nod in agreement, when they are not smiling in agreement. But this cannot be all that Venter has up his sleeve. Just a short while ago, he created a stir with the announcement that his Institute had succeeded in transplanting the genome of one bacterium into another. With this, he had newly programmed an organism. Should he be allowed to do this?  A question not only for scientists. Eastover Farm was lacking in ethicists, philosophers and theologians, but Venter had taken precautions. He took a year to learn from the world's large religions whether it was permissible to synthesize life in the lab. Not a single religious representative could find grounds to object. All essentially agreed: It's okay to play God.

Maybe some of the participants would have liked to hear more on the subject, but the day in Nature's lap was for identifying themes, not giving and receiving exhaustive amounts of information. A whiff of the most breathtaking visions, both good and bad, was enough. There were already frightening hues in the ultimate identity theft, to which Venter admitted with his genome exchange. What if a cell were captured by foreign DNA? Wouldn't it be a nightmare in the shape of a genuine Darwinian victory of the strong over the weak? Venter was applying dark colors here, whereas Freeman Dyson had painted us a much more mellow picture of the future.

Dyson, the great, not yet quite eighty-four-year-old youngster, physicist and futurist, regards evolution as an interlude. According to his calculations, the competition between species has gone on for just three billion years. Before that, according to Dyson, living organisms participated in horizontal gene transfers; if you will, they preferred the peaceful exchange of information among themselves. In the ten thousand years since Homo sapiens conquered the biosphere, Dyson once again sees a return of the old modus operandi, although in a modified form.

The scenario goes as follows: Cultural evolution, characterized by the transfer of ideas, has replaced the much slower biological evolution. Today, ideas, not genes, tip the scales. In availing himself of biotechnology, Man has picked up the torn pre-evolutionary thread and revived the genetic back and forth between microbes, plants and animals. Bit by bit the borders between species are disappearing. Soon only one species will remain, namely the genetically modified human, while the rules of Open Source, which guarantee the unhindered exchange of software in computers, will also apply to the exchange of genes. The evolution of life, in a nutshell, will soon return to a state of agreeable unity, as it existed in good old pre-Darwinian times, when life had not yet been separated into distinct species.

Though Venter may not trust in this future peace, he nearly matches Dyson in his futuristic enthusiasm. But he is enough of a realist to stress that he has never talked of creating new life from scratch. He is confident that he can develop new species and life forms, but will always have to rely on existing materials that he finds. Even he cannot conjure a cell out of nothing. So far, so good and so humble.

The rest is sheer bravado. He considers manipulation of human genes not only possible, but desirable. There's no question that he will continue to disappoint the inmate who once asked him to fashion an attractive cellmate, just as he refused the wish of an unsavory gentleman who yearned for mentally underdeveloped working-class people. But, Venter asks, who can object to humans having genetically beefed-up intelligence? Or to new genomes that open the door to new, undreamt-of sources of biofuel? Nobody at Eastover Farm seemed afraid of a eugenic revival. What in German circles would have triggered violent controversies here drifts by unopposed under mighty maple trees that gently whisper in the breeze.

All the same, Venter does confess that such life-transforming technology, more powerful than any humanity could harness until now, inevitably plunges him into doubt, particularly when looking back on human history. Still, he looks toward the future with hope and confidence. As does George Church, the molecular geneticist from Harvard, who wouldn't be surprised if a future computer were able to outperform the human brain. Could resourcefully mixed DNA be helpful to us? The organic chemist Robert Shapiro, Emeritus of New York University, objects strongly to viewing DNA as a monopolistic force. Will he assure us that life consists of more than DNA? But of what? Is it conceivable that there are certain forms of life we are still unable to recognize? Who wants to confirm that nothing runs without DNA? Why should life not also arise from minerals? These are thoughts to make jaws drop, not only among laymen. Venter is also concerned that Shapiro defines life all too loosely. But both the geneticist and the chemist focus on the moment at which life is breathed into an inanimate object. This will be, in Venter's opinion, the next milestone in the investigation and conditioning of life. We can no longer beat around the bush: What is life? Venter declines to answer; he doesn't want to be drawn into philosophical bullshit, as he says. Is a virus a life form? Must life, in order to be recognized as life, be self-reproducing? A colorful butterfly glides through the debate. Life can appear so weightless. And it is so difficult to describe and define.

Seth Lloyd, the quantum mechanic from MIT, points out mischievously that we know far more about the origin of the universe than we do about the origin of life. Using the quantum computer as his departing point, he tries to give us an idea of the huge number of possibilities out of which life could have developed. If Albert Einstein did not wish to envisage a dice-playing god, Lloyd, the entertaining thinker, can't help but see only dice-playing, though presumably without the assistance of god. Everything reveals itself in his life panorama as a result of chance, whether here on Earth or at an incomprehensible distance.

Astrophysicist Dimitar Sasselov also works under the auspices of chance. Although his field of research necessarily widens our perspective, he can present us only a few places in the universe that could be suitable for life. Only five super-Earths, as Sasselov calls those planets that are larger than Earth, are known to us at this point. With improved recognition technologies, perhaps a hundred million could be found in the universe in all. No, distributed throughout and applied to the entire universe, that is still not a grand number. But the number is large enough to give us hope for real co-inhabitants of our universe. Somewhere, sometime, we could encounter microbial life.

Most likely this would be life in a form that we cannot even fathom yet. It will all depend on what we, strange life forms that we are, can acknowledge as life. At Eastover Farm our imaginative powers were already being vigorously tested.

Text: F.A.Z., 31.08.2007, No. 202 / page 33

Translated by Karla Taylor

 


SÜDDEUTSCHE ZEITUNG
September 3, 2007
FEUILLETON — Front Page

 

Darwin Was Just a Phase? (Darwin war nur eine Phase)
Country Life in Connecticut: Six scientists find the future in genetic engineering
By Andrian Kreye [111]

The origins of life were the subject of discussion on a summer day when six pioneers of science convened at Eastover Farm in Connecticut. The physicist and scientific theorist Freeman Dyson was the first of the speakers to talk on the theme: "Life: What a Concept!" An ironic slogan for one of the most complex problems. Seth Lloyd, quantum physicist at MIT, summed it up with his remark that scientists now know everything about the origin of the Universe and virtually nothing about the origin of life. Which makes it rather difficult to deal with the new world view currently taking shape in the wake of the emerging age of biology.

The roster of thinkers had assembled at the invitation of literary agent John Brockman, who specializes in scientific ideas. The setting was distinguished. Eastover Farm sits in the part of Connecticut where the rich and famous New Yorkers who find the beach resorts of the Hamptons too loud and pretentious have settled. Here the scientific luminaries sat at long tables in the shade of the rustling leaves of maple trees, breaking just for lunch at the farmhouse.

The day remained on topic, as Brockman had invited only half a dozen journalists, to avoid slowing the thinkers down with an onslaught of too many layman's questions. The object was to have them talk about ideas mainly amongst themselves in the manner of a salon, not unlike his online forum edge.org. Not that the day went over the heads of the non-scientist guests. With Dyson, Lloyd, genetic engineer George Church, chemist Robert Shapiro, astronomer Dimitar Sasselov and biologist and decoder of the genome J. Craig Venter, six men came together, each of whom has made enormous contributions in interdisciplinary science, and as a consequence has mastered the ability to talk to people who are not well-read in their respective fields. This made it possible for an outsider to follow the discussions, even if at moments he was made to feel just that, as when Robert Shapiro cracked a joke about RNA that was met with great laughter from the scientists.

Freeman Dyson, a fragile gentleman of 84 years, opened the morning with his legendary provocation that Darwinian evolution represents only a short phase of three billion years in the life of this planet, a phase that will soon reach its end. According to this view, life began in primeval times with a haphazard assemblage of cells; RNA-driven organisms ensued, which, in the third phase of terrestrial life, would have learned to function together. Reproduction appeared on the scene in the fourth phase; multicellular beings and the principle of death appeared in the fifth phase.

The End of Natural Selection

We humans belong to the sixth phase of evolution, which progresses very slowly by way of Darwinian natural selection. But this, according to Dyson, will soon come to an end, because men like George Church and J. Craig Venter are expected to succeed not only in reading the genome, but also in writing new genomes in the next five to ten years. This would constitute the ultimate "Intelligent Design", pun fully intended. Where this could lead is still difficult to anticipate. Yet Freeman Dyson finds a meaningful illustration. He spent the early nineteen fifties at Princeton with the mathematician John von Neumann, who designed one of the earliest programmable computers. When asked how many computers might be in demand, von Neumann assured him that 18 would be sufficient to meet the demand of a nation like the United States. Now, 55 years later, we are in the middle of the age of physics, where computers play an integral role in modern life and culture.

Now though we are entering the age of biology. Soon genetic engineering will shape our daily life to the same extent that computers do today. This sounds like science fiction, but it is already reality in science. Thus genetic engineer George Church talks about the biological building blocks that he is able to synthetically manufacture. It is only a matter of time until we will be able to manufacture organisms that can self-reproduce, he claims. Most notably J. Craig Venter succeeded in introducing a copy of a DNA-based chromosome into a cell, which from then on was controlled by that strand of DNA.

Venter, a suntanned giant with the build of a surfer and the hunting instinct of a captain of industry, understands the magnitude of this feat in microbiology. And he understands the potential of his research to create biofuel from bacteria. He wouldn't dare to say it, but he very well might be a Bill Gates of the age of biology. Venter also understands the moral implications. He approached bioethicist Art Caplan in the nineties and asked him to conduct a study on whether designing a new genome would raise ethical or religious objections. Not a single religious leader or philosopher involved in the study could find a problem there. Such contract studies are debatable. But here at Eastover Farm scientists dream of a glorious future. Because science as such is morally neutral, every scientific breakthrough can be applied for good or for bad.

The sun is already turning pink behind the treetops when Dimitar Sasselov, the Bulgarian astronomer from Harvard, once more reminds us how unique and, at the same time, how unstable the balance of our terrestrial life is. In our galaxy, astronomers have found roughly one hundred million planets that could theoretically harbor organic life. Not only does Earth not have the best conditions among them; it is actually at the very edge of the spectrum. "Earth is not particularly inhabitable," he says, wrapping up his talk. Here J. Craig Venter cannot help but remark, as an idealist: "But it is getting better all the time."

Translated by Karla Taylor

 


Andrian Kreye, Süddeutsche Zeitung

Jordan Mejias [90], Frankfurter Allgemeine Zeitung



RICHARD DAWKINS—FREEMAN DYSON: AN EXCHANGE

As part of this year's Edge Event at Eastover Farm in Bethlehem, CT, I invited three of the participants—Freeman Dyson, George Church, and Craig Venter—to come up a day early, which gave me an opportunity to talk to Dyson about his abovementioned essay in New York Review of Books entitled "Our Biotech Future" [127].

I also sent the link to the essay to Richard Dawkins, and asked if he would comment on what Dyson termed the end of "the Darwinian interlude".

Early the next morning, prior to the all-day discussion (which also included as participants Robert Shapiro, Dimitar Sasselov, and Seth Lloyd) Dawkins emailed his thoughts which I read to the group during the discussion following Dyson's talk. [NOTE: Dawkins asked me to make it clear that his email below "was written hastily as a letter to you, and was not designed for publication, or indeed to be read out at a meeting of biologists at your farm!"].

Now Dyson has responded and the exchange is below.

JB [3]


RICHARD DAWKINS [155] [8.27.07] Evolutionary Biologist, Charles Simonyi Professor For The Understanding Of Science, Oxford University; Author, The God Delusion

"By Darwinian evolution he [Woese] means evolution as Darwin understood it, based on the competition for survival of noninterbreeding species."

"With rare exceptions, Darwinian evolution requires established species to become extinct so that new species can replace them."

These two quotations from Dyson constitute a classic schoolboy howler, a catastrophic misunderstanding of Darwinian evolution. Darwinian evolution, both as Darwin understood it, and as we understand it today in rather different language, is NOT based on the competition for survival of species. It is based on competition for survival WITHIN species. Darwin would have said competition between individuals within every species. I would say competition between genes within gene pools. The difference between those two ways of putting it is small compared with Dyson's howler (shared by most laymen: it is the howler that I wrote The Selfish Gene partly to dispel, and I thought I had pretty much succeeded, but Dyson obviously hasn't read it!) that natural selection is about the differential survival or extinction of species. Of course the extinction of species is extremely important in the history of life, and there may very well be non-random aspects of it (some species are more likely to go extinct than others) but, although this may in some superficial sense resemble Darwinian selection, it is NOT the selection process that has driven evolution. Moreover, arms races between species constitute an important part of the competitive climate that drives Darwinian evolution. But in, for example, the arms race between predators and prey, or parasites and hosts, the competition that drives evolution is all going on within species. Individual foxes don't compete with rabbits, they compete with other individual foxes within their own species to be the ones that catch the rabbits (I would prefer to rephrase it as competition between genes within the fox gene pool).

The rest of Dyson's piece is interesting, as you'd expect, and there really is an interesting sense in which there is an interlude between two periods of horizontal transfer (and we mustn't forget that bacteria still practise horizontal transfer and have done throughout the time when eucaryotes have been in the 'Interlude'). But the interlude in the middle is not the Darwinian Interlude, it is the Meiosis / Sex / Gene-Pool / Species Interlude. Darwinian selection between genes still goes on during eras of horizontal transfer, just as it does during the Interlude. What happened during the 3-billion-year Interlude is that genes were confined to gene pools and limited to competing with other genes within the same species. Previously (and still in bacteria) they were free to compete with other genes more widely (there was no such thing as a species outside the 'Interlude'). If a new period of horizontal transfer is indeed now dawning through technology, genes may become free to compete with other genes more widely yet again.

As I said, there are fascinating ideas in Freeman Dyson's piece. But it is a huge pity it is marred by such an elementary mistake at the heart of it.


FREEMAN DYSON [156] [8.30.07] Physicist, Institute of Advanced Study, Author, Many Colored Glass: Reflections on the Place of Life in the Universe 

Dear Richard Dawkins,

Thank you for the E-mail that you sent to John Brockman, saying that I had made a "school-boy howler" when I said that Darwinian evolution was a competition between species rather than between individuals. You also said I obviously had not read The Selfish Gene. In fact I did read your book and disagreed with it for the following reasons.

Here are two replies to your E-mail. The first was a verbal response made immediately when Brockman read your E-mail aloud at a meeting of biologists at his farm. The second was written the following day after thinking more carefully about the question.

First response. What I wrote is not a howler and Dawkins is wrong. Species once established evolve very little, and the big steps in evolution mostly occur at speciation events when new species appear with new adaptations. The reason for this is that the rate of evolution of a population is roughly proportional to the inverse square root of the population size. So big steps are most likely when populations are small, giving rise to the "punctuated equilibrium" that is seen in the fossil record. The competition is between the new species with a small population adapting fast to new conditions and the old species with a big population adapting slowly.

In my opinion, both these responses are valid, but the second one goes more directly to the issue that divides us. Yours sincerely, Freeman Dyson.


Dimitar Sasselov [123], George Church [157], Robert Shapiro [122], John Brockman [3],

J. Craig Venter [120], Seth Lloyd [124], Freeman Dyson [156]


Seminars
Event Date: [ 7.21.02 ]
Location:
United States

The metaphors of information processing and computation are at the center of today's intellectual action. A new and unified language of science is beginning to emerge.

 

Participants:

Seth Lloyd: Computational Universe
Paul Steinhardt: Cyclic Universe
Alan Guth: Inflationary Universe
Marvin Minsky: Emotion Universe
Ray Kurzweil: Intelligent Universe

What's happening in these new scientific endeavors is truly a work in progress. A year ago, at the first REBOOTING CIVILIZATION [178] meeting in July 2001, physicists Alan Guth and Brian Greene, computer scientists David Gelernter, Jaron Lanier, and Jordan Pollack, and research psychologist Marc D. Hauser could not reach a consensus about exactly what computation is, when it is useful, when it is inappropriate, and what it reveals.

On July 21 of this year, Edge held an event at Eastover Farm that included the physicists Seth Lloyd, Paul Steinhardt, and Alan Guth, the computer scientist Marvin Minsky, and the technologist Ray Kurzweil. This year, I noted, there are a lot of "universes" floating around: Seth Lloyd, the computational universe (or, if you prefer, the it-and-bit, itty-bitty universe); Paul Steinhardt, the cyclic universe; Alan Guth, the inflationary universe; Marvin Minsky, the emotion universe; Ray Kurzweil, the intelligent universe. I asked each of the speakers to comment on their "universe". All, to some degree, were concerned with information processing and computation as central metaphors. See below for links to their talks and streaming video.

Concepts of information and computation have infiltrated a wide range of sciences, from physics and cosmology, to cognitive psychology, to evolutionary biology, to genetic engineering. Such innovations as the binary code, the bit, and the algorithm have been applied in ways that reach far beyond the programming of computers, and are being used to understand such mysteries as the origins of the universe, the operation of the human body, and the working of the mind.

Reporting on last year's event in The New York Times ("Time of Growing Pains for Information Age" [179], August 7, 2001), Dennis Overbye [180] wrote:

Mr. Brockman said he had been inspired to gather the group by a conversation with Dr. Seth Lloyd, a professor of mechanical engineering and quantum computing expert at M.I.T. Mr. Brockman recently posted Dr. Lloyd's statement on his Web site, www.edge.org: [181] "Of course, one way of thinking about all of life and civilization," Dr. Lloyd said, "is as being about how the world registers and processes information. Certainly that's what sex is about; that's what history is about."

Humans have always tended to try to envision the world and themselves in terms of the latest technology. In the 17th and 18th centuries, for example, workings of the cosmos were thought of as the workings of a clock, and the building of clockwork automata was fashionable. But not everybody in the world of computers and science agrees with Dr. Lloyd that the computation metaphor is ready for prime time.

Several of the people gathered under the maple tree had come in the hopes of debating that issue with Dr. Lloyd, but he could not attend at the last moment. Others were drawn by what Dr. Greene called "the glimmer of a unified language" in which to talk about physics, biology, neuroscience and other realms of thought. What happened instead was an illustration of how hard it is to define a revolution from the inside.

Indeed, exactly what computation and information are continue to be subjects of intense debate. But less than a year later, in the "Week In Review" section of the Sunday New York Times ("What's So New In A Newfangled Science?" [182], June 16, 2002) George Johnson [183] wrote about "a movement some call digital physics or digital philosophy — a worldview that has been slowly developing for 20 years."...

Just last week, a professor at the Massachusetts Institute of Technology named Seth Lloyd published a paper in Physical Review Letters estimating how many calculations the universe could have performed since the Big Bang — 10^120 operations on 10^90 bits of data, putting the mightiest supercomputer to shame. This grand computation essentially consists of subatomic particles ricocheting off one another and "calculating" where to go.

As the researcher Tommaso Toffoli mused back in 1984, "In a sense, nature has been continually computing the `next state' of the universe for billions of years; all we have to do — and, actually, all we can do — is `hitch a ride' on this huge ongoing computation."

This may seem like an odd way to think about cosmology. But some scientists find it no weirder than imagining that particles dutifully obey ethereal equations expressing the laws of physics. Last year Dr. Lloyd created a stir on Edge.org, a Web site devoted to discussions of cutting edge science, when he proposed "Lloyd's hypothesis": "Everything that's worth understanding about a complex system can be understood in terms of how it processes information."*....

[*See "Seth Lloyd: How Fast, How Small, and How Powerful: Moore's Law and the Ultimate Laptop [184]"]
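Lloyd's figure can be roughly reproduced with a back-of-the-envelope calculation. The sketch below assumes the Margolus-Levitin bound (at most 2E/πℏ elementary operations per second for a system of average energy E) and rough textbook values for the universe's mass and age; these inputs are my assumptions, not numbers taken from Lloyd's paper:

```python
import math

# Back-of-the-envelope version of Lloyd's estimate, using the
# Margolus-Levitin bound: ops/sec <= 2E / (pi * hbar).
# Input values are rough textbook figures, not Lloyd's own.
hbar = 1.055e-34   # reduced Planck constant, J*s
c = 3.0e8          # speed of light, m/s
mass = 1e53        # approximate mass of the observable universe, kg
age = 4.3e17       # approximate age of the universe, s (~13.7 Gyr)

energy = mass * c**2                            # total mass-energy, J
ops_per_second = 2 * energy / (math.pi * hbar)  # Margolus-Levitin rate
total_ops = ops_per_second * age

print(f"~10^{math.log10(total_ops):.0f} operations")  # ~10^121 operations
```

The result lands within an order of magnitude of the 10^120 operations quoted above; Lloyd's Physical Review Letters paper does the accounting more carefully.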

Dr. Lloyd did indeed cause a stir when his ideas were presented on Edge in 2001, but George Johnson's recent New York Times piece caused an even greater stir, as Edge received over half a million unique visits the following week, a strong confirmation that something is indeed happening here. (Usual Edge readership is about 60,000 unique visitors a month.) There is no longer any doubt that the metaphors of information processing and computation are at the center of today's intellectual action. A new and unified language of science is beginning to emerge.


THE COMPUTATIONAL UNIVERSE [185]: SETH LLOYD [185] [9.19.02]

Every physical system registers information, and just by evolving in time, by doing its thing, it changes that information, transforms that information, or, if you like, processes that information. Since I've been building quantum computers I've come around to thinking about the world in terms of how it processes information.

 

SETH LLOYD is Professor of Mechanical Engineering at MIT and a principal investigator at the Research Laboratory of Electronics. He is also adjunct assistant professor at the Santa Fe Institute. He works on problems having to do with information and complex systems, from the very small — how do atoms process information, how can you make them compute? — to the very large — how does society process information, and how can we understand society in terms of its ability to process information?

His seminal work in the fields of quantum computation and quantum communications — including proposing the first technologically feasible design for a quantum computer, demonstrating the viability of quantum analog computation, proving quantum analogs of Shannon's noisy channel theorem, and designing novel methods for quantum error correction and noise reduction — has gained him a reputation as an innovator and leader in the field of quantum computing. Lloyd has been featured widely in the mainstream media including the front page of The New York Times, The LA Times, The Washington Post, The Economist, Wired, The Dallas Morning News, and The Times (London), among others. His name also frequently appears (both as writer and subject) in the pages of Nature, New Scientist, Science and Scientific American.


THE CYCLIC UNIVERSE: PAUL STEINHARDT [186] [9.16.02]

...in the last year I've been involved in the development of an alternative theory that turns the cosmic history topsy-turvy. All the events that created the important features of our universe occur in a different order, by different physics, at different times, over different time scales—and yet this model seems capable of reproducing all of the successful predictions of the consensus picture with the same exquisite detail.


THE INFLATIONARY UNIVERSE: ALAN GUTH [187] [9.16.02]

Inflationary theory itself is a twist on the conventional Big Bang theory. The shortcoming that inflation is intended to fill in is the basic fact that although the Big Bang theory is called the Big Bang theory it is, in fact, not really a theory of a bang at all; it never was.


THE EMOTION UNIVERSE: MARVIN MINSKY [188] [9.16.02]

To say that the universe exists is silly, because it's saying that the universe is one of the things in the universe. There's something wrong with that idea. If you carry that a little further, then it doesn't make any sense to have a predicate like, "Where did the universe come from?" or "Why does it exist?"


THE INTELLIGENT UNIVERSE: RAY KURZWEIL [189] [9.16.02]

The universe has been set up in an exquisitely specific way so that evolution could produce the people that are sitting here today and we could use our intelligence to talk about the universe. We see a formidable power in the ability to use our minds and the tools we've created to gather evidence, to use our inferential abilities to develop theories, to test the theories, and to understand the universe at increasingly precise levels.


 

WHICH UNIVERSE WOULD YOU LIKE?

Five stars of American science meet in Connecticut to explain first and last things.

By Jordan Mejias [126]

August 28, 2002

 

They begin a free-floating debate, which drives them back and forth across the universe. Guth encourages the exploration of black holes, not to be confused with cosmic wormholes, which Kurzweil — just like the heroes of Star Trek — wants to use as a shortcut for his intergalactic excursions and as a means of overtaking light. Steinhardt suggests that we should realize that we are not familiar with most of what the cosmos consists of and do not understand its greatest force, dark matter. Understand? There is no such thing as a rational process, Minsky objects; it is simply a myth. In his cosmos, emotion is a word we use to circumscribe another form of our thinking that we cannot yet conceive of. Emotion, Kurzweil interrupts, is a highly intelligent form of thinking. "We have a dinner reservation at a nearby country restaurant," says Brockman in an emotionally neutral tone.

Seminars
Event Date: [ 9.10.01 ]
Location:
United States

Everything is up for grabs. Everything will change. There is a magnificent sweep of intellectual landscape right in front of us.

One aspect of our culture that is no longer open to question is that the most significant developments in the sciences today (i.e. those that affect the lives of everybody on the planet) are about, informed by, or implemented through advances in software and computation. This Edge event presented an opportunity for people in the various fields of computer science, cosmology, cognition, evolutionary biology, etc., to begin talking to each other, to become aware of interesting and important work in other fields.

Participants:

Marc D. Hauser Lee Smolin Brian Greene Jaron Lanier Jordan Pollack David Gelernter Alan Guth


HAUSER, SMOLIN, GREENE, LANIER, POLLACK, GELERNTER, GUTH at the Edge "REBOOTING CIVILIZATION" meeting at Eastover Farm. Opening comments [12,000 words] and streaming video. 


Software and computation are reinventing the civilized world —"rebooting civilization," in the words of David Gelernter. "It's a software-first world," notes Stanford AI expert Edward Feigenbaum, chief scientist of the U.S. Air Force in the mid-nineties. "It's not a mistake that the world's two richest men are pure software plays. Or that the most advanced fighter planes in the U.S. Air Force are bundles of software wrapped in aluminum shells, or that the most advanced bomber is run by computers and cannot be flown manually". Everybody in business today is in the software business. But what comes after software?

Experimental psychologist Steven Pinker speaks of "a new understanding that the human mind is a remarkably complex processor of information." To Pinker, our minds are "organs of computation." To philosopher Daniel C. Dennett, "the basic idea of computation, as formulated by the mathematicians John von Neumann and Alan Turing, is in a class by itself as a breakthrough idea." Dennett asks us to think about the idea that what we have in our heads is software, "a virtual machine, in the same way that a word processor is a virtual machine." Pinker and Dennett are talking about our mental life in terms of the idea of computation, not simply proposing the digital computer as a metaphor for the mind. Other scientists, such as physicist Freeman Dyson, disagree, but most recognize that these are big questions.

Physicist David Deutsch, a pioneer in the development of the quantum computer, points out that "the chances are that the technological implications of quantum computers, though large by some standards, are never going to be the really important thing about them. The really important thing is the philosophical implications, epistemological and metaphysical. The largest implication, from my point of view, is the one that we get right from the beginning, even before we build the first quantum computer, before we build the first qubit. The very theory of quantum computers already forces upon us a view of physical reality as a multiverse."

Computer scientist and AI researcher Rodney Brooks is puzzled that "we've got all these biological metaphors that we're playing around with — artificial immunology systems, building robots that appear lifelike — but none of them come close to real biological systems in robustness and in performance. They look a little like it, but they're not really like biological systems." Brooks worries that in looking at biological systems we are missing something that is already there — that has always been there. 
To Brooks, this might be called "the essence of life," but he is talking about a biochemical phenomenon, not a metaphysical one. Brooks is searching for a new conceptual framework that, like computation, does not involve any new physics or chemistry — a framework that gives us a different way of thinking about the stuff that's there. "We see the biological systems, we see how they operate," he says, "but we don't have the right explanatory modes to explain what's going on and therefore we can't reproduce all these sorts of biological processes. That to me right now is the deep question."


— JB


MARC HAUSER [195]

Some of the problems that we've been dealing with in the neurosciences and the cognitive sciences concern the initial state of the organism. What do animals, including humans, come equipped with? What are the tools that they have to deal with the world as it is? There's somewhat of an illusion in the neurosciences that we have really begun to understand how the brain works. That's put quite nicely in a recent talk by Noam Chomsky. The title of the talk was "Language and the Brain."

Everybody's very surprised to hear him mention the brain word, since he's mostly referred to the mind. The talk was a warning to the neuroscientists about how little we know, especially when it comes to understanding how the brain actually does language. Here's the idea Chomsky played with, which I think is quite right. Let's take a very simple system that is actually very good at a kind of computation: the honey bee. Here is this very little insect, tiny little brain, simple nervous system, that is capable of transmitting information about where it's been and what it's eaten to a colony, and that information is sufficiently precise that the colony members can go find the food. We know that that kind of information is encoded in the signal because people in Denmark have created a robotic honey bee that you can plop in the middle of a colony, programmed to dance in a certain way, and the hive members will actually follow the information precisely to that location. Researchers have been able to understand the information processing system to this level, and consequently can actually transmit it through the robot to other members of the hive. When you step back and ask what we know about how the brain of a honeybee represents that information, the answer is: we know nothing. Thus, our understanding of the way in which a bee's brain represents its dance, its language, is quite poor. And this lack of understanding comes from the study of a relatively simple nervous system, especially when contrasted with the human nervous system.

So the point that Chomsky made, which I think is a very powerful one, and not that well understood, is that what we actually know about how the human brain represents language is at some level very trivial. That's not to say that neuroscientists haven't made quite a lot of impact on, for example, what areas of the brain when damaged will wipe out language. For example, we know that you can find patients who have damage to a particular part of the brain that results in the loss of representations for consonants, while other patients have damage that results in the loss of representations for vowels.

But we know relatively little about how the circuitry of the brain represents the consonants and vowels. The chasm between the neurosciences today and understanding representations like language is very wide. It's a delusion that we are going to get close to that any time soon. We've gotten almost nowhere in understanding how the bee's brain represents the simplicity of the dance language. Although any good biologist, after several hours of observation, can predict accurately where the bee is going, we currently have no understanding of how the brain actually performs that computation.

The reason there have been some advances in the computational domain is that there have been a lot of systems whose behavior showcases what the problem truly is, ranging from echolocation in bats to long-distance navigation in birds. For humans, Chomsky's insights into the computational mechanisms underlying language really revolutionized the field, even though not all would agree with the approach he has taken. Nonetheless, the fact that he pointed to the universality of many linguistic features, and the poverty of the input for the child acquiring language, suggested that an innate computational mechanism must be at play. This insight set much of the cognitive sciences in motion. That's a verbal claim, and as Chomsky himself would quickly recognize, we really don't know how the brain generates such computation.

One of the interesting things about evolution that's been telling us more and more is that even though evolution has no direction, one of the things you can see, for example, within the primates is that a part of the brain that actually stores the information for a representation, the frontal lobes of our brain, has undergone quite a massive change over time. So you have systems like the apes who probably don't have the neural structures that would allow them to do the kind of computations you need to do language-processing. In our own work we've begun to look at the kinds of computations that animals are capable of, as well as the kind of computations that human infants are capable of, to try to see where the constraints lie.

Whenever nature has created systems that seem to be open-ended and generative, they've used some kind of system with a discrete set of recombinable elements. The question you can begin to ask in biology is, what kind of systems are capable of those kinds of computational processes. For example, many organisms seem to be capable of quite simple statistical computations, such as conditional probabilities that focus on local dependencies: if A, then B. Lots of animals seem capable of that. But when you step up to the next level in the computational hierarchy, one that requires recursion, you find great limitations both among animals and human infants. For example, an animal that can do if A then B, would have great difficulty doing if A to the N, then B to the N. We now begin to have a loop. If animals lack this capacity, which we believe is true, then we have identified an evolutionary constraint; humans seem to have evolved the capacity for recursion, a computation that liberated us in an incredible way.
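The contrast Hauser draws, between local "if A, then B" dependencies and A^n B^n patterns, is the classic boundary between finite-state and recursive computation. A minimal sketch in Python (the function names and the two toy checkers are illustrative, not taken from any study):

```python
import re

# Local dependency "if A, then B": every A is immediately followed
# by a B. A finite-state pattern (regular expression) suffices;
# no counting or recursion is needed.
def has_local_dependency(s):
    return re.fullmatch(r'(AB)*', s) is not None

# A^n B^n: n As followed by exactly n Bs. Recognizing this requires
# matching counts, which is what recursion (or an unbounded counter)
# provides; no fixed finite-state machine handles unbounded n.
def is_a_n_b_n(s):
    if s == '':
        return True
    return (len(s) >= 2 and s[0] == 'A' and s[-1] == 'B'
            and is_a_n_b_n(s[1:-1]))

print(has_local_dependency('ABABAB'))  # True: three local A->B pairs
print(is_a_n_b_n('AAABBB'))            # True: A^3 B^3
print(is_a_n_b_n('AABBB'))             # False: the counts do not match
```

A fixed-memory scanner can verify the local pairs, but matching A^n B^n means keeping count of the As to check them off against the Bs; that counting capacity is what recursion buys.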

It allows us to do mathematics as well as language. And this system of taking discrete or particulate elements and recombining them, is what gives genetics and chemistry their open ended structure. Given this pattern, an interesting question then is: what were the selective pressures that led to the evolution of a recursive system? Why is it that humans seem to be the only organisms on the planet, the only natural system, that has this capacity? What were the pressures that created it? Thinking about things like artificial intelligence, what would be the kinds of pressures on an artificial system that would get to that end point?

An interesting problem for natural biological systems as well as artificial systems is whether the two can meet, to try to figure out what kinds of pressures lead to a capacity for recursion, what are the building blocks that must be in place for the system to evolve? Comparative biology doesn't provide any helpful hints at present because we simply have two end points, humans that do it, and other organisms that don't. At this point in time, therefore, this evolutionary transition is opaque.

MARC D. HAUSER, a cognitive neuroscientist, is a professor in the departments of Psychology and the Program in Neurosciences at Harvard, where he is also a fellow of the Mind, Brain, and Behavior Program. He is the author of The Evolution of Communication, The Design of Animal Communication (with M. Konishi), and Wild Minds: What Animals Really Think.


LEE SMOLIN [196]

As a theoretical physicist, my main concern is space, time and cosmology. The metaphor about information and computation is interesting. There are some people in physics who have begun to talk as if we all know that what's really behind physics is computation and information, who find it very natural to say things like anything that's happening in the world is a computation, and all of physics can be understood in terms of information. There's another set of physicists who have no idea what those people are talking about. And there's a third set — and I'm among them — who begin by saying we have no idea what you're talking about, but we have reasons why it would be nice if it was useful to talk about physics in terms of information.

I can mention two ways in which the metaphor of information and computation may be infiltrating into our thinking about fundamental physics, although we're a long way from really understanding these things. The first is that the mathematical metaphor and the conceptual metaphor of a system of relationships which evolves in time is something which is found in physics. It is also something that we clearly see when we talk to computer scientists and biologists and people who work on evolutionary theory, that they tend to model their systems in terms of networks where there are nodes and there are relationships between the nodes, and those things evolve in time, and they can be asking questions about the time evolution, what happens after a long time, what are the statistical properties of subsystems.

That kind of idea came into physics a long time ago with relativity theory and general relativity. The idea that all the properties of interest are really about relationships between things and not a relationship between some thing and some absolute fixed background that defines what anything means is an important idea and an old idea in physics. In classical general relativity, one sees the realization of the idea that all the properties that we observe are about relationships. Those of us who are interested in quantum gravity are thinking a lot about how to bring that picture, in which the world is an evolving network of relationships, into quantum physics.

And there are several different aspects of that. There are very interesting ideas around but they're in the stage of interesting ideas, interesting models, interesting attempts — it is science in progress.

That's the first thing. To the extent that our physics turns out to look like a network of relationships evolving in time, physics will look like the kind of system that computational people, or biologists using the computational metaphor, may be studying. Part of that is the question of whether nature is really discrete — whether, underlying the continuous notion of space and time, there is some discrete structure. When we work on quantum gravity we find evidence, from several different points of view, that space and time are really discrete and are really made up of processes which may have some discrete character. But again, this is something in progress.

One piece of evidence that nature is discrete comes from something called the holographic principle. This leads some of us physicists to use the word information even when we don't really know what we're talking about, but it is interesting and worth exploring. It comes from an idea called the Bekenstein Bound, a conjecture of Jacob Bekenstein for which there is more and more theoretical evidence. The Bekenstein Bound says that if I have a surface and I'm making observations on that surface — that surface could be my retina, or it could be some screen in front of me through which I observe the world — then at any one moment there is a limit to the amount of information that can be observed on that screen.

First of all, that amount of information is finite — at most about one bit of information per four Planck areas of the screen, where a Planck area is roughly 10 to the minus 66 square centimeters. And there are various arguments that if that bound were exceeded, in a world with relativity and black holes, we would violate the Second Law of Thermodynamics. Since none of us wants to violate the Second Law of Thermodynamics, I think it's an important clue, and it says something important about the underlying discreteness of nature. It also suggests that information — although we don't know what information is — may have some fundamental place in physics.
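
For a sense of scale, the bound is usually stated as S = A / (4 l_p²) in Planck units (equivalently, on the order of one bit per four Planck areas, up to a factor of ln 2). A back-of-the-envelope calculation, with order-of-magnitude constants only:

```python
# Rough arithmetic for the Bekenstein/holographic bound: the maximum
# entropy on a surface scales as its area in Planck units,
# S = A / (4 * l_p^2) in nats; dividing by ln 2 converts to bits.
# Numbers are order-of-magnitude illustrations, not precise physics.
import math

PLANCK_LENGTH_CM = 1.616e-33          # Planck length in centimeters
planck_area = PLANCK_LENGTH_CM ** 2   # ~2.6e-66 cm^2, the figure in the text

def max_bits(area_cm2):
    """Holographic bound on information observable on a surface of given area."""
    nats = area_cm2 / (4 * planck_area)
    return nats / math.log(2)

# Even a 1 cm^2 "screen" bounds an astronomical amount of information:
print(f"{max_bits(1.0):.2e} bits")   # on the order of 1e65 bits
```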

The holographic principle — of which there are several versions by different people; the idea was invented by the Dutch theoretical physicist Gerard 't Hooft — is that the laws of physics could be rewritten, including the dynamics of how things evolve in time, so that we're no longer talking about things happening out there in the world in space; we're talking about representing the systems we observe in terms of information as it evolves on the screen. The metaphor is that there's a screen through which we're observing the world. There are various claims that this idea is realized, at least partly, in several different versions of string theory and quantum gravity. There's a lot of interest in this idea, but we really don't know whether it can be realized completely or not.

One extreme form of it, which I like, is that perhaps the way to read the Bekenstein Bound is not that there are two different things, geometry and the flow of information, plus a law relating them, but that we could try to envision the world as one of these evolving networks. What happens is a set of processes in which "information" — whatever information is — flows from event to event, and geometry is defined from that: the area of a surface would be a measure of the information capacity of the channel through which information flows from the past to the future. Geometry — that is, space — would then turn out to be a derived quantity, like temperature or density. Just as temperature is a measure of the average energy of some particles, the area of a surface would turn out to be an approximate measure of the capacity of some channel, and the world would fundamentally be information flow. It's an idea that some of us like to play with, but we have not yet constructed physics on these grounds, and it's not at all clear that it will work. This would be a transition to a computational metaphor in physics — it's something which is in progress, and may or may not happen.

LEE SMOLIN, a theoretical physicist, is a founding member and research physicist at the Perimeter Institute in Waterloo, Canada. He is the author of The Life of the Cosmos and Three Roads to Quantum Gravity.


BRIAN GREENE [197]

Physics and everything we know in the world around us may really be tied to processes whose fundamental existence is not here around us, but rather exists on some distant bounding surface, like some thin hologram which, illuminated in the right way, can reproduce what looks like a three-dimensional world. Perhaps our three-dimensional world is really just a holographic illumination of laws that exist on some thin bounding slice, like that thin little piece of plastic, that thin hologram. It's an amazing idea, and I think it is likely to be where physics goes in the next few years, or in the next decade, at least when one's talking about quantum gravity or quantum string theory.

[Opening comments to come.]

BRIAN GREENE, professor of physics and of mathematics at Columbia University, is widely regarded for a number of groundbreaking discoveries in superstring theory. He is the author of The Elegant Universe: Superstrings, Hidden Dimensions, and the Quest for the Ultimate Theory.


JARON LANIER [198]

One of the striking things about being a computer scientist in this age is that all sorts of other people are happy to tell us that what we do is the central metaphor of everything, which is very ego-gratifying. We hear from various quarters that our work can serve as the best way of understanding - if not in the present, then any minute now because of Moore's Law - everything from biology to the economy to aesthetics, child-rearing, sex, you name it. I have found myself being critical of what I view as this overuse of the computational metaphor. My initial motivation was that I thought there was naive and poorly constructed philosophy at work. It's as if these people had never read philosophy at all and had no sense of epistemological or other problems.

Then I became concerned for a different reason, which was pragmatic and immediate: I became convinced that the overuse of the computational metaphor was actually harming the quality of present-day computer system design. One example: the belief that people and computers are similar - the artificial intelligence mindset - has a tendency to create systems that are naively and overly automated. Think of the Microsoft word processor that attempts to retype what you've just typed: the notion of trying to make computers into people, because somehow that agenda of making them into people is so important that if you jump the gun it has to be for the greater good, even if it makes the current software stupid.

There's a third reason to be suspicious of the overuse of computer metaphors, and that is that it leads us by reflection to have an overly simplistic view of computers. The particular simplification of computers I'm concerned with is imagining that Moore's Law applies to software as well as hardware. More specifically, that Moore's Law applies to things that have to have complicated interfaces with their surroundings as opposed to things that have simple interfaces with their surroundings, which I think is the better distinction.

Moore's Law is truly an overwhelming phenomenon; it represents the greatest triumph of technology ever, the fact that we could keep on this track that was predicted for all these many years and that we have machines that are a million times better than they were at the dawn of our work, which was just a half century ago. And yet during that same period of time our software has really not kept pace. In fact not only could you argue that software has not improved at the same rate as hardware, you could even argue that it's often been in retrograde. It seems to me that our software architectures have not even been able to maintain their initial functionality as they've scaled with hardware, so that in effect we've had worse and worse software. Most people who use personal computers can experience that effect directly, and it's true in most situations.
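
As a rough sanity check on the "million times" figure, the arithmetic of doubling is easy to spell out. The two-year doubling period below is the conventional Moore's Law cadence, not a figure from the text:

```python
# Quick check of the "machines a million times better" claim: with a
# doubling time of roughly two years, how many doublings fit in a span?
def improvement_factor(years, doubling_years=2.0):
    return 2 ** (years / doubling_years)

print(f"{improvement_factor(40):.0e}")  # 2^20, about a million, over 40 years
print(f"{improvement_factor(50):.0e}")  # 2^25 over a full half century
```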

But I want to emphasize that the real distinction that I see is between systems with simple interfaces to their surroundings and systems with complex interfaces. If you want to have a fancy user interface and you run a bigger thing it just gets awful. Windows doesn't scale.

One question to ask is, why does software suck so badly? There are a number of answers to that. The first thing I would say is that I have absolutely no doubt that David Gelernter's framework of streams is fundamentally and overwhelmingly superior to the basis on which our current software is designed. The next question is, is that enough to cause it to come about? It really becomes a competition between good taste and good judgment on the one hand, and legacy and corruption on the other - which are effectively two words for the same thing. What happens with software systems is that the legacy effects end up being the overwhelming determinants of what can happen next as the systems scale.

For instance, there is the idea of the computer file, which was debated up until the early 80s. There was an active contingent that thought the idea of the file wasn't a good thing, and that we should instead have a massive distributed database with a micro-structure of some sort. The first (unreleased) version of the Macintosh did not have files. But Unix jumped the fence from the academic to the business world and it had files, and the Macintosh ultimately came out with files, and the Microsoft world had files, and basically everything has files. At this point, when we teach undergraduates computer science, we do not talk about the file as an invention, but speak of it as if it were a photon - because it is, in effect, more likely to still be around in 50 years than the photon.

I can imagine physicists coming up with some reasons not to believe in photons anymore, but I cannot imagine any way we could tell you not to believe in files. We are stuck with the damn things. That legacy effect is truly astonishing - the non-linearity of the cost of undoing decisions that have been made. The remarkable degree to which the arrow of time is amplified, in all its brutality, in software development is extraordinary, and perhaps one of the things that really distinguishes software from other phenomena.

Back to the physics for a second. One of the most remarkable and startling insights in 20th-century thought was Claude Shannon's connection of information and thermodynamics. For all these years of working with computers I've been looking at these things and thinking, "Are these bits the same bits Shannon was talking about, or is there something different?" I still don't know the answer, but I'd like to share my recent thoughts because I think this all ties together. If you wish to treat the world as being computational - if you wish to say that the pair of sunglasses I am wearing is a computer that has sunglass input and output - then you would have to say that not all of the bits that are potentially measurable are in practice having an effect. Most of them are lost in statistical effects, and the situation has to be rather special for a particular bit to matter.

In fact, particular bits really do matter. If somebody says "I do" in the right context, that means a lot, whereas a similar number of bits of information coming in another context might mean much less. Various measurable bits in the universe have vastly different potentials to have a causal impact. If you could possibly delineate all the bits, you would probably see some dramatic power law, where a small number of bits had tremendously greater potential for having an effect and a vast number had very small potentials. The bits with the potential for great effect are probably the ones computer scientists are concerned with, and Shannon's theory probably doesn't differentiate between those bits, as far as it goes.
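
The conjectured power law can at least be illustrated numerically. Assuming, purely hypothetically, a Zipf-like distribution (the impact of the rank-k bit proportional to 1/k), a small minority of bits carries most of the total impact:

```python
# Illustration of the conjectured power law of causal impact: assign each
# of N "bits" an impact following a Zipf-like law, impact(rank) ~ 1/rank,
# and ask what fraction of total impact the top 1% of bits carries.
# The distribution is a hypothetical stand-in, not a claim from the text.
N = 100_000
impacts = [1.0 / rank for rank in range(1, N + 1)]
total = sum(impacts)
top_1_percent = sum(impacts[: N // 100])
print(f"top 1% of bits carry {top_1_percent / total:.0%} of total impact")
```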

Then the question is how we distinguish between the bits: what differentiates one from the other, and how can we talk about them? One speculation is that legacy effects have something to do with it. If you have a system with a vast configuration space, as our world is, and you have some process, perhaps an evolutionary process, that's searching through possible configurations, then rather than just a meandering random walk, perhaps what we see in nature is a series of stair steps, where legacies are created that prohibit large numbers of configurations from ever being searched again, and there's a series of refinements.

Once DNA has won out, variants of DNA are very unlikely to appear. Once Windows has appeared, it's stuck around, and so forth. Perhaps what happens is the legacy effect, which arises from the non-linearity of the tremendous expense of reversing certain kinds of systems. The legacies that are created are like lenses that amplify certain bits to be more important. This suggests that legacies are similar to semantics on some fundamental level, and that the legacy effect might have something to do with the syntax/semantics distinction, to the degree that distinction is meaningful. It's the first glimmer of a definition of semantics I've ever had, because I've always thought the word didn't mean a damn thing except "what we don't understand". But I'm beginning to think what it might mean is the legacies that we're stuck with.
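
The stair-step picture, accepted choices that can never be revisited, can be sketched as a toy search. The fitness table and the freezing rule below are invented for illustration; the point is only that irreversible legacies can strand a search below what ordinary hill climbing reaches:

```python
# Toy model of legacy lock-in: hill-climb over a binary configuration
# space; optionally, once a flip is accepted that bit is frozen forever
# (the "stair step" legacy). With an interaction among the first two
# bits, the frozen search gets permanently stuck below the ordinary one.

def fitness(c):
    # Payoff of the first two bits interacts; remaining bits just add up.
    table = {(0, 0): 0, (1, 0): 1, (1, 1): 2, (0, 1): 4}
    return table[(c[0], c[1])] + sum(c[2:])

def hill_climb(fitness, n_bits=12, sweeps=5, freeze=False):
    config = [0] * n_bits
    frozen = set()
    for _ in range(sweeps):
        for i in range(n_bits):
            if i in frozen:
                continue
            trial = list(config)
            trial[i] ^= 1                  # try flipping one bit
            if fitness(trial) > fitness(config):
                config = trial
                if freeze:
                    frozen.add(i)          # legacy: never revisit this decision
    return config

free = hill_climb(fitness)                 # can undo earlier choices
locked = hill_climb(fitness, freeze=True)  # legacies locked in forever
print(fitness(free), fitness(locked))      # the frozen search ends up worse
```

The frozen search greedily sets the first bit, locks it, and can never later undo it to reach the better configuration that the unconstrained climb finds.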

To tie the circle back to the "Rebooting Civilization" question, what I'm hoping might happen is that as we start to gain a better understanding of how enormously difficult, slow, expensive, tedious, and rare an event it is to program a very large computer well - as soon as we have a sense and appreciation of that - I think we can overcome the sort of intoxication that overcomes us when we think about Moore's Law, and start to apply computational metaphors more soberly, both to natural science and to metaphorical purposes for society and so forth. A well-appreciated computer - one whose image included the difficulty of making large software well - could serve as a far more beneficial metaphor than the cartoon computer, which is based only on Moore's Law: all you have to do is make it fast and everything will suddenly work, the computers-will-become-smarter-than-us-if-you-just-wait-20-years sort of metaphor that has been prevalent lately.

The really good computer simulations that do exist in biology and in other areas of science, and I've been part of a few that count, particularly in surgical prediction and simulation, and in certain neuroscience simulations, have been enormously expensive. It took 18 years and 5,000 patients to get the first surgical simulation to the point of testable usability. That is what software is, that's what computers are, and we should de-intoxicate ourselves from Moore's Law before continuing with the use of this metaphor.

JARON LANIER, a computer scientist and musician, is best known for his work in virtual reality. He is the lead scientist for the National Tele-Immersion Initiative, a consortium of universities studying the implications and applications of next-generation Internet technologies.


JORDAN POLLACK [199]

The limits of software engineering have been clear now for about 20 years. We reached a limit in the size of the programs that we could build, and since then we've essentially just been putting them together in different packages and adding wallpaper. Windows is little more than just DOS with wallpaper - it doesn't really add any more fundamental complexity or autonomy to the process.

Being in AI, I see this "scale" of programming as the problem, not the speed of computers. It's not that we don't understand some principles; it's that we just can't write a program big enough. We have really big computers. You could hook up a Beowulf cluster, you could hook up a Cray supercomputer, to the smallest robot, and if you knew how to make the robot be alive, it would be alive - if all you needed was computer time. Moore's Law won't solve AI. Computer time is not what we need; we need to understand how to organize systems of biological complexity.

What I've been working on for the past decade or so has been this question of self-organization. How can a system of chemicals heated by the sun dissipate energy and become more and more complex over time? If we really understood that, we'd be able to build it into software, we'd be able to build it into electronics, and if we got the theory right, we would see a piece of software that ran, wasted energy in the form of computer cycles, became more and more complex over time, and perhaps busted through the ten-million-line code limit. In this field, which I've been calling co-evolutionary learning, we have had limited successes in areas like games, problem-solving, and robotics, but no open-ended self-organizing reaction. Yet.

It looks like biological complexity comes from the interplay of several different fields: physics, evolution, and game theory. What's possible is determined by a set of rules. There are the immutable rules, and systems that obey these rules create new systems which operate with rule-like behaviors that we think of as computation. Evolution enables a kind of exploration of the possible, putting together components in various ways, exploring a constrained space of possible designs. What was the fitness, what was the affordance, what was the reason that something arose? But that's only part of it. Physics (the rules) determines what's possible. Evolution (the variation) explores the possible. The game determines what persists.
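
The three roles just named (rules gate what's possible, variation explores it, the game decides what persists) can be sketched as a minimal loop. All specifics, the bit-string genomes, the budget rule, the payoff, are invented for illustration:

```python
# The three roles as a minimal loop. Genomes are bit strings; every
# specific choice below (budget rule, payoff, population size) is a
# stand-in invented for illustration.
import random

def admissible(genome):            # "physics": immutable rules gate the possible
    return sum(genome) <= 6        # e.g., a fixed energy budget

def mutate(genome, rng):           # "evolution": variation explores the possible
    g = list(genome)
    g[rng.randrange(len(g))] ^= 1
    return g

def payoff(genome):                # "the game": decides what persists
    return sum(genome)

def run(n_bits=8, pop_size=20, generations=50, seed=2):
    rng = random.Random(seed)
    pop = [[0] * n_bits for _ in range(pop_size)]
    for _ in range(generations):
        child = mutate(max(pop, key=payoff), rng)   # winners reproduce
        if admissible(child):                       # the rules reject the impossible
            worst = min(range(pop_size), key=lambda j: payoff(pop[j]))
            pop[worst] = child                      # losers are displaced
    return max(payoff(g) for g in pop)

# Payoff climbs toward the ceiling the rules impose (6), never past it.
print(run())
```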

When we look at the results of our evolution and co-evolution software, we see that many early discoveries afford temporary advantages, but when something gets built on top of something else, because it's part of a larger configuration, it persists - even though it may be less optimal than other competitive discoveries.

I believe we're never going to get rid of the DOS file system, we're never going to get rid of the Notepad text editor, we're never going to get rid of the QWERTY keyboard, because there are systems built on top of them - just as the human eye has a blind spot.

At one point in human evolution there arose a bundle of nerves that twisted in the wrong direction and, even though it blocked a small bit of the visual field, nevertheless added some kind of advantage, and then layered systems built on top of that and locked the blind spot into being, and there's no way you can get rid of the blind spot now because it's essentially built in and supportive of other mechanisms.

We must look at the entire game to see what, among the possible, persists. Winners are not determined by the best technology or the best design; they are determined by a set of factors that says: whatever this thing is, it's part of a network, and that network supports its persistence. And as evolution proceeds, the suboptimal systems tend to stay in place. Just as vision has a blind spot, we're going to be stuck with technology upon which economic systems depend.

Having studied more economics than the usual computer scientist, and reflecting back on the question of how society is changed by software - which John and I have called a solvent, a social solvent - I'd note that our naive notion of free enterprise, our naive notion of how competition works in an economy, is that there are supposed to be checks and balances. Supposedly a durable-goods monopoly is impossible. To increase market share, the tractor monopoly makes better tractors, which sell more and last longer, until the used market in perfectly good tractors at half the price stops it. As something gets bigger, as a company's products become more widely used, instead of locking into monopoly there's supposed to be a negative limiting effect: more competition comes in, the monopoly is stuck on its fat margins and stumbles, while competition chases profit down to a normal profit.

The way I described this stumbling is "monopoly necrosis." You become so dependent on the sales and margins of a product in demand that you can't imagine selling another one, even though technology is driving prices down. There are some great examples of this. The IBM PCjr was built not to compete with the Selectric typewriter, by putting a rate limiter on the keyboard so a kid couldn't type more than two characters a second! This was really the end of IBM. It wasn't Microsoft; it was this necrosis of not exploiting new technology that might erode the profit margins on the old. Ten years ago you could see that the digital camera and the ink-jet printer were going to come together and let you print pictures for pennies apiece. But Polaroid was getting a dollar a sheet for silver-based instant film, and they couldn't see how to move their company in front of this wave that was coming. The storage companies, the big million-dollar terabyte disk companies, are going to run into the same sort of thing when a terabyte costs $5,000.

From the point of view of software, what I've noticed is that software companies don't seem to suffer monopoly necrosis as traditional durable-goods theory would predict. They seem to just get bigger and bigger, because while a telephone company gets locked into its telephones or its wires or its interfaces, and a tractor company ends up competing against its own excellent used tractors, a software company can buy back old licenses for more than they would sell for on the street, destroying the secondary market.

We see this in software, and it's because of the particular way that software has changed the equation of information property. The "upgrade" is the idea that you've bought a piece of software, and now there's a new release, and so - as a loyal customer - you should be able to trade the old one in and buy the new one. But since what you bought wasn't the software itself, only a permanent right to use the software, what you actually do when you upgrade is forfeit your permanent right and purchase it again. If you don't upgrade, your old software soon won't work, so that permanent right you thought you owned will be worthless.

It seems to me that what we're seeing in the software area - and this is the scary part for human society - is the beginning of a kind of dispossession. People talk about dispossession as coming only from piracy, as with Napster and Gnutella, where the rights of artists are being violated by people sharing their work. But there's another kind of dispossession, which is the inability to actually BUY a product. The idea is already here: you can't buy this piece of software; you can only license it on a day-by-day, month-by-month, year-by-year basis. As this idea spreads from software to music, films, and books, human civilization based on property fundamentally changes.

The idea we hear of the big Internet in the sky, with all the music we want to listen to, all the books and movies we want to read and watch on demand, all the software games and apps we want to use, sounds real nice - until you realize it isn't a public library, it is a private jukebox. You could download whatever you wanted over high-speed wireless 3G systems into portable playing devices and pay only $50 a month. But you can never own the e-book, you can never own the DivX movie, you can never own the ASP software.

By the way, all the bookstores and music stores have been shut down.

It turns out that property isn't about possession after all; it is about a relationship between an individual and their right to use something. Ownership is just "the right to use until you sell." Your right to a house, or your liquid wealth, is stored as bits in an institutional computer - whether that computer is at the bureau of deeds in your town, at the bank, or at the stock-transfer agent. And property only works when transfers are neither duplicative nor lossy.

There is a fundamental difference between protecting the encryption systems for real currency and securities, and protecting encryption systems for unlimited publishing. If the content industries prevail in gaining legal protection for renting infinite simultaneous copies, and we don't protect the notion of ownership - which includes the ability to loan, rent, and sell something when you're done with it - we will lose the ability to own things. Dispossession is a very real threat to civilization.

One of the other initiatives I've been working on is trying to get my university, at least, to see that software and books and patents are really varieties of the same thing, and that it should normalize its intellectual property policy. Most universities give professors back the copyright on their books and let them keep all the royalties, even on a $1,000,000 book. But if you write a piece of software, or you have a patent that earns $50,000, the university tries to claim the whole thing. Very few universities get lucky with a cancer drug or vitamin D, yet their IP policies drive innovation underground.

What I'm trying to do is separate academe from industry by giving academics back all their intellectual property, and accepting a tithe - say, 9 percent of the value of all IP created on campus, including books, software, and options in companies. I call it the "commonwealth of intellectual property," and through it a community of diverse scholars can share in some way in the success, drive, and luck of themselves and their colleagues. Most people are afraid of this, but I am certain it would lead to greater wealth and academic freedom for smaller universities like my own.

JORDAN POLLACK is a computer science and complex systems professor at Brandeis University. His laboratory's work on AI, Artificial Life, Neural Networks, Evolution, Dynamical Systems, Games, Robotics, Machine Learning, and Educational Technology has been reported on by the New York Times, Time, Science, NPR, Slashdot.org and many other media sources worldwide.


DAVID GELERNTER [200]

Questions about the evolution of software in the big picture are worth asking. But it's important that we don't lose sight of the fact that some of the key issues in software don't have anything to do with big strategic questions; they have to do with the fact that the software that's becoming ubiquitous, and that so many people rely on, is so crummy, and that for so many people software — and in fact the whole world of electronics — is a constant pain. The computers we're inflicting on people are more a cause of irritation, confusion, dissatisfaction, and angst than a positive benefit. One thing that's going to happen is clearly tactical: we're going to throw out the crummy, primitive software on which we rely and see a completely new generation of software very soon.

If you look at where we are in the evolution of the desktop computer today, the machine is about 20 to 25 years old. Relatively speaking, we're roughly where the airplane was in the late 1920s: a lot of work had been done, but we had yet to see the first even quasi-proto-modern airplane, which was the DC-3 of 1935. In the evolution of desktop computing we haven't even reached the DC-3 level. We're a tremendously self-conscious and self-aware society, and yet we have to keep in mind how much we haven't done, and how crummy and primitive much of what we've built is. For most people a new electronic gadget is a disaster: another incomprehensible user's manual or help system; things that break, don't work, and can never be figured out; features they don't need and don't understand. All of these are just tactical issues, but they are important to the quality of life of people who depend on computers, which increasingly is everybody.

When I look at where software is heading and what it's really doing — what's happening and what will happen with the emergence of a new generation of information-management systems, as we discard Windows and NT, these 1960s and 1970s systems on which we rely today — we'll see a transition similar to what happened during the 19th century, when people's sense of space suddenly changed. If you compare the world of 1800 to the world of 1900, people's sense of space in 1800 was tremendously limited, local, and restricted. If you look at a New England village of the time, you can see this dramatically: everything is on site — a small cluster of houses in which everything that needs to be done is done, fields beyond, and beyond the fields a forest.

People traveled to some extent, but not often; most people rarely traveled at all. The picture of space outside people's own local space was exceptionally fuzzy. Today, our picture of time is equally fuzzy: we have an idea of our local time, of what happened today and yesterday, of what's going to happen next week and what happened the last few weeks, but outside of this our view of time is as restricted and local as people's view of space was around 1800. If you look at what happened in the 19th century as transportation became available, cheap, and ubiquitous, all of a sudden people developed a sense of space beyond their own local spaces, and the world changed dramatically. It wasn't just that people got around more and the economy changed and wealth was created. There was a tremendous change in the intellectual status of life. People moved outside their intellectual burrows; religion collapsed; the character of the arts changed during the 19th century far more than it has during the 20th century or any other century, as people's lives became fundamentally less internal, less spiritual, because they had more to do. They had places to go; they had things to see. When we look at the collapse of religion in the 19th century, it had far less to do with science than with technology — the technology of transportation that changed people's view of space and put the world at people's beck and call, in a sense. In 1800 this country was deeply religious; by 1900 religion had already become a footnote. And art had fundamentally changed in character as well.

What's going to happen — what software will do over the next few years; this has already started to happen and will accelerate — is that our software will be time-based rather than space-based. We'll deal with streams of information rather than chaotic file systems based on a 1940s idea of desks and file cabinets. The transition to a software world where we have a stream with a past, present, and future is a transition to a world in which people have a much more acute sense of time outside their own local week or month — in which they have a clear idea of why February of 1997 was different from February of 1994, which most people today don't have a clear picture of.
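
A minimal sketch of the stream idea described here: one time-ordered sequence of documents, with past and future defined as positions relative to "now", and substreams as filtered views. This is an illustration of the concept only, not the design of any real system:

```python
# A minimal time-based stream: every document lives in one sequence
# ordered by timestamp; "past" and "future" are positions relative to
# now, and a substream is just a filtered view of the whole stream.
from bisect import insort

class Stream:
    def __init__(self):
        self._items = []                      # kept sorted by timestamp

    def add(self, timestamp, doc):
        insort(self._items, (timestamp, doc))

    def past(self, now):
        return [d for t, d in self._items if t <= now]

    def future(self, now):                    # e.g., scheduled reminders
        return [d for t, d in self._items if t > now]

    def substream(self, predicate):
        s = Stream()
        s._items = [(t, d) for t, d in self._items if predicate(d)]
        return s

s = Stream()
s.add(1994.2, "feb-1994 notes")
s.add(1997.2, "feb-1997 notes")
s.add(1999.0, "planned talk")
print(s.past(1998.0))        # ['feb-1994 notes', 'feb-1997 notes']
print(s.future(1998.0))      # ['planned talk']
```

The design choice worth noticing is that there are no folders at all: organization comes from time order plus on-the-fly filtering, which is exactly the contrast with the file-cabinet model drawn in the text.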

When we ask ourselves what the effect will be of time coming into focus the way space came into focus during the 19th century, we can count on the fact that the consequences will be big. It won't cause the kind of change in our spiritual life that space coming into focus did, because we've moved as far outside as we can get, pretty much. We won't see any further fundamental changes in our attitude toward art or religion — all that has happened already. But we're apt to see other incalculably large effects on the way we deal with the world and with each other, and looking back at this world today, it will look more or less the way 1800 did from the vantage point of 1900: not just a world with fewer gadgets, but a world with a fundamentally different relationship to space and time. From the small details of our crummy software to the biggest and most abstract issues of how we deal with the world at large, this is a big story.

"Streams" is a software project I've been obsessed with. In the early '90s it was clear to me that the operating system, the standard world in which I lived, was collapsing. For me and the academic community it was Unix, but it was the same in the world of Windows or the world of Mac or whatever world you were in. By the early '90s we'd been online solidly for at least a decade; I was a graduate student in the early '80s when the first desktop computers hit the stands. By the early '90s there was too much; it was breaking down: the flow of email, the number of files we had. We kept making more and they kept accumulating; we no longer threw them out every few years when we threw out the machine, and they just grew into a larger and larger assemblage.

In the early 90s we were seeing electronic images, electronic faxes and stuff like that. The Web hadn't hit yet but it was clear to some of us what was coming and we talked about it and we wrote about it. The Internet was already big in the early 90s, and it was clear that the software we had was no good. It was designed for a different age. Unix was built at Bell Labs in the 1970s for a radically different technology world where computing power was rare and expensive, memories were small, disks were small, bandwidth was expensive, email was non-existent, the net was an esoteric fringe phenomenon. And that was the software we were using to run our lives in 1991, 1992. It was clear it was no good, it was broken, and it was clear that things were not going to get any better in terms of managing our online lives. It seemed to us at that point that we needed to throw out this 60s and 70s stuff.

The Unix idea of a file system was copied faithfully from the 1941 Steelcase file cabinet, with its files and its folders; and the Xerox idea of a desktop, with its icons of wastepaper baskets and such, copied the offices we were supposed to be leaving behind us. All this stuff was copied faithfully from the pre-electronic age. It was a good way to get started, but it was no good anymore. We needed something that was designed for computers: forms and ways of doing business that were electronic and software-based, as opposed to being cribbed from what people knew how to do in 1944. They did well in 1944, but by 1991 it was no longer the way to operate in a software and electronic-based world.

It seemed to us that we wanted to arrange our stuff in time rather than in space. Instead of spreading it out on a virtual desktop in front of us we wanted all our information to accumulate in a kind of time line, or a diary or narrative with a past, present and future, or a stream, as we called the software. Every piece of information that came into my life, whether it was an email, or eventually a URL, or a fax or an image or a digital photo or a voice mail, or the 15th draft of a book chapter, all pieces of information would be plopped down at the end of a growing stream.

By looking at this stream I'd be looking at my entire information life. I would drop the absurd idea of giving files names; the whole idea of names and directories had rendered itself ridiculous, and a burden. If we dropped everything into the stream, and we provided powerful searching and indexing tools and powerful browsing tools, and we allowed time itself to guide us, we'd have a much better tool than trying to remember: am I looking for "letter to John number 15B," or am I looking for "new new new letter to John prime"? Instead I could say I'm looking for the letter to John I wrote last week, and go to last week and browse. It was clear that by keeping our stuff in a time line we could throw away the idea of names, we could throw away the idea of files and folders, we could throw away the desktop. Instead we'd have the stream, which was a virtual object that we could look at using any computer, and no longer have to worry whether I'd put the file at work or at home, on the laptop or the Palm Pilot. The stream was a virtual structure, and by looking at it, tuning it in, I tuned in my life, and I could tune it in from any computer. It had a future as well, so if I was going to do something next Friday, I'd drop it into the future, and next Friday would flow to the present, and the present would flow to the past.
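At bottom, the stream described here is a time-ordered, append-only sequence with search and time-range browsing in place of names and folders. A minimal sketch of the idea in Python (all names here are hypothetical illustrations, not the actual Lifestreams code):

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass(order=True)
class Item:
    """One piece of information: an email, a URL, a draft, a photo."""
    timestamp: datetime
    kind: str = field(compare=False)
    text: str = field(compare=False)

class Stream:
    """A nameless, time-ordered stream. Everything is appended in time
    order; searching and browsing replace file names and folders."""

    def __init__(self):
        self.items = []

    def drop(self, kind, text, when=None):
        # New information is simply plopped onto the stream; time is the
        # only organizing principle. Items dated in the future flow toward
        # the present as time passes.
        self.items.append(Item(when or datetime.now(), kind, text))
        self.items.sort()

    def browse(self, start, end):
        # "The letter to John I wrote last week": go to last week and look.
        return [i for i in self.items if start <= i.timestamp <= end]

    def search(self, word):
        # Content search instead of remembering "letter to John number 15B".
        return [i for i in self.items if word.lower() in i.text.lower()]
```

A user would then write, say, `stream.drop("email", "Letter to John")` and later `stream.search("John")` or browse last week's slice of the stream, without ever naming a file.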

To make a long story short, we built the software, and the software was the basis of a world view, an approach to software and a way of dealing with information. It was also a commercial proposition. That has intellectual content in a way, because so many of us have been challenged, asked whether the intellectual center of gravity in technology has not moved away from the university into the private sector. I thought it was a rotten idea; I resisted this heavily. I had a bet with my graduate students in the mid-'90s: I would try to fund this project through the usual government funding routes, and they would try to fund it through private investors, and whoever got the money first, that's the way we would go. I thought there was no contest; I had all sorts of Washington funding contacts, but they beat me hands down. While I was trying to wangle invitations to Washington to talk about this stuff, they would get private investors to hop on a plane and fly to New Haven to see it. The difference in energy level between the private sector and Washington was enormous. And bigots like myself, who didn't want to hear about private industry or private spending or the private sector, who believed in the university, as I still do, in principle, were confronted with the fact that there was a radically higher energy level among people who had made a billion dollars and wanted to make another billion than among people who had got tenure and were now bucking for what? A chair, or whatever.

The academic world was more restricted in terms of what it could offer greedy people, and greed drives the world, which is one of the things you confront as you get older. Reluctantly. So this story is a commercial story also, and it raises questions about the future of the university: where the smart people are, where the graduate students go, where the dollars are, where the energy is, where the activity is. The question hasn't been raised in quite the same way in some of the sciences as it has in technology, though it has certainly become a big issue in biology and in medicine. The university, forgetting about software, and forgetting about the future of the stream, fiddling while Rome burns, or whatever it does, thinks that it's going to come to grips with the world by putting course notes on the Web. But we're dealing with something much bigger and much deeper than that.

What Yale charges for an education, as you know, is simply incredible. What it delivers is not worth what it charges. It gets by today on its reputation, and in fact it can get good jobs for its graduates. However, we're resting on our laurels. All these are big changes. And big changes will happen in this nation's intellectual life when the university as we know it today collapses. The Yales and the Harvards will do okay, but when the 98 percent of the nation's universities that are not the Yales and Harvards and MITs collapse, intellectual life will be different, and that will be a big change, too. We're not thinking about this enough. And I know the universities are not.

DAVID GELERNTER is a professor of computer science at Yale and chief scientist at Mirror Worlds Technologies (New Haven). His research centers on information management, parallel programming, and artificial intelligence. The "tuple spaces" introduced in Nicholas Carriero and Gelernter's Linda system (1983) are the basis of many computer communication systems worldwide. Dr. Gelernter is the author of Mirror Worlds, The Muse in the Machine, 1939: The Lost World of the Fair, and Drawing Life: Surviving the Unabomber.


ALAN GUTH

Even though cosmology doesn't have that much to do with information, it certainly has a lot to do with revolution and phase transitions. In fact, it is connected to phase transitions in both the literal and the figurative sense of the phrase.

It's often said — and I believe this saying was started by the late David Schramm — that today we are in a golden age of cosmology. That's really true. Cosmology at this present time is undergoing a transition from being a bunch of speculations to being a genuine branch of hard science, where theories can be developed and tested against precise observations. One of the most interesting areas of this is the prediction of the fluctuations, the non-uniformities, in the cosmic background radiation, an area that I've been heavily involved in. We think of this radiation as being the afterglow of the heat of the Big Bang. One of the remarkable features of the radiation is that it's uniform in all directions, to an accuracy of about one part in a hundred thousand, after you subtract the term that's related to the motion of the earth through the background radiation.

I've been heavily involved in a theory called the inflationary universe, which seems to be our best explanation for this uniformity. The uniformity is hard to understand. You might think initially that maybe the uniformity could be explained by the same principles of physics that cause a hot slice of pizza to get cold when you take it out of the oven; things tend to come to a uniform temperature. But once the equations of cosmology were worked out, so that one could calculate how fast the universe was expanding at any given time, then physicists were able to calculate how much time there was for this uniformity to set in.

They found that, in order for the universe to have become uniform fast enough to account for the uniformity that we see in the cosmic background radiation, information would have to have been transferred at approximately a hundred times the speed of light. But according to all our theories of physics, nothing can travel faster than light, so there's no way that this could have happened. So the classical version of the Big Bang theory had to simply start out by assuming that the universe was homogeneous — completely uniform — from the very beginning.

The inflationary universe theory is an add-on to the standard Big Bang theory, and basically what it adds on is a description of what drove the universe into expansion in the first place. In the classic version of the Big Bang theory, that expansion was put in as part of the initial assumptions, so there's no explanation for it whatever. The classical Big Bang theory was never really a theory of a bang; it was really a theory about the aftermath of a bang. Inflation provides a possible answer to the question of what made the universe bang, and now it looks like it's almost certainly the right answer.

Inflationary theory takes advantage of results from modern particle physics, which predicts that at very high energies there should exist peculiar kinds of substances which actually turn gravity on its head and produce repulsive gravitational forces. The inflationary explanation is the idea that the early universe contains at least a patch of this peculiar substance. It turns out that all you need is a patch; it can actually be more than a billion times smaller than a proton. But once such a patch exists, its own gravitational repulsion causes it to grow, rapidly becoming large enough to encompass the entire observed universe.
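The growth of such a patch is exponential. As a standard textbook sketch of the mechanism (the specific symbols and numbers here are illustrative, not figures from the talk), during inflation the scale factor $a(t)$ of the patch grows as

```latex
a(t) \;=\; a(0)\, e^{Ht},
```

where $H$ is the expansion rate set by the energy density of the repulsive-gravity material. After $N$ "e-folds" the patch is $e^{N}$ times larger, which is why a region more than a billion times smaller than a proton can, in a tiny fraction of a second, grow large enough to encompass the entire observed universe.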

The inflationary theory gives a simple explanation for the uniformity of the observed universe, because in the inflationary model the universe starts out incredibly tiny. There was plenty of time for such a tiny region to reach a uniform temperature and uniform density, by the same mechanisms through which the air in a room reaches a uniform density throughout the room. And if you isolated a room and let it sit long enough, it would reach a uniform temperature as well. For the tiny universe with which the inflationary model begins, there is enough time in the early history of the universe for these mechanisms to work, causing the universe to become almost perfectly uniform. Then inflation takes over and magnifies this tiny region to become large enough to encompass the entire universe, maintaining this uniformity as the expansion takes place.

For a while, when the theory was first developed, we were very worried that we would get too much uniformity. One of the amazing features of the universe is how uniform it is, but it's still by no means completely uniform. We have galaxies, and stars and clusters and all kinds of complicated structure in the universe that needs to be explained. If the universe started out completely uniform, it would just remain completely uniform, as there would be nothing to cause matter to collect here or there or any particular place.

I believe Stephen Hawking was the first person to suggest what we now think is the answer to this riddle. He pointed out — although his first calculations were inaccurate — that quantum effects could come to our rescue. The real world is not described by classical physics, and even though this was very "high-brow" physics, we were in fact describing things completely classically, with deterministic equations. The real world, according to what we understand about physics, is described quantum-mechanically, which means, deep down, that everything has to be described in terms of probabilities.

The "classical" world that we perceive, in which every object has a definite position and moves in a deterministic way, is really just the average of the different possibilities that the full quantum theory would predict. If you apply that notion here, it is at least qualitatively clear from the beginning that it gets us in the direction that we want to go. It means that the uniform density, which our classical equations were predicting, would really be just the average of the quantum mechanical densities, which would have a range of values which could differ from one place to another. The quantum mechanical uncertainty would make the density of the early universe a little bit higher in some places, and in other places it would be a little bit lower.

So, at the end of inflation, we expect to have ripples on top of an almost uniform density of matter. It's possible to actually calculate these ripples. I should confess that we don't yet know enough about the particle physics to actually predict the amplitude of these ripples, the intensity of the ripples, but what we can calculate is the way in which the intensity depends on the wavelength of the ripples. That is, there are ripples of all sizes, and you can measure the intensity of ripples of different sizes. And you can discuss what we call the spectrum — we use that word exactly the way it's used to describe sound waves. When we talk about the spectrum of a sound wave, we're talking about how the intensity varies with the different wavelengths that make up that sound wave.

We do exactly the same thing in the early universe, and talk about how the intensity of these ripples in the mass density of the early universe varied with the wavelengths of the different ripples that we're looking at. Today we can see those ripples in the cosmic background radiation. The fact that we can see them at all is an absolutely fantastic success of modern technology. When we were first making these predictions back in 1982, at that time astronomers had just barely been able to see the effect of the earth's motion through the cosmic background radiation, which is an effect of about one part in a thousand. The ripples that I'm talking about are only one part in a hundred thousand — just one percent of the intensity of the most subtle effect that it had been possible to observe at the time we were first doing these calculations.
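"Spectrum" here means exactly what it means for sound: decompose the ripple pattern into its component wavelengths and measure the intensity at each. A toy illustration in pure Python, using a made-up one-dimensional density field rather than real cosmic background data:

```python
import cmath
import math

def power_spectrum(density):
    """Return the intensity (|amplitude|^2) of each Fourier mode of a
    sampled 1-D density field: how strong the ripples of each
    wavelength are."""
    n = len(density)
    spectrum = []
    for k in range(n // 2 + 1):  # mode k has wavelength n/k samples
        amp = sum(d * cmath.exp(-2j * math.pi * k * i / n)
                  for i, d in enumerate(density)) / n
        spectrum.append(abs(amp) ** 2)
    return spectrum

# A field that is uniform except for one faint ripple (one part in
# a hundred thousand) of wavelength n/3:
n = 64
field = [1.0 + 1e-5 * math.cos(2 * math.pi * 3 * i / n) for i in range(n)]
spec = power_spectrum(field)
# The spectrum peaks at mode k = 3, recovering the ripple we put in.
```

Real analyses decompose the sky into spherical harmonics rather than a 1-D Fourier series, but the idea of plotting intensity against wavelength is the same.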

I never believed that we would ever actually see these ripples. It just seemed too far fetched that astronomers would get to be a hundred times better at measuring these things than they were at the time. But, to my astonishment and delight, in 1992 these ripples were first detected by a satellite called COBE, the Cosmic Background Explorer, and now we have far better measurements than COBE, which had an angular resolution of about 7 degrees. This meant that you could only see the longest wavelength ripples. Now we have measurements that go down to a fraction of a degree, and we're getting very precise measurements now of how the intensity varies with wavelength, with marvelous success.

About a year and a half ago, there was a spectacular set of announcements from experiments called BOOMERANG and MAXIMA, both balloon-based experiments, which gave very strong evidence that the universe is geometrically flat, which is just what inflation predicts. (By flat I don't mean two-dimensional; I just mean that the three-dimensional space of the universe is not curved, as it could have been, according to general relativity.) You can actually see the curvature of space in the way that the pattern of ripples has been affected by the evolution of the universe. A year and a half ago, however, there was an important discrepancy that people worried about; and no one was sure how big a deal to make out of it. The spectrum they were measuring was a graph that had, in principle, several peaks. These peaks had to do with successive oscillations of the density waves in the early universe, and a phenomenon called resonance that makes some wavelengths more intense than others. The measurements showed the first peak beautifully, exactly where we expected it to be, with just the shape that was expected. But we couldn't actually see the second peak.

In order to fit the data with the theories, people had to assume that there were about ten times as many protons in the universe as we actually thought, because the extra protons would lead to a friction effect that could make the second peak disappear. Of course every experiment has some uncertainty — if an experiment is performed many times, the results will not be exactly the same each time. So we could imagine that the second peak was not seen purely because of bad luck. However, the probability that the peak could be so invisible, if the universe contained the density of protons that is indicated by other measurements, was down to about the one percent level. So, it was a very serious-looking discrepancy between what was observed and what was expected. All this changed dramatically for the better about 3 or 4 months ago, with the next set of announcements with more precise measurements. Now the second peak is not only visible, but it has exactly the height that was expected, and everything about the data now fits beautifully with the theoretical predictions. Too good, really. I'm sure it will get worse before it continues to get better, given the difficulties in making these kinds of measurements. But we have a beautiful picture now which seems to be confirming the inflationary theory of the early universe.

Our current picture of the universe has a new twist, however, which was discovered two or three years ago. To make things fit, to match the observations, which are now getting very clear, we have to assume that there's a new component of energy in the universe that we didn't know existed before. This new component is usually referred to as "dark energy." As the name clearly suggests, we still don't know exactly what this new component is. It's a component of energy which in fact is very much like the repulsive gravity matter I talked about earlier — the material that drives the inflation in the early universe. It appears that, in fact, today the universe is filled with a similar kind of matter. The antigravity effect is much weaker than the effect that I was talking about in the early universe, but the universe today appears very definitely to be starting to accelerate again under the influence of this so-called dark energy.

Although I'm trying to advertise that we've understood a lot, and we have, there are still many uncertainties. In particular, we still don't know what most of the universe is made out of. There's the dark energy, which seems to comprise in fact about 60% of the total mass/energy of the universe. We don't know what it is. It could in fact be the energy of the vacuum itself, but we don't know that for a fact. In addition, there's what we call dark matter, which is another 30%, or maybe almost 40%, of the total matter in the universe; we don't know what that is, either. The difference between the two is that the dark energy causes repulsive gravity and is smoothly distributed; the dark matter behaves like ordinary matter in terms of its gravitational properties — it's attractive and it clusters; but we don't know what it's made of. The stuff we do know about — protons, neutrons, ordinary atoms and molecules — appears to comprise only about 5% of the mass of the universe.

The moral of the story is we have a great deal to learn. At the same time, the theories that we have developed so far seem to be working almost shockingly well.

ALAN GUTH, father of the inflationary theory of the universe, is Victor F. Weisskopf Professor of Physics at MIT and author of The Inflationary Universe: The Quest for a New Theory of Cosmic Origins.