Edge Video Library
Collaboration and the Evolution of Disciplines
Cooperation achieves its beneficial effects by improving communication, promoting gains from specialization, enhancing organizational effectiveness, and reducing the risks of harmful conflict. Members of an institutionalized academic discipline jointly benefit in all these ways. Unfortunately, members of different disciplines typically do not. The boundaries of most disciplines were largely set 100 (plus or minus 50) years ago, and efforts to redraw the boundaries (e.g., at Irvine and Carnegie Mellon) have not met with much success. I would like us to consider how the more or less fragmented research community can best respond to new opportunities (AI), new problems (climate change), new modes of education and governance, and new understandings of human behavior and values.
ROBERT AXELROD, Walgreen Professor for the Study of Human Understanding at the University of Michigan, is best known for his interdisciplinary work on the evolution of cooperation. He is author of The Complexity of Cooperation and The Evolution of Cooperation. Robert Axelrod's Edge Bio Page
Questioning the Cranial Paradigm
Part of the definition of intelligence is always this representation model. . . . I’m pushing this idea of distribution—homeostatic surfing on worldly engagements that the body is always not only a part of but enabled by and symbiotic on. Also, the idea of adaptation as not necessarily defined by the consciousness that we like to fetishize. Are there other forms of consciousness? Here’s where the gut-brain axis comes in. Are there forms that we describe as visceral gut feelings that are a form of human consciousness that we’re getting through this immune brain?
CAROLINE A. JONES is a professor of art history in the Department of Architecture at MIT and author, most recently, of The Global Work of Art. Caroline Jones's Edge Bio Page
The Brain Is Full of Maps
I was talking about maps and feelings, and whether the brain is analog or digital. I’ll give you a little bit of what I wrote:
Brains use maps to process information. Information from the retina goes to several areas of the brain where the picture seen by the eye is converted into maps of various kinds. Information from sensory nerves in the skin goes to areas where the information is converted into maps of the body. The brain is full of maps. And a big part of the activity is transferring information from one map to another.
As we know from our own use of maps, mapping from one picture to another can be done either by digital or by analog processing. Because digital cameras are now cheap and film cameras are old fashioned and rapidly becoming obsolete, many people assume that the process of mapping in the brain must be digital. But the brain has been evolving over millions of years and does not follow our ephemeral fashions. A map is in its essence an analog device, using a picture to represent another picture. The imaging in the brain must be done by direct comparison of pictures rather than by translations of pictures into digital form.
FREEMAN DYSON, emeritus professor of physics at the Institute for Advanced Study in Princeton, has worked on nuclear reactors, solid-state physics, ferromagnetism, astrophysics, and biology, looking for problems where elegant mathematics could be usefully applied. His books include Disturbing the Universe, Weapons and Hope, Infinite in All Directions, and Maker of Patterns. Freeman Dyson's Edge Bio Page
Perception As Controlled Hallucination
Perception itself is a kind of controlled hallucination. . . . [T]he sensory information here acts as feedback on your expectations. It allows you to often correct them and to refine them. But the heavy lifting seems to be being done by the expectations. Does that mean that perception is a controlled hallucination? I sometimes think it would be good to flip that and just think that hallucination is a kind of uncontrolled perception.
ANDY CLARK is professor of Cognitive Philosophy at the University of Sussex and author of Surfing Uncertainty: Prediction, Action, and the Embodied Mind. Andy Clark's Edge Bio Page
Mining the Computational Universe
I've spent several decades creating a computational language that aims to give a precise symbolic representation for computational thinking, suitable for use by both humans and machines. I'm interested in figuring out what can happen when a substantial fraction of humans can communicate in computational language as well as human language. It's clear that the introduction of both human spoken language and human written language had important effects on the development of civilization. What will now happen (for both humans and AI) when computational language spreads?
STEPHEN WOLFRAM is a scientist, inventor, and the founder and CEO of Wolfram Research. He is the creator of the symbolic computation program Mathematica and its programming language, Wolfram Language, as well as the knowledge engine Wolfram|Alpha. He is also the author of A New Kind of Science. Stephen Wolfram's Edge Bio Page
The Cul-de-Sac of the Computational Metaphor
Have we gotten into a cul-de-sac in trying to understand animals as machines from the combination of digital thinking and the crack cocaine of computation über alles that Moore's law has provided us? What revised models of brains might we be looking at to provide new ways of thinking and studying the brain and human behavior? Did the Macy Conferences get it right? Is it time for a reboot?
RODNEY BROOKS is Panasonic Professor of Robotics, emeritus, MIT; former director of the MIT Artificial Intelligence Laboratory and the MIT Computer Science & Artificial Intelligence Laboratory (CSAIL); founder, chairman, and CTO of Rethink Robotics; and author of Flesh and Machines. Rodney Brooks's Edge Bio Page
How to Create an Institution That Lasts 10,000 Years
We’re also looking at the oldest living companies in the world, most of which are service-based. There are some family-run hotels and things like that, but also a huge amount in the food and beverage industry. Probably a third of the organizations or the companies over 500 or 1,000 years old are all in some way in wine, beer, or sake production. I was intrigued by that crossover.
What’s interesting is that humanity figured out how to ferment things about 10,000 years ago, which is exactly the time frame where people started creating cities and agriculture. It’s unclear if civilization started because we could ferment things, or we started fermenting things and therefore civilization started, but there’s clearly this intertwined link with fermenting beer, wine, and then much later spirits, and how that fits in with hospitality and places that people gather.
All of these things are right now just nascent bits and pieces of trying to figure out some of the ways in which organizations live for a very long time. While some of them, like being a family-run hotel, may not be very portable as an idea, for others, like some of the natural strategies, we're just starting to understand how they can be of service to humanity. If we broaden the idea of the service industry to our customer, civilization, how can you make an institution whose customer is civilization and can last for a very long time?
ALEXANDER ROSE is the executive director of The Long Now Foundation, manager of the 10,000 Year Clock Project, and curator of the speaking series at The Interval and The Battery SF. Alexander Rose's Edge Bio Page
Machines Like Me
I would like to set aside the technological constraints in order to imagine how an embodied artificial consciousness might negotiate the open system of human ethics—not how people think they should behave, but how they do behave. For example, we may think the rule of law is preferable to revenge, but matters get blurred when the cause is just and we love the one who exacts the revenge. A machine incorporating the best angel of our nature might think otherwise. The ancient dream of a plausible artificial human might be scientifically useless but culturally irresistible. At the very least, the quest so far has taught us just how complex we (and all creatures) are in our simplest actions and modes of being. There’s a semi-religious quality to the hope of creating a being less cognitively flawed than we are.
IAN MCEWAN is a novelist whose works have earned him worldwide critical acclaim. He is the recipient of the Man Booker Prize for Amsterdam (1998), the National Book Critics' Circle Fiction Award, and the Los Angeles Times Prize for Fiction for Atonement (2003). His most recent novel is Machines Like Me. Ian McEwan's Edge Bio Page
Is Superintelligence Impossible?
[ED. NOTE: On Saturday, March 9th, more than 1200 people jammed into Pioneer Works in Red Hook, Brooklyn, for a conversation between two of our greatest philosophers, David Chalmers and Daniel C. Dennett, who ask each other, "Is Superintelligence Impossible?" As part of the ongoing Edge "Possible Minds Project," we are pleased to present the video, audio, and transcript of the event, which was orchestrated by the noted physicist, artist, author (and fellow Edgie), and Director of Sciences at Pioneer Works, Janna Levin, with the support of Science Sandbox, a Simons Foundation initiative dedicated to engaging everyone with the process of science. —JB]
Somebody said that the philosopher is the one who says, "We know it’s possible in practice, we’re trying to figure out if it’s possible in principle." Unfortunately, philosophers sometimes spend too much time worrying about logical possibilities that are importantly negligible in every other regard. So, let me go on the record as saying, yes, I think that conscious AI is possible because, after all, what are we? We’re conscious. We’re robots made of robots made of robots. We’re actual. In principle, you could make us out of other materials. Some of your best friends in the future could be robots. Possible in principle, absolutely no secret ingredients, but we’re not going to see it. We’re not going to see it for various reasons. One is, if you want a conscious agent, we’ve got plenty of them around and they’re quite wonderful, whereas the ones that we would make would be not so wonderful. —Daniel C. Dennett
One of our questions here is, is superintelligence possible or impossible? I’m on the side of possible. I like the possible, which is one reason I like John’s theme, "Possible Minds." That’s a wonderful theme for thinking about intelligence, both natural and artificial, and consciousness, both natural and artificial. … The space of possible minds is absolutely vast—all the minds there ever have been, will be, or could be, starting with the actual minds. There are a lot of actual minds. I guess there have been a hundred billion or so humans with minds of their own. Some pretty amazing minds have been Confucius, Isaac Newton, Jane Austen, Pablo Picasso, Martin Luther King, on it goes, a lot of amazing minds. But still, those hundred billion minds put together are just the tiniest corner of this space of possible minds. —David Chalmers
DAVID CHALMERS is University Professor of Philosophy and Neural Science and co-director of the Center for Mind, Brain, and Consciousness at New York University. He is best known for his work on consciousness, including his formulation of the “hard problem” of consciousness.
DANIEL C. DENNETT is University Professor and Austin B. Fletcher Professor of Philosophy and director of the Center for Cognitive Studies at Tufts University. He is the author of a dozen books, including Consciousness Explained and, most recently, From Bacteria to Bach and Back: The Evolution of Minds.
JOHN BROCKMAN, moderator, is a cultural impresario whose career has encompassed the avant-garde art world, science, books, software, and the Internet. He is the author of By The Late John Brockman and The Third Culture, and the editor of the Edge Annual Question book series and of Possible Minds: 25 Ways of Looking at AI.
Cultural Intelligence
Getting back to culture being invisible and omnipresent, we think about intelligence or emotional intelligence, but we rarely think about cultivating cultural intelligence. In this increasingly global world, we need to understand culture. All of this research has been trying to elucidate not just how we understand other people who are different from us, but how we understand ourselves.
MICHELE GELFAND is a Distinguished University Professor at the University of Maryland, College Park. She is the author of Rule Makers, Rule Breakers: How Tight and Loose Cultures Wire the World. Michele Gelfand's Edge Bio Page