The Possible Minds Conference


I am puzzled by the number of references to what AI “is” and what it “cannot do” when in fact the new AI is less than ten years old and is moving so fast that references to it in the present tense are dated almost before they are uttered. The statements that AI doesn’t know what it’s talking about or is not enjoying itself are trivial if they refer to the present and undefended if they refer to the medium-range future—say 30 years.  —Daniel Kahneman

From left: W. Daniel Hillis, Neil Gershenfeld, Frank Wilczek, David Chalmers, Robert Axelrod, Tom Griffiths, Caroline Jones, Peter Galison, Alison Gopnik, John Brockman, George Dyson, Freeman Dyson, Seth Lloyd, Rod Brooks, Stephen Wolfram, Ian McEwan. In absentia: Andy Clark, George M. Church, Daniel Kahneman, Alex "Sandy" Pentland, Venki Ramakrishnan


INTRODUCTION
by Venki Ramakrishnan

The field of machine learning and AI is changing at such a rapid pace that we cannot foresee what new technical breakthroughs lie ahead, where the technology will lead us, or the ways in which it will completely transform society. So it is appropriate to take a regular look at the landscape to see where we are, what lies ahead, where we should be going, and, just as importantly, what we should be avoiding as a society. We want to bring together a mix of people with deep expertise in the technology as well as broad thinkers from a variety of disciplines to make regular critical assessments of the state and future of AI.

Venki Ramakrishnan, President of the Royal Society and recipient of the 2009 Nobel Prize in Chemistry, is Group Leader and Former Deputy Director of the MRC Laboratory of Molecular Biology and author of Gene Machine: The Race to Decipher the Secrets of the Ribosome.


[ED. NOTE: In recent months, Edge has published the fifteen individual talks and discussions from its two-and-a-half-day Possible Minds Conference held in Morris, CT, an update from the field following the publication of the group-authored book Possible Minds: Twenty-Five Ways of Looking at AI. As a special event for the long Thanksgiving weekend, we are pleased to publish the complete conference—more than ten hours of audio and video, as well as a downloadable PDF of the 77,500-word manuscript. Enjoy.]
 
Editor, Edge

IAN MCEWAN 
Machines Like Me

I would like to set aside the technological constraints in order to imagine how an embodied artificial consciousness might negotiate the open system of human ethics—not how people think they should behave, but how they do behave. For example, we may think the rule of law is preferable to revenge, but matters get blurred when the cause is just and we love the one who exacts the revenge.

IAN MCEWAN is a novelist whose works have earned him worldwide critical acclaim. He is the recipient of the Man Booker Prize for Amsterdam (1998), the National Book Critics Circle Fiction Award, and the Los Angeles Times Prize for Fiction for Atonement (2003). His most recent novel is Machines Like Me. Ian McEwan's Edge Bio Page


RODNEY BROOKS
The Cul-de-Sac of the Computational Metaphor 

Have we gotten into a cul-de-sac in trying to understand animals as machines from the combination of digital thinking and the crack cocaine of computation uber alles that Moore's law has provided us? What revised models of brains might we be looking at to provide new ways of thinking and studying the brain and human behavior? Did the Macy Conferences get it right? Is it time for a reboot?

RODNEY BROOKS is Panasonic Professor of Robotics, emeritus, MIT; former director of the MIT Artificial Intelligence Laboratory and the MIT Computer Science & Artificial Intelligence Laboratory (CSAIL); founder, chairman, and CTO of Rethink Robotics; and author of Flesh and Machines. Rodney Brooks's Edge Bio Page


STEPHEN WOLFRAM
Mining the Computational Universe

I've spent several decades creating a computational language that aims to give a precise symbolic representation for computational thinking, suitable for use by both humans and machines. I'm interested in figuring out what can happen when a substantial fraction of humans can communicate in computational language as well as human language. It's clear that the introduction of both human spoken language and human written language had important effects on the development of civilization. What will now happen (for both humans and AI) when computational language spreads?

STEPHEN WOLFRAM is a scientist, inventor, and the founder and CEO of Wolfram Research. He is the creator of the symbolic computation program Mathematica and its programming language, Wolfram Language, as well as the knowledge engine Wolfram|Alpha. He is also the author of A New Kind of Science. Stephen Wolfram's Edge Bio Page


FREEMAN DYSON
The Brain Is Full of Maps

Brains use maps to process information. Information from the retina goes to several areas of the brain where the picture seen by the eye is converted into maps of various kinds. Information from sensory nerves in the skin goes to areas where the information is converted into maps of the body. The brain is full of maps. And a big part of the activity is transferring information from one map to another.

FREEMAN DYSON, emeritus professor of physics at the Institute for Advanced Study in Princeton, has worked on nuclear reactors, solid-state physics, ferromagnetism, astrophysics, and biology, looking for problems where elegant mathematics could be usefully applied. His books include Disturbing the Universe, Weapons and Hope, Infinite in All Directions, and Maker of Patterns. Freeman Dyson's Edge Bio Page


CAROLINE A. JONES
Questioning the Cranial Paradigm

Part of the definition of intelligence is always this representation model. . . . I’m pushing this idea of distribution—homeostatic surfing on worldly engagements that the body is always not only a part of but enabled by and symbiotic on. Also, the idea of adaptation as not necessarily defined by the consciousness that we like to fetishize. Are there other forms of consciousness? Here’s where the gut-brain axis comes in. Are there forms that we describe as visceral gut feelings that are a form of human consciousness that we’re getting through this immune brain?

CAROLINE A. JONES is a professor of art history in the Department of Architecture at MIT and author, most recently, of The Global Work of Art. Caroline A. Jones's Edge Bio Page


ROBERT AXELROD
Collaboration and the Evolution of Disciplines

The questions that I’ve been interested in more recently are about collaboration and what can make it succeed, also about the evolution of disciplines themselves. The part of collaboration that is well understood is that if a team has a diversity of tools and backgrounds available to them—they come from different cultures, they come from different knowledge sets—then that allows them to search a space and come up with solutions more effectively. Diversity is very good for teamwork, but the problem is that there are clearly barriers to people from diverse backgrounds working together. That part of it is not well understood. The way people usually talk about it is that they have to learn each other’s language and each other’s terminology. So, if you talk to somebody from a different field, they’re likely to use a different word for the same concept.

ROBERT AXELROD, Walgreen Professor for the Study of Human Understanding at the University of Michigan, is best known for his interdisciplinary work on the evolution of cooperation. He is author of The Evolution of Cooperation. Robert Axelrod's Edge Bio Page


ALISON GOPNIK
A Separate Kind of Intelligence

It looks as if there’s a general relationship between the very fact of childhood and the fact of intelligence. That might be informative if one of the things that we’re trying to do is create artificial intelligences or understand artificial intelligences. In neuroscience, you see this pattern of development where you start out with this very plastic system with lots of local connection, and then you have a tipping point where that turns into a system that has fewer connections but much stronger, more long-distance connections. It isn’t just a continuous process of development. So, you start out with a system that’s very plastic but not very efficient, and that turns into a system that’s very efficient and not very plastic and flexible.

ALISON GOPNIK is a developmental psychologist at UC Berkeley. Her books include The Philosophical Baby and, most recently, The Gardener and the Carpenter: What the New Science of Child Development Tells Us About the Relationship Between Parents and Children. Alison Gopnik's Edge Bio Page


TOM GRIFFITHS
Doing More with Less

Imagine a superintelligent system with far more computational resources than us mere humans that’s trying to make inferences about what the humans who are surrounding it—which it thinks of as cute little pets—are trying to achieve so that it is then able to act in a way that is consistent with what those human beings might want. That system needs to be able to simulate what an agent with greater constraints on its cognitive resources should be doing, and it should be able to make inferences, like the fact that we’re not able to calculate the zeros of the Riemann zeta function or discover a cure for cancer. It doesn’t mean we’re not interested in those things; it’s just a consequence of the cognitive limitations that we have.
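
[ED. NOTE: For readers who want to see the shape of this idea in code, below is a minimal sketch of inferring a bounded agent's goal from its noisy choices, using a softmax ("resource-limited") model of the agent rather than a perfectly rational one. The candidate goals, utilities, and rationality parameter are illustrative assumptions, not anything presented at the conference.]

    import math

    # A toy sketch of inverse planning with a bounded agent model.
    # Candidate goals, utilities, and the softmax "rationality"
    # parameter beta are all illustrative assumptions.

    GOALS = ["coffee", "tea"]
    ACTIONS = ["go_left", "go_right"]

    # Utility the hypothetical human gets from each action under each goal.
    UTILITY = {
        "coffee": {"go_left": 1.0, "go_right": 0.0},
        "tea":    {"go_left": 0.0, "go_right": 1.0},
    }

    def action_probs(goal, beta=1.5):
        """Softmax (Boltzmann) choice model: a resource-limited agent
        picks higher-utility actions more often, but not always."""
        weights = {a: math.exp(beta * UTILITY[goal][a]) for a in ACTIONS}
        total = sum(weights.values())
        return {a: w / total for a, w in weights.items()}

    def infer_goal(observed_actions):
        """Bayesian update over goals given observed (noisy) actions."""
        posterior = {g: 1.0 / len(GOALS) for g in GOALS}  # uniform prior
        for act in observed_actions:
            posterior = {g: posterior[g] * action_probs(g)[act] for g in GOALS}
            norm = sum(posterior.values())
            posterior = {g: p / norm for g, p in posterior.items()}
        return posterior

    if __name__ == "__main__":
        # Two moves toward coffee and one slip: "coffee" should dominate.
        print(infer_goal(["go_left", "go_left", "go_right"]))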

TOM GRIFFITHS is the Henry R. Luce Professor of Information, Technology, Consciousness, and Culture at Princeton University. He is co-author (with Brian Christian) of Algorithms to Live By. Tom Griffiths's Edge Bio Page


FRANK WILCZEK
Ecology of Intelligence

There’s this tremendous drive for intelligence, but there will be a long period of coexistence in which there will be an ecology of intelligence. Humans will become enhanced in different ways and relatively trivial ways with smartphones and access to the Internet, but also the integration will become more intimate as time goes on. Younger people who interact with these devices from childhood will be cyborgs from the very beginning. They will think in different ways than current adults do.

FRANK WILCZEK is the Herman Feshbach Professor of Physics at MIT, recipient of the 2004 Nobel Prize in physics, and author of A Beautiful Question: Finding Nature’s Deep Design. Frank Wilczek's Edge Bio Page


NEIL GERSHENFELD
Morphogenesis for the Design of Design

As we work on the self-reproducing assembler and on writing software that looks like hardware that respects geometry, the two meet in morphogenesis. This is the thing I’m most excited about right now: the design of design. Your genome doesn’t store anywhere that you have five fingers. It stores a developmental program, and when you run it, you get five fingers. It’s one of the oldest parts of the genome. Hox genes are an example. It’s essentially the only part of the genome where the spatial order matters. It gets read off as a program, and the program never represents the physical thing it’s constructing. The morphogenes are a program that specifies morphogens that do things like climb gradients and symmetry break; it never represents the thing it’s constructing, but the morphogens then following the morphogenes give rise to you.
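
[ED. NOTE: A minimal sketch, under invented constants, of what a program that never represents the thing it is constructing can look like: a toy one-dimensional morphogen gradient in the spirit of the French-flag model. The code stores only a source, diffusion, decay, and threshold rules; the banded pattern emerges when it runs. Grid size, rates, and thresholds are illustrative assumptions.]

    # A toy "developmental program": no final pattern is stored anywhere,
    # only a morphogen source, diffusion, decay, and threshold rules.
    # Grid size, rates, and thresholds are illustrative assumptions.

    N_CELLS = 30
    DIFFUSION = 0.2
    DECAY = 0.01
    STEPS = 2000

    def run_gradient():
        """Diffuse a morphogen from a fixed source at the left end."""
        m = [0.0] * N_CELLS
        for _ in range(STEPS):
            m[0] = 1.0  # source cell held at full concentration
            new = m[:]
            for i in range(1, N_CELLS - 1):
                flux = DIFFUSION * (m[i - 1] + m[i + 1] - 2 * m[i])
                new[i] = m[i] + flux - DECAY * m[i]
            new[-1] = new[-2]  # no-flux boundary on the right
            m = new
        return m

    def differentiate(m):
        """Each cell reads its local concentration and picks a fate."""
        fates = []
        for c in m:
            if c > 0.6:
                fates.append("A")
            elif c > 0.2:
                fates.append("B")
            else:
                fates.append("C")
        return "".join(fates)

    if __name__ == "__main__":
        # Prints bands such as AAA...BBB...CCC, a pattern the code
        # never wrote down explicitly.
        print(differentiate(run_gradient()))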

NEIL GERSHENFELD is the director of MIT’s Center for Bits and Atoms; founder of the global fab lab network; the author of FAB; and co-author (with Alan Gershenfeld & Joel Cutcher-Gershenfeld) of Designing Reality. Neil Gershenfeld's Edge Bio Page


DAVID CHALMERS
The Language of Mind

Will every possible intelligent system somehow experience itself or model itself as having a mind? Is the language of mind going to be inevitable in an AI system that has some kind of model of itself? If you’ve just got an AI system that's modeling the world and not bringing itself into the equation, then it may need the language of mind to talk about other people if it wants to model them and model itself from the third-person perspective. If we’re working towards artificial general intelligence, it's natural to have AIs with models of themselves, particularly with introspective self-models, where they can know what’s going on in some sense from the first-person perspective.

DAVID CHALMERS is University Professor of Philosophy and Neural Science and co-director of the Center for Mind, Brain, and Consciousness at New York University. He is best known for his work on consciousness, including his formulation of the "hard problem" of consciousness. David Chalmers's Edge Bio Page


GEORGE DYSON 
AI That Evolves in the Wild

I’m interested not in domesticated AI—the stuff that people are trying to sell. I'm interested in wild AI—AI that evolves in the wild. I’m a naturalist, so that’s the interesting thing to me. Thirty-four years ago there was a meeting just like this in which Stanislaw Ulam said to everybody in the room—they’re all mathematicians—"What makes you so sure that mathematical logic corresponds to the way we think?" It’s a higher-level symptom. It’s not how the brain works. All those guys knew fully well that the brain was not fundamentally logical.

GEORGE DYSON is a historian of science and technology and author of Darwin Among the Machines and Turing’s Cathedral. George Dyson's Edge Bio Page


PETER GALISON
Epistemic Virtues 

I’m interested in the question of epistemic virtues, their diversity, and the epistemic fears that they’re designed to address. By epistemic I mean how we gain and secure knowledge. What I’d like to do here is talk about what we might be afraid of, where our knowledge might go astray, and what aspects of our fears about what might misfire can be addressed by particular strategies, and then to see how that’s changed quite radically over time.

PETER GALISON is a science historian; Joseph Pellegrino University Professor and co-founder of the Black Hole Initiative at Harvard University; and author of Einstein's Clocks and Poincaré’s Maps: Empires of Time. Peter Galison's Edge Bio Page


SETH LLOYD
Communal Intelligence

We haven't talked about the socialization of intelligence very much. We talked a lot about intelligence as being individual human things, yet the thing that distinguishes humans from other animals is our possession of human language, which allows us both to think and communicate in ways that other animals don’t appear to be able to. This gives us a cooperative power as a global organism, which is causing lots of trouble. If I were another species, I’d be pretty damn pissed off right now. What makes human beings effective is not their individual intelligences, though there are many very intelligent people in this room, but their communal intelligence.

SETH LLOYD is a theoretical physicist at MIT; Nam P. Suh Professor in the Department of Mechanical Engineering; external professor at the Santa Fe Institute; and author of Programming the Universe: A Quantum Computer Scientist Takes on the Cosmos. Seth Lloyd's Edge Bio Page


W. DANIEL HILLIS 
Emergences

My interest in AI comes from a broader interest in a much more interesting question to which I have no answers (and can barely articulate the question): How do lots of simple things interacting emerge into something more complicated? Then how does that create the next system out of which that happens, and so on?

W. DANIEL HILLIS is an inventor, entrepreneur, and computer scientist, Judge Widney Professor of Engineering and Medicine at USC, and author of The Pattern on the Stone: The Simple Ideas That Make Computers Work. W. Daniel Hillis's Edge Bio Page


IN ABSENTIA…

Andy Clark

Perception itself is a kind of controlled hallucination. . . . [T]he sensory information here acts as feedback on your expectations. It allows you to often correct them and to refine them. But the heavy lifting seems to be being done by the expectations. Does that mean that perception is a controlled hallucination? I sometimes think it would be good to flip that and just think that hallucination is a kind of uncontrolled perception. 

ANDY CLARK is professor of Cognitive Philosophy at the University of Sussex and author of Surfing Uncertainty: Prediction, Action, and the Embodied Mind. Andy Clark’s Edge Bio Page


George M. Church

The opportunity is not mere AI-ML, but various hybrids of inorganic-digital-von Neumann architectures with “Natural computing” (CAD of natural phenomena and/or employing semi-natural materials to compute), which benefits from our exponentially growing synthetic capabilities. Also, both of these are being hybridized with highly evolved human brains (including long-tail exceptional individuals) and our new tools for building more and more complex synthetic human brain components using genetics and in-vitro developmental biology.

GEORGE M. CHURCH is Robert Winthrop Professor of Genetics at Harvard Medical School, Professor of Health Sciences and Technology at Harvard-MIT, and co-author (with Ed Regis) of Regenesis: How Synthetic Biology Will Reinvent Nature and Ourselves. George Church’s Edge Bio Page


Daniel Kahneman

My late teacher Yehoshua Bar-Hillel was once asked, in the 1950s, whether computers would ever understand language. He answered unhesitatingly “Never” and immediately clarified that by “Never” he meant “at least 50 years.” I am puzzled by the number of references to what AI “is” and what it “cannot do” when in fact the new AI is less than ten years old and is moving so fast that references to it in the present tense are dated almost before they are uttered. The statements that AI doesn’t know what it’s talking about or is not enjoying itself are trivial if they refer to the present and undefended if they refer to the medium-range future—say 30 years. Hype is bad, but the debunkers should remember that the AI Winter was brought about by two brilliant people proving what a one-layer Perceptron could not do. My optimistic question would be “Where will the next breakthrough in AI come from—and what will it easily do that deep learning is not good at?”

DANIEL KAHNEMAN, Nobel Laureate in Economic Sciences (2002), is Eugene Higgins Professor of Psychology Emeritus at Princeton University, Professor of Psychology and Public Affairs Emeritus at the Woodrow Wilson School, and recipient of the 2013 Presidential Medal of Freedom. He is author of Thinking, Fast and Slow. Daniel Kahneman’s Edge Bio Page


Alex “Sandy” Pentland

For the first time in history there is fine-grained data about the behavior of everyone in entire societies, from phones, transactions, transport, etc. This data shows that people are not very different from other social species, with the exception that we have much better social learning, and so we are much better at building a common set of heuristics generally known as culture. The math of this process is very similar to a distributed version of a well-known near-optimal financial portfolio strategy, and is analogous to reinforcement learning over a space of "micro-strategies." This insight suggests a way to make humanity as a whole much smarter than we are today, something I call "HumanAI."
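
[ED. NOTE: A minimal sketch of the analogy above, under invented payoffs: agents try micro-strategies drawn from a shared pool and the community multiplicatively reweights them by observed reward, which is the same update rule behind well-known near-optimal portfolio and "Hedge"-style algorithms. The strategies, payoff probabilities, learning rate, and population size are illustrative assumptions.]

    import random

    # A toy, pooled multiplicative-weights ("Hedge"-style) update over shared
    # micro-strategies. Strategies, payoff probabilities, learning rate, and
    # population size are illustrative assumptions.

    random.seed(0)

    STRATEGIES = ["copy_neighbors", "explore_alone", "follow_leader"]
    PAYOFF_PROB = {"copy_neighbors": 0.6, "explore_alone": 0.4, "follow_leader": 0.3}
    ETA = 0.2        # learning rate
    N_AGENTS = 30    # agents pooling their observations each round
    ROUNDS = 200

    weights = {s: 1.0 for s in STRATEGIES}

    for _ in range(ROUNDS):
        # Each agent tries a randomly assigned strategy and reports its payoff;
        # the community pools the reports into an average payoff per strategy.
        trials = {s: [] for s in STRATEGIES}
        for _ in range(N_AGENTS):
            s = random.choice(STRATEGIES)
            trials[s].append(1.0 if random.random() < PAYOFF_PROB[s] else 0.0)
        avg_payoff = {s: (sum(v) / len(v) if v else 0.0) for s, v in trials.items()}

        # Multiplicative-weights update on the shared weights, then normalize.
        for s in STRATEGIES:
            weights[s] *= (1.0 + ETA * avg_payoff[s])
        total = sum(weights.values())
        weights = {s: w / total for s, w in weights.items()}

    # Most of the communal weight ends up on the best-paying strategy.
    print({s: round(w, 3) for s, w in weights.items()})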

ALEX “SANDY” PENTLAND is Toshiba Professor of Media Arts and Sciences at MIT; director of the Human Dynamics and Connection Science labs and the Media Lab Entrepreneurship Program; and author of Social Physics. Sandy Pentland’s Edge Bio Page


Venki Ramakrishnan

The field of machine learning and AI is changing at such a rapid pace that we cannot foresee what new technical breakthroughs lie ahead, where the technology will lead us, or the ways in which it will completely transform society. So it is appropriate to take a regular look at the landscape to see where we are, what lies ahead, where we should be going, and, just as importantly, what we should be avoiding as a society. We want to bring together a mix of people with deep expertise in the technology as well as broad thinkers from a variety of disciplines to make regular critical assessments of the state and future of AI.

VENKI RAMAKRISHNAN, President of the Royal Society, is the recipient of the 2009 Nobel Prize in Chemistry and Group Leader & Former Deputy Director, MRC Laboratory of Molecular Biology. He is the author of Gene Machine: The Race to Decipher the Secrets of the Ribosome. Venki Ramakrishnan’s Edge Bio Page


Special thanks to the following individuals who have participated in meetings, dinners, and events leading up to the pilot conference: Chris Anderson, Robert Axelrod, Mary Catherine Bateson, Andrew Blake, Stewart Brand, Nick Bostrom, Rodney Brooks, David Chalmers, George Church, Andy Clark, Kate Darling, Richard Dawkins, Daniel C. Dennett, Freeman Dyson, George Dyson, Anca Dragan, David Deutsch, Brian Eno, Peter Galison, Neil Gershenfeld, Terry Gilliam, Alison Gopnik, Tom Griffiths, Demis Hassabis, Chrissie Hynde, Joi Ito, Jennifer Jacquet, Danny Hillis, Caroline Jones, Daniel Kahneman, David Kaiser, Janna Levin, Seth Lloyd, Gary Marcus, Annalena McAfee, Ian McEwan, William McEwan, Hans Ulrich Obrist, Judea Pearl, Alex Pentland, Steven Pinker, Robert Plomin, Venki Ramakrishnan, Martin Rees, Stuart Russell, Murray Shanahan, Susan Schneider, Jaan Tallinn, Max Tegmark, Geoffrey West, Frank Wilczek, Stephen Wolfram.
