The Brain Is Full of Maps

Freeman Dyson [6.11.19]

FREEMAN DYSON, emeritus professor of physics at the Institute for Advanced Study in Princeton, has worked on nuclear reactors, solid-state physics, ferromagnetism, astrophysics, and biology, looking for problems where elegant mathematics could be usefully applied. His books include Disturbing the Universe, Weapons and Hope, Infinite in All Directions, and Maker of Patterns.


THE BRAIN IS FULL OF MAPS

FREEMAN DYSON: I was talking about maps and feelings, and whether the brain is analog or digital. I’ll give you a little bit of what I wrote:

Brains use maps to process information. Information from the retina goes to several areas of the brain where the picture seen by the eye is converted into maps of various kinds. Information from sensory nerves in the skin goes to areas where the information is converted into maps of the body. The brain is full of maps. And a big part of the activity is transferring information from one map to another.

As we know from our own use of maps, mapping from one picture to another can be done either by digital or by analog processing. Because digital cameras are now cheap and film cameras are old fashioned and rapidly becoming obsolete, many people assume that the process of mapping in the brain must be digital. But the brain has been evolving over millions of years and does not follow our ephemeral fashions. A map is in its essence an analog device, using a picture to represent another picture. The imaging in the brain must be done by direct comparison of pictures rather than by translations of pictures into digital form.

Introspection tells us our brains are spectacularly quick at performing two tasks essential to our survival: recognition of images in space, and recognition of patterns of sound in time. We recognize a human face or a snake in the grass in a fraction of a second. We recognize the sound of a voice or of a footstep equally fast. The process of recognition requires the comparison of a perceived image with an enormous database of remembered images. How this is done, in a quarter of a second without any conscious effort, we have no idea. It seems likely that scanning of images in associative memory is done by direct comparison of analog data rather than by digitization.
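
Dyson's picture of recognition by direct comparison can be caricatured in a few lines of code. This is purely an illustrative toy, not a claim about neural mechanism: a perceived pattern is matched against a small "memory" of stored patterns by an analog similarity score, with no digital feature encoding. All names and values here are invented for illustration.

```python
def similarity(a, b):
    """Pointwise correlation between two equal-length patterns."""
    return sum(x * y for x, y in zip(a, b))

# Remembered patterns: tiny 4-"pixel" images with values in [-1, 1].
memory = {
    "face":  [1, 1, -1, -1],
    "snake": [-1, 1, 1, -1],
}

def recognize(percept):
    """Return the stored label whose pattern correlates best with the percept."""
    return max(memory, key=lambda name: similarity(memory[name], percept))

print(recognize([0.9, 0.8, -0.7, -1.0]))  # best match is "face"
```

The point of the toy is that the comparison is error-tolerant: a slightly corrupted percept still lands on the nearest remembered pattern, with no exact digital match required.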

The quality of a poem such as Homer’s Odyssey or Eliot’s The Waste Land is like the quality of a human personality. A large part of our brain is concerned with social interactions, getting to know other people, learning how to live in social groups. The observed correlation between size of brain and size of social groups in primates makes it likely that our brains evolved primarily to deal with social problems. Our ability to see others as analogs of ourselves is basic to our existence as social animals.

I go on to talk about what Danny Hillis told us thirty years ago in his paper titled "Intelligence as an Emergent Behavior; or, the Songs of Eden," which is of course a wonderful story that Danny invented to explain the evolution of speech from song. He had the idea that songs originally were the evolving species, and apes were just the phenotype.

How do songs actually evolve? To survive, a song has to be remembered by an ape. And how does a song get remembered by an ape? It has to associate itself with some practical use; songs have to be useful to the apes in order to survive. So, a song can only become fit to survive by associating itself with meaning. Thereby, you have a co-evolution of apes and songs, so that the songs gradually acquire more meaning and the apes acquire more communication. In the end, that develops into speech. This is a beautiful idea. The song is of course analog from beginning to end. It is the sound and spirit of the thing that is transmitted, not the individual phonemes.

I’m suggesting that the brain is mainly an analog device with certain small regions specialized for digital processes. It’s certainly not true, as is sometimes claimed by pundits talking on television, that the left hemisphere is digital, and the right hemisphere is analog. It seems to be true that most of the digital processing is done on the left side. But the division of labor between the two hemispheres is still largely unexplored.

* * * *

SETH LLOYD: One of the interesting features in going back over the original Macy Conferences on Cybernetics is that it's a wonderful example of something that is now recurring. The problems that showed up then were somewhat irrelevant for decades, largely because of what Rodney was saying, which is that we adopted von Neumann architecture computers and then Moore’s law took off, so we didn’t have to bother with different ways of processing information.

They were very concerned about the question of gestalt. What does it mean? Why do human beings get a gestalt—a sense of a whole—from all these disconnected parts? They were questioning what’s going on in the brain that gives you this notion of "Aha, that’s Freeman right there. I recognize him." They also asked the question of whether artificial intelligences and computers could have a gestalt.

Now, ever since the famous example of Google’s deep neural networks learning to recognize kittens on the Internet, at least they have a gestalt of a kitten. Mind you, from a Bayesian perspective, the prior probability of a picture on the Internet being a kitten is rather high. For the first time, it’s pretty fair to say that we have artificial neural networks that possess a gestalt. This is amazing, because it's been seventy years since this question first came up. Up until now, I would have said that image recognition programs didn’t have the sense of "Aha, it’s a kitten," but now they do. So, it’s a remarkable time.

FREEMAN DYSON: That’s all true. What they call deep learning is imitating this comparison of images by translating to digital language. But still it’s not likely that the brain is doing it that way.

STEPHEN WOLFRAM: Neural nets, in their current instantiation, critically depend on the fact that they have real number weights that can be progressively improved by calculus-like methods. It's still an open question as to whether there’s a way to do this with purely digital things where there isn’t this calculus-like progressive improvement.

F. DYSON: Yes. Certainly, it’s an open question. I’m just prejudiced.
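
The "calculus-like progressive improvement" Wolfram describes is, at its core, gradient descent on real-valued weights. A minimal sketch, with all numbers chosen purely for illustration: a single weight is nudged continuously down the gradient of a squared error until it settles at the target.

```python
def train(target=3.0, lr=0.1, steps=100):
    """Progressively improve one real-valued weight by gradient descent."""
    w = 0.0
    for _ in range(steps):
        grad = 2 * (w - target)   # derivative of the error (w - target)^2
        w -= lr * grad            # small, continuous adjustment
    return w

print(round(train(), 4))  # converges toward 3.0
```

The essential feature is that the weight moves through a continuum of real values in small increments; a purely digital scheme, with discrete weights and no gradient, would need some other search principle entirely, which is the open question Wolfram raises.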

WOLFRAM: In your sense, is a neural net with real number weights analog or is that digital?

F. DYSON: That’s digital. It’s a crude digital imitation of a natural process which was analog.

WOLFRAM: So, to make it analog you would have to have a whole field and not just a matrix of weights?

F. DYSON: Images will slide over each other somehow and match. It’s a much more error-tolerant system, so you’re not asking for twelve-digit accuracy. If an image looks like another image, then it’s essentially remembered together with it. Associative memory is the basis of the whole process, and that works with amazing smoothness that we don’t understand.

W. DANIEL HILLIS: Certainly, at some level, there are non-firing neurons in the retina, which are clearly doing a purely analog computation in every sense of the word. If you have something like a Hopfield network, which basically finds eigenvectors of a matrix by repeatedly feeding its state back into itself, is the resulting eigenvector a digital output of a completely analog system? Would you put that in the analog category?

F. DYSON: Well, of course you don’t have to put things into categories. Most things are a mixture, and that’s a good example.
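
The Hopfield dynamics Hillis alludes to can be sketched in a few lines: a pattern is imprinted in a weight matrix via a Hebbian outer product, and repeatedly feeding a corrupted state back through the weights settles it into the stored attractor, much as power iteration converges on a dominant eigenvector. This is a toy illustration, not a model of the retina; the four-element pattern is invented for the example.

```python
def recall(weights, state, steps=5):
    """Iterate the thresholded feedback dynamics of a tiny Hopfield net."""
    for _ in range(steps):
        state = [1 if sum(w * s for w, s in zip(row, state)) >= 0 else -1
                 for row in weights]
    return state

stored = [1, -1, 1, -1]
# Hebbian weights: W[i][j] = stored[i] * stored[j], with zero diagonal.
W = [[stored[i] * stored[j] if i != j else 0 for j in range(4)]
     for i in range(4)]

noisy = [1, 1, 1, -1]          # one flipped "pixel"
print(recall(W, noisy))        # settles back to [1, -1, 1, -1]
```

The continuous feedback is analog in spirit, while the thresholded output is discrete, which is exactly the mixed character Dyson's reply points at.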

CAROLINE JONES: One of the things that confuses the conversation for me, as an image theorist and a gestalt historian, is that we’ve made the machines interpolate and extrapolate from the digital to produce gestalt interfaces for us. It’s a complicated conversation, because all of the compression algorithms are tinkered to produce something that we will then complete. We will then take the fragmentary pieces and do our analog business on them to create a song and say, "Oh, it’s so real!"

We are the cybernetic completion of the digital. We are the analog meat machines that make the gestalt out of what I would imagine the machine doesn’t care is a kitten or not. And when you look at some of what Google calls kittens, it’s really breaking the gestalt picture. It’s a couple of eyes in a certain position and some fur, where the whole premise of gestalt is the completion of the fragmentary, and the curious process by which three different corners are perceived as a triangle obscured by a circle. The three corners are robustly perceived as a geometric figure by the human brain, which a machine would only do if we said, "Can you please make these fragmentary corners into a triangle for the human perceiver? Could you please interpolate those missing pieces? We need to see a triangle." So, this interface is productively confused by what we have given the machines as purposes. We have made them into makers of analog maps for us, but I don’t yet have a sense of what the machines would do by themselves, for themselves.

GEORGE DYSON: When the Cybernetics Group first formed, that wasn’t the name. It was called the Teleological Society. Then, when Macy came in and supported it, he said, "We’ll support this, but we’ve got to have a different name." And that’s when they made themselves the Cybernetics Group. Originally, it was the Teleological Society—that was the fundamental premise.

JOHN BROCKMAN: What would they call this group?

JONES: The Anti-teleological Society.

LLOYD: The Eschatology Society.

FRANK WILCZEK: Post-logical society.

WOLFRAM: What would be the type of theory you would have for what might be going on in the brain? You say it’s transforming an image into some different projection of an image, so what's the theory?

F. DYSON: Why did we evolve people like Beethoven and Mozart or Sophocles or Eliot, people who were masters of music or masters of language? This degree of sophistication both in music and in language is far beyond anything that biological survival needed, but it just happened. How do you understand that?

WOLFRAM: You take some simple program, you run it, it does amazingly complicated things, and the program might have been in some sense constructed only because it makes an array of three black cells after four steps or something. It just so happens that as a side effect it produces this amazingly complicated behavior. That would be my metaphor for what’s going on in those cases.

F. DYSON: Some quality in the whole scene—the quality of the sunset in the tropics or the quality of a symphony—is just the gestalt, it is something that’s inherent in the entire picture and not in the individual parts. That is the brain operating directly on the image and not on the constituent parts.

W. DANIEL HILLIS: The literal answer to your question may be runaway sexual selection. Basically, the way to get laid was to write a sonnet or sing a beautiful song.

ALISON GOPNIK: That may reflect some prejudices in this group. It’s not obvious that, generally, artistic and scientific achievement has that effect.

HILLIS: The question is, why are we evolved to support artistic and scientific achievement?

GOPNIK: Here’s an interesting possibility, which is something that has come out of the deep-learning world: A lot of times the way you can make those systems work is by having hallucinations, where the system is generating a lot of possible outputs from some representation that aren’t actually things that you perceive or aren’t inputs into the system.

Having this process of taking a generative model and then simulating a lot of outcomes that you aren’t seeing or detecting is a crucial step in making things work. Then, you have another system that looks at the relationship between the generative model and the outputs, and then uses that relationship to the hallucinated outputs—to the things that never existed except that you generated them—and tries to make sense out of that. That turns out to be important computationally.

It’s at least interestingly analogous to things like pretend play with children, for example. You don’t need to have Einsteins and Beethovens to have examples of people creating things that are non-real. What’s the evolutionary advantage to having an imaginary friend or a crazy pretend world? That’s not something that you need to depend on experts for. That’s something that seems to be a universal characteristic of childhood.

HILLIS: The notion is that sexual selection causes you to explore the most complex expressions of a capability in order to demonstrate that the complexity is working. That plays out not just with intelligence but also with morphogenesis. There are all kinds of examples in low-level animal behavior, or forms of flowers, things like that, where that process of feedback on sexual selection tends to select for complexity and beauty because that’s hard to do. Therefore, it shows it’s all working.

LLOYD: If Chomsky were here, he would say that human beings have universal human language, which we gifted to computers, by the way. We’re the only entities on the planet that have this universal language. If you look at chimpanzees, or songbirds, or dolphins, they just cannot process information the way that we do.

One of the features of universal human language is its open-endedness that allows you the potential to construct any possible sets of ideas, or to compute anything in the case of computers. The sonnets and Mozart symphonies, once you give people that, that’s what you’ve got to expect to happen sooner or later.

JONES: I have a different observation, which is that culture is a uniquely human product. I’m sure you can argue that bowerbirds have culture and so on, so let’s just put that to the side. We have produced these externalities partly to evolve ourselves. That’s part of the magic: you make this thing called art and you gather people around it to interpret it, then they make a certain meaning which then changes them for the future, changes their offspring, changes their survivability rate.

This is part of the operation that fascinates me. Not everybody who listens to Beethoven goes off to have sex with Beethoven. So, what else is going on with art? It is there to evolve us in directions that we agree socially and culturally that we want to evolve. That’s rather extraordinary.

NEIL GERSHENFELD: There was an interesting study a couple of years ago that showed birds have hemlines, that they have fashion. What color feathers they have and how long they are changes. There are fashions for the birds. The study traced it through to show that if they didn’t do that they would overspecialize. If something became a locked-in fashion, the birds would exaggerate it. So, they keep having a new hemline to force themselves to diversify.

HILLIS: Part of the appeal of my "Songs of Eden" story that Freeman told is that the "we" that we’re talking about is not just the monkeys. The "we" is in fact that culture that evolved. So, what makes us human is that combination of those two things together. What was evolving is not just the genetics that was evolving the monkey, it’s the cultural complexity in which all those things should happen, and that’s part of what we are. We’re the combination of those two things.

IAN MCEWAN: It would seem that all art and all music are a special case of what everyone is doing, so there might be a random element: there are just people who happen to do it better.

F. DYSON: Just one more remark. If you bring in quantum mechanics—of course, both digital and analog computers may be classical or they may be quantum—it gives an additional strong advantage to the analog way of working. Quantum mechanics has this quality of coherence that connects parts of the whole physical landscape in this mysterious way; the different parts of an image are coherent. That is totally lost when you digitize, but it’s preserved when you do analog. That’s an additional reason why analog computing probably looks more promising.

GERSHENFELD: Seth and I were both part of a very interesting program on quantum biology. Biology uses quantum coherence exquisitely, but only over a very small number of degrees of freedom. It’s very expensive to preserve coherence. It’s very unlikely, and I think Seth you would agree, that there’s large-scale quantum coherence anywhere near biology. It’s in very selected, small numbers of interacting degrees of freedom.

F. DYSON: No, I disagree totally with that. Quantum coherence works beautifully over large distances.

GERSHENFELD: Over large distances, but it’s the question of degrees of freedom and thermalization.

WOLFRAM: What are the examples in biology?

LLOYD: If you just look out the window, all these green leaves are full of LHCII, which is the primary light-harvesting complex for plants. It uses quantum coherence in a very sophisticated fashion to increase the efficiency of excitonic transport, and it’s amazing. It would be one tenth as efficient if it weren’t for this quantum coherence.

GERSHENFELD: Another interesting one is sensing magnetic fields. There’s independent chemistry in how you perceive magnetic fields. Maintaining quantum coherence with lots of degrees of freedom against a heat bath is really hard. That’s the challenge in quantum computing. The physics makes it very unlikely there’s large-scale quantum coherence.

LLOYD: Well, that’s not entirely true. If you look at light from a distant star and you have a big enough telescope, then you can exhibit coherence in this light. This is the Hanbury Brown-Twiss effect, which is what allows you to build large-baseline telescopes. But that’s a situation in which the light has traveled, and it could have traveled for millions of years.

GERSHENFELD: And there’s no interaction. There’s lateral coherence.

LLOYD: It's because it didn’t get de-cohered along the way.

GERSHENFELD: There isn’t longitudinal coherence, in which case it’s lateral coherence.

LLOYD: It’s still quantum.

DAVID CHALMERS: Freeman, I’m curious about how you get your model of the analog and the gestalt going without quantum computation. If we assume it’s all classical physics and classical computation, then presumably it breaks down into local mechanistic parts.

If I operate on an image via classical mechanisms, it’s presumably going to have to work at some level operating on the parts of the image. Aren't you going to come back and say, "Well, that’s not what I needed. I needed something holistic that operated on the whole image at once."? One could at least smell a way of trying to do that with quantum mechanics, but how could one possibly do that without quantum mechanics?

F. DYSON: Well, it’s just one of the big mysteries. We have no idea how all that works.

CHALMERS: If the brain does it by local mechanisms of neurons, would that count? Or would that still be breaking it down into parts?

F. DYSON: I don’t know what a neuron is and neither does anybody else. A neuron is a very, very clever device.