The Future of the Mind


I think about the fundamental nature of the mind and the nature of the self. Lately, I’ve been thinking about these issues in relation to emerging technologies. In particular, I’ve been thinking about the future of the mind and how AI technology might reshape the human mind and create synthetic minds. As AI gets more sophisticated, one thing that I’ve been very interested in is whether the beings that we might create could have conscious experiences.

Conscious experience is the felt quality of your mental life. When you see the rich hues of a sunset or smell the aroma of your morning coffee, you’re having conscious experience. Conscious experience is very familiar to you. In fact, there’s not a moment of your waking life in which you’re not a conscious being.

If we have artificial general intelligence, intelligence that’s capable of flexibly connecting ideas across different domains and maybe having something like sensory experience, what I want to know is whether it would be conscious or if it would all just be computing in the dark—engaging in things like visual recognition tasks from a computational perspective and thinking sophisticated thoughts, but not truly being conscious.

Unlike many philosophers, and especially unlike many people in the media and many transhumanists, I tend to take a wait-and-see approach to machine consciousness. For one thing, I reject a fully skeptical line. There have been well-known philosophers in the past who have been skeptical about the possibility of machine consciousness—famously, the philosopher John Searle—but I think it's too early to tell. Many variables will determine whether there will be conscious machines.

For another thing, we have to ask whether it’s even compatible with the laws of nature to create machines that are conscious. We just don’t know whether consciousness can be implemented in other substrates. We don’t know what the fastest microchips will be, so we don’t know what an artificial general intelligence would be made of. Until then, it’s awfully difficult for us to say whether something that is highly intelligent would even be conscious.

It’s probably safest right now to drive a conceptual wedge between the idea of sophisticated intelligence on the one hand and consciousness on the other. What we need to do is keep an open mind: for all we currently know, the most sophisticated intelligences may not be conscious. There are a lot of issues, and not just issues involving substrates, that will determine whether conscious machines are possible. Suppose for a minute that it is possible, at least in principle, to construct a conscious artificial intelligence. Who would want to do it? Think about the debates going on right now concerning android rights, for example.

Suppose all of those Japanese androids being designed to look after people’s homes and care for the elderly turn out to be conscious. Wouldn't there be concerns about forcing creatures to work for others when they’re conscious beings? Wouldn't that be akin to slavery? I’m not so sure that it will be to the advantage of AI companies to produce conscious beings. In fact, they may decide to engineer consciousness out. Of course, we don’t know if consciousness can be engineered into or out of a machine. For all we know right now, it may not be compatible with the laws of nature to produce it. Conversely, it could be an inevitable byproduct of sophisticated computation, and then we will have to be very concerned about rights for androids and other AIs.

If machines turn out to be conscious, we won’t just be learning about machine minds; we may also be learning about our own. We could learn more about the nature of conscious experience, which could cause us to reflect as a culture on what it is to be a conscious being. Humans would no longer be special in the sense of being the only beings capable of intellectual thought. We would be sharing that position with synthetic beings that aren’t even made of the same stuff we are. That could be a very humbling experience for humans.

As civilizations grow more intelligent, they could become post-biological. So, synthetic intelligence could turn out to be a natural outgrowth of successful technological civilizations. In a relatively short amount of time, we have managed to create interesting and sophisticated artificial intelligences. We’re now turning artificial intelligence inward in terms of building neural prosthetics to enhance the human brain. We already see tech gurus like Ray Kurzweil and Elon Musk talking about enhancing human intelligence with brain chips—not just helping people with brain disorders, but also helping people to live longer and be smarter. It may be the case that civilizations throughout the universe became post-biological and enhanced their intelligence to become synthetic beings themselves.

In a sense, artificial intelligence could be a natural outgrowth of a successful technological civilization. Of course, that’s not to say that the universe is teeming with life. It may not be. That’s an empirical question, although a lot of my colleagues at NASA are optimistic about that. And it’s not to suggest that even if other planets have life, life will become technological. We still aren’t clear on how probable it is that life will progress to technological maturity and survive beyond it.

~ ~ ~ ~

I started my academic life as an economist and then stumbled into a class with Donald Davidson, the eminent philosopher. I discovered that I liked Anglo-American philosophy and went to work with Jerry Fodor, a famous philosopher of mind who was a critic of the ideas that have now given rise to deep learning.

Fodor and I would spend hours arguing about the scope and limits of artificial intelligence. I disagreed with him about these early deep-learning views, which at the time were called “connectionist views”; I didn’t think they were as hopeless as he suggested. He claimed that the brain isn’t computational through and through, and that artificial intelligence would probably fail once it reached the level of domain-general intelligence, because some special feature of the human mind isn’t computational. Namely, he was referring to what he called “the central systems,” the areas of the brain that we might identify as being domain-general, going beyond highly compartmentalized mental functions—the good stuff that gives rise to human creativity and cognition.

I argued that the brain was computational through and through. For example, there were successful theories of working memory and attention which involved domain-general functions. While I was working with Fodor, I found myself reading an awful lot of computational neuroscience. I urged that the brain may be a hybrid system that could be described in terms of the neural network approach you see in computational neuroscience, but one in which these higher-level descriptions that you see in cognitive psychology make reference to the format of thinking that people like Jerry Fodor appeal to—the language of thought, which holds that the brain is a symbol processing device that manipulates symbols according to rules.

It would have been fun to talk to Fodor about deep-learning systems. I imagine he would still be quite skeptical about the possibility of these systems developing further into what some people call artificial general intelligence. I’m not at all suggesting that today’s resources could give rise to something that sophisticated. I do, however, think that all the money pouring into artificial intelligence, the successes so far, the speed of computation improving year after year with better and faster microchips, and the possibility of quantum computing being developed in a serious way all militate for artificial intelligence that gets progressively better. In the meantime, we can look at resources within different fields of neuroscience, like computational neuroscience, and borrow from what the brain is doing. We can reverse engineer AI from the brain to the extent that we even need to do so.

As I started to think about the success stories coming out of DeepMind—with domain-specific systems, for example—I started to become more optimistic that with all of the emphasis on AI technology and the improved technologies that are available, more sophisticated AI would be created. We won't just create smart robots; we’ll also be putting AI in our heads and changing the shape of the human mind. Then I started to worry about how this could transform society.

I see many misunderstandings in current discussions about the nature of the mind, such as the assumption that if we create sophisticated AI, it will inevitably be conscious. There is also this idea that we should “merge with AI”—that in order for humans to keep up with developments in AI and not succumb to hostile superintelligent AIs or AI-based technological unemployment, we need to enhance our own brains with AI technology.

One thing that worries me about all this is that I don't think AI companies should be settling issues involving the shape of the mind. The future of the mind should be a cultural decision and an individual decision. Many of the issues at stake here involve classic philosophical problems that have no easy solutions. I’m thinking, for example, of theories of the nature of the person in the field of metaphysics. Suppose that you add a microchip to enhance your working memory, and then years later you add another microchip to integrate yourself with the Internet, and you just keep adding enhancement after enhancement. At what point will you even be you? When you think about enhancing the brain, the idea is to improve your life—to make you smarter, or happier, maybe even to live longer, or have a sharper brain as you grow older—but what if those enhancements change us in such drastic ways that we’re no longer the same person?

These are issues that philosophers like Hume, Locke, Nietzsche, and Parfit have been thinking about for years in the context of debates over the nature of the person. Now that we have an opportunity to possibly sculpt our own minds, I believe that we need to engage with these classic philosophical positions about the nature of the self.

I’m deeply concerned about the obsession with technology. I consider myself a techno-progressive in that I want to see technology used to better human lives, but we need to be wary of unflinchingly accepting this idea of merging with AIs, or even of having an Internet of things around us at all times.

What we need to do now, as these neural enhancement technologies are being developed, is have a public dialogue about them. All stakeholders need to be involved, from the people researching these technologies to policymakers to ordinary people, especially young people, so that when they make brain enhancement decisions, they can do so with more scrutiny. Here, the classic philosophical issues about the nature of the self and the nature of consciousness come into play.

AI ethics boards at the larger companies are important, but in a sense, it’s the fox guarding the henhouse. The only way that we will have a positive future when it comes to the use of AI technologies to create synthetic minds and to enhance the human mind is to bring these issues directly to the public, which is why I care a good deal about public engagement and about making sure that all stakeholders are involved.

Next month I begin a year as the distinguished scholar at the Library of Congress, which will let me bring these issues to D.C. I hope that, even though many tech leaders are too busy to think deeply about some of the underlying philosophical issues, the public itself engages with this topic.

~ ~ ~ ~

How could we tell if a machine is conscious? I’ve suggested that we can’t just assume that sophisticated AI will be conscious. Further, it may be that consciousness is only developed in certain AI programs or with certain substrates, certain types of microchips and not others. For all we know right now, maybe silicon-based systems can be conscious, but systems that use carbon nanotubes can’t. We just don’t know. It’s an empirical question. So, it would be useful to have tests.

The tricky part is that even today we can’t tell exactly what deep-learning systems are doing. This is the black box problem of AI: even at their current, early level of sophistication, we can’t readily know what computations deep-learning systems are performing.

Instead of looking under the hood at the architecture of the AI, the most productive way to determine consciousness in machines is two-fold. The first thing to do is to have a behavior-based test, which I’ve developed at the Institute for Advanced Study with the astrophysicist and exoplanet whiz, Edwin Turner. It’s a simple test. One of the things that is quite salient about the fact that we’re conscious beings is that we can understand thought experiments involving the mind. When you were a child, you may have seen the film Freaky Friday, in which a mother and daughter trade bodies. Why did it even make sense to us? It made sense because we can imagine the mind leaving the body. Now, I’m not saying the mind does leave the body, but we can envision situations, at least in broad strokes, that involve an afterlife, reincarnation, or philosophical thought experiments.

What we need to do then is look for AIs that are capable of imagining these kinds of situations. Now, there is an objection to this, a good objection, which is that we can just program an AI to act as if it’s conscious. Already today there are AIs that will talk to you and act as if they’re having mental lives. Consider Sophia from Hanson Robotics. She’ll talk to you, and the press will even talk about her as if she is a conscious being. I believe she was even granted citizenship in Saudi Arabia, which is interesting.

What we need to do is box in an AI to determine whether it’s conscious. This is a strategy used in AI safety research to keep an AI from gaining knowledge about the world, or acting in the world, during the stage of R&D when one is learning about the capacities of the system. If you don’t give the artificial intelligence knowledge about neuroscience and human consciousness, you can then probe it for conscious experience: give it thought experiments, see how it reacts, and watch for anomalous behaviors. Ask it simply, "Can you imagine existing beyond the destruction of your parts?"

Turner and I have written several questions, a sort of Turing test for machine consciousness, designed to elicit these behaviors in machines, provided they’re boxed in appropriately; the boxing ensures that we don’t get false positives. That being said, I don't think the test is the only way to approach machine consciousness. It’s what philosophers call a “sufficient condition” for machine consciousness. So, if something passes it, we have reason to believe it’s conscious. But if something fails it, other tests might determine that it’s nevertheless still conscious. It may not be suitably linguistic, it may not have a sense of self, and so on.

I’ve mentioned that I offer a two-fold approach. Let me talk about the second way of determining whether machines might be conscious, because this is a sensible path given the developments right now on brain chips. As we use neuroprosthetics or brain chips in parts of the brain that underlie conscious experience in humans, if those chips succeed and if we don’t notice deficits of consciousness, then we have reason to believe that that microchip made of a particular substrate, say, silicon, could underwrite consciousness when it’s in the right architectural environment.

It would be important if we determined that another substrate, placed in the areas of the brain that we believe are responsible for consciousness, doesn't change the quality of our conscious experience. That would mean that, in principle, we could perhaps develop a synthetic consciousness. We might even do it by gradually replacing the human brain with artificial components until, at the end of the day, we have a being that is a full-fledged AI.

I love the intersection between philosophy and science, or the part where science gets murky and one has to think about the implications. Examples of this would be theories of space-time emergence in physics, where they find themselves looking at mathematical theories and then drawing conclusions from them about the nature of time. Issues like this involve a delicate balance between mathematical or empirical considerations and philosophical issues. That’s where I like to step in and get involved.

I’m intensely interested in the scope and limits of what we can know as humans. We’re humble beings, and it may be that as we enhance our brains, we’ll find answers to some of these classic philosophical problems. Who knows? For now, if we develop artificial intelligence technology without thinking carefully about issues involving the nature of consciousness or the nature of the self, we may find that these technologies fail to do what the people developing them intend: make our lives better and promote human flourishing.

We have to be careful to make sure that we know whether we’re creating conscious beings, and whether radically enhancing our brains would be compatible with the survival of the person; otherwise, these technologies will lead to the exploitation and suffering of conscious beings rather than improving the lives of people.

I like living in that space of humility where we hit an epistemological wall, because it teaches us the scope and limits of what humans can understand. In this age of sweeping technological innovation, it is important to remember that there will always be issues to which we can’t get definitive answers. A good example is the question of whether we’re brains in vats, living our lives inside computer simulations. These are epistemological issues, issues involving the nature of knowledge that have no easy answers.