Studying mathematical logic in the 1970s, I believed it was possible to put together a convincing argument that no computer program can fully emulate a human mind. Although nobody had quite gotten the argument right, I hoped to straighten it out.
My belief in this will-o'-the-wisp was motivated by a gut feeling that people have numinous inner qualities that will not be found in machines. For one thing, our self-awareness lets us reflect on ourselves and get into endless mental regresses: "I know that I know that I know..." For another, we have moments of mystical illumination when we seem to be in contact, if not with God, then with some higher cosmic mind. I felt that surely no machine could be self-aware or experience the divine light.
At that point, I'd never actually touched a computer — they were still inaccessible, stygian tools of the establishment. Three decades rolled by, and I'd morphed into a Silicon Valley computer scientist, in constant contact with nimble chips. Setting aside my old prejudices, I changed my mind — and came to believe that we can in fact create human-like computer programs.
Although writing out such a program is in some sense beyond the abilities of any one person, we can set up simulated worlds in which such computer programs evolve. I feel confident that some relatively simple set-up will, in time, produce a human-like program capable of emulating all known intelligent human behaviors: writing books, painting pictures, designing machines, creating scientific theories, discussing philosophy, and even falling in love. More than that, we will be able to generate an unlimited number of such programs, each with its own particular style and personality.
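To make the idea a little more concrete, here is a minimal sketch, in Python, of the kind of evolutionary loop I have in mind. Everything in it is a toy stand-in of my own devising: the "genomes" are bit strings rather than programs, and the fitness function is an arbitrary placeholder for "intelligent behavior," but the select-and-mutate rhythm is the essential mechanism.

    import random

    # Toy "genomes" and an arbitrary fitness target; both are
    # illustrative stand-ins, not a recipe for human-level minds.
    GENOME_LENGTH = 20
    TARGET = [1] * GENOME_LENGTH

    def fitness(genome):
        # Count positions where the genome matches the target.
        return sum(g == t for g, t in zip(genome, TARGET))

    def mutate(genome, rate=0.05):
        # Flip each bit with a small probability.
        return [1 - g if random.random() < rate else g for g in genome]

    population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
                  for _ in range(50)]
    for generation in range(100):
        population.sort(key=fitness, reverse=True)
        survivors = population[:10]                     # selection
        population = [mutate(random.choice(survivors))  # variation
                      for _ in range(50)]

    print(fitness(max(population, key=fitness)), "of", GENOME_LENGTH)

Nothing here is intelligent, of course; the point is only that blind variation plus selection steadily produces structure that no one wrote out by hand.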
What of the old-style attacks from the quarters of mathematical logic? Roughly speaking, these arguments always hinged upon a spurious belief that we can somehow distinguish between, on the one hand, human-like systems that are fully reliable and, on the other hand, human-like systems fated to begin spouting gibberish. But the correct deduction from mathematical logic is that there is absolutely no way to separate the sheep from the goats. Note that this is already our situation vis-à-vis real humans: you have no way to tell if and when a friend or a loved one will forever stop making sense.
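The logical result I am alluding to is, in essence, the unsolvability of the halting problem. Here is a rough Python sketch of the diagonal argument; the function always_makes_sense is purely hypothetical, which is exactly the point.

    # Suppose, for contradiction, we had a perfect "gibberish detector."
    # This function is imaginary; the argument shows it cannot exist.
    def always_makes_sense(program_source):
        """Pretend this returns True exactly when the program described
        by program_source never lapses into gibberish."""
        raise NotImplementedError("no such total classifier can exist")

    # Now consider a contrary program that consults the detector about
    # its own source code and then does the opposite of the prediction:
    CONTRARY = """
    if always_makes_sense(MY_OWN_SOURCE):
        print("gibberish gibberish gibberish")  # misbehave on purpose
    else:
        print("a perfectly sensible sentence")  # behave impeccably
    """

    # Whatever the detector says about CONTRARY is wrong: if it answers
    # "reliable," CONTRARY spouts gibberish; if it answers "unreliable,"
    # CONTRARY behaves. Hence no program can separate the sheep from
    # the goats -- for machines or, plausibly, for people.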
With the rise of new practical strategies for creating human-like programs and the collapse of the old a priori logical arguments against this endeavor, I have to reconsider my former reasons for believing humans to be different from machines. Might robots become self-aware? And — not to put too fine a point on it — might they see God? I believe both answers are yes.
Consciousness probably isn't that big a deal. A simple pair of facing mirrors exhibits a kind of endlessly regressing self-awareness, and this type of pattern can readily be turned into computer code.
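Here is a minimal sketch of that mirrors-facing-mirrors pattern in Python; the class and its names are mine, chosen only for illustration. Each mirror's "image" contains the other mirror, so asking for a reflection unwinds into an endless "I see that you see that I see..." regress, truncated here at a fixed depth.

    class Mirror:
        """Each mirror 'sees' the mirror facing it, so reflection
        recurses without end; a depth counter cuts it short."""

        def __init__(self, name):
            self.name = name
            self.facing = None  # set once both mirrors exist

        def reflect(self, depth):
            if depth == 0:
                return "..."  # truncate the infinite regress
            return f"{self.name} sees [{self.facing.reflect(depth - 1)}]"

    left, right = Mirror("left"), Mirror("right")
    left.facing, right.facing = right, left

    print(left.reflect(4))
    # -> left sees [right sees [left sees [right sees [...]]]]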
And what about basking in the divine light? Certainly if we take a reductionistic view that mystical illumination is just a bath of intoxicating brain chemicals, then there seems to be no reason that machines couldn't occasionally be nudged into exceptional states as well. But I prefer to suppose that mystical experiences involve an objective union with a higher level of mind, possibly mediated by offbeat physics such as quantum entanglement, dark matter, or higher dimensions.
Might a robot enjoy these true mystical experiences? Based on my studies of the essential complexity of simple systems, I feel that any physical object at all must be equally capable of enlightenment. As the Zen apothegm has it, "The universal rain moistens all creatures."
So, yes, I now think that robots can see God.