Like many other neuroscientists, I receive my weekly dose of bizarre e-mails. My correspondents seem to have a good reason to worry, though: they think that their brain is being tapped. Thanks to new "neurophonic" technologies, someone is monitoring their mind. They can't think a thought without it being immediately broadcast to Google, the CIA, news agencies worldwide, or… their wife.
This is a paranoid worry, to be sure—or is it? Neuroscience is making giant strides, and one does not have to be schizophrenic to wonder whether it will ever crack the safe of our minds. Will there be a time, perhaps in the near future, when our innermost feelings and intimate memories will be laid bare for others to scroll through? I believe that the answer is a cautious no—at least for a while.
Brain imaging technologies are no doubt powerful. Over fifteen years ago, at the dawn of functional magnetic resonance imaging, I was already marveling at the fact that we could detect a single motor action: any time a person clicked a small button with the left or right hand, we could see the corresponding motor cortex being activated, and we could tell which hand the person had used with over 98% accuracy. We could also tell which languages the scanned person understood. In response to spoken sentences in French, English, Hindi or Japanese, brain activation would either invade a large swath of the left hemisphere, including Broca's area, or stay within the confines of the auditory cortex—a sure sign that the person did or did not understand what was being said. Recently, we also managed to tell whether an adult or a child had learned to read a given script—simply by monitoring the activation of the "visual word form area", a brain region that holds our knowledge of legal letter strings.
Whenever I lectured on this research, I insisted on the limitations of our methods. Action and language are macro-codes of the brain, I explained. They mobilize gigantic cortical networks that lie centimeters apart and are therefore easily resolved by our coarse brain imagers. Most of our fine-grained thoughts, however, are encrypted in a micro-code of sub-millimeter neuronal activity patterns. The neural configurations that distinguish my thought of a giraffe from my thought of an elephant are minuscule, unique to my brain, and intermingled within the same brain regions. Therefore, they would forever escape decoding, at least by non-invasive imaging methods.
In 2008, Tom Mitchell's beautiful Science paper proved me partially wrong. His research showed that snapshots of state-of-the-art functional MRI contained a lot of information about specific thoughts. When a person thought of different words or pictures, the brain activity patterns they evoked differed so much that a machine-learning algorithm could tell them apart much better than would be expected by chance. Strikingly, many of these patterns were macroscopic, and they were even similar in different people's brains. This is because, when we think of a word, we do not merely activate a small set of neurons in the temporal lobes that serves as an internal pointer to its meaning. The activation also spreads to distant sensory and motor cortices that encode each word's concrete network of associations. In all of us, the verb "kick" activates the foot region of the motor cortex, "banana" evokes a smell and a color, and so on. These associations and their cortical patterns are so predictable that even new, untrained words can be identified by their brain signature.
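The logic of this kind of decoding can be sketched with a toy simulation. To be clear, everything below is invented for illustration—simulated "voxel" data and a simple nearest-centroid classifier, not Mitchell's actual method, stimuli, or data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy illustration (not real fMRI): each "scan" is a vector of 50 voxel
# activations. Two imagined words evoke slightly different mean patterns,
# buried in measurement noise.
n_voxels, n_trials = 50, 40
pattern_kick = rng.normal(0.0, 1.0, n_voxels)
pattern_banana = rng.normal(0.0, 1.0, n_voxels)

def simulate(pattern, n):
    # Each trial = the word's true activation pattern + scanner noise
    return pattern + rng.normal(0.0, 1.5, (n, len(pattern)))

train_kick = simulate(pattern_kick, n_trials)
train_banana = simulate(pattern_banana, n_trials)

# Nearest-centroid decoder: average the training scans for each word...
centroid_kick = train_kick.mean(axis=0)
centroid_banana = train_banana.mean(axis=0)

def decode(scan):
    # ...and label a new scan by whichever centroid it lies closer to.
    d_kick = np.linalg.norm(scan - centroid_kick)
    d_banana = np.linalg.norm(scan - centroid_banana)
    return "kick" if d_kick < d_banana else "banana"

test_set = [(simulate(pattern_kick, 1)[0], "kick") for _ in range(50)] + \
           [(simulate(pattern_banana, 1)[0], "banana") for _ in range(50)]
accuracy = np.mean([decode(x) == y for x, y in test_set])
print(f"decoding accuracy: {accuracy:.0%}")  # well above the 50% chance level
```

Even with noise larger than the signal in any single voxel, pooling evidence across many voxels lets the classifier separate the two thoughts far better than chance—the same principle, writ small, that makes multivariate fMRI decoding work.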
Why is such brain decoding an interesting challenge for neuroscientists? It is, above all, a proof that we understand enough about the brain to partially decrypt it. For instance, we now know enough about number sense to tell exactly where in the brain the knowledge of a number is encrypted. And, sure enough, when Evelyn Eger, in my lab, captured high-resolution MRI images of this parietal-lobe region, she could tell whether the scanned person had viewed 2, 4, 6 or 8 dots, or even the corresponding Arabic digits.
Similarly, in 2006, Bertrand Thirion and I tested the theory that the visual areas of the cortex act as an internal visual blackboard where mental images get projected. Indeed, by measuring their activity, we managed to decode the rough shape of what a person had seen, and even of what she had imagined in her mind's eye, in full darkness. Jack Gallant, at Berkeley, later improved this technique to the point of decoding entire movies from the traces they evoke in the cortex. His reconstruction of the coarse contents of a film, as deduced by monitoring the spectator's brain, was an instant YouTube hit.
Why, then, do I refuse to worry that the CIA could harness these techniques to monitor my thoughts? Because many limitations still hamper their practical application in everyday circumstances. First of all, they require a ten-ton superconducting MR magnet filled with liquid helium—an unlikely addition to airport security portals. Furthermore, functional MRI only works with a very cooperative volunteer who stays perfectly still and attends to the protocol. Total immobility is a must. Even a millimeter of head motion, especially if it occurs in tight correlation with the scanning protocol, can ruin a brain scan. In the unlikely event that you are scanned against your will, rolling your eyes rhythmically or moving your head ever so slightly, in sync with the stimuli, may suffice to prevent detection. In the case of an electroencephalogram, clenching your teeth will go a long way. And systematically thinking of something else will, of course, disrupt the decoding.
Finally, there are limitations arising from the nature of the neural code. MRI samples brain activity at a coarse spatial scale and in a very indirect manner. Every millimeter-sized pixel in a brain scan averages over the activity of hundreds of thousands of neurons. Yet the precise neural code that contains our detailed thoughts presumably lies in the fast timing of individual spikes from thousands of intermingled neurons—microscopic events that we cannot see without opening the skull. In truth, even if we did, the exact manner in which thoughts are encoded still escapes us. Crucially, neuroscience lacks even the inkling of a theory as to how the complex combinatorial ideas afforded by the syntax of language are encrypted in neural networks. Until we have one, we have very little chance of decoding nested thoughts such as "I think that X", "My neighbor thinks that X", "I used to believe that X", "he thinks that I think that X", "it is not true that X", and so on.
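The averaging problem can be made concrete with a back-of-the-envelope simulation. The numbers below are invented purely for intuition: suppose a thought is coded by *which* neurons fire inside a voxel, not by how many fire overall. Then two very different micro-patterns can produce exactly the same averaged voxel signal:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model (not real recordings): one fMRI voxel averages over roughly
# 200,000 neurons. Two different thoughts each activate a different
# random 10% of those neurons, intermingled within the same voxel.
n_neurons = 200_000
n_active = n_neurons // 10

neurons_a = rng.choice(n_neurons, n_active, replace=False)  # thought A's subset
neurons_b = rng.choice(n_neurons, n_active, replace=False)  # thought B's subset

def voxel_signal(active_neurons):
    rates = np.zeros(n_neurons)
    rates[active_neurons] = 1.0   # active neurons fire; the rest are silent
    return rates.mean()           # the scanner only sees the average

# The micro-patterns are mostly different neurons...
overlap = len(set(neurons_a) & set(neurons_b)) / n_active
print(f"micro-pattern overlap: {overlap:.0%}")

# ...yet the voxel-level signal is identical for both thoughts.
print(f"voxel signal, thought A: {voxel_signal(neurons_a):.4f}")
print(f"voxel signal, thought B: {voxel_signal(neurons_b):.4f}")
```

In this caricature, the two thoughts share only about a tenth of their active neurons, yet the averaged signal cannot distinguish them at all. Real decoding succeeds only because, as with words and numbers, many thoughts also leave coarser, macroscopic traces that survive the averaging.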
There is no guarantee, of course, that these problems will not be solved—next week or next century, perhaps using electronic implants or miniaturized electro-magnetic recording devices. Should we worry then? Millions of people will rejoice instead. They are the many patients with brain lesions, whose lives may soon change thanks to brain technologies. In a motivated patient, decoding the intention to move an arm is far from impossible, and it may allow a quadriplegic patient to regain his or her autonomy, for instance by controlling a computer mouse or a robotic arm. My laboratory is currently working on an EEG-based device that decrypts the residual brain activity of patients in a coma or vegetative state, and helps doctors decide whether consciousness is present or will soon return. Such valuable medical applications are the future of brain imaging, not the devilish sci-fi devices that we wrongly worry about.