For better or for worse, Thomas Nagel’s infamous question “What is it like to be a bat?” has stirred the philosophically minded for forty years. Actually, the question should be “What kind of bubbles does a bat have?” That is, what are the contents of a bat’s consciousness? We will probably never fully experience bat-brand consciousness, but we can observe the contents of a lonely single hemisphere in a human brain. The brain is full of bubbles, and when a brain is split, each half has its own set of bubbles that boil up. Since we now know that each half brain has some unique bubbles, might it be that each hemisphere harbors a different kind of overall conscious experience? To get a feel for this, consider what bubbles you don’t have. For instance, I can tell you I don’t have the abstract math bubble; therefore, I can tell you I feel frustrated when equations start popping up in lectures. Though I wish I could, I cannot tell you what it is like to grasp highly abstract math, but I bet it would be cool!
Rebecca Saxe, at MIT, has discovered in some intriguing studies that there is special human brain hardware in the right half brain that appears specialized for determining what the intentions of another person might be. When we interact with others, we are constantly and reflexively making assessments of their mental state and their intent in all of their actions. It is virtually automatic. It appears that children with autism lack this capacity to a large extent and, as a result, find social interactions difficult. As I discussed earlier, in informal psychological parlance it is called having a theory of mind. Saxe, using modern brain imaging techniques, discovered a brain area in the right hemisphere responsible for this capacity. As you might guess, this observation raises a new question. The Saxe finding would suggest that perhaps the left hemispheres of split-brain patients may not have access to the module that adds theory of mind to our cognition. What would a left hemisphere be like that didn’t have access to that capacity?
Michael Miller, my former student and now colleague, and Walter Sinnott-Armstrong, the distinguished philosopher, teamed up to examine the implications of the Saxe finding for split-brain patients. They wanted to determine if one separated hemisphere might evaluate moral issues differently than the other. Again, in a split-brain human, Saxe’s work suggests one hemisphere (the right) would have the module that considers the mind and intentions of others, while the other hemisphere (the left) would not. Upon separation, would the left hemisphere act differently, since it no longer possessed a module that evaluated the mental states and intentions of others?
Moral philosophers like to approach moral dilemmas as having either a deontological or a utilitarian nature. In plain English that means, “Do we solve the dilemma by considering what is inherently right, what our moral duty is, or does the solution reside in maximizing collective good?” There are many ways of phrasing this dichotomy and many ways to reveal whether someone is more deontological in their thinking or more utilitarian. In a series of cleverly devised tests, patients were told stories in which the main person did something evil but the outcome was, nonetheless, harmless: If a secretary wants to bump off her boss and intends to add poison to his coffee, but unknown to her, it actually is sugar, he drinks it, and he is fine, was that permissible? Or the story was about a person doing something that seemed innocent to them but proved to be fatal to someone else: If a secretary believes that she is adding sugar to her boss’s coffee, but it actually is poison accidentally left there by a chemist, and her boss drinks it and dies, was that a permissible action? The patient, upon hearing the full stories, had to simply judge whether the act the person did was “permissible” or “forbidden.”
Needless to say, most people judge an example with mal-intent as forbidden no matter what the outcome. Most people in this sense are deontological. Most people would judge an action by someone who had no malicious intentions to be permissible (although not always), even though it sometimes can end in tragedy. Split-brain patients act in a unique way. It appeared the left, speaking hemisphere initially offered a utilitarian response to all scenarios. Thus, if an act had mal-intent but no harm came from it, it was judged as “permissible.” And if an act did not have mal-intent, but resulted in harm, it was judged “forbidden.” Given the clarity of the stories used, this was a jarring result. What is going on? The disconnected left hemisphere is unable to take into account the intent of the person in the stories, acting as if it didn’t have a theory of mind.
Second, the patients would then frequently give spontaneous explanations as to why they had chosen the utilitarian result over the clearly deontological choice. It seemed they “felt” their judgments were not exactly copacetic, and they often rationalized their judgment without any prompting. Remember, the left hemisphere does have its interpreter, the module that tries to explain both the behavior it observes pouring out of the body and the emotions it feels. Keep in mind that an emotional reaction to something experienced by one side of the brain is felt by both. If the emotion was a result of the right brain’s experience, the left brain has no information about why it is feeling the emotion it is, but explains it anyway. So, when the right brain heard the left brain’s answer (even with limited language abilities there is still some comprehension in the right hemisphere), it was just as shocked as we were, resulting in an emotional reaction that didn’t match what the left hemisphere considered to be a reasonable answer. With the stage set for a major conflict like this, it was not surprising that the special module in the left hemisphere (the “interpreter” module—the one that is ever ready to explain away behaviors produced from the silent disconnected right hemisphere) jumped in and tried to explain what was going on. For example, in one scenario a waitress served sesame seeds to a customer while falsely believing that the seeds would cause a harmful allergic reaction. Patient J.W. judged the waitress’s action “permissible.” After a few moments, he spontaneously offered, “Sesame seeds are tiny little things. They don’t hurt nobody.”
In my metaphor, a bubble is the end result of the processing of a module or group of modules in a layered architecture. The special module that evaluates the intent of others in split-brain patients is disconnected and isolated from the speaking left hemisphere. As a consequence, the result of its processing doesn’t bubble up to contribute to or battle for dominance in the left hemisphere’s decision-making process. It can’t be part of that bubbling process if it is not physically located in the left hemisphere among the bubbles having access to language and speech. So, eerily, the knowledge of the intent of the other is absent. Yet bubbles from midbrain emotional processing do make it to both hemispheres. It is only when the right hemisphere hears the left hemisphere’s response, resulting in an emotional feeling felt by both hemispheres, that a mismatch is identified by the left hemisphere. That sets the process of justification in motion. The left hemisphere also has a lifetime of memories stored about the moral norms of the culture it has grown up in and can use these for justifications.
We are discovering that the subtleties of our psychological lives are being managed by specific modules in our brains. Again, the left brain, which benefits from modules that enable abstract thinking, verbal coding, and much more, does not have the module to take into account the intentions of others. Yet it has a powerful inference ability. If the result is good, it infers the means were okay. Thus, if the result is fine, the act is permissible. If the end is bad, the act is not permissible. What is the best for the most is okay. The uncanny and almost surreal aspect of these findings is the possibility that if the proper module that enables one to think about others is missing, one can’t seem to learn it.
Excerpted from THE CONSCIOUSNESS INSTINCT: Unraveling the Mystery of How the Brain Makes the Mind by Michael S. Gazzaniga. Published by Farrar, Straus and Giroux April 3rd 2018. Copyright © 2018 by Michael S. Gazzaniga. All rights reserved.