At the heart of scientific thinking is the systematic evaluation of alternative possibilities. The idea is so foundational that it’s woven into the practice of science itself.
Consider a few examples.
Statistical hypothesis testing is all about ruling out alternatives. With a null hypothesis test, one evaluates the possibility that a result was due to chance alone. Randomized controlled trials, the gold standard for drawing causal conclusions, are powerful precisely because they rule out alternatives: they diminish the plausibility of alternative explanations for a correlation between treatment and effect. And in science classes and laboratories across the globe, students are trained to generate alternative explanations for every observation—an exercise that peer reviewers take on as a professional obligation.
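To see that logic in miniature, consider a permutation test, one standard way of evaluating the chance-alone alternative. The sketch below is illustrative only, with invented numbers rather than data from any study; it asks how often randomly relabeling two groups would produce a difference at least as large as the one observed.

```python
import random

# Invented (illustrative) data: outcome scores for a treated and a control group.
treated = [7.1, 6.8, 7.9, 8.2, 6.5, 7.4]
control = [6.2, 5.9, 6.6, 7.0, 6.1, 6.4]

observed_diff = sum(treated) / len(treated) - sum(control) / len(control)

# Under the null hypothesis ("chance alone"), group labels are arbitrary,
# so shuffling them should produce differences this large reasonably often.
pooled = treated + control
n_treated = len(treated)
n_permutations = 10_000
count_as_extreme = 0

random.seed(0)  # for reproducibility
for _ in range(n_permutations):
    random.shuffle(pooled)
    perm_diff = (sum(pooled[:n_treated]) / n_treated
                 - sum(pooled[n_treated:]) / (len(pooled) - n_treated))
    if perm_diff >= observed_diff:
        count_as_extreme += 1

p_value = count_as_extreme / n_permutations
print(f"observed difference: {observed_diff:.2f}, p = {p_value:.4f}")
```

If random relabelings rarely reproduce the observed difference, the chance-alone explanation loses plausibility. Note what the test does and does not do: it rules out one alternative; it does not, by itself, establish the favored hypothesis.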
The systematic evaluation of alternative possibilities has deep roots in the origins of science. In the 17th century, Francis Bacon wrote about the special role of instantiae crucis, “crucial instances,” in guiding the intellect towards the true causes of nature by supporting one possibility over stated alternatives. Soon after, Robert Boyle introduced the experimentum crucis, or “crucial experiment,” a term subsequently used by Robert Hooke and Isaac Newton. A crucial experiment is a decisive test between rival hypotheses: a way to differentiate possibilities. (More than two centuries later, Pierre Duhem would reject the crucial experiment—but not because it involves evaluating alternative possibilities. He rejected crucial experiments because the alternative possibilities that they differentiate are too few: there are always additional hypotheses available for amendment, addition, or rejection.)
The systematic evaluation of alternative possibilities is a hallmark of scientific thinking, but it isn’t restricted to science. To arrive at the truth (in science or beyond), we generate multiple hypotheses and methodically evaluate how they fare against reason and empirical observation. We can’t learn without entertaining the possibility that our current beliefs are wrong or incomplete, and we can’t seek diagnostic evidence unless we specify the alternatives. Evaluating alternative possibilities is a basic feature of human thinking—a feature that science has successfully refined.
Within psychology, prompting people to consider alternative possibilities is recognized as a strategy for debiasing judgments. When prompted to consider alternatives (and in particular, to “consider the opposite” of a possibility under evaluation), people question assumptions and recalibrate beliefs. They recognize that an initial thought was misguided, a first impression uncharitable, a plan unrealistic. That such a prompt is effective suggests that in its absence, people don’t reliably consider the alternative possibilities that they should. Yet the basis for doing so was in their heads all along—an untapped potential.
Evaluating alternative possibilities ought to be better known because it’s a tool for better thinking. It’s a tool that doesn’t require fancy training or fancy equipment (beyond the fancy equipment we already contain in our heads). What it does require is a willingness to confront uncertainty and to boldly explore the space of discarded or unformulated alternatives. That’s a kind of bravery that scientists should admire.