Imagine a scenario where one million monkeys engage in random stock market speculation. After a week, half of them make a profit while the other half face losses. The profitable ones continue, and the rest are sent home. This process continues for several weeks, resulting in a final survivor—the “success monkey” who seemingly possesses extraordinary stock-picking abilities. The media would naturally be intrigued by this remarkable primate and scramble to uncover the secrets behind its success. However, this story serves as a powerful illustration of the outcome bias: the cognitive fallacy of evaluating decisions solely by their outcomes rather than by the decision-making process that produced them.

The Monkey Stock Market Hypothesis: A Lesson in Randomness

The scenario begins with an experiment involving one million monkeys, each randomly buying and selling stocks on the market. These monkeys know nothing about the market or its trends — they act purely by chance. After one week, half of these monkeys will have made a profit, while the other half will have suffered a loss. In the second week, the monkeys who made a profit continue, while those who lost are removed. This process continues for several weeks, and by the tenth week, about 1,000 monkeys remain — those whose random picks have happened to pay off. By the twentieth week, just one monkey remains. This “success monkey” has seemingly made the right choices every time, turning a modest investment into a fortune.

Now, let’s consider how this success is interpreted. The media, eager for a story, will home in on the “success monkey.” They will search for a pattern or a strategy that explains why this monkey has succeeded. Maybe it’s the way the monkey eats bananas or the specific corner of its cage it favors. Perhaps it’s its swinging technique or how it pauses thoughtfully while grooming itself. The underlying assumption is that some specific behavior must have made this monkey successful — that success always has a reason, that it is not random, and that it must be repeatable and explainable.

This scenario illustrates how outcome bias works in the real world. We tend to explain success by looking for patterns and “success principles” to describe the outcome. In this case, the media assumes that because the monkey succeeded, it must have done something exceptional. But the reality is that success in this case is purely random. The “success monkey” is simply the beneficiary of luck, not skill or a unique process. However, the outcome bias leads us to focus on success and craft a more satisfying narrative — as though success is always the result of identifiable, rational decisions. In truth, randomness played a significant role, and no specific behavior can reliably replicate the outcome.
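The arithmetic behind the thought experiment is just repeated halving. Here is a minimal sketch (the population size and week counts follow the scenario above; the halving is the expected result of a fair coin flip each week):

```python
# Expected number of surviving "investor" monkeys after each week,
# when a fair coin flip (pure chance) decides profit or loss and
# only the profitable half continues.
survivors = 1_000_000
history = {}
for week in range(1, 21):
    survivors /= 2          # half survive, in expectation
    history[week] = survivors

print(round(history[10]))   # about 1,000 after ten weeks (977)
print(round(history[20]))   # a single "success monkey" after twenty
```

No individual monkey does anything differently; the lone survivor is simply whichever one the coin flips happened to favor.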

The Historian Error: Pearl Harbor and Retrospective Judgment

Outcome bias is often intertwined with the historian error, the tendency to assess decisions and events based on what we know in hindsight rather than on the knowledge available when the decision was made. A famous example of this error is the judgment of the U.S. military’s actions before the attack on Pearl Harbor in 1941. From a modern perspective, it may seem clear that an attack was imminent, given the intelligence reports warning of a possible Japanese assault. But in 1941, the situation was far more ambiguous. Intelligence was fragmented and contradictory, and many believed that Japan was focused on territories elsewhere in the Pacific, not on attacking U.S. soil.

Looking back with the knowledge that Japan did attack Pearl Harbor makes it easy to criticize military leaders for not evacuating the base or for not taking more aggressive measures to prepare for an attack. However, evaluating this decision solely through the lens of the outcome, using the hindsight of knowing the attack happened, distorts the reality of the situation. At the time, there was no certainty that an attack would occur, and the available intelligence could be interpreted in various ways. Some reports pointed to other possibilities, and military leaders had to make decisions based on incomplete and often contradictory information.

This example underscores the dangers of outcome bias when assessing historical decisions. Had the military acted on the signals that suggested an attack, the assault might have been thwarted — but it is equally possible that an evacuation would have proved premature or unnecessary. The historian error arises when we judge a decision with full knowledge of the result instead of considering only the information available at the time. Such judgments fail to reflect the complexity of the decision-making process and ignore the uncertainty and ambiguity that the decision-makers faced.

When evaluating historical events, it’s essential to filter out the influence of what we know now and consider the context in which the decisions were made. Doing so gives us a more accurate understanding of the challenges and thought processes involved in those decisions. Evaluating actions in hindsight without accounting for the uncertainties at the time distorts our perception of events and makes us more likely to commit outcome bias.

The Surgeon Experiment: Small Samples, Big Bias

Now, let’s turn our attention to an experiment designed to evaluate the performance of three heart surgeons, each tasked with performing five challenging surgeries. The mortality rate for this type of surgery has remained stable over the years at about 20%. The results of their surgeries are as follows:

  • Surgeon A performs five surgeries and loses no patients.
  • Surgeon B performs five surgeries and loses one patient.
  • Surgeon C performs five surgeries and loses two patients.

At first glance, many people would rate Surgeon A as the best, followed by B and C. After all, Surgeon A has the best outcome with zero deaths, so it seems logical to assume they are the most skilled. However, this outcome-based assessment is misleading and falls victim to outcome bias. The small sample size—just five surgeries per surgeon—means that the results can be highly influenced by random factors.

The small number of surgeries makes it difficult to draw meaningful conclusions. A surgeon’s performance can be influenced by many factors unrelated to their skill, such as patient health, the complexity of the surgery, or even external complications that arise during the procedure. A single poor outcome may not necessarily reflect the surgeon’s ability, and conversely, a string of good outcomes does not guarantee superior skill.

The key to a fair and accurate assessment lies in the sample size. With only five surgeries, randomness plays a large role in determining the results. A more reliable evaluation would require a larger sample size — hundreds or even thousands of surgeries. This would help mitigate the influence of random chance and provide a more accurate picture of a surgeon’s true capabilities. Moreover, assessing a surgeon based on their decision-making process, preparation, and technique is more important than simply looking at the raw outcomes.
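The role of chance here can be made concrete with the binomial distribution. A short sketch (the 20% mortality rate and five surgeries come from the setup above; the point is that all three records are common outcomes even for equally skilled surgeons):

```python
# Probability of each surgeon's record if all three are exactly
# average, i.e. every patient independently has a 20% mortality risk.
from math import comb

def p_deaths(k, n=5, p=0.2):
    """Binomial probability of exactly k deaths in n surgeries."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

for k in range(3):
    print(f"P({k} deaths in 5 surgeries) = {p_deaths(k):.3f}")
```

An exactly average surgeon has about a 33% chance of Surgeon A’s perfect record, a 41% chance of Surgeon B’s record, and a 20% chance of Surgeon C’s — so the three outcomes, by themselves, tell us almost nothing about relative skill.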

The takeaway is that evaluating performance based on outcomes alone is misleading, especially when the sample size is small. In fields like medicine, where results can be influenced by myriad external factors, focusing solely on outcomes is unethical and unfair. It is far more valuable to assess the process and the quality of the decision-making involved in each surgery. That’s where true competence lies — in the preparation, skill, and judgment involved, not in whether or not a single patient survives.

The Fallacy of Judging Decisions Based on Results

Outcome bias stems from the fallacy that the quality of a decision can be accurately assessed based solely on its result. In reality, the outcome of any decision is often influenced by factors beyond the decision-maker’s control. Randomness, external influences, and unforeseen variables can all affect the result, making it unreliable as the sole measure of the decision’s quality.

Take, for example, a business leader who takes a bold risk by launching a new product. The decision is backed by careful research, market analysis, and a solid strategy. However, due to unforeseen circumstances — such as an economic downturn, a competitor’s surprise move, or shifting consumer preferences — the product fails to gain traction and results in financial losses. From an outcome-based perspective, this would be considered a bad decision. But is it? No. The decision was based on rational thought and a well-structured strategy, and external factors were the main contributors to its failure.

On the other hand, consider a situation where a business leader makes a similarly bold decision with minimal research, driven by gut feeling or a hunch. The product launch, against all odds, turns out to be a massive success, thanks to unforeseen trends or random factors like viral marketing. In this case, the outcome — a successful product launch — might lead us to believe that the decision-making process was sound, even though the underlying strategy was poor.

These examples demonstrate the danger of judging decisions solely by their outcomes. Good decisions don’t always lead to good results, and bad decisions don’t always lead to bad ones. Randomness plays a major role in shaping results, and it’s essential to separate the process from the outcome in our evaluations. Focusing on the decision-making process, the reasoning behind the choice, and the information available at the time offers a more accurate and fair assessment of the decision.

The Process Over Outcome

To counteract the negative effects of outcome bias, focusing on the process rather than the outcome is crucial. The process refers to the steps taken, the information considered, and the reasoning behind a decision. A decision made with rationality, supported by available evidence, and grounded in sound judgment is inherently good, regardless of whether the outcome is successful.

This approach is particularly important in environments where outcomes are influenced by randomness or external factors. In these situations, the decision-maker may not have control over the result, but they do control the process. By focusing on the quality of the decision-making process — evaluating the data, considering various options, and weighing potential risks — we can make better-informed decisions and avoid falling into the trap of outcome bias.

Moreover, by recognizing that outcomes do not always reflect the process, we can reduce the emotional impact of failure and avoid the temptation to overestimate the value of success. A poor outcome does not necessarily mean a poor decision, just as a good outcome does not guarantee a good decision. The key to long-term success lies in consistently making rational, well-thought-out decisions, not relying on the whims of luck or chance.

By focusing on the process rather than the result in our evaluations, we can better understand the true value of our decisions. This mindset fosters a growth-oriented approach, where learning from successes and failures becomes the priority. When we assess decisions based on their reasoning and the thought process behind them, we not only improve our decision-making skills but also create a more fair and rational environment for evaluating the choices of others.

Conclusion

To avoid falling into the trap of outcome bias, it is essential to focus on the decision-making process rather than the result. A good decision is one made with rational thought based on the best information available at the time. This approach will not always lead to success, but it improves the odds of good outcomes over the long run. The key is recognizing that outcomes, particularly in complex systems, are often influenced by factors beyond our control.

So, the next time you evaluate a decision, whether it’s your own or someone else’s, take a step back. Look at the reasoning behind the choice and resist the urge to judge it based solely on the outcome. Was the decision process sound? Did reason and available data inform it? If so, stick with your approach, even if the result isn’t what you hoped. After all, success is not always a matter of luck — it’s about making decisions that are rational, grounded in evidence, and resilient enough to weather the randomness of life.

This article is a part of The Cognitive Bias Series based on The Art of Thinking Clearly by Rolf Dobelli.