In decision-making, we often fail to assess the true weight of probability accurately. Instead, our emotional impulses and biases skew our judgment, particularly when faced with seemingly large rewards or the potential for catastrophic outcomes. Consider two games of chance: one offers a $10 million prize, while the other promises a more modest $10,000. The first game has a slim chance of winning—one in 100 million—while the second game’s odds are far better at one in 10,000. Which would you choose?
In theory, the rational decision is clear. Though it offers a smaller reward, the second game has a much higher chance of success and therefore a far better expected value. But when emotions come into play, we are irresistibly drawn to the prospect of the huge jackpot despite the extremely low odds of winning. This attraction to big numbers, no matter how improbable, drives the popularity of lotteries such as Mega Millions and Powerball, where the potential winnings reach astronomical figures but the odds of success are vanishingly small.
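A quick expected-value calculation makes the gap concrete. The sketch below, in Python, uses the figures from the example above: per ticket, the modest game is worth a dollar while the jackpot game is worth a dime.

```python
# Expected value of a simple gamble: payout times the probability of winning.
def expected_value(payout: float, probability: float) -> float:
    return payout * probability

# Game 1: $10 million prize at odds of one in 100 million.
jackpot_game = expected_value(10_000_000, 1 / 100_000_000)  # $0.10 per play

# Game 2: $10,000 prize at odds of one in 10,000.
modest_game = expected_value(10_000, 1 / 10_000)            # $1.00 per play

print(f"Jackpot game: ${jackpot_game:.2f} per play")
print(f"Modest game:  ${modest_game:.2f} per play")  # ten times as much
```

The rational ranking is unambiguous, yet the $10 million figure is what our attention fixes on.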
The Experiment: Emotions Over Probability
A 1972 experiment offers a striking example of how we process risk and uncertainty. Participants were divided into two groups. Group A was told they would definitely receive an electric shock, while Group B was informed there was a 50% chance of receiving the same shock. The expectation was that Group B, facing a lower probability of the shock, would experience less anxiety. However, the results were unexpected: both groups showed the same level of anxiety and stress, as measured by heart rate, sweating, and other physiological signs. This demonstrates that human beings do not naturally evaluate risk by its probability. Instead, we react to the anticipated magnitude of the outcome, whether it’s the jackpot’s expected size or the shock’s severity.
What’s most revealing about this experiment is that it challenges the commonly held belief that people are risk-averse and that reducing the likelihood of an adverse outcome should automatically reduce anxiety. In reality, we respond far more to an event’s intensity than to how likely it is to happen. The emotional reactions triggered by the prospect of a large reward or a severe consequence override our capacity to assess the probabilities involved, leaving us prone to irrational decisions driven by emotion rather than analysis.
In a later stage of the experiment, the researchers reduced the probability of the shock for Group B from 50% to 20%, then to 10%, and finally to just 5%. Despite these reductions in risk, anxiety levels remained unchanged. This tells us that our emotional response to a risk is largely insensitive to its actual probability. If anything, the experiment emphasizes that we focus on potential outcomes, good or bad, without adequately weighing the odds of those outcomes occurring.
It was only when the researchers increased the intensity of the shock itself that anxiety rose in both groups. This finding speaks to a fundamental truth about human behavior: we may be oblivious to changes in a risk’s probability, but we react strongly to changes in its intensity. We care far more about the severity of an outcome than about the probability of it happening, a pattern that leads us to choices that are not always in our best interest. If the probability of a negative outcome is low but the consequences are severe, we may overestimate the risk and take irrational actions, such as avoiding an otherwise safe situation. Conversely, when the potential gain is large but the probability is exceedingly small, as with a massive lottery jackpot, we are drawn in emotionally and disregard the minuscule likelihood of success.
Neglect of Probability in Real-World Decisions
The concept of neglecting probability is not confined to psychological experiments; it is prevalent in real-world decision-making across many areas of life. One of the most common places we fall victim to it is in investing. The allure of startups and new ventures, fueled by the promise of incredible returns, often blinds investors to the harsh reality that the chances of success are slim. We hear of a few successful companies, like Google or Apple, that started from humble beginnings, and this fosters the illusion that every startup has similar potential. In truth, most startups fail, and the likelihood of a given business achieving substantial growth is very small.
This tendency is driven by an emotional response—an overwhelming desire for a big win. The thought of exponential returns evokes excitement and clouds rational judgment, prompting people to overlook the risks involved. Investors may hear about a new company with exciting technology or a disruptive business model, and the potential reward overshadows their rational understanding of the risks. The result is that they take on disproportionate amounts of risk, gambling on the possibility of success without considering the probability of failure.
Similarly, in matters of safety, we are prone to irrational decisions driven by emotional responses to catastrophic events. Take, for instance, a widely publicized airplane crash. Even though the probability of a crash is minuscule, the emotional impact of the tragedy leads many people to cancel their flights. Their decision is based on fear of the possible outcome rather than a logical evaluation of the statistical likelihood of such an event. The fear is amplified by the emotional weight of the news coverage, and individuals make choices that are out of sync with reality. This is a textbook example of how emotional impulses, rather than rational thinking, guide our behavior regarding risk and probability.
In financial decision-making, this emotional bias is compounded by our tendency to focus solely on potential rewards. Amateur investors often make decisions based on returns alone, ignoring that higher returns usually come with higher risk. A stock with a 20% historical return may seem like a far better investment than a property yielding a mere 10%. However, this comparison neglects the volatility that comes with stocks, where market fluctuations and external factors can heavily influence the value of the investment. Real estate, while often more stable, carries its own risks, such as property damage, tenant issues, or changes in the local housing market. By failing to assess risk properly, investors make suboptimal choices that can lead to financial ruin.
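One rough way to put the two investments on the same footing is to divide expected return by volatility, a simplified Sharpe-style ratio that ignores the risk-free rate. The volatility figures in the sketch below are illustrative assumptions, not data from the article; the point is only that a return comparison without a risk comparison is incomplete.

```python
# Return alone says nothing about risk. A crude correction is return per
# unit of volatility (standard deviation of returns), a simplified
# Sharpe-style ratio. All numbers here are illustrative assumptions.

def return_per_unit_risk(expected_return: float, volatility: float) -> float:
    return expected_return / volatility

stock = return_per_unit_risk(expected_return=0.20, volatility=0.30)      # ~0.67
property_ = return_per_unit_risk(expected_return=0.10, volatility=0.08)  # ~1.25

print(f"Stock:    {stock:.2f} return per unit of risk")
print(f"Property: {property_:.2f} return per unit of risk")
# Under these assumed volatilities, the "mere" 10% yield is the better
# risk-adjusted deal despite the lower headline return.
```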
The Zero-Risk Bias
The zero-risk bias is another manifestation of our neglect of probability. This cognitive bias leads us to irrationally prefer eliminating a risk completely, even when the elimination is the smaller improvement. Consider two water-treatment methods: Method A reduces the risk of contamination from 5% to 2%, a three-percentage-point reduction. Method B eliminates a smaller risk entirely, reducing the chance from 1% to 0%, a one-point reduction. Although Method A prevents three times as many cases of harm, most people choose Method B, simply because it eliminates all risk.
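The arithmetic behind that comparison is worth spelling out. The sketch below assumes a hypothetical population of 100,000 people so the percentage reductions translate into concrete counts:

```python
# Cases prevented = (risk before - risk after) * population.
# The population of 100,000 is a hypothetical figure, used only
# to turn the percentages from the example into concrete counts.
population = 100_000

method_a = (0.05 - 0.02) * population  # 3,000 cases prevented
method_b = (0.01 - 0.00) * population  # 1,000 cases prevented

print(f"Method A prevents {method_a:,.0f} cases")  # three times as many
print(f"Method B prevents {method_b:,.0f} cases")  # but leaves zero residual risk
```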
This preference for zero risk is rooted in a deep emotional desire for certainty and security. The idea of “complete safety” sounds appealing, even when it is not the most beneficial option. Faced with a choice between a larger reduction of a bigger risk and the complete elimination of a smaller one, people often go for the latter. This behavior is driven by the comfort of total elimination, even when the overall benefit is smaller.
The zero-risk bias is not just a quirk of individual decision-making; it can seriously affect public policy and lawmaking. The Delaney Clause of the 1958 U.S. Food Additives Amendment, for example, banned food additives found to be carcinogenic, aiming for zero risk of cancer. However, this well-intentioned move backfired. To comply with the ban, food manufacturers replaced the banned substances with non-carcinogenic alternatives that were sometimes more dangerous in other ways. In this case, the zero-risk approach led to more harm than good, illustrating how our preference for complete safety can produce poor outcomes.
This bias is further exacerbated by the fact that complete risk elimination is almost always impossible. Whether it’s food safety, environmental regulations, or medical practices, striving for zero risk often involves prohibitive costs and unintended consequences. We cannot eliminate every potential hazard, and striving for absolute safety in all situations can lead to inefficiency, increased costs, and even new risks. Pursuing zero risk rarely makes sense in a world where risks are part of every decision.
The Cost of Zero-Risk Thinking
Economically, pursuing zero risk is often not just irrational; it is downright unwise. As Paracelsus famously observed, the dose makes the poison, and the same logic applies to risk. In any given situation, a moderate level of risk may be more manageable and less costly than an attempt to eliminate all risk. The zero-risk mindset ignores the fact that the costs of reducing risk can outweigh the benefits.
Take, for example, food safety regulation. The 1958 Delaney Clause, which aimed to eliminate carcinogens from food, is a prime illustration of the problems with zero-risk thinking. By mandating a total ban on cancer-causing additives, the law forced manufacturers to turn to alternative substances that were not necessarily safer, just non-carcinogenic. The result was higher costs for food producers, passed on to consumers, and a situation where eliminating one risk introduced others.
In most areas of life, striving for zero risk creates inefficiencies. Imagine trying to regulate emissions from factories or implement safety protocols in a workplace under the premise that no risk is acceptable. Such a standard would require overly stringent regulations that are impossible to enforce and economically unsustainable. The costs of such measures often outweigh the benefits, and resources get wasted on eliminating trivial risks while larger but more manageable risks are ignored. The pursuit of zero risk is rarely the most economical or practical approach.
This idea extends to manufacturing, healthcare, and even public policy. Often, the best approach is not to eliminate all risk but to manage it effectively. Understanding the limits of what can be achieved, and recognizing when the cost of safety outweighs the benefit, is key to making more rational decisions.
Our Irrational Responses to Risk
Human beings are notorious for making irrational decisions when it comes to risk. Our emotional responses to threats, particularly those that seem catastrophic, often outweigh any logical consideration of how likely those threats are to materialize. This tendency is particularly evident with low-probability risks. Studies have shown, for example, that people fear a 1% chance of contamination almost as much as a 99% chance. This irrational fear is a byproduct of our psychology: the emotional weight of the potential consequences clouds our rational thinking.
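Prospect theory, which the article does not mention, formalizes exactly this flattening with a probability weighting function: small probabilities are overweighted and large ones underweighted, so the felt difference between 1% and 99% is far smaller than the objective one. The sketch below uses the Tversky–Kahneman (1992) form with their estimated parameter for gains, gamma ≈ 0.61.

```python
# Probability weighting function from cumulative prospect theory
# (Tversky & Kahneman, 1992). Low probabilities are overweighted and
# the curve is flat through the middle, mirroring the "1% feels almost
# like 99%" finding described above. This formalization is not from
# the article; it is a standard model of the same phenomenon.

def felt_weight(p: float, gamma: float = 0.61) -> float:
    # gamma = 0.61 is Tversky & Kahneman's estimate for gains.
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

for p in (0.01, 0.05, 0.50, 0.95, 0.99):
    print(f"objective p = {p:.2f} -> felt weight = {felt_weight(p):.2f}")
# A 1% chance is felt as roughly 6%, a 99% chance as roughly 91%:
# the subjective scale is compressed toward the middle.
```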
The fear of a small but highly dramatic risk—such as the chance of contamination by a toxic substance—often leads people to make decisions that defy the logic of probability. Even when the actual chance of exposure is minimal, the emotional impact of imagining the catastrophic consequences causes people to react as if the risk were far higher. This phenomenon is exacerbated when the topic at hand is something emotionally charged, like the fear of nuclear contamination or chemical poisoning.
This emotional response to risk can be seen in many public debates about safety and regulation. Whether the issue is genetically modified organisms, nuclear energy, or chemical pollutants, the emotional intensity surrounding it often drives people to fear statistically insignificant risks. As a result, policies are shaped more by public fear than by objective risk assessment. These irrational responses to low-probability, high-impact events lead to a misallocation of resources: attention goes to minimizing unlikely but emotionally charged risks while more probable, manageable risks are ignored.
Understanding our irrational responses to risk is crucial to improving decision-making. To make better choices, individually and collectively, we must learn to evaluate risks rationally rather than emotionally, weighing the probabilities of different outcomes instead of reacting out of fear. Only then can we make informed, balanced decisions that rest on sound judgment rather than emotional impulses.
Conclusion: The Need for a Rational Approach to Risk
The neglect of probability is an inherent flaw in human decision-making. Whether it’s the attraction to massive jackpots with minuscule chances of success, the pursuit of zero risk, or the irrational fear of unlikely dangers, our emotional responses often overshadow our ability to make logical decisions based on probability. To make better choices in both personal and professional life, we must learn to understand and appreciate the true nature of risk, recognizing that the size of the outcome does not necessarily correlate with the likelihood of its occurrence. A rational approach to risk involves acknowledging probabilities, considering potential rewards and risks, and making decisions based on a clear-headed assessment of both.
This article is part of The Art of Thinking Clearly Series based on Rolf Dobelli’s book.