Every day, the media floods us with experts' predictions, each telling us what the future holds. Whether it's Facebook overtaking all entertainment platforms in three years, a regime change in North Korea, or the eventual collapse of the euro, we are constantly bombarded with forecasts. But how reliable are these predictions? The truth is that, for a long time, hardly anyone bothered to test the validity of expert forecasts. That changed when Philip Tetlock, a psychologist, decided to dive deep into the world of predictions.
Tetlock embarked on an ambitious 10-year study, evaluating 28,361 predictions made by 284 self-proclaimed professionals. His findings were eye-opening: the so-called experts were only marginally better than a random forecast generator. Those who gained the most media attention often fared the worst, and ironically, the doom-and-gloom forecasters were the least accurate. They predicted the collapse of Canada, Nigeria, China, and even the European Union. None of these collapses materialized; the predictions were barely more accurate than flipping a coin.
This raises a crucial question: if experts can be so wrong, why do we rely on their forecasts?
The Reality of Expert Predictions: The Disconnect Between Confidence and Accuracy
Predictions, particularly from self-proclaimed experts, have become a part of our daily news and media consumption. We hear forecasts on everything from stock market fluctuations to the outcome of global conflicts. The public tends to trust these predictions, especially when they come from well-known pundits, economists, or political analysts. However, a deeper look at the accuracy of these forecasts reveals an unsettling truth: experts aren’t as reliable as we like to believe.
Philip Tetlock's groundbreaking study, which analyzed 28,361 predictions from 284 so-called experts over a decade, showed that these professionals were only marginally better than random chance at predicting future events. This extensive study underscores a critical flaw in our reliance on predictions: the more media exposure someone has, or the more their title suggests expertise, the more likely they are to be wrong. Ironically, the experts who frequently make headlines for bold, far-reaching predictions are often the least accurate. This highlights a disconnect between the public's trust in these professionals and the actual efficacy of their forecasts.
Why does this happen? Experts are often incentivized to make bold claims because they generate media attention, which leads to fame, book deals, speaking engagements, and other lucrative opportunities. These media personalities, or “prophets of doom,” as Tetlock calls them, thrive on sensationalism. Their predictions are often framed as catastrophic: economic collapse, political upheaval, or disaster. This sensationalism grabs headlines but doesn’t necessarily reflect the likelihood of these events coming true. The more extreme and dramatic the prediction, the less likely it is to be correct. This reveals a systemic issue: regardless of accuracy, the media ecosystem rewards attention-grabbing forecasts, perpetuating a cycle where sensationalism trumps truth.
Moreover, the public often overlooks that these predictions are not based on a complete or objective assessment of all variables involved. Instead, they are based on selective interpretations of available data. For instance, the prediction that the euro would collapse was rooted in economic concerns and geopolitical tensions. Yet, these forecasts failed to account for the resilience and adaptability of the currency, the political will to stabilize it, and other mitigating factors. Experts may base their predictions on an incomplete set of factors, making their forecasts appear more solid than they truly are.
The Limits of Prediction: What We Can and Can’t Predict
While it’s easy to cast a wide net and declare all predictions inaccurate, the truth is that some predictions are easier to make than others. The more stable and controlled a system, the more likely predictions will come true. For example, predicting how much a person will weigh in a year is a relatively simple forecast, assuming no major changes in diet or lifestyle. This type of prediction is based on historical data and stable trends, which allows for greater confidence in its accuracy.
On the other hand, the more complex and interconnected the system, the less reliable the forecast. Global phenomena such as climate change, the price of oil, or fluctuations in exchange rates are notoriously difficult to predict. Take climate change as an example. While scientists can study past trends and predict general patterns, the exact timing and impacts of climate change are almost impossible to forecast with precision. The interconnectedness of environmental, social, economic, and political factors makes the future of our planet highly uncertain. Predicting a global temperature rise within a specific time frame, or the exact consequences of such a rise, involves too many variables to make a reliable forecast.
Similarly, predicting the price of oil involves analyzing not only the current supply and demand but also political tensions, technological advancements, natural disasters, and shifts in energy consumption. These are all interconnected and unpredictable forces that can radically change the trajectory of oil prices in ways that no expert can foresee with absolute certainty. Just as we saw in 2020, the COVID-19 pandemic sent shockwaves through global markets and brought about unforeseen shifts in demand and pricing. Economic modeling can offer broad projections, but the unpredictable nature of global events means that predictions about oil or other commodities are inherently fraught with risk.
The same principle applies to technological advancements. Technological innovation is inherently unpredictable. If we could foresee the inventions of tomorrow with certainty, we would already have created them today. The breakthroughs that shape our world often emerge from areas we never expected. Consider the development of the internet, smartphones, or even artificial intelligence. These innovations didn’t follow a predictable path; they arose from unexpected circumstances, discoveries, and shifts in how we interact with technology. The idea that we can predict the next big technological leap is speculative, and no expert can offer reliable insights into what will emerge in the next decade.
Evaluating Predictions: Two Critical Questions to Ask
Given the inherent uncertainty of predictions, it's important to approach them with a critical mindset. When confronted with a forecast, it's essential to consider the expert's motivations. The first critical question to ask is: What incentive does the expert have?
Many so-called experts, especially media personalities or self-proclaimed gurus, are motivated by profit and the desire for visibility. These individuals often make bold, dramatic forecasts to generate media coverage, sell books, or secure speaking engagements. Sensational predictions are often framed in a way that captures attention, even if they are not grounded in reality. The expert’s income is tied to their ability to remain relevant, not necessarily to the accuracy of their forecasts. This creates a situation where making wild predictions can be more beneficial than providing cautious, measured forecasts. Their incentive is to maintain visibility, regardless of whether their predictions are correct.
On the other hand, experts employed by institutions may face pressure to deliver accurate forecasts, but their incentives are often shaped by institutional goals, funding sources, and career progression. These experts may not be financially rewarded for the accuracy of their predictions, but they may receive promotions, additional funding, or greater prestige within their organizations if their predictions align with their institution's objectives. This can result in biased forecasting, where predictions are shaped by the institution's desires rather than an unbiased analysis of the available data.
The second critical question is: how good is the expert's success rate? A prediction might sound convincing in the moment, but how often have similar predictions come true in the past? A reliable expert should have a track record that can be scrutinized. How many forecasts has the expert made in the last five years, and how many were accurate? This is vital information that is often missing from media coverage. If an expert has a long history of inaccurate predictions, why should we trust their future forecasts?
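The track-record check described above can be made concrete. A standard way to score probabilistic forecasts is the Brier score, which penalizes confident misses heavily; the expert's record below is entirely hypothetical, used only to illustrate the calculation.

```python
# A minimal sketch of scoring a forecaster's track record with the Brier score.
# The expert_record data is hypothetical, purely for illustration.

def brier_score(forecasts):
    """Mean squared error between stated probabilities and actual outcomes.
    0.0 is a perfect score; always answering 50/50 earns exactly 0.25."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Each pair: (stated probability the event happens, outcome: 1 = happened, 0 = did not)
expert_record = [(0.9, 0), (0.8, 1), (0.7, 0), (0.95, 1), (0.85, 0)]
coin_flip = [(0.5, outcome) for _, outcome in expert_record]

print(f"Expert Brier score:    {brier_score(expert_record):.3f}")  # higher = worse
print(f"Coin-flip Brier score: {brier_score(coin_flip):.3f}")
```

On this (invented) record, the confident expert scores worse than the coin flip, which is exactly the pattern Tetlock found for the boldest forecasters: it is the combination of high confidence and frequent misses that the Brier score punishes.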
Unfortunately, most media outlets fail to provide this context, and the public is left to assume that an expert's reputation is synonymous with accuracy. For example, an economist with a prestigious title may appear trustworthy, but this does not guarantee that their forecasts will be any more accurate than those of a lesser-known analyst. Media outlets should be more transparent about the track records of the experts they feature, giving the public the information needed to judge the reliability of their forecasts.
The Consequences of Predictions: The Need for Accountability
The lack of accountability for inaccurate predictions is a major issue in forecasting. In many professions, failure comes with consequences: a doctor who makes an incorrect diagnosis can harm patients and damage their own career; an engineer who makes a mistake in the design of a bridge can cause catastrophe. Yet in the forecasting world, there are rarely any penalties for being wrong.
This lack of accountability creates a perverse incentive. Experts can make bold predictions with little fear of the consequences. If their forecast happens to be correct, they receive praise and recognition, but if they are wrong, they simply move on to the next prediction, unaffected. This dynamic encourages experts to generate as many predictions as possible, increasing the odds that one will be accurate by sheer coincidence. It also means that the public is exposed to a constant stream of inaccurate forecasts without any way to hold the experts accountable.
One possible solution to this problem is to introduce a system of accountability. Experts could be required to invest a small amount of money into a "forecast fund" for every prediction they make. If the prediction is correct, they could receive their money back with interest. If the prediction is wrong, the money could go to charity. This system would create a financial incentive for experts to make more thoughtful and accurate forecasts, knowing there are real consequences for being wrong. It would also encourage greater transparency, as experts would have to show that their predictions are grounded in solid data rather than speculation.
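The forecast-fund mechanism can be sketched as a toy settlement rule. The stake size, interest rate, and charity rule below are illustrative assumptions, not a worked-out policy:

```python
# A toy model of the "forecast fund" idea: stake per prediction,
# stake plus interest returned on a correct call, stake forfeited
# to charity on a wrong one. All numbers are illustrative.

STAKE = 100.0    # amount deposited per prediction
INTEREST = 0.05  # bonus rate paid out for a correct call

def settle(predictions):
    """predictions: list of (expert_name, was_correct) pairs.
    Returns (payout per expert, total amount sent to charity)."""
    payouts = {}
    charity = 0.0
    for expert, correct in predictions:
        if correct:
            payouts[expert] = payouts.get(expert, 0.0) + STAKE * (1 + INTEREST)
        else:
            charity += STAKE  # a wrong call forfeits the stake
    return payouts, charity

payouts, charity = settle([("Alice", True), ("Bob", False), ("Bob", False)])
print(payouts)  # Alice recovers her stake plus interest
print(charity)  # Bob's two forfeited stakes go to charity
```

Even in this toy form, the design choice is visible: a serial shotgun forecaster bleeds money on every miss, while a cautious forecaster who predicts rarely but accurately comes out ahead.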
Rethinking Predictions: The Uncertainty of the Future
Ultimately, the future is inherently uncertain, and this should shape our approach to predictions. While we can analyze trends and data to make educated guesses, there are no guarantees. The world’s complexity means that predicting the future is often more about managing risk than knowing what will happen with absolute certainty.
We live in a constantly evolving world. New technologies, political developments, and unexpected events can alter the course of history in ways that no one can foresee. This is why it’s important to approach predictions with skepticism and a healthy dose of caution. Rather than relying on experts who make bold, sweeping forecasts, we should embrace the uncertainty of the future and focus on strategies that allow us to adapt to whatever comes our way.
Instead of seeking certainty in predictions, we should focus on building resilience and adaptability. The ability to adjust to new information, respond to challenges, and pivot when necessary is far more important than any forecast. After all, the future is always in motion, and the best we can do is prepare ourselves to face whatever it holds.
Conclusion: Rethinking Our Relationship with Predictions
Ultimately, predictions should be taken with a grain of salt. While some may be grounded in fact, many are little more than educated guesses or wild speculation. Experts are not infallible, and their personal biases or professional incentives often shape their forecasts. Therefore, we must be discerning when we encounter predictions.
As former British Prime Minister Tony Blair wisely said, “I don’t make predictions. I never have, and I never will.” Perhaps this is the most sensible approach of all—accepting that the future is inherently uncertain and unpredictable and embracing the unknown rather than relying on others’ flawed forecasts.
This article is part of The Art of Thinking Clearly Series based on Rolf Dobelli’s book.