# Superforecasting
**Philip Tetlock and Dan Gardner**

---
_The difference between good and bad forecasters isn't intelligence. It's process._
Tetlock's landmark research is genuinely embarrassing for anyone who earns money giving predictions. Most expert forecasters (the analysts, pundits, and specialists who appear in print and on panels) perform about as well as chance when their track records are scored systematically. The problem is that most forecasters operate in an environment where predictions are vague, time horizons are ill-defined, and accountability is absent. Forecasting in those conditions is a low-stakes social performance, not a truth-seeking exercise.
But some forecasters beat the odds, significantly and persistently. The question is what separates them, and the answer isn't what most people expect.
---
**The strongest predictor of forecasting performance is commitment to self-improvement.** Not intelligence. Not domain expertise. Not access to privileged information. The willingness to track your predictions, identify errors, update your approach, and repeat. This is a skill, not a talent, which means it can be built through deliberate practice rather than selected for at hiring. The implication for any organisation trying to improve its collective judgment is uncomfortable: you probably need to change the feedback structures before you change the people.
Calibration matters more than confidence. Superforecasters express probabilities precisely: not "likely" but "73%." This granularity forces clear thinking and, more importantly, enables feedback. If you say something has a 73% chance, you can later check whether your 73% calls, taken together, come true about 73% of the time. Vague language like "probably" or "could happen" makes learning impossible because there's nothing to score.
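A minimal sketch of what that scoring loop might look like, in Python. The prediction data are invented for illustration and the 10%-wide bins are an arbitrary choice; the Brier score itself, though, is the measure Tetlock's forecasting tournaments actually used.

```python
from collections import defaultdict

# Each record: (stated probability, whether the event occurred).
# All numbers here are invented for illustration.
predictions = [
    (0.73, True), (0.73, True), (0.73, False),
    (0.90, True), (0.90, True), (0.60, False),
    (0.60, True), (0.30, False), (0.30, True),
]

# Brier score: mean squared error between stated probability and outcome
# (1 if it happened, 0 if not). Lower is better; always saying 50% scores 0.25.
brier = sum((p - float(hit)) ** 2 for p, hit in predictions) / len(predictions)
print(f"Brier score: {brier:.3f}")

# Calibration: bucket forecasts into 10%-wide bins and compare the stated
# probability with the observed frequency in each bin.
bins = defaultdict(list)
for p, hit in predictions:
    bins[round(p, 1)].append(hit)

for level in sorted(bins):
    hits = bins[level]
    print(f"said ~{level:.0%}: came true {sum(hits) / len(hits):.0%} of the time (n={len(hits)})")
```

The printout matters for the comparison, not the numbers: a well-calibrated forecaster's ~70% bin should come true about 70% of the time, and a falling Brier score over months of predictions is evidence that the process is improving.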
---
**The method has a structure.** Unpack the question into components. Adopt the outside view first: put the problem into a comparative perspective, treat it as a special case of a wider class of events, and ask what the base rate is for this type of outcome. This is [[Priors]] in practice, and it's the step that most people skip, seduced by the specific details of the case in front of them. Then adopt the inside view: ask what makes this particular situation different from the reference class. Synthesise the two perspectives, express a precise probability, track the result, and update.
The outside view corrects for what Kahneman calls narrative seduction: the pull of the compelling story about why this case is unique. It almost never is. Most situations that feel unprecedented fit into recognisable categories with estimable base rates. The discipline is to look for the category before getting absorbed in the narrative.
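As a rough sketch of that sequence, the synthesis can be done in log-odds, a common way to combine a base rate with case-specific evidence. The book prescribes the order of operations, not this formula, and every number below is an illustrative assumption.

```python
import math

def to_log_odds(p: float) -> float:
    return math.log(p / (1 - p))

def from_log_odds(lo: float) -> float:
    return 1 / (1 + math.exp(-lo))

# Outside view first: the base rate for the reference class, e.g.
# "how often do projects of this kind ship on time?" (illustrative value)
base_rate = 0.20

# Inside view second: what makes THIS case different, expressed as a
# nudge in log-odds. Positive means stronger than the reference class.
# (The size of the nudge is an assumption, not anything from the book.)
inside_adjustment = 0.8

# Synthesise, then state a precise, scoreable probability.
forecast = from_log_odds(to_log_odds(base_rate) + inside_adjustment)
print(f"Forecast: {forecast:.0%}")  # ~36% rather than a vague "unlikely"
```

Working in log-odds keeps the inside-view adjustment from pushing the estimate past 0 or 1, and it makes "this case is somewhat stronger than the reference class" an additive nudge rather than an ad hoc override of the base rate.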
---
**When intuition works, it's because the environment allows learning.** The book's treatment of when to trust expert intuition, drawing on Kahneman and Klein's adversarial collaboration on the subject, is one of its more useful threads. A fireground commander's intuition about a burning building is worth something: the domain offers clear feedback, stable patterns, and many repetitions. A stockbroker's intuition about next quarter's price movement is worth much less: if publicly available information could predict stock performance, prices would already reflect it. The domain question is whether feedback is timely, clear, and connected to the decisions being made. In "wicked" environments, where feedback is delayed, noisy, or absent, experience doesn't produce expertise. It produces confidence without calibration, which is worse than ignorance.
**Universal agreement is a warning.** A sensible executive treats apparent consensus as a signal that groupthink has taken hold, not as confirmation that the decision is right. If everyone agrees, someone isn't thinking. An array of differing judgments is evidence of genuine independent reasoning, not a problem to be resolved. The [[Unknown and unknowable]] is exactly where confident agreement is most suspicious.
---
**No plan survives contact with reality.** The German military doctrine of Auftragstaktik captures this: tell subordinates what the goal is, not how to achieve it. Rigid procedures break down when reality diverges from expectation, which it always does. Superforecasters need the flexibility to update their methods as they learn, not mechanical adherence to a fixed protocol. The method described above is a starting framework, not a script.
Reading about forecasting is no substitute for forecasting. There's tacit knowledge here that can only be acquired through bruising experience: making predictions, watching them fail, finding the pattern in your errors, and correcting. The hard work of research, the careful self-criticism, the granular judgments and relentless updating cannot be shortcut. They can only be done.
---