# Everything Is Predictable

**Tom Chivers**

![rw-book-cover](https://m.media-amazon.com/images/I/6101TereOqL._SY160.jpg)

---

_Bayes isn't a technique. It's the definition of rationality under uncertainty._

"All decision-making under uncertainty is Bayesian, or to put it more accurately, Bayes' theorem represents ideal decision-making, and the extent to which an agent is obeying Bayes is the extent to which it's making good decisions."

That's a strong claim. Chivers means it. Not "Bayes is useful" or "Bayes is elegant" but: if you're departing from Bayesian reasoning, you're being irrational, by definition.

The book's real target is frequentist statistics, the machinery of p-values and significance tests that underpins most academic research. These tools are not wrong so much as they're answering the wrong question. They tell you how likely you are to see this data, given a hypothesis. What you actually want to know is how likely the hypothesis is, given the data. Only Bayes answers that. The gap between those two questions is responsible for a remarkable proportion of the replication crisis.

---

**All inference requires [[Priors]].** You cannot reason from evidence without some prior belief about what's plausible. Pretending to be "objective" by refusing to state priors doesn't make you more rigorous. It makes inference impossible. It also doesn't actually eliminate priors; it just hides them inside methodological choices: which tests to run, when to stop collecting data, how to define significance.

Making priors explicit forces discipline. You have to justify them, show how evidence changes them, and confront the gap between what you believed before and what you believe now.

The Bayesian machinery has three components. A prior: your belief about a hypothesis before seeing new evidence. A likelihood: how probable the new evidence is, assuming the hypothesis is true. A posterior: your updated belief after seeing the evidence.
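The three components can be sketched in a few lines of code. This is a minimal illustration, not anything from the book; the medical-test numbers (base rate, sensitivity, false-positive rate) are invented for the example:

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """One Bayesian update: return P(hypothesis | evidence)."""
    numerator = prior * likelihood_if_true                      # P(H) * P(E | H)
    evidence = numerator + (1 - prior) * likelihood_if_false    # P(E), both ways it can happen
    return numerator / evidence

# Illustrative numbers: a disease with a 1% base rate (the prior),
# a test that catches 90% of real cases (likelihood if true)
# and falsely flags 9% of healthy people (likelihood if false).
posterior = bayes_update(prior=0.01,
                         likelihood_if_true=0.90,
                         likelihood_if_false=0.09)
print(round(posterior, 3))  # 0.092
```

Note what the prior does: even a positive result from a fairly accurate test leaves only about a 9% chance of disease, because the hypothesis was so improbable to begin with. That is the update the frequentist toolkit never performs.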
None of this is complicated in principle. The difficulty is being honest about the prior, which is the step most people would rather skip.

---

**Occam's razor is a prior.** Simpler explanations get higher prior probabilities, because complex outcomes are less likely to arise by chance. This isn't a philosophical preference for elegance; it's a probabilistic claim. When two hypotheses fit the data equally well, the simpler one should be believed more strongly, because the more complex one required more things to go right to produce this evidence. This gives mathematical grounding to what usually feels like an aesthetic judgment.

Probability is not an objective property of the world. It's a statement about what we don't know. Two people with different information and different priors will reasonably interpret the same evidence differently, and neither is necessarily wrong. As evidence accumulates, beliefs converge. The Bayesian defence against the subjectivity objection is that priors matter less as evidence grows, because posteriors are increasingly constrained by data. Start anywhere honest enough, and you'll eventually end up in roughly the same place as someone who started differently.

---

**P-values are widely misunderstood, and the misunderstanding matters.** A p-value tells you how unusual your data would be if the null hypothesis were true. It doesn't tell you how likely the null hypothesis is. "Statistically significant" means the data would be surprising if nothing were going on, not that something is definitely going on. The conflation of these two statements has generated decades of published research that doesn't replicate. The fix isn't more statistical sophistication; it's asking the right question from the start, which is a Bayesian question.

**"Precise [[Estimates]], high certainty, or small [[Samples]]. Pick two."** Uncertainty is irreducible. You manage it; you don't eliminate it.
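The gap between "this data would be surprising under the null" and "the null is probably false" can be made concrete with a back-of-the-envelope calculation. The prior, power, and significance threshold below are illustrative assumptions, not figures from the book:

```python
# Assumed numbers for illustration:
prior_true = 0.10  # fraction of tested hypotheses that are actually true (the prior)
power = 0.80       # P(significant result | hypothesis true)
alpha = 0.05       # P(significant result | hypothesis false) -- the p < .05 threshold

# Among all "statistically significant" results, how many reflect real effects?
true_positives = prior_true * power          # real effects that reach significance
false_positives = (1 - prior_true) * alpha   # null effects that reach it by chance
p_true_given_significant = true_positives / (true_positives + false_positives)
print(round(p_true_given_significant, 2))  # 0.64
```

Under these assumptions, more than a third of significant findings are false, even though every one of them cleared p < 0.05. The p-value answered its own question correctly; it just wasn't the question anyone cared about.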
This is the territory of [[Unknown and unknowable]]: the gap between what data can tell you and what you actually need to decide. More data narrows the gap. It doesn't close it.

---

[[Variance]] is how far data points spread from the mean. Standard deviation makes it more interpretable by returning to the data's original units. These measures don't eliminate uncertainty; they describe it.

The Bayesian view is that probability quantifies ignorance, not reality. What we call a probability distribution is a map of what we don't know, structured by what we do. That framing is more honest than treating statistical outputs as objective facts about the world, and it keeps you appropriately humble about conclusions that feel more certain than they should.

The practical habit this book leaves you with is a simple discipline: before interpreting any evidence, ask what you believed before seeing it, and why. Then ask how strong the evidence actually is, not just whether it reached some conventional threshold. The threshold question is the wrong one. The updating question is the right one.

---
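The spread measures mentioned above are simple to compute; a minimal sketch using Python's standard library, with a made-up dataset:

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]                # illustrative values

mean = statistics.fmean(data)                   # arithmetic mean: 5.0
variance = statistics.pvariance(data)           # mean squared distance from the mean: 4
stdev = statistics.pstdev(data)                 # square root of variance,
                                                # back in the data's own units: 2.0
print(mean, variance, stdev)
```

Variance is in squared units, which is why standard deviation is the number people actually quote. Neither figure says where the next data point will land; they describe the shape of your ignorance, which is exactly the Bayesian reading.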