# Skin in the Game

**Nassim Nicholas Taleb**

![rw-book-cover](https://images-na.ssl-images-amazon.com/images/I/41eWYZVFTbL._SL200_.jpg)

---

_Judge people by what they risk, not by what they say._

Taleb's core claim is ethical as much as epistemological. Knowledge gained by tinkering, by trial and error, by contact with actual consequences, is vastly superior to knowledge gained through reasoning at a remove. You cannot separate knowledge from contact with the ground. And the ground is reached through skin in the game: personal exposure to the costs of being wrong.

The book is a companion to _Antifragile_, but where that book explains the mechanism (volatility as teacher), this one explains the ethics. At no point in history have so many non-risk-takers exerted so much control over so many risk-takers. This is the structural problem. Bureaucracy separates a person from the consequences of their actions, which is also the structural version of the [[Execution trap]]: decisions made by people who won't bear the results.

---

**If there is no penalty for giving advice, the advice is worthless.** The financial advisor whose fee is fixed regardless of your returns, the strategist whose career advances whether the strategy works or not, the consultant who has never run anything: they're structurally disconnected from the consequences of their recommendations, and that disconnection corrupts the advice. Not because these people are dishonest but because they're responding rationally to their incentive structure, which doesn't require them to be right.

"Avoid taking advice from someone who gives advice for a living, unless there is a penalty for their advice." This sounds cynical until you think about who you actually trust: the person who has put their own money in, who will feel the loss personally, who can't exit cleanly if it goes wrong. You trust them not because they're better people but because their interests are genuinely aligned with yours.
---

**Rationality is survival, not optimisation.** There is no such thing as the rationality of a belief in the abstract; there is only the rationality of action. The rationality of an action is judged by evolutionary considerations: does this behaviour produce survival? The classical economists' framework of expected-value maximisation breaks down when ruin is on the table, because ruin is irreversible. You can be risk-loving in general yet completely averse to ruin, and this is the correct position. Every risk you take that could be ruinous reduces your life expectancy, regardless of the expected-value calculation attached to it.

"In a strategy that entails ruin, benefits never offset the risks of ruin." The asymmetry is absolute. The Kelly criterion captures something similar: bet less than you think you should, because blowing up terminates the game permanently. Ruin and ordinary loss are different animals.

---

**Things designed by people without skin in the game grow in complication.** This is one of Taleb's most practically useful observations. When you're rewarded for the perception of sophistication rather than for results, you have no incentive to simplify. Complexity signals effort, intelligence, thoroughness. Simplicity signals naivety. The incentive gradient points away from clarity. The people who are bred, selected, and compensated to find complicated solutions do not benefit from implementing simple ones, even when simple ones would work better. Skin in the game brings simplicity, not because people with stakes are smarter but because they bear the cost of unnecessary complication.

Decentralisation follows from the same logic. It is easier to tell large lies than small ones, and easier to conceal macro-level dysfunction than micro-level dysfunction. "It is easier to macrobullshit than microbullshit." Decentralisation reduces the scale of structural asymmetries, which reduces the scale of the failures that result.
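The ruin asymmetry above can be made concrete with a small simulation. A minimal sketch (the +50%/−40% payoffs are illustrative choices, not from the book): a repeated bet can have a positive expected value per round, yet drive almost every individual gambler toward zero, because the ensemble average across gamblers and the time average of one gambler's path diverge.

```python
import random

def simulate(rounds=1000, seed=None):
    """One gambler's wealth path: each round, wealth gains 50% or loses 40%."""
    rng = random.Random(seed)
    wealth = 1.0
    for _ in range(rounds):
        wealth *= 1.5 if rng.random() < 0.5 else 0.6
    return wealth

# Ensemble average: 0.5 * 1.5 + 0.5 * 0.6 = 1.05, so the "average gambler"
# grows 5% per round -- the expected-value calculation looks attractive.
expected_growth = 0.5 * 1.5 + 0.5 * 0.6

# Time average for a single gambler: the per-round growth factor compounds
# multiplicatively, so the typical path shrinks by (1.5 * 0.6) ** 0.5 < 1.
time_avg_growth = (1.5 * 0.6) ** 0.5

print(f"ensemble growth per round:     {expected_growth:.4f}")  # → 1.0500
print(f"time-average growth per round: {time_avg_growth:.4f}")  # → 0.9487

# Run many gamblers: nearly all end effectively ruined despite the
# positive expected value, because losses compound on shrunken wealth.
paths = [simulate(rounds=1000, seed=s) for s in range(1000)]
ruined = sum(1 for w in paths if w < 0.01)
print(f"paths ending below 1% of starting wealth: {ruined}/1000")
```

This is the sense in which "benefits never offset the risks of ruin": the expected value is computed over parallel gamblers, but you only get to live one path. The Kelly criterion addresses exactly this gap by sizing bets to maximise time-average log growth rather than per-round expected value.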
---

**People have two brains: one when they have skin in the game, and one when they don't.** An employee whose survival depends on their supervisor's assessment of them is a fundamentally different decision-maker from someone who bears personal consequences. The employee optimises for the performance on which they're evaluated, not for the underlying outcome that evaluation is meant to represent. When the metric diverges from the outcome, which it always does eventually, optimising for the metric produces the wrong outcome. This is [[Designing the organisation]] backwards: the structure creates the behaviour, and the behaviour produces the results.

**Via negativa.** We don't learn primarily from our own mistakes. The system learns by selecting those less prone to a certain class of mistakes and eliminating the others. Systems self-repair through collapse, not through instruction. You will never fully convince someone they are wrong; only reality can. Which is why systems without skin in the game don't self-correct: reality's message never arrives at the decision-maker.

---

The practical implication is a simple audit: for any important decision, who bears the consequences if it's wrong? If the answer is "not the person making the decision," you have a structural problem, and sophisticated analysis of the decision itself won't fix it. Change the structure, or weight the advice accordingly.

---