# Sifting

_Ask what would look different, not what looks convincing._

---

Q2 revenue is 12% below plan. Two competing explanations are on the table.

Your Head of Sales says the product has fallen behind. A competitor launched a similar offering three months ago. Three deals stalled because prospects asked for functionality you don't have. The market moved and the product didn't keep up.

Your Head of Product says the sales team lost momentum. Two senior reps left in April, territory coverage dropped, and pipeline generation fell 20%. The product is fine. The engine stalled.

Both brought data. Both are credible. You need to decide where to put the next quarter's budget and attention.

---

The instinct is to weigh the evidence. Three stalled deals feels specific. A 20% pipeline drop feels structural. You lean toward whichever story came with the most vivid number.

But look at what each side brought you.

The three stalled deals. If the product has fallen behind, you'd expect deals stalling on feature gaps. But you'd also expect them if your remaining reps are weaker and can't sell around objections your best people used to handle. Consistent with both stories.

The pipeline drop. If two senior reps left and nobody covered their territories, pipeline would fall. But pipeline would also fall if the product's reputation has slipped and fewer prospects are taking first meetings. Consistent with both stories.

Every data point in the room supports the explanation it was brought to support. None of them helps you choose between the two.

---

So what would actually help? Evidence that would look different depending on which story is true.

If the product has fallen behind because of that competitor's launch, win rates against that competitor should have dropped further than win rates against everyone else. The new offering is the specific threat, so the damage should concentrate there.

If it's execution, the drop should spread roughly evenly. Weaker coverage loses more deals against everyone, not just the competitor who launched something new.

---

You ask the question. Nobody has the number to hand, so you pull it up.

Win rate against the competitor who launched: down 3 points. Win rate against everyone else: down 7 points.

The bigger drop is against competitors who didn't change anything. The three stalled deals were real, but they weren't the pattern. The pattern is a team losing ground across the board, which fits a coverage gap better than a product one.

---

The stalled deals were the most vivid evidence in the room, and the least useful. A product gap would produce them. So would weaker reps. Evidence that's equally likely under both explanations tells you nothing, no matter how specific it sounds.

The win rate comparison was undramatic and nobody thought to bring it. But a product problem and an execution problem predict different numbers there, which is what makes it worth looking at.

[[Priors]] gives you the mechanics of updating a belief. This is the step before: checking that the evidence can actually tell the stories apart. A sketch of the arithmetic appears at the end of this note.

---

Next quarterly review, you change how the conversation starts. Before anyone presents what happened, you ask: if this were a product problem, what would we expect to see that we wouldn't see if it were an execution problem?

The debate shifts from advocacy to diagnosis. Each side still brings data, but now they're looking for the comparisons that discriminate, not the charts that persuade.

---
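The arithmetic behind all of this is the odds form of Bayes' rule: a piece of evidence $E$ moves the odds between the two stories by exactly its likelihood ratio. The sketch below uses made-up likelihood values purely for illustration; nothing in the scenario pins these numbers down.

$$
\underbrace{\frac{P(H_{\text{prod}} \mid E)}{P(H_{\text{exec}} \mid E)}}_{\text{posterior odds}}
=
\underbrace{\frac{P(H_{\text{prod}})}{P(H_{\text{exec}})}}_{\text{prior odds}}
\times
\underbrace{\frac{P(E \mid H_{\text{prod}})}{P(E \mid H_{\text{exec}})}}_{\text{likelihood ratio}}
$$

The stalled deals are about equally likely under both stories, so their ratio sits near 1 and the odds don't move, whatever the prior:

$$
\frac{P(E_{\text{stalled}} \mid H_{\text{prod}})}{P(E_{\text{stalled}} \mid H_{\text{exec}})} \approx \frac{0.6}{0.6} = 1
$$

The win-rate pattern, a bigger drop against competitors who changed nothing, is unlikely under the product story and likely under the execution story, so its ratio sits well below 1 and the odds swing toward execution:

$$
\frac{P(E_{\text{win rates}} \mid H_{\text{prod}})}{P(E_{\text{win rates}} \mid H_{\text{exec}})} \approx \frac{0.1}{0.5} = 0.2
$$

With the illustrative numbers above, that one comparison is a fivefold shift toward the execution story. Evidence is worth collecting in proportion to how far its likelihood ratio can land from 1.

---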