# Inverse response
_Watch the leading indicators when the headline numbers punish you._
---
You take over a business unit mid-year. The P&L is ugly: margins below breakeven on a third of the customer base, a quality backlog that support staff have been patching manually for eighteen months, and a churn rate that ticks up every quarter. On day one you sit in a client call where a customer service rep apologises for the same bug for the fourth time. The team knows what's broken. They've been waiting for someone with the authority to actually fix it.
You make the changes. Exit three unprofitable contracts that consume a disproportionate share of support time. Reassign two engineers from feature work to root-cause fixes. Reset delivery timelines with the four biggest accounts, honest about what's been promised versus what can be shipped. Raise the hiring bar, which means two open roles stay unfilled longer than the team would like.
Three months later, every metric you report to the board has moved in the wrong direction. Revenue is down because you exited contracts. Costs are up because root-cause work doesn't ship features. Satisfaction scores dipped because you told customers the truth about timelines. Headcount is below plan because you refused to hire for the sake of filling seats.
Every one of these was the right call. And every one looks identical to "new leader making things worse."
---
Some systems move the wrong way first when you apply the correct input. Push them in the right direction and the initial response is backwards, then it corrects. Control engineers call this inverse response, the signature of a right-half-plane zero: the output dips before it rises, even though the input was right all along. The danger is fighting that initial movement. If you see the numbers drop and reverse the changes, you've paid the transition costs twice, once going in and once coming out, without getting the benefit. Worse, you've taught the organisation that change initiatives don't stick, that waiting long enough undoes any decision.
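You can see the shape in a few lines. A minimal sketch, assuming Python with numpy and scipy; the transfer function G(s) = (1 - 2s)/(s + 1)^2 is chosen purely for illustration, with its zero at s = 0.5 sitting in the right half-plane:

```python
# A toy system with a right-half-plane zero at s = 0.5:
#   G(s) = (1 - 2s) / (s + 1)^2
# Its step response goes negative first, then settles at G(0) = +1.
import numpy as np
from scipy import signal

# Polynomial coefficients in decreasing powers of s.
G = signal.TransferFunction([-2, 1], [1, 2, 1])
t, y = signal.step(G, T=np.linspace(0, 10, 500))

print(f"initial dip: min y = {y.min():.2f} at t = {t[y.argmin()]:.2f}")
print(f"ten units later: y = {y[-1]:.2f}")  # approaching +1
```

The response spends its first stretch below zero, which is exactly the window in which an impatient controller would conclude the input was wrong and reverse it.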
The correct response is to hold steady and let the system work through it. But "hold steady" requires more than nerve. The [[Execution trap]] is to respond to falling numbers by tightening controls, adding oversight, pulling work back in; the trap here is subtler. Holding steady requires a way to tell the difference between a system correcting and a strategy failing, because both look the same in the board pack.
---
Revenue, profit, NPS, satisfaction scores. These are the numbers on the first page of the quarterly review, and they are all lagging indicators. They measure the accumulated consequences of decisions made months or quarters ago. The revenue line you're reading in Q3 reflects pipeline built in Q1, contracts signed last year, churn decisions customers made weeks ago. It's a photograph of the past, not a reading of the present.
When you make structural changes, the lagging indicators keep deteriorating for a while because they're still catching up to the old reality. Revenue drops when you exit contracts, not because the business is getting worse, but because the metric is finally reflecting the unprofitable work you chose to stop doing. The number was flattering you before. Now it's being honest.
Leading indicators measure the conditions that produce future results. They sit upstream of the lagging metrics, closer to the actual work. In the turnaround you're running, they might include defect rate on the core product, [[Reading retention|retention rate]] among the customers you kept, average deal size in new pipeline, time-to-resolution on support tickets, or the ratio of proactive to reactive engineering work.
---
You exited three unprofitable contracts. The lagging indicator, total revenue, drops. That's mechanical. But what happens to the leading indicators around the remaining customer base?
If the change is working, you should see support ticket volume fall within weeks, because those three contracts were consuming disproportionate support time. With that time freed up, resolution speed for remaining customers should improve. And if those customers are being served better, their renewal behaviour should begin to shift, not immediately in the revenue line, but in the conversations your account managers are having and the expansion discussions that start opening up.
You reassigned two engineers from feature work to root-cause fixes. The lagging indicator, feature velocity, drops. The roadmap slows. But defect recurrence, the rate at which the same bugs come back, should start declining within a sprint or two. If it does, the engineers are fixing real causes, not symptoms. If defect recurrence stays flat while feature velocity drops, the engineers may be working on the wrong root causes, or the quality problem runs deeper than the backlog suggested.
You reset delivery timelines with your four biggest accounts. Satisfaction scores dip because customers hear that the timeline they were promised isn't real. But are those customers now receiving accurate commitments? If on-time delivery against the new, honest timeline improves, trust is rebuilding even while the survey score lags behind. If on-time delivery stays poor even against the reset timeline, you have a capacity problem, not just a communication one.
---
A system correcting and a strategy failing produce the same movement in lagging indicators. Both show revenue down, costs up, scores falling. The difference lives entirely in the leading indicators.
When the system is correcting, leading and lagging indicators diverge. The headline numbers get worse while the upstream signals improve. Fewer defects, better retention in target segments, stronger pipeline quality, shorter resolution times. The lagging metrics will catch up once enough time passes for the improved conditions to flow through.
When the strategy is genuinely wrong, leading and lagging indicators move together. Everything deteriorates. Defect rates don't improve despite the engineering investment. The customers you kept aren't buying more. Pipeline quality doesn't shift. The team feels it, often before the numbers confirm it, in the texture of customer conversations and the energy in standups. The system is telling you something isn't working.
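The divergence test is simple enough to state in code. A hypothetical sketch, assuming Python with numpy; `trend` and `diagnose` are inventions for this note, not a real dashboard API, and both series are oriented so that higher is better:

```python
import numpy as np

def trend(readings: list[float]) -> float:
    """Slope of a least-squares line through recent readings."""
    x = np.arange(len(readings))
    return float(np.polyfit(x, readings, 1)[0])

def diagnose(lagging: list[float], leading: list[float]) -> str:
    """Classify the pattern; both series oriented so higher is better."""
    lag, lead = trend(lagging), trend(leading)
    if lag < 0 and lead > 0:
        return "diverging: consistent with a system correcting"
    if lag < 0 and lead <= 0:
        return "moving together: consistent with a strategy failing"
    return "lagging metric already recovering"

# Revenue (in millions) falls while first-contact resolution rate climbs.
print(diagnose(lagging=[4.0, 3.7, 3.5, 3.4],
               leading=[0.62, 0.68, 0.75, 0.81]))
```

Real indicators are far noisier than four clean points, which is why the ambiguous middle case needs a pre-committed model rather than a threshold.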
The hardest case is the middle one: leading indicators improve, but not enough. Defect recurrence drops, but only slightly. Retention improves in one segment but not another. You're getting signal, but it's ambiguous. This is where you need the model you built before making the changes. How much improvement, by when, would confirm the thesis? If you predicted defect recurrence would halve in eight weeks and it's down 15% after ten, something in the causal chain is weaker than you assumed. Not necessarily wrong, but worth pressure-testing before the board meeting where someone asks why the headline numbers are still falling.
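The arithmetic behind that comparison is worth doing explicitly. A back-of-envelope sketch, assuming (a simplification) that defect recurrence decays roughly geometrically week over week:

```python
# Thesis: recurrence halves in 8 weeks. Observed: down 15% in 10 weeks.
predicted_weekly = 0.5 ** (1 / 8)   # ~0.917, i.e. ~8.3% decay per week
observed_weekly = 0.85 ** (1 / 10)  # ~0.984, i.e. ~1.6% decay per week

print(f"predicted: {1 - predicted_weekly:.1%} per week")
print(f"observed:  {1 - observed_weekly:.1%} per week")
# The observed decay is roughly a fifth of what the thesis implied:
# a weaker causal chain, not yet proof of a wrong one.
```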
---
[[Variance]] teaches you to check whether the baseline was meaningful before reacting to a number. The same discipline applies here, but in reverse. When a lagging metric drops after a deliberate change, the baseline was the problem, not the movement. Revenue was £4m because you were serving unprofitable contracts. Satisfaction was 7.2 because you were making promises you couldn't keep. The old number wasn't health. It was a symptom being mistaken for a vital sign.
The leading indicators won't tell you the outcome. They'll tell you whether the system is pointed in the right direction. That's enough to hold steady through the dip, or to recognise, honestly, that the dip is not a transition at all.
---