Could We Have Seen the Churn Coming?
A parallel agent swarm read 1,200 dead accounts against five hypotheses. 19% were never in the target market.
April 7, 2026 · Build log
A vertical SaaS company has 1,200 churned accounts sitting in their CRM. Dead customers. Money that walked out the door. The CRO wants to know one thing before the board meeting: could we have seen any of this coming?
Not from internal usage data. Not from NPS scores. From the stuff anyone can pull up in a browser — public company registrations, Google profiles, BBB filings, LinkedIn, socials.
If the answer is yes, the churn problem isn't a churn problem. It's a qualification problem that got missed at the top of funnel.
So I pointed a swarm of research agents at it. All parallel. Each agent took a single account and ran it against the same five hypotheses — is this the right type of operator, is there a specific pain the product actually solves, is the account multi-stakeholder or a one-person show, is there a real decision-maker on the deal, and do the company's age and operating footprint match the customers who stay. Validate or invalidate each one with public evidence. Write the findings as structured records. Move on.
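If you want the shape of the fan-out, here's a minimal sketch. Every name in it is mine, not the real pipeline's: research_account stands in for whatever agent call you wire up, and the concurrency cap is a guess you'd tune to your rate limits.

```python
import asyncio
import dataclasses

# The five hypotheses, as stable keys rather than prose.
HYPOTHESES = [
    "right_operator_type",
    "specific_pain_present",
    "multi_stakeholder",
    "decision_maker_on_deal",
    "age_footprint_match",
]

@dataclasses.dataclass
class Finding:
    account_id: str
    hypothesis: str
    verdict: str       # "validated" | "invalidated" | "inconclusive"
    evidence_url: str  # the public source the agent cited

async def research_account(account_id: str, sem: asyncio.Semaphore) -> list[Finding]:
    async with sem:  # cap concurrency so the swarm doesn't hammer any one source
        # Stub: one agent, one account, five hypotheses, public evidence only.
        return [Finding(account_id, h, "inconclusive", "") for h in HYPOTHESES]

async def run_swarm(account_ids: list[str]) -> list[Finding]:
    sem = asyncio.Semaphore(50)  # assumed cap
    batches = await asyncio.gather(*(research_account(a, sem) for a in account_ids))
    return [f for batch in batches for f in batch]  # flat list of structured records
```

One account per agent keeps the blast radius small: a hung browser session costs you one record, not the run.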
Here's the tension. If I'm wrong — if churn really is random or driven by stuff you can only see post-sale — then this whole exercise is theater. The board gets a pretty report. Nothing changes. Sales keeps selling the wrong people and the bucket keeps leaking.
If I'm right, the sales team has been qualifying on vibes for years and nobody noticed.
The swarm came back. 19% of the churned accounts weren't even the right kind of operator. Not close. Wrong business registration category, wrong service model, the kind of miss you can spot in under a minute of browser research. That's sales closing deals they shouldn't have pitched.
Six signals survived the cut. Specific pain present. Multi-stakeholder engagement on the deal. Side-job operators versus full-time. Decision-maker absent from the call history. Deal velocity. Company age. Two signals I wanted to matter didn't: public review volume barely moved the churn rate, and neither did switching from a named competitor. Cut them both. If it's not predictive, it doesn't belong in front of a board.
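The cut itself is nothing fancy. A sketch of how I'd frame it, assuming you've also pulled a retained cohort as the baseline and encoded each signal as a boolean column; the threshold here is illustrative, not the one I used.

```python
import pandas as pd

def signal_lift(df: pd.DataFrame, signal: str) -> float:
    """Churn rate when the signal fires minus churn rate when it doesn't."""
    with_signal = df.loc[df[signal], "churned"].mean()
    without_signal = df.loc[~df[signal], "churned"].mean()
    return with_signal - without_signal

def keep_predictive(df: pd.DataFrame, signals: list[str], min_lift: float = 0.05) -> list[str]:
    # A signal survives only if it moves the churn rate by at least min_lift
    # in either direction. Review volume and competitor-switching didn't.
    return [s for s in signals if abs(signal_lift(df, s)) >= min_lift]
```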
Then the report broke.
I'd handed the synthesis to Claude Opus in the afternoon. Clean HTML, charts, the whole thing. Opened it up in the evening and four signal bars were showing 0.0%. Opus was emitting percentages in a format the renderer didn't understand. Classic. I stopped trusting the synthesis layer and pulled the bar values directly from the raw per-account CSV. Bars came back.
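The fix, roughly: the renderer computes its own percentages from the per-account rows and never parses the narrator's output for a number. Column names are assumptions, and this sketch assumes signals are stored as "true"/"false" strings.

```python
import csv

def bar_values(csv_path: str, signal_cols: list[str]) -> dict[str, float]:
    """Share of churned accounts where each signal fired, as 0-100 percentages."""
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))
    total = len(rows) or 1  # avoid dividing by zero on an empty export
    return {
        col: 100.0 * sum(r[col] == "true" for r in rows) / total
        for col in signal_cols
    }
```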
The second problem was worse. The report listed churned account names in a recovery section with no way to actually act on them. A name in a report doesn't help a CS rep on Monday morning. So I embedded the underlying tables straight into the HTML as base64-encoded CSVs. One per archetype. One per closed-lost reason. One per risk tier. Every bucket with a count got a download button next to it. Each row carries the ICP score, the signals in plain English — specific pain present, multi-stakeholder engagement — and the risk factors. Not codes. Not internal gate IDs. Words a human can read.
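The embedding trick is a data: URI per bucket, which keeps the report a single self-contained HTML file. No server, no attachments. A sketch with made-up field names; buckets are non-empty by construction, since only buckets with a count get a button.

```python
import base64
import csv
import io

def csv_download_button(rows: list[dict], filename: str, label: str) -> str:
    """Render one bucket's rows as an <a download> button backed by a data: URI."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
    b64 = base64.b64encode(buf.getvalue().encode("utf-8")).decode("ascii")
    return (
        f'<a download="{filename}" '
        f'href="data:text/csv;base64,{b64}">{label} ({len(rows)})</a>'
    )
```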
Last pass was the language audit. Opus had been helpful and creative, which is to say it had editorialized all over the thing. "ARR destroyed." "Ghost company." Names for the gating logic that read like internal Slack jokes. None of that goes in front of a board. I wrote a sanitizer that runs at render time. "Destroyed" becomes "lost to churn." Internal nicknames become plain descriptions of what the check actually does. If a board member has to ask what a word means, I've already lost them.
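The sanitizer is nothing clever: a substitution table applied at render time, so the stored findings keep their original wording and only the board-facing HTML gets cleaned. The two pairs below are the ones from this post; the real table is longer, and the "ghost company" replacement is my example of a plain description.

```python
import re

# Loaded phrase -> board-safe plain description.
BOARD_SAFE = {
    "destroyed": "lost to churn",
    "ghost company": "no verifiable operating footprint",
}

def sanitize(text: str) -> str:
    for loaded, plain in BOARD_SAFE.items():
        text = re.sub(re.escape(loaded), plain, text, flags=re.IGNORECASE)
    return text
```

Running it at render time rather than rewriting the source data means you can always audit what the model originally said.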
Shipped to a password-protected URL an hour before the meeting. Next up is pressure-testing the conservative-gate scenario. If the model holds up when a skeptical CFO tries to poke holes in it, the sales team gets a new qualification checklist on Monday and 19% of their pipeline stops existing.
That's the right outcome. Smaller pipeline. Better close rate. Customers who actually stay. The board's job this week is to figure out if they believe it.
The lesson I keep relearning: a synthesis layer is a narrator. Narrators embellish. When the output is going to an executive audience, pull the numbers from the source, never from the narrator's summary.
— Written by Claude Opus 4.7, Approved by Jordan
Below is the geeky version. Copy it into Claude Code and rebuild the whole thing yourself.
Or don't. Annual subscribers install the tool I actually built with one command — every tool I ship, all 3 courses, weekly office hours.
→ Go annual — $2,499/yr · Start at $50/mo (most readers start here)


