The Analysis Was Done. Then I Noticed Zero Call Transcripts.
How I caught a 2,400-account analysis running on pure CRM metadata before it shipped.
March 26, 2026 · Build log
"Wait, wait, wait. You have zero call transcripts."
That was me, mid-session, staring at a finished churn analysis. Three years of customer data. 2,400-plus accounts classified. Churn patterns, active health, deal counts, all split by tier. The agents had run. The report script had opened. Every dossier looked complete.
Then I scrolled. Five customer voice snippets per account. No transcripts. None.
The whole thing was CRM metadata wearing a costume.
Here's the thing. We had 72,000-plus call recordings sitting in the database. Over 500 megabytes of actual human conversations. Churned customers explaining why they left. Champions describing botched onboardings. The reason we pulled those recordings in the first place. And the agents had ignored all of it, because the dossiers they were reading from had five snippets stapled on top of CRM fields and called it a day.
It's like hiring a detective, handing them the suspect's LinkedIn, and asking them to solve the murder.
So I killed it. Started over.
A parallel session had been doing the right thing in the background — enriching every dossier with the actual call transcripts. While that ran, I wrote four new agent definitions: a churn forensics pass, an active health pass, a churn classifier, and an auditor that sits on top of the whole thing. Plus a batch preparation script and a status monitor. Nothing fancy. Just the plumbing to fan work out across a lot of agents and catch them when they lie.
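The batch preparation side of that plumbing is simple enough to sketch. This is a minimal, hypothetical illustration of the fan-out shape described above, not the real tooling: `Job`, the role names, and the wave size of ten are assumptions for the example.

```python
# Hypothetical sketch of the fan-out plumbing: tag each account with an
# agent role, then group jobs into fixed-size waves so only ten agents
# run at a time. Names here are illustrative, not the real scripts.
from dataclasses import dataclass

@dataclass
class Job:
    account_id: str
    role: str  # "churn_forensics" or "active_health"

def prepare_batches(accounts, wave_size=10):
    """Split jobs into waves of wave_size for staged launching."""
    jobs = [
        Job(a["id"], "churn_forensics" if a["churned"] else "active_health")
        for a in accounts
    ]
    return [jobs[i:i + wave_size] for i in range(0, len(jobs), wave_size)]

# 25 fake accounts -> 3 waves: 10, 10, 5
accounts = [{"id": f"acct-{n}", "churned": n % 3 == 0} for n in range(25)]
waves = prepare_batches(accounts)
print(len(waves), len(waves[0]), len(waves[-1]))
```

The point of the wave structure is that a status monitor can check each wave's outputs before the next one launches, instead of discovering a systemic problem after all eight hundred agents have burned their budget.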
Before I launched eight-hundred-plus agents, I tested three accounts by hand.
One rich account with sixteen transcripts. Confidence score came back at 0.92. Eighteen verbatim quotes. A full story about a champion who got burned by onboarding. That's what good looks like.
One metadata-only account. Confidence: 0.35. The auditor correctly flagged it as a data quality disaster — 167 timeline entries misattributed from other accounts. The agent didn't try to make something up. It said "this is broken" and moved on.
One thin account with almost nothing. Confidence: 0.15. Output: "Unknown / Insufficient Data." No fabrication. No hallucinated insight. Just silence, which is exactly what I wanted.
All three passed. Ship it.
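The pass criterion in those three tests can be stated as a rule: confidence has to match the evidence. High confidence must be backed by verbatim quotes; low confidence must admit ignorance rather than invent an answer. Here's a hedged sketch of that gate; the field names and thresholds are assumptions, not the real schema.

```python
# Hypothetical pre-flight gate modeled on the three hand-run accounts:
# a result passes only if its confidence score is consistent with the
# evidence it cites. Field names and cutoffs are illustrative.

def passes_smoke_test(result):
    conf = result["confidence"]
    if conf >= 0.8:
        # A confident read must be grounded in actual verbatim quotes.
        return result["quote_count"] >= 5
    if conf <= 0.2:
        # A thin account must say so, not hallucinate an insight.
        return result["label"] == "Unknown / Insufficient Data"
    # Mid-range results go to the auditor pass instead.
    return True

samples = [
    {"confidence": 0.92, "quote_count": 18, "label": "Champion burned by onboarding"},
    {"confidence": 0.35, "quote_count": 0,  "label": "Data quality disaster"},
    {"confidence": 0.15, "quote_count": 0,  "label": "Unknown / Insufficient Data"},
]
print(all(passes_smoke_test(s) for s in samples))
```

Three passes on three deliberately different accounts (rich, broken, thin) is what earned the "ship it."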
The final swarm: over eight hundred agents across two tracks. Churn forensics on one slice of accounts. Active health on another. Originally I'd assigned the thinnest accounts to Haiku to save money. I was wrong about that. A thin account deserves the best read of whatever scraps exist, not the cheapest. A thin dossier is harder to read well, not easier. I reassigned everything to Opus and launched in waves of ten.

If I'd shipped that first version to the client, the deck would have read clean. Charts, percentages, churn reasons ranked by frequency. It would have been convincing. And every conclusion would have been built on structured field data from a CRM nobody trusts, instead of the 72,000-plus recordings of customers actually telling us what happened.
That's the failure mode with agent work. The output looks done. The files exist. The JSON is valid. The dashboard renders. You have to actually read what went in.
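"Read what went in" can be automated, at least as a first line of defense. This is a minimal sketch of the check that would have caught the original costume: verify each dossier actually carries transcript text before trusting anything built on it. The `transcripts` key and the character threshold are assumptions for illustration.

```python
# Hypothetical input audit for the failure mode above: valid JSON and a
# rendered dashboard prove nothing about inputs. This flags dossiers
# whose transcript section is missing or trivially short -- the
# "CRM metadata wearing a costume" case.
import json

def has_real_transcripts(dossier_json, min_chars=500):
    """Return True only if the dossier carries substantive transcript text."""
    dossier = json.loads(dossier_json)
    transcripts = dossier.get("transcripts", [])
    return sum(len(t) for t in transcripts) >= min_chars

# Five snippets stapled onto CRM fields: looks complete, carries nothing.
thin = json.dumps({"crm_fields": {"tier": "mid"}, "snippets": ["short quote"] * 5})
# A dossier with actual call transcript text.
rich = json.dumps({"transcripts": ["customer explains the botched onboarding " * 30]})

print(has_real_transcripts(thin), has_real_transcripts(rich))
```

Run across all 2,400 accounts before launching any agents, a check like this turns "I happened to scroll and notice" into a hard gate.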
One quick aside from later that night. I pulled up the agent-swarm skill and spent thirty minutes writing plain-English copy for it — ten numbered steps, a three-sentence pitch. Not for a report. Just so I could explain to someone why fanning out across parallel Opus agents with a mandatory auditor beats one giant context window trying to hold everything. The short version: you test on three before you fan out to eight hundred, and the auditor catches the fabrications before they hit your output.
Next up: wait for the remaining batches to land, run synthesis across the full account list, and read the damn transcripts this time.
— Written by Claude Opus 4.7, Approved by Jordan
Below is the geeky version. Copy it into Claude Code and rebuild the whole thing yourself.
Or don't. Annual subscribers install the tool I actually built with one command — every tool I ship, all 3 courses, weekly office hours.
→ Go annual — $2,499/yr · Start at $50/mo (most readers start here)


