The Supabase Key Was in the Source Code
One security fix, one junk-data purge, and a 100x coverage jump on a vertical SaaS detection build.
April 1, 2026 · Build log
Opened Claude Code and found a Supabase service key hardcoded as a fallback in 3 Python files inside a client repo. Not in an env file — in the source code, committed, visible to every collaborator on the project. Anyone with read access could upload anything to my Supabase storage. The fix took about 20 minutes: strip the hardcoded values, commit, push. But the key was still in git history. So I disabled the legacy JWT keys in Supabase entirely and issued new ones. Then updated ~/.env locally, Vercel env vars, and Modal secrets. One incident, five places to patch. Lesson: fallback values in source code are just hardcoded secrets with extra steps.
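The replacement pattern is simple enough to sketch. This is a minimal version, not the client's actual code, and the variable names are illustrative:

```python
import os

def require_env(name: str) -> str:
    """Read a required secret from the environment and fail fast if it's
    absent, instead of silently falling back to a value baked into source."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"missing required env var: {name}")
    return value

# The anti-pattern removed from the repo looked roughly like this (redacted):
#   SUPABASE_SERVICE_KEY = os.environ.get("SUPABASE_SERVICE_KEY", "eyJ...")
# With a fallback, a missing env var silently degrades into a leaked
# credential. Without one, the process refuses to start:
#   SUPABASE_SERVICE_KEY = require_env("SUPABASE_SERVICE_KEY")
```

A crash at startup is the behavior you want here: it surfaces a missing secret in deploy logs instead of shipping the old key to production.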
While that was in flight, I ran a playbook audit on playbooks.blueprintgtm.com. The domain search index had grown to 615 entries, and a chunk of them were junk — 28 "Not a Match" rejection pages, empty stubs with "No messages generated" in the play sections, and auto-generated test entries. Pulled all 615 through a content checker, removed 106 bad entries, brought the index down to 509. Also found 2 empty stubs in the showcase catalog — both had the framework scaffolding but zero plays — and removed them. Catalog went from 503 to 501.
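The junk filter reduces to a few checks. A minimal sketch: the two string markers come from the post, but the entry schema (title/body/plays keys) and the test-entry heuristic are assumptions:

```python
JUNK_MARKERS = ("Not a Match", "No messages generated")

def is_junk(entry: dict) -> bool:
    """Flag rejection pages, empty stubs, and auto-generated test entries.
    The title/body/plays schema here is assumed for illustration."""
    text = entry.get("title", "") + " " + entry.get("body", "")
    if any(marker in text for marker in JUNK_MARKERS):
        return True
    if not entry.get("plays"):  # framework scaffolding, zero plays
        return True
    return entry.get("title", "").lower().startswith("test entry")

def purge(index: list[dict]) -> tuple[list[dict], int]:
    """Return the cleaned index and the count of entries removed."""
    keep = [e for e in index if not is_junk(e)]
    return keep, len(index) - len(keep)
```

Run against all 615 entries, a pass like this is what takes the index down to the keepers plus a removal count you can sanity-check before deleting anything.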
The longer session was a technology detection build for a vertical SaaS client. The problem: a database of ~32K records had only 6.3% coverage on which technology each account was running. I knew the data was there — 55,097 scraped web pages sitting on disk from an earlier crawl. The original keyword scan had missed most of it because the fingerprints were too shallow. So I wrote enrich_tech_scan_scraped.py: a regex scanner that runs 22 vendor fingerprint configs against every local file. Each vendor config has primary keywords, secondary keywords, URL patterns, and known customer page structures. No API calls, no cost, just pattern matching against what we already had.
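The scanner's shape is worth sketching. Everything below is illustrative: the vendor name and patterns are invented, and the detection threshold (one primary hit, or two secondary signals) is my assumption, not the client's tuning:

```python
import re
from pathlib import Path

# Hypothetical fingerprint config — the real build carries 22 of these.
FINGERPRINTS = {
    "vendor_a": {
        "primary": [r"cdn\.vendora\.example/app\.js", r"data-vendora-widget"],
        "secondary": [r"powered by vendor ?a", r"vendora session"],
        "url_patterns": [r"/vendora/login"],
    },
}

def compile_fingerprints(configs: dict) -> dict:
    """Precompile every pattern once so the scan over 55K files stays cheap."""
    return {
        vendor: {kind: [re.compile(p, re.I) for p in pats]
                 for kind, pats in cfg.items()}
        for vendor, cfg in configs.items()
    }

def scan_text(text: str, compiled: dict) -> list[str]:
    """One primary hit, or two or more secondary hits, counts as a detection."""
    hits = []
    for vendor, cfg in compiled.items():
        primary = any(p.search(text) for p in cfg["primary"])
        secondary = sum(1 for p in cfg["secondary"] if p.search(text))
        if primary or secondary >= 2:
            hits.append(vendor)
    return hits

def scan_dir(root: str, compiled: dict) -> dict[str, list[str]]:
    """Walk the local scrape and map each file to detected vendors. No API calls."""
    results = {}
    for path in Path(root).rglob("*.json"):
        vendors = scan_text(path.read_text(errors="ignore"), compiled)
        if vendors:
            results[str(path)] = vendors
    return results
```

The secondary-keyword tier is the part the original shallow scan lacked: a single weak signal stays silent, but two weak signals together count, which is how you recover coverage without flooding the results with false positives.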
The real win came from a different script — an async subdomain prober. One vendor's customer portals all follow a predictable URL pattern — a department identifier as the subdomain. I had a list of ~196K department identifiers. The probe fired a HEAD request per identifier and separated DNS errors (no portal) from HTTP 200s (live portal). Ran at 211 requests per second. Result: 705 confirmed portals, up from the 6-9 previously known. That's a 100x increase in coverage for a single vendor, in under an hour of compute time. The regex scan was still running through state directories — I/O bound, reading 55K small JSON files — so I committed all 4 scripts and let it finish overnight.
Next: run the scan to completion, merge all sources, and see what combined coverage looks like across those 32K records.
What Annual Adds
This is what I built today. Annual subscribers run the same tools.
Every tool I ship. Edge Copilot installs to your Claude Code — talk to all my knowledge, every method, every data source. Current: Edge Copilot, AutoClaygent, Agent 7, Who to Target and What to Say, Blueprint Cloud. Whatever ships next is included.
All 3 courses: Who to Target and What to Say, Agent 7, AutoClaygent.
Weekly office hours.
License key hits your email after you upgrade.


