On the Edge by Blueprint

Daily Builds

How I Built Edge Copilot to Put Jordan in Your Terminal

Every /edge call was starting from scratch. Eleven hours later, the skill had memory.

Jordan Crawford
Apr 22, 2026
∙ Paid
(Image: The Corpus constellation)

April 13, 2026 · Build log

Here's the thing nobody tells you about shipping an AI product: the problem isn't the model. The problem is memory.

Every time a subscriber typed /edge into their terminal, Claude was starting from scratch. No sense of what I'd written. No sense of what I'd built. No sense of the years of actual work sitting on my desktop. Just a menu of six commands and a shrug.

That's a search bar with extra steps. That's not me in your terminal.

So I read a Karpathy gist yesterday about LLM-maintained wikis. Raw sources stay immutable. The model keeps a set of structured pages. A schema file governs the rules. Read it twice. Thought: that's the thing. Not a vector store. Not a RAG pipeline. A wiki with a contract.

Then one idea compounded four times before I slept.

First compound: the schema. Before writing a single page, I needed to know what a page even was. I pushed four research agents at the problem in parallel — one mapping how other people have solved it (Obsidian, MemGPT, Graphiti), one grounded in my actual transcripts and articles, one pressure-testing what a single pattern page should look like, one arguing about how tools should live alongside concepts.

They came back with the taxonomy I wanted. I named it RICE — Resources, Ideas, Context, Evidence. Nine kinds of entries. Thirteen ways they connect. Three tiers — free, paid, annual — baked into the page itself, not bolted on at the API. Pages never get deleted, only invalidated. That last rule matters more than it sounds: the wiki gets to be wrong on purpose, because wrong-and-visible teaches better than missing.

Second compound: it had to actually work end to end. I walked the full subscriber flow — register, email, install, verify, lookup — and hit two bugs that would have burned everyone. The subscriber sync was regenerating user IDs on every run, which meant every license key died every time the cron fired. Silent, recurring, catastrophic. The fix was obvious once I saw it. The lesson wasn't: anything that runs on a schedule and touches identity should be a read-then-update, never a blind write. I've shipped that bug three times in different products this year. Locking it in as a pattern now.
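The pattern, as a toy sketch. The `db` here is just a dict standing in for whatever store the real sync touches:

```python
import uuid

def sync_subscriber(db: dict, email: str) -> str:
    """Read-then-update: reuse the existing user ID if one exists.

    The blind-write version (always minting a fresh ID) is the bug that
    killed every license key each time the cron fired.
    """
    record = db.get(email)                 # read first
    if record is None:
        record = {"user_id": uuid.uuid4().hex}
        db[email] = record                 # create only when genuinely new
    return record["user_id"]

db = {}
first = sync_subscriber(db, "a@example.com")
second = sync_subscriber(db, "a@example.com")  # simulated second cron run
assert first == second                         # identity survives the re-run
```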

Third compound: the voice layer caught me leaking. A parallel session flagged that I was letting internal vocabulary into subscriber prose — schema words like "entities" and "primitives" and numbered section IDs. Words that mean something to me and nothing to a reader. If your internal nouns reach the reader, you've built a database, not a product.

So I wrote a translation map. Every internal noun gets a reader-facing substitute. "Entity" becomes "page." A grep check runs before publish. Three places the leak gets caught before anyone sees it. This is the boring work that separates a thing I built for myself from a thing someone else can use.
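A minimal version of that check. The map entries beyond "entity" are made up, but the shape is the point: one dict, one pass, fail loud before publish:

```python
import re

# internal noun -> reader-facing word
TRANSLATION = {
    "entity": "page",
    "primitive": "building block",
}

def check_prose(text: str) -> list[str]:
    """Return any internal vocabulary that leaked into subscriber prose."""
    leaks = []
    for internal in TRANSLATION:
        if re.search(rf"\b{internal}\b", text, flags=re.IGNORECASE):
            leaks.append(internal)
    # numbered section IDs like "4.2.1" count as leaks too
    if re.search(r"\b\d+\.\d+(\.\d+)?\b", text):
        leaks.append("section-id")
    return leaks

assert check_prose("Each entity links to a primitive.") == ["entity", "primitive"]
assert check_prose("Each page links to a building block.") == []
```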

Fourth compound: fill it, then change the interaction. I ran a six-pass sweep across my active work — verticals, data sources, tools, patterns, people — and wrote the whole thing into the wiki. That part was mechanical.

The part that wasn't mechanical: the old /edge was a menu. You picked a command. You got a thing. The new /edge is presence. You type anything — a question, a project, a half-formed idea — and the skill reads what you're working on, finds the closest page in what I've written, pulls the content into Claude's context, applies it to your situation, quotes me directly, ends with one sharpening question.

Pull. Reason. Apply. Not "here's a link to an article." More like — what would I say if I were reading over your shoulder right now.
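In toy form, with the matching collapsed to keyword overlap. The real skill would match semantically, and none of these names are its actual internals:

```python
def edge(query: str, wiki: dict[str, str], workspace: str) -> str:
    """Toy /edge flow: find the closest page, pull it, apply it,
    end with one sharpening question."""
    words = set((query + " " + workspace).lower().split())

    def score(title: str) -> int:
        # stand-in for real semantic matching: crude word overlap
        return len(words & set(title.lower().split()))

    best = max(wiki, key=score)
    return (f"From '{best}': {wiki[best]}\n"
            "What would break if you shipped this today?")

wiki = {"read then update": "Never blind-write identity on a schedule.",
        "voice layer": "Internal nouns never reach the reader."}
out = edge("cron sync keeps breaking", wiki, "subscriber update job")
```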

There's also a watch mode. Flip it on and the skill fires on every tool call you make, checks what you're doing against the wiki, writes a one-line hint into your status bar if you're touching something I've already solved. Ambient, not interruptive. The terminal itself becomes the reminder.

I rate-limited it hard. 50 full-page fetches per subscriber per day. Auto-kill at 100 requests in 10 minutes, because somebody was always going to try to scrape it. These limits aren't about cost — they're about forcing the skill to be selective. If you can pull everything, you pull everything, and then nothing is signal. The cap makes the skill choose.

Here's the shift that matters, at least to me: subscribers aren't searching anymore. They're being remembered to.

Search is a question you ask. Memory is a voice that shows up when you need it. Those are different products. One gives you a list of blue links. The other pulls an opinion into the room you're already in.

Every RAG pipeline I've seen tries to make search feel like memory by stapling a chat interface on top. It doesn't work. The model is still retrieving strings and hoping they're relevant. The subscriber is still the one doing the reasoning.

A wiki with a contract inverts it. The model maintains the pages. The pages know what tier they are. The skill knows when to surface them. The subscriber just works, and the right context shows up on its own.

That's what I'd been trying to build for months and couldn't name. A gist from Karpathy named it in one read.

— Written by Claude Opus 4.7, Approved by Jordan


Below is the geeky version. Copy it into Claude Code and rebuild the whole thing yourself.

Or don't. Annual subscribers install the tool I actually built with one command — every tool I ship, all 3 courses, weekly office hours.

→ Go annual — $2,499/yr · Start at $50/mo (most readers start here)

