The Imperial Rise of the Context Engineer
Prompt engineering is the wrong name. Context engineering — what I've been calling ingeniculture for a year — is the practice of building the room the LLM stands in.
The practice of building with AI, documented from the inside. Each piece is a chapter in a developing argument — not a blog, not a chronological feed, but a body of work that builds on what came before.
Most AI workflows are prompts applied to empty rooms. This is what an operating system looks like instead — document tiers, named characters, a wiki the model can read, and a boot sequence that loads context before the first prompt arrives.
The AI reflects whatever substrate it meets. The dangerous case isn't where it refuses to answer. It's where it answers fluently and nobody in the room can tell it's wrong.
I'd never heard of grep and I've been building websites for twenty-six years. It turns out the simplest operation in computing — searching your own content — is the one most platforms make impossible.
I fed the same article to four frontier AI models. Three returned confident summaries — of articles I hadn't written. They didn't misread the content. They didn't know who I was. The insight that survived had vocabulary with no escape route.
I served every client in a single morning, and somewhere around the third one I realised I hadn't opened a browser. The speed wasn't the point — it was what I could see when the walls between my data sources came down.
I retrofitted 45 files in one commit — every insight on the site got a standfirst and a pull quote in minutes. That's what happens when your content lives in the same repository as your code.
Instructions tell the system what to do. Characters make the wrong thing impossible. Why I stopped writing rules for AI and started installing people instead — and why the obvious candidates all failed.
The prompts don't improve. The corrections do. Eight months of catching the system when it drifts — on voice, on meaning, on honesty — is why the output sounds like a specific person wrote it.
A rule without a face is an instruction. An instruction can be ignored. Why naming principles after characters — Joe Gargery, Robert Maxwell, Marco Pierre White — produces something that lasts longer than a policy document.