The Imperial Rise of the Context Engineer
Prompt engineering is the wrong name. Context engineering — what I've been calling ingeniculture for a year — is the practice of building the room the LLM stands in.
The Art of Reduction
One operator. One codebase. One operating system that boots in seconds and maintains state across sessions. 26 years of digital craft compressed into infrastructure that makes a solo practitioner move at the speed of a team.
This is the practice of building with AI from the inside. Not tips. Not tutorials. The actual work — documented as it happens, proven by the commit history, refined by the corrections.
The model is a commodity. The situation is the edge.
Most AI workflows are prompts applied to empty rooms. This is what an operating system looks like instead — document tiers, named characters, a wiki the model can read, and a boot sequence that loads context before the first prompt arrives.
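The boot sequence described above can be sketched in a few lines. This is a minimal illustration, not the author's actual system: the tier filenames, directory layout, and ordering are all assumptions made for the sake of the example.

```python
from pathlib import Path

# Hypothetical tier documents, loaded in priority order before the first
# prompt arrives. Names and layout are assumptions, not the real system.
TIERS = ["tier0_identity.md", "tier1_conventions.md", "tier2_wiki_index.md"]

def boot_context(root: str) -> str:
    """Concatenate tier documents in order; missing tiers are skipped."""
    parts = []
    for name in TIERS:
        path = Path(root) / name
        if path.exists():
            # Label each tier so the model can see where one document ends.
            parts.append(f"<!-- {name} -->\n{path.read_text()}")
    return "\n\n".join(parts)
```

The returned string would be prepended as system context, so the model meets a furnished room rather than an empty one.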
The AI reflects whatever substrate it meets. The dangerous case isn't where it refuses to answer. It's where it answers fluently and nobody in the room can tell it's wrong.
I'd never heard of grep and I've been building websites for twenty-six years. It turns out the simplest operation in computing — searching your own content — is the one most platforms make impossible.
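The operation in question really is simple. A minimal sketch of what `grep` does over a folder of your own writing, assuming plain-text files on disk (the file extension and directory layout here are illustrative):

```python
from pathlib import Path

def grep(pattern: str, root: str, glob: str = "*.md"):
    """Minimal grep: yield (filename, line_no, line) for matching lines."""
    for path in sorted(Path(root).rglob(glob)):
        for no, line in enumerate(path.read_text().splitlines(), start=1):
            if pattern in line:
                yield (path.name, no, line)
```

The point stands: this takes a dozen lines when you own your content as files, and is impossible when a platform owns it for you.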
I fed the same article to four frontier AI models. Three returned confident summaries — of articles I hadn't written. They didn't misread the content. They didn't know who I was. The insight that survived had vocabulary with no escape route.