Over the Christmas break, I did what I love most: build. No meetings, no Slack, just me and a side project.
I was building fast, leaning on an LLM to do the heavy lifting, and it felt great. Feature after feature, shipping daily. Then I needed to add something new and realized I couldn’t. I didn’t understand how the pieces fit together anymore. I couldn’t debug. I couldn’t reason about where a new feature should live or what it would break.
The system had run away from me.
Simon Willison shared a piece this week that gave this feeling a name: cognitive debt. When you use AI to build fast, the debt doesn’t accumulate in the code. It accumulates in your head. You lose the mental model of your own system, and once that’s gone, speed is an illusion.
Technical debt slows you down. Cognitive debt lets you think you’re fine until you’re not.
I hardly write code myself anymore. Strange thing to say as someone who spent the break coding, but it’s true. The design is the work now. I spend hours going back and forth on the architecture before anything gets built. What are the boundaries of this feature? Where does it touch the rest of the system? What assumptions am I making, and what happens when they’re wrong? Those are the decisions that compound. Get them wrong and no amount of AI-generated code will save you.
This is where LLMs fall short. They’re brilliant at the micro. Give them a function, a component, a bug, and they’ll nail it. But they optimize locally. They solve the problem in front of them without asking whether it’s the right problem, or how it reshapes everything around it. The macro view is still a human job. Maybe the most important one left.
But you can’t manage what you can’t see. That’s why I build observability in from day one. Not logging. Custom developer views, baked directly into the application. The AI assistant I’m working on has a debugger page where I can open any session and trace the exact event timeline. Every decision the system made, why it made it, and when a model is involved, its reasoning too.
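To make that concrete, here's roughly what the timeline looks like: a session is just an ordered list of events, each recording what happened, why, and when. The sketch below is illustrative; the names and fields aren't my actual schema.

```python
# A rough sketch of the timeline shape, assuming Python 3.10+. The classes,
# event kinds, and fields are illustrative, not the real system's schema.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json


@dataclass
class Event:
    kind: str                     # e.g. "tool_call", "model_decision", "error"
    summary: str                  # what the system did
    rationale: str | None = None  # why it did it; model reasoning when present
    at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())


@dataclass
class SessionTrace:
    session_id: str
    events: list[Event] = field(default_factory=list)

    def record(self, kind: str, summary: str, rationale: str | None = None) -> None:
        self.events.append(Event(kind, summary, rationale))

    def to_json(self) -> str:
        # One serialization serves both the debugger page and the agent.
        return json.dumps(asdict(self), indent=2)


trace = SessionTrace("session-42")
trace.record("model_decision", "picked the search tool",
             rationale="the query mentions a recent date")
print(trace.to_json())
```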
These views do double duty. When something breaks, I feed them back into the agent as structured data. Instead of me digging through logs trying to explain the problem, the agent gets the event timeline, sees where things went wrong, and debugs itself. The same tool that keeps me oriented keeps the AI effective.
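The handoff itself is nothing clever: wrap the symptom and the serialized timeline into one structured prompt and hand it over. A minimal sketch, with purely illustrative wording:

```python
# A sketch of the handoff; the function name and prompt wording are mine,
# not a real API. `timeline_json` is the SessionTrace.to_json() output above.
def debug_prompt(timeline_json: str, symptom: str) -> str:
    # The timeline replaces me narrating the problem from raw logs.
    return (
        f"A session failed: {symptom}.\n"
        "Below is the full event timeline, including every decision the "
        "system made and its reasoning. Find where things went wrong and "
        "propose a fix.\n\n"
        f"{timeline_json}"
    )
```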
Once the design is solid, I produce two documents. One for the machine, a detailed spec with every decision, constraint, and implementation detail an AI agent needs to build correctly. One for the human, a document with sequence diagrams and architecture graphs that explains the feature back to me like I’m seeing it for the first time. Not dry documentation. Something I’d actually want to read. I built dedicated Claude skills for both.
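I won't walk through the skills here, but the split is easy to sketch. As a rough stand-in, assuming the Anthropic Python SDK and a placeholder model id, the same design notes go through two system prompts:

```python
# A stand-in sketch using the Anthropic Python SDK, not the skills
# themselves. The prompts and model id are placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

PROMPTS = {
    "machine_spec": (
        "Turn these design notes into an implementation spec for a coding "
        "agent: every decision, constraint, and implementation detail, "
        "stated unambiguously."
    ),
    "human_doc": (
        "Explain this feature to its author as if they were seeing it for "
        "the first time: sequence diagrams, architecture sketches, and "
        "prose worth reading."
    ),
}


def write_docs(design_notes: str) -> dict[str, str]:
    docs = {}
    for name, system in PROMPTS.items():
        reply = client.messages.create(
            model="claude-sonnet-4-20250514",  # placeholder; use a current model
            max_tokens=4096,
            system=system,
            messages=[{"role": "user", "content": design_notes}],
        )
        docs[name] = reply.content[0].text
    return docs
```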
The machine document lets the AI move fast without straying. The human document keeps me in the driver’s seat. I don’t start building until I can read my own explanation and see the whole system in my head.
AI handles the code now. What’s left is the harder job: knowing what to build and why.