Steve Yegge's recent essay "The AI Vampire" raises an urgent problem: AI-assisted programming accelerates output but exhausts developers through relentless cognitive demands. Paired with Margaret Storey's work on cognitive debt, the conversation reveals a critical blind spot in how teams measure software productivity.
The dynamic works like this. AI coding assistants generate solutions faster than humans can review them. Developers feel pressure to keep pace, shipping code without fully understanding its architecture or implications. The work feels productive in the moment: velocity dashboards show green, sprints finish on time. But developers accumulate mental fatigue, making constant low-level decisions about what to accept, modify, or reject from AI suggestions. That's exhausting.
Cognitive debt mirrors technical debt. When developers skip deep understanding to move quickly, they don't accumulate broken code; they accumulate gaps in their mental models. They understand less about the system over time, not more. The problem compounds: new developers inherit codebases they can't easily comprehend, and experienced developers must constantly reorient themselves when returning to old sections.
Yegge and Storey both point to the same gap. Teams obsess over velocity metrics and feature count. Nobody measures clarity, comprehension, or the mental load required to maintain systems. An AI-generated codebase that ships twice as fast but requires triple the cognitive effort per maintenance cycle is a bad trade.
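To make that trade concrete, here is a minimal back-of-the-envelope sketch in Python. The "twice as fast" build and "triple the effort" maintenance come from the scenario above; the specific hours and the number of maintenance cycles are hypothetical placeholders, not figures from Yegge or Storey.

```python
# Back-of-the-envelope comparison of total effort over a feature's life.
# All numbers are hypothetical placeholders, chosen only to illustrate
# the "ships twice as fast, costs triple to maintain" trade.

def total_effort(build_hours, maintenance_hours, cycles):
    """Initial build plus repeated maintenance cycles, in hours."""
    return build_hours + maintenance_hours * cycles

# Hand-written: slower to build, cheaper to revisit.
manual = total_effort(build_hours=10, maintenance_hours=4, cycles=6)

# AI-assisted: ships twice as fast, but each maintenance cycle demands
# triple the cognitive effort because nobody holds the mental model
# of the generated code.
assisted = total_effort(build_hours=5, maintenance_hours=12, cycles=6)

print(f"manual:   {manual} hours")    # 10 + 4*6  = 34 hours
print(f"assisted: {assisted} hours")  # 5 + 12*6  = 77 hours
```

With these placeholder numbers, the initial five-hour saving is gone before the first maintenance cycle ends, and every later change widens the gap.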
The burnout stems directly from this. Developers work harder to understand less. Their expertise feels devalued when a chatbot can generate boilerplate faster than they can think. The speed advantage dissolves once changes are needed and nobody understands the original intent.
The fix requires separating velocity from value. AI should augment understanding, not replace it. That means reviewing AI output deeply, documenting reasoning, and pushing back when suggestions skip important architectural decisions. It means treating comprehension as something teams measure and protect, not something they trade away for speed.
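One lightweight way to practice the "document reasoning" part is to record, at the point where an AI suggestion is accepted, what was reviewed and why. The sketch below is only an illustration, not a convention from either essay; the module, names, and reviewer notes are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical example of an accepted AI suggestion with the reviewer's
# reasoning recorded next to it, so the mental model survives the author.

@dataclass
class Job:
    name: str
    attempts: int

def requeue_failed_jobs(jobs, max_attempts=3):
    """Return the jobs that should be retried.

    Reviewer notes (written when the AI suggestion was accepted):
    - The assistant originally proposed retrying every failed job; that
      was rejected because poison messages would loop forever, so the
      max_attempts cap was added by hand.
    - Input order is preserved deliberately: downstream consumers assume
      FIFO behaviour.
    """
    return [job for job in jobs if job.attempts < max_attempts]

if __name__ == "__main__":
    jobs = [Job("export", 1), Job("billing", 5)]
    print([j.name for j in requeue_failed_jobs(jobs)])  # ['export']
```

The point is not the format but the habit: the decision that would otherwise live only in the reviewer's head gets written down where the next maintainer will find it.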
