Back to the future
The economics of human coordination
Modern software delivery processes evolved to coordinate scarce and expensive human implementation effort.
Agile, Scrum, ticketing systems, sprint planning, backlog grooming, estimation rituals, and architectural review processes all emerged in an environment where writing and modifying software was slow, expensive, and error-prone. Engineering time was the primary constraint. The purpose of process was to preserve and transmit enough context that groups of humans could coordinate implementation work without constantly colliding with one another.
In that world, caution was rational. Changes were expensive to make. Refactoring was expensive. Reversing bad decisions was expensive. Coordination failures could burn weeks of engineering effort. Organisations therefore evolved elaborate systems for decomposing work, sequencing dependencies, reducing ambiguity, and preventing waste.
Agentic development can do more, faster
Agentic software development changes the economics beneath those assumptions.
When implementation effort becomes cheap and abundant, the optimisation target changes with it.
An engineer working with agents can often produce functional software at extraordinary speed simply by ignoring many of the standards that traditionally governed software craftsmanship. Strict abstractions, perfect naming, architectural elegance, and carefully layered design can all be deferred. The system may still function. The feature may still ship. In many cases, it may ship dramatically faster.
This creates a temptation to conclude that code quality no longer matters. That would be the wrong conclusion.
The value of good abstraction
Readable, coherent, well-structured systems still matter because humans still operate them. Humans still diagnose failures, make architectural decisions, maintain long-term coherence, and determine whether a system remains understandable as complexity grows. Poor abstractions still create cognitive load. Technical debt still accumulates. A disorganised codebase still slows future development, even if agents help produce it, because agents themselves rely in part on the patterns, structures, and conventions they discover within the existing codebase. As coherence declines, both human and machine understanding degrade with it.
Ship now, refactor later
Traditional engineering processes attempted to enforce quality continuously during implementation because implementation itself was expensive. Agentic development makes it increasingly viable to separate rapid delivery from later consolidation.
The emerging workflow looks less like traditional handcrafted engineering and more like Agile and Extreme Programming as originally conceived. Systems can be built rapidly, validated against reality, observed under actual usage, and then restructured once the valuable parts become clear. Infrastructure can be rebuilt underneath functioning systems. Failed ideas can be removed quickly. Successful ideas can be normalised and hardened later.
Back to the future
In practice, this resembles the original spirit of Agile more closely than many modern Agile implementations do.
Early Agile methods recognised that working software and rapid feedback mattered more than excessive upfront design. Over time, however, many organisations transformed Agile into a bureaucratic coordination layer built around tickets, ceremonies, and decomposition rituals. Those processes were often necessary because large groups of humans had to coordinate scarce implementation capacity.
Agents reduce part of that coordination burden.
They can synthesise context across systems, search large codebases, reconcile documentation, propose implementations, and perform large volumes of routine plumbing work. This allows individual engineers to operate across broader scopes with less organisational friction.
But the reduction in implementation cost introduces a new danger: organisations can now generate technical debt much faster than before.
The sorcerer's apprentice
There is a useful analogy in Goethe’s The Sorcerer’s Apprentice. The apprentice succeeds in animating the broom to accelerate his housekeeping, but lacks the judgement required to govern the system he has unleashed: the broom multiplies, and the resulting chaos is beyond his control.
In the same way, a poorly structured team using agents can flood a codebase with inconsistent abstractions, duplicated logic, fragile integrations, and semantically incoherent systems at unprecedented speed. The limiting factor is no longer the speed of human code production. It is the ability to translate strategy into coherent, integrated implementation.
That shifts the role of engineering upward.
The scarce and valuable skills become architectural reasoning, context management, semantic clarity, organisational alignment, validation, defining correct boundaries, and deciding where determinism matters and where probabilistic behaviour is acceptable. The engineer becomes less responsible for writing individual components and more responsible for governing the evolution of the system as a whole.
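One way to picture that last responsibility is as a boundary in code. The sketch below is a minimal illustration, with entirely hypothetical names and checks, not a reference to any particular framework: an agent may phrase a proposed change in countless ways, but the gate that accepts or rejects it is deterministic, reviewed, and lives in ordinary code.

```python
# Hypothetical sketch: a deterministic acceptance boundary around
# probabilistic agent output. All names and checks here are illustrative.

from dataclasses import dataclass


@dataclass
class Proposal:
    """An agent-generated change: a description and the change itself."""
    description: str
    sql: str


def validate(proposal: Proposal) -> list[str]:
    """Deterministic checks: the same input yields the same verdict."""
    errors = []
    if "DROP TABLE" in proposal.sql.upper():
        errors.append("destructive statement requires human review")
    if not proposal.sql.rstrip().endswith(";"):
        errors.append("statement must be terminated")
    return errors


def apply_if_valid(proposal: Proposal) -> bool:
    """The boundary: probabilistic generation, deterministic acceptance."""
    errors = validate(proposal)
    if errors:
        print(f"rejected: {', '.join(errors)}")
        return False
    print(f"applying: {proposal.description}")
    return True


# The agent may generate this migration a hundred different ways;
# the acceptance criteria never vary.
apply_if_valid(Proposal("add index on users.email",
                        "CREATE INDEX idx_users_email ON users (email);"))
```

The engineering judgement lies in deciding which side of that line each part of the system belongs on, not in writing either half by hand.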
What not to do
Adding agents to existing delivery pipelines without restructuring the surrounding workflow often produces disappointing results. Agentic capabilities become accelerators of technical debt, or speed up low-value processes, rather than acting as catalysts for new and more efficient organisational forms. Teams preserve the same ceremonies, the same decomposition structures, and the same coordination patterns, then merely ask agents to participate inside them, even though those structures were designed around a different economic reality.
The deeper opportunity is not merely faster implementation. It is redesigning the software delivery model for a world where implementation is abundant, but integrative systems reasoning remains scarce.