

Do We Still Need Agile?

Agile was a reaction to slow, expensive implementation. If AI collapses implementation cost to near zero, did we just accidentally resurrect the design-first methodologies it replaced?

Agile was invented to solve a specific problem: software development was slow, expensive, and wrong. The waterfall process — full requirements, full design, full implementation, then test and discover you built the wrong thing — routinely produced systems that were over budget, late, and didn't match what users actually needed. The Agile Manifesto in 2001 was a corrective: working software over comprehensive documentation, responding to change over following a plan, customer collaboration over contract negotiation. The core insight was that you could not know what was right in advance, so the process should be optimized for learning fast and adapting constantly.

That insight was correct, for its time. But the underlying assumption — that implementation is expensive and slow, so you must minimize rework by keeping cycles tight — is being aggressively eroded by AI-assisted development. Which raises a question: if the cost of implementation is approaching zero, does the logic of Agile still hold? And if it doesn't — what replaces it?


To understand what we might be giving up, it's worth remembering what Agile replaced and why.

Waterfall was not invented by sadists. It came from Winston Royce's 1970 paper on managing large software projects, and it reflected how large engineering projects actually worked: you designed the bridge fully before you started pouring concrete, because once you started pouring you could not cheaply change the design. Software seemed to follow the same logic. The BDUF — Big Design Up Front — tradition was not irrational. If implementation takes months and you can only do it once, you had better be sure about what you're implementing.

The problem was that software is not concrete. Requirements change. Users don't know what they want until they see what they don't want. The feedback loops in waterfall were months or years long, which meant you could spend enormous effort building something entirely wrong.

Extreme Programming, Scrum, Kanban — these were all attempts to shorten those feedback loops by treating implementation as cheap and repeatable. Write the test before the code. Ship every two weeks. Keep the backlog small and reprioritize constantly. The ritual of the sprint exists specifically because someone calculated that two weeks was short enough to catch mistakes before they became catastrophic.

But "short enough" was a function of how long implementation actually took.


Here is the shift. Agile's unit of work was always calibrated to implementation cost. One sprint equals two weeks equals roughly one feature, or one meaningful slice of a feature. That ratio was not arbitrary — it was set by how long implementation actually took. The sprint length, the story point system, the definition of "done," the shape of the backlog: all of it was tuned to a world where a feature cost a week of engineering time.

With AI-assisted development, that ratio has broken. A competent engineer can now implement in an afternoon what used to take a sprint. Which means the unit of work per sprint is no longer one feature — it is closer to a full product version. A prototype that used to require a sprint can be up and running the same afternoon. An MVP that used to require a quarter can exist by the end of the week.

Agile does not scale to this. The ceremonies — sprint planning, backlog grooming, story points, velocity tracking — were designed to manage the flow of work when work moved slowly. When work moves fast enough that you could ship a full product version before the sprint planning meeting is over, the management overhead of Agile is no longer a small fraction of the work. It is the work. You spend more time in the process ritual than doing the thing the process was supposed to organize.

The constraint has moved. It is no longer "how do we avoid wasting implementation effort." It is now "how do we make sure we are building the right thing at all."

This is, ironically, the exact problem that BDUF was designed to solve.


The older methodologies that Agile displaced — waterfall, RUP, spiral development — had long design phases not out of bureaucratic habit, but because design was cheaper than implementation. Thinking was cheaper than building. You could iterate through many possible designs on paper — or in diagrams, or in formal specifications — at a fraction of the cost of implementing each one.

Spiral development in particular is worth revisiting. Barry Boehm proposed it in 1986 as an iterative process that put risk analysis at the center of each cycle: identify the biggest risks, build a prototype to resolve those risks, evaluate, then plan the next spiral. It was explicitly not "design everything upfront" — it was "resolve the hardest unknowns first, repeatedly, before committing to full implementation." That is a remarkably sane framework, and it got abandoned because the prototypes and evaluations still took a long time.

What if they didn't anymore? What if an afternoon of work with AI could resolve the risk that used to require a two-week spike?


The specific problem with Kanban and Agile in a world of cheap implementation is that they are optimized for reactivity — responding to new information quickly. But reactivity is only valuable relative to how long reactions used to take. Kanban's pull system, Scrum's sprint cadence, the constant backlog reprioritization — these were designed for a world where the cost of a wrong decision was measured in weeks of engineering time, and the best you could do was catch it at the next sprint review.

If you can just try the thing in an afternoon, you already have faster feedback than any sprint ceremony provides. The overhead of managing the process — standups, planning, grooming, retrospectives, velocity tracking — was justified when it was a small fraction of total engineering time. When a full product version fits in a sprint, it is no longer a small fraction. The process overhead dominates.

Kanban in particular assumes a steady flow of discrete work items of roughly consistent size moving through defined stages. That model made sense when the unit of work was a feature or a task. It breaks when the unit of work is a product version — because you can't pull a "full product version" off a board and have it flow through QA in the same way a bug fix does. The work items are not consistent anymore. Some are hours, some are a day, some are a full product. The pipeline model does not fit.

There is also a subtler issue. Kanban and Agile were explicitly designed to keep teams in a state of perpetual reaction: what does the customer need right now, what is blocking us right now, what did we learn last sprint. This is the right posture when implementation is slow and costly. But it systematically underweights long-horizon thinking. When you are always reacting, you are rarely designing. The backlog replaces the architecture. The sprint replaces the vision. You end up with software that works for the next two weeks but has no coherent shape.

This was always a critique of Agile — that it produced working software that was difficult to evolve because nobody had ever sat down to think about it whole. AI-assisted implementation makes this worse, not better, because it lowers the cost of adding features without lowering the cost of having no coherent architecture.


So what might a better methodology look like in a world of cheap implementation?

A few hypotheses:

Longer design phases, shorter implementation phases. If implementation is fast, the bottleneck is knowing what to build. Invest more time in the design phase — not BDUF in the sense of specifying every detail upfront, but genuine exploratory design: competing approaches, user research, prototype evaluation, architectural thinking. Then implement quickly once you know what you're doing.

Multiple competing prototypes. If building a prototype costs an afternoon instead of a sprint, you can build three and compare them. This is standard practice in hardware and industrial design; in software it was prohibitively expensive. It may not be anymore. Design sprints pointed in this direction but were still constrained by implementation cost.

Risk-first sequencing à la Boehm. Before committing to an implementation direction, explicitly identify the riskiest assumptions — the things that would invalidate the design if they turned out to be wrong — and resolve those first. With cheap implementation, you can spike each risk quickly.
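As a toy illustration — the spike names and the scoring are invented, not from Boehm's paper — risk-first sequencing amounts to ordering candidate experiments by expected loss (how likely the assumption is wrong, times how much rework a wrong assumption would cost) and running the biggest unknowns first:

```python
from dataclasses import dataclass

@dataclass
class Spike:
    """A hypothetical one-afternoon experiment that tests one assumption."""
    assumption: str
    likelihood_wrong: float  # 0..1, chance the assumption fails
    cost_if_wrong: int       # days of rework if it fails undetected

    @property
    def risk(self) -> float:
        # Classic expected-loss scoring: probability times impact.
        return self.likelihood_wrong * self.cost_if_wrong

def sequence_spikes(spikes: list[Spike]) -> list[Spike]:
    """Boehm-style ordering: resolve the biggest unknowns first."""
    return sorted(spikes, key=lambda s: s.risk, reverse=True)

# Invented example backlog of risky assumptions.
backlog = [
    Spike("users will tolerate async sync", 0.3, 5),
    Spike("vendor API handles our volume", 0.6, 20),
    Spike("schema supports multi-tenant", 0.2, 40),
]

for s in sequence_spikes(backlog):
    print(f"{s.risk:5.1f}  {s.assumption}")
```

The point of the sketch is the ordering, not the numbers: a moderately likely failure with a large blast radius outranks a near-certain failure that costs a day to fix, which is exactly the judgment the spiral model asks you to make before each cycle.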

Less ceremony, more thinking. The standups, story points, and sprint planning were process overhead justified by the need to coordinate expensive implementation work. If implementation is cheap and fast, the coordination overhead may not be worth it. A smaller team with a clearer shared vision and a longer planning horizon may outperform a larger Agile team running process for its own sake.


None of this means Agile is simply wrong. The core values — working software over documentation, customer collaboration, responding to change — remain sound. But the practices that were derived from those values assumed a world where implementation was the expensive constraint. That world is changing faster than the practices are.

The irony is that we may be heading back toward something that looks more like the methodologies Agile displaced — not because those methodologies were right all along, but because the conditions that made them wrong are being removed. Waterfall failed because feedback loops were too slow. AI is compressing those loops. The question is whether we are thoughtful enough about which loops to compress, and whether we preserve the parts of the process that were not about speed at all — the parts about thinking clearly, designing coherently, and building something that will still make sense in two years.

The sprint was a workaround for expensive implementation. The workaround may be expiring. It's worth asking what we actually want to do with the time we're getting back.