With Spec-Driven Development we've come full circle to Big Design Up Front


Intro

When I first read GitHub’s article about spec-driven development, my immediate reaction was: “Wait a minute. I’ve heard this before…”

Here's how they describe it:

You provide a high-level description of what you’re building and why […] It’s about user journeys, experiences, and what success looks like. […] Now you get technical. In this phase, you provide the coding agent with your desired stack, architecture, and constraints, and the coding agent generates a comprehensive technical plan…

Does this sound familiar? It should.

We used to call this Big Design Up Front (BDUF). We spent decades running away from it. And now we're sprinting right back, rebranding it as innovation for the AI era. From GitHub:

[…] we’re rethinking specifications — not as static documents, but as living, executable artifacts that evolve with the project. Specs become the shared source of truth. When something doesn’t make sense, you go back to the spec; when a project grows complex, you refine it; when tasks feel too large, you break them down.

The irony is so thick you could easily represent it with a UML diagram. Documentation was always meant to be a living, evolving artifact. Engineering teams should have always started with some upfront system design before jumping into code.

But implementing those best practices was harder than it sounded. Documentation was outdated the moment you pushed your next commit, writing it well was a days-long slog, and no amount of good intentions survived the pressure to ship new features.

So what’s changed? The biggest difference between BDUF and spec-driven development is that AI changes the economics of doing it well.

For the first time, writing a thorough, living specification isn't a weeks-long documentation marathon. It's a few hours (minutes?) of conversation.

That’s why teams can and should adopt it now. Spec-driven development with AI is genuinely better than BDUF, and it will make your team more productive.

A quick trip through history

To understand why this matters, it's worth remembering what we're comparing it to.

In the waterfall days, BDUF was gospel. You'd spend months perfecting every requirement, every architecture diagram, every interface specification before writing a single line of code. The theory was sound: think deeply before building, catch problems early, avoid expensive rewrites.

The reality was brutal. Requirements changed while you were still in month three of design reviews. Technologies evolved. Markets shifted. By the time you finished your perfect blueprint, it was already obsolete. And when implementation finally started, reality had an annoying habit of disagreeing with your theoretical perfection.

Then the Agile Manifesto arrived and we all collectively exhaled. Finally. No more soul-crushing design documents that nobody reads.

But here's what happened: a lot of teams heard "Responding to change over following a plan," and translated it to "don't plan at all." They took "working software over comprehensive documentation" to mean "no documentation."

The pendulum swung from BDUF paralysis to "we'll figure it out as we go" chaos.

I've written about why this is a problem before. Modern distributed systems are too complex to hold in your head. Without some upfront design, you end up with:

  • Teams working from different mental models of the same system
  • Architecture decisions made by accident rather than intention
  • Critical trade-offs (security, scalability, cost) discovered too late
  • Knowledge that lives in someone's head until they leave

The answer isn't BDUF. But it's also not "no design." It’s somewhere in the middle: enough upfront design to establish shared vision, identify significant risks, and make conscious trade-offs, without the paralysis of trying to specify everything in advance.

We always knew this. We just struggled to do it consistently, because it takes discipline, the benefits feel distant, and there's always pressure to just start coding.

AI tools are changing that calculus.

Why AI agents need structure and how they make it easier

Here's the thing about AI coding tools and agents: they need specificity and structure.

Why? Because they lack the capabilities that let human developers work with ambiguity:

  • They lack context beyond the prompt. When you tell a senior engineer "build an authentication system," they know you're already using Auth0 for your customer portal and will follow the same pattern for consistency. An AI agent? It might give you a perfectly functional JWT implementation from scratch, or suggest Passport.js, or recommend Firebase Auth. All reasonable choices in isolation, but potentially creating fragmentation in your architecture. They don't have access to your team's tribal knowledge, past design decisions, or the unwritten "we standardized on XYZ" agreements.
  • They don't think defensively or double-check edge cases. A human developer building a user registration endpoint will pause and think: "Wait, what if someone registers with the same email twice? What if the email service is down? What about internationalized characters in names?" They'll ping you on Slack if they're unsure. An AI agent will confidently write the happy path and keep moving.
  • They struggle with ambiguity. Tell a human developer "make it performant" and they'll ask clarifying questions or make reasonable assumptions based on the application type. Tell an AI agent "make it performant" and you might get... literally anything. But write "Ensure 95th percentile response time stays under 200ms" and suddenly they have something concrete to optimize for.
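To make the registration example concrete, here is a minimal sketch of the edge cases a spec would force into the open: duplicate emails and internationalized names. All names and types here are hypothetical, invented for illustration, not from any real codebase.

```python
import unicodedata


class DuplicateEmailError(Exception):
    """Raised when the email is already registered."""


def register_user(email: str, name: str, existing_emails: set[str]) -> dict:
    # Normalize so "User@Example.com " and "user@example.com"
    # are treated as the same account.
    normalized = email.strip().lower()

    # Edge case: duplicate registration. Fail loudly instead of
    # silently creating a second account.
    if normalized in existing_emails:
        raise DuplicateEmailError(normalized)

    # Edge case: internationalized characters in names. Normalize to
    # a canonical Unicode form rather than rejecting them.
    canonical_name = unicodedata.normalize("NFC", name)

    existing_emails.add(normalized)
    return {"email": normalized, "name": canonical_name}
```

A human developer reaches for these checks instinctively; an AI agent only does so reliably when the spec names them.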

That’s why GitHub created Spec Kit, a toolkit that formalizes a four-phase process: Specify (user journeys, experiences, outcomes) → Plan (stack, architecture, constraints) → Tasks (small, reviewable chunks) → Implement (AI generates code).

But here's what's different from BDUF: you don't have to write all of this yourself, from scratch, in a vacuum. The process is collaborative and iterative, with AI doing the heavy lifting at every stage.

What this actually looks like in practice

At each stage of this iterative conversation with AI tools, the humans are doing the high-value work: making judgment calls, catching misalignments, applying context the AI doesn't have. The AI is doing the drafting, the structuring, the fleshing-out.

What used to take days of writing and review meetings now takes hours.

The barrier to actually doing upfront design properly (the time cost, the documentation overhead, the discipline required) drops dramatically.

"We need to do this for the AI tools to work well" is the forcing function that finally gets engineering teams to:

  • Write clear specifications before coding
  • Document architectural decisions while they're fresh (not six months later when someone asks "why did we build it this way?")
  • Break down work into reviewable, testable chunks
  • Maintain living documentation that evolves with the system (actually keeping it updated as things change)

The humans get better collaboration, clearer communication, and knowledge that survives beyond individual team members. The AI gets the context it needs to generate useful code. Future developers (human or AI) inherit comprehensible systems instead of archaeological dig sites.

Where this really pays off: parallel AI agents

The single most compelling argument for spec-driven development is what happens when you start running multiple AI coding agents in parallel.

This is where structured upfront design becomes essential: when you have several agents working simultaneously on different parts of a system (e.g. one building the API layer, another handling the data model, another working on the frontend integration) the spec and the task breakdown become the coordination mechanism. They're how the agents know what they're each responsible for, what interfaces they can rely on from each other, and what the boundaries of their work are.

Without that shared source of truth, parallel agents will do what parallel human teams do without proper coordination: make inconsistent assumptions, build to incompatible interfaces, duplicate work, and create integration problems that are expensive to untangle later.

With a well-structured spec and clearly decomposed tasks, each agent has exactly the context it needs. It knows the overall architecture. It knows the constraints. It knows what it owns and what it depends on. The result is faster development that actually fits together.

Full circle, but wiser

So yes, there's a family resemblance between spec-driven development and BDUF. Both involve designing before you build. Both emphasize clear requirements and structured planning.

But the differences matter. BDUF was slow, manual, expensive, and brittle. Spec-driven development with AI is fast, iterative, collaborative, and maintainable.

More importantly, it's actually achievable.

The reason teams skipped upfront design wasn't that they didn't believe in it; it was that the cost was too high relative to the short-term pressure to just start coding. AI changes that equation. When a solid spec takes hours instead of weeks, the discipline becomes sustainable.

We've come full circle, but we've brought better tools.

One more thing AI agents need

Better specs and clearer task decomposition are a significant step forward. But specs and plans describe intentions. What AI agents also need (especially for debugging, iteration, and understanding production behavior) is visibility into what systems actually do at runtime.

Your AI coding assistant can't debug a production issue if your observability stack only shows sampled traces. It can't understand a user-reported bug if you can't connect frontend errors to backend failures. It can't suggest meaningful fixes if it can't see the complete context of what actually happened.
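One minimal way to "connect frontend errors to backend failures" is a shared correlation ID that both sides attach to their events, so an agent (or a human) can join the two streams. This is a simplified sketch with invented names, not any particular observability stack's API.

```python
import uuid


def new_correlation_id() -> str:
    # Generated on the frontend and sent with every request.
    return uuid.uuid4().hex


def frontend_event(correlation_id: str, message: str) -> dict:
    return {"source": "frontend", "correlation_id": correlation_id, "message": message}


def backend_event(correlation_id: str, message: str) -> dict:
    return {"source": "backend", "correlation_id": correlation_id, "message": message}


def join_by_correlation(events: list[dict], correlation_id: str) -> list[dict]:
    # The "complete context": every event behind one user-visible failure.
    return [e for e in events if e["correlation_id"] == correlation_id]
```

Real tracing systems standardize this (e.g. trace IDs propagated in request headers), but the principle is the same: without the join key, the AI sees two unrelated errors instead of one story.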

The same trend that's pushing us toward better design practices upfront should also push us toward better observability practices in production. AI agents are revealing gaps in both, and closing those gaps is what will make AI-assisted development genuinely powerful, end to end.