Agile, Scrum, and Software Development Methodologies
Software teams have been arguing about the best way to build things for as long as there have been software teams. Agile, Scrum, Kanban, Waterfall — these aren't just buzzwords on a resume; they're structural decisions that shape how a project moves from idea to working code, and how badly it fails when something goes wrong. This page covers the major software development methodologies, how they differ mechanically, when each one fits, and how to think through the choice between them.
Definition and scope
A software development methodology is a framework that defines how a team plans, executes, and delivers software — who does what, in what order, and how decisions get made when requirements change (and they always change). The Agile Manifesto, published in 2001 by 17 software practitioners in Snowbird, Utah, marked a formal break from heavyweight, document-driven processes. Its 4 core values and 12 principles prioritize working software over comprehensive documentation, and responding to change over following a plan.
Agile is not a single methodology — it's an umbrella term for a family of iterative, incremental approaches. Scrum is the most widely adopted framework within that family. The Scrum Guide, maintained by Scrum co-creators Ken Schwaber and Jeff Sutherland, defines Scrum as a lightweight framework with 3 accountabilities (Product Owner, Scrum Master, Developers; the 2020 revision retired the word "roles"), 5 events (Sprint, Sprint Planning, Daily Scrum, Sprint Review, Sprint Retrospective), and 3 artifacts (Product Backlog, Sprint Backlog, Increment).
Waterfall, the older model, structures work into sequential phases — requirements, design, implementation, testing, deployment — with each phase completed before the next begins. The term is commonly traced to a 1970 paper by Winston W. Royce, though Royce actually described it as a flawed model even then, which is one of history's better ironies.
A broader look at the landscape of how software gets built — from language choice to team structure — is available on the Programming Authority home page.
How it works
Scrum organizes work into Sprints: fixed-length cycles of 1 to 4 weeks during which a team commits to delivering a potentially shippable product increment. The process runs as follows:
- Product Backlog refinement — The Product Owner maintains a prioritized list of features, fixes, and improvements.
- Sprint Planning — The team selects backlog items they can complete within the Sprint and creates the Sprint Backlog.
- Daily Scrum — A 15-minute daily event where Developers synchronize work and surface blockers.
- Sprint execution — The team builds, tests, and integrates work continuously across the Sprint.
- Sprint Review — The team demonstrates the Increment to stakeholders and collects feedback.
- Sprint Retrospective — The team inspects its own process and identifies 1 or more concrete improvements for the next Sprint.
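The planning step in this cycle can be sketched as a capacity-bounded selection from a prioritized backlog. The class and function names below are illustrative assumptions, not part of any Scrum specification — and real Sprint Planning is a team negotiation, not a greedy algorithm:

```python
from dataclasses import dataclass

@dataclass
class BacklogItem:
    title: str
    points: int  # estimated effort (story points)

def plan_sprint(product_backlog, capacity):
    """Greedily pull items from a prioritized backlog until capacity is used up.

    Illustrative sketch only: assumes the backlog is already in priority order.
    """
    sprint_backlog = []
    remaining = capacity
    for item in product_backlog:
        if item.points <= remaining:
            sprint_backlog.append(item)
            remaining -= item.points
    return sprint_backlog

backlog = [
    BacklogItem("Login flow", 5),
    BacklogItem("Audit log", 8),
    BacklogItem("Dark mode", 3),
]
print([i.title for i in plan_sprint(backlog, capacity=10)])
# → ['Login flow', 'Dark mode'] — the 8-point item exceeds remaining capacity
```

The point of the sketch is the constraint, not the selection logic: the Sprint Backlog is bounded by what the team believes it can finish, which is what makes the Increment at the end of the Sprint a realistic commitment.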
Kanban, another Agile-aligned method codified by David Anderson in his 2010 book Kanban, takes a flow-based approach rather than time-boxed sprints. Work items move through defined stages (To Do, In Progress, Done) with explicit Work-in-Progress (WIP) limits on each stage. A WIP limit of 3 on "In Progress," for example, means no more than 3 items can occupy that column simultaneously — forcing the team to finish before starting new work.
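A WIP limit is simple enough to express directly in code. This minimal Python sketch (the class and column names are assumptions for illustration, not drawn from Anderson's book) rejects any move into a column that is already at its limit:

```python
class KanbanBoard:
    """Minimal Kanban board: columns with optional Work-in-Progress limits."""

    def __init__(self, wip_limits):
        # wip_limits maps stage name -> max items, or None for unlimited
        self.wip_limits = wip_limits
        self.columns = {stage: [] for stage in wip_limits}

    def move(self, item, stage):
        limit = self.wip_limits[stage]
        if limit is not None and len(self.columns[stage]) >= limit:
            raise RuntimeError(
                f"WIP limit of {limit} reached in '{stage}': finish work before starting more"
            )
        # Remove the item from whichever column currently holds it, then place it.
        for col in self.columns.values():
            if item in col:
                col.remove(item)
        self.columns[stage].append(item)

board = KanbanBoard({"To Do": None, "In Progress": 3, "Done": None})
for ticket in ["A", "B", "C"]:
    board.move(ticket, "In Progress")

try:
    board.move("D", "In Progress")  # fourth item: the column is full
except RuntimeError as err:
    print(err)
```

The `RuntimeError` is the whole mechanism: the board refuses new work until something moves to Done, which is exactly the finish-before-starting pressure the WIP limit exists to create.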
Waterfall, by contrast, produces a comprehensive requirements specification before a single line of code is written. Testing happens after development is complete, which means defects discovered late carry a significantly higher remediation cost — a dynamic well-documented in the National Institute of Standards and Technology (NIST) report on software defect costs, which found that defects caught in production cost 30 times more to fix than those caught during design.
Common scenarios
Different methodologies suit different conditions. Three patterns appear repeatedly in practice:
Early-stage product development — When requirements are genuinely unknown and will evolve through user feedback, Scrum's short Sprint cycles allow a team to validate assumptions every 2 weeks rather than discovering a wrong direction after 6 months of Waterfall execution.
Regulatory or safety-critical systems — Medical device software, aviation systems, and certain financial platforms operate under standards — such as IEC 62304 for medical device software or DO-178C for avionics — that require rigorous documentation, traceability, and formal verification. These environments often favor plan-driven approaches with heavier upfront specification, sometimes hybridized with Agile testing practices.
Continuous-delivery teams — High-throughput engineering teams running cloud infrastructure or SaaS platforms frequently use Kanban, often combined with DevOps continuous-delivery pipelines that push deployments multiple times per day. The DevOps Research and Assessment (DORA) program, maintained under Google Cloud, tracks four key metrics — deployment frequency, lead time for changes, change failure rate, and time to restore service — that quantify delivery performance in these environments. Elite performers, per DORA's 2023 State of DevOps Report, deploy on demand with a change failure rate below 5%.
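Two of the four DORA metrics fall out of simple arithmetic over a deployment log. The record format below is invented for illustration — DORA defines the metrics themselves, not any particular data schema:

```python
from datetime import date

def dora_snapshot(deployments, window_days):
    """Compute deployment frequency and change failure rate from a deployment log.

    `deployments` is a list of (date, succeeded) pairs — a hypothetical record
    format assumed for this sketch.
    """
    total = len(deployments)
    failures = sum(1 for _, ok in deployments if not ok)
    return {
        "deployment_frequency_per_day": total / window_days,
        "change_failure_rate": failures / total if total else 0.0,
    }

# 5 deployments across a 7-day window, 1 of which failed
log = [
    (date(2024, 3, 1), True), (date(2024, 3, 1), True),
    (date(2024, 3, 2), False), (date(2024, 3, 3), True),
    (date(2024, 3, 4), True),
]
metrics = dora_snapshot(log, window_days=7)
print(metrics)  # change_failure_rate → 0.2
```

Lead time for changes and time to restore service require timestamps for commits and incidents respectively, so they need a richer log than this sketch, but the arithmetic is equally plain.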
Decision boundaries
Choosing a methodology comes down to 4 concrete variables:
- Requirements stability — Fixed and well-understood requirements favor Waterfall or a phased plan-driven approach. Fluid or emergent requirements favor Scrum or Kanban.
- Feedback loop speed — If stakeholders can engage every 2 weeks, Scrum's Sprint Review cadence works. If stakeholder access is limited, longer-cycle methods reduce coordination overhead.
- Team size and distribution — Scrum is designed for teams of 10 or fewer Developers per the Scrum Guide. Scaled frameworks like SAFe (Scaled Agile Framework) or LeSS (Large-Scale Scrum) extend Agile to programs with 50 to 150+ engineers, each with its own coordination overhead.
- Regulatory environment — Compliance requirements that mandate audit trails, formal sign-offs, or stage-gate reviews constrain methodology choice regardless of team preference.
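The four variables above can be read as a rough decision tree. The thresholds and return strings in this sketch are assumptions for illustration, not an authoritative selection algorithm; real choices weigh all four variables at once rather than short-circuiting:

```python
def suggest_methodology(requirements_stable, fast_feedback, team_size, regulated):
    """Rough heuristic mapping the four decision variables to a starting point.

    A sketch of the reasoning, not a substitute for judgment: the ordering
    reflects that regulation constrains choice regardless of preference.
    """
    if regulated:
        return "plan-driven (possibly hybridized with Agile testing practices)"
    if requirements_stable:
        return "Waterfall or phased plan-driven"
    if team_size > 10:
        return "scaled Agile (e.g. SAFe or LeSS)"
    return "Scrum" if fast_feedback else "Kanban"

# Small team, fluid requirements, stakeholders available every two weeks:
print(suggest_methodology(False, True, 6, False))  # → Scrum
```

Note that the regulatory check comes first, mirroring the point above: compliance constraints override team preference, so everything else is evaluated only once that gate is passed.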
The methodology question intersects directly with tooling — version control with Git and integrated development environments are infrastructure choices that shape how any methodology actually runs day to day.
No methodology eliminates project failure on its own. The Standish Group's CHAOS Report has tracked software project outcomes since 1994 and consistently finds that project success rates correlate more strongly with team experience and clear requirements than with any specific process framework.