Disclaimer: this article expresses my personal opinion and is based upon personal experiences. You can consider this a rant, but I am open to any constructive criticism. Yes, the article shows a single view on the state of affairs and everything depends upon the context and the situation. The only thing we can do is to learn from experiences and work together to build great software products and improve the world we live in.
Traditional development is like being a passenger on a bus tearing down a serpentine mountain road, driven by a drunk driver
The fallacy of the perfect requirements
Let’s define all requirements up front. Let’s create the entire design up front. Let’s freeze those requirements and develop based upon that frozen set.
Great!
So, what’s wrong?
Requirements do change
External and internal factors cause change. The world around us changes. Technology advances at a fast pace. Competitors arrive with new ideas. Legal requirements are imposed. We get feedback from the market: users, customers, competitors. We learn as we do.

The decay of requirements during the project life-cycle. Even in the best case, requirements will inevitably change. In the worst case, only half of your requirements will still be valid.
In case the planned delivery date is not met, the situation gets worse: it becomes more difficult to ever meet the actual business needs. Requirements further decay.
The assumption that you can design and define all requirements up front is simply false.
Complex product and software creation is a wicked undertaking. Software is intangible. Knowledge lives in people’s heads. Delivering an all-inclusive requirements specification document and aiming at a 100% satisfying software application will simply not happen in reality. It’s a self-fulfilling prophecy.
As soon as the actual end-users (customers) consume the product, opinions are formed and feedback is given. By using the software product, we learn about its use, and potential improvements instantly emerge. The act of delivering the software causes the requirements to change. The longer it takes to get feedback about the software product, the longer it takes to converge to a solution that satisfies the user needs. In fact, requirements change while the software is being developed.
So what happens in traditional sequentially phased product development?
- Lots of change requests
- Lots of defects; a large part of those defects are actually improvements based upon (testing) feedback (so, change requests)
- Discontented business stakeholders; impatient end-users
- A feeling of lost opportunities
- Delayed delivery
- Value delivered only at the very end

In a sequentially phased project, value is only delivered at the end, and risk only starts decreasing at the end.
The longer the duration of the sequential phases, the larger the devastating effect will be. The longer it takes to define the requirements and to deliver the software, the more substantial the change will be. The later in the software development life-cycle the change is requested, the more costly it will be.

Figure 1: The cost of change over the lifetime of a project. Red line: in a traditional sequential project, the cost of change increases dramatically as the project progresses. Grey dotted line: the cost of fixing defects follows a similar curve. Blue line: the ability to respond to change decreases linearly as the project progresses.
Figure 2: Red line: the cost of change in a traditional sequential project. Blue line: the cost of change in a realistic agile project approach. Grey line: the ideal cost of change in an agile approach.
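To make the escalation concrete, here is a minimal Python sketch of the often-cited relative cost-of-change multipliers attributed to Barry Boehm’s research. The exact numbers are illustrative assumptions; published figures vary widely between studies.

```python
# Illustrative relative cost-of-change multipliers by lifecycle phase.
# The exact values are assumptions for demonstration only; studies differ.
PHASE_MULTIPLIER = {
    "requirements": 1,
    "design": 5,
    "coding": 10,
    "testing": 50,
    "production": 150,
}

def cost_of_change(base_cost: float, phase: str) -> float:
    """Estimated cost of making one change, given the phase it is caught in."""
    return base_cost * PHASE_MULTIPLIER[phase]

for phase in PHASE_MULTIPLIER:
    print(f"{phase:>12}: {cost_of_change(1.0, phase):>6.0f}x")
```

The shape of the table is the point, not the precise values: a change that costs one unit while requirements are still on paper costs orders of magnitude more once the software is in production.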
The fallacy of end-game testing
Let’s fully develop the software product, and next let’s test everything at the end.
Great!
So, what’s wrong?
Bugs and defects are discovered late in the development process. The cause of a defect can be anything: a bad design, a coding mistake, an integration issue, or simply a misunderstanding or misinterpretation of the requirements.
So what happens in traditional sequentially phased product development?
Defects are reported late. There is little to no time left to fix them all. Only the functionally blocking defects are solved, just enough to get the application into an acceptable state. Defects that merely “scratch the surface”, and points for improvement, are postponed to a “phase II” or “aftercare release”, which rarely happens, or which does not address all issues. And most importantly: this happens way too late. In production, we call these “production incidents”.
People are pushed into overtime and weekend work. Bugs must be fixed; retesting must be done. Remember, we are near the end of the project (development is done) and business stakeholders are impatient to release.
…with the software in production, fixing bugs is akin to repairing a car while it is driving down the road, long after it has left the drawing board, the assembly line, and the dealer lot. It’s as expensive to do as it can possibly be.
In sequential development learning happens late. Changing or removing requirements causes rework at a high cost and may break other functionality. Fixing defects is costly.

http://www.ambysoft.com/artwork/comparingTechniques.jpg
The fallacy of end-game system integration
Let’s define and develop all components or software layers separately. Next let’s have one phase at the end to integrate it all.
Great!
So, what’s wrong?
Big-bang system integration will not work as planned. No upfront system analysis or design can envision or control the full process of system integration. In real-life complex software development, something will always pop up or go wrong. The assumptions regarding the integration will not hold, due to changes, constraints, or simply the unknown unknowns that surface along the way. Real knowledge of how to integrate the system is acquired as you are integrating: too late.
So what happens in traditional sequentially phased product development?
Integration problems surface at the end of the project: they demand ad-hoc root-cause analysis, but without the ability to systematically fix the underlying issues.
Delivery is delayed, and incomplete system integration means the risks remain high: not all potential issues have been discovered.
The fallacy of a fully planned-ahead schedule
Let’s create an end-to-end project plan, schedule all activities, plan and assign all resources, and map all dependencies. Oh, and additionally, we will add some buffer in case something unexpected happens – you never know. We also define the critical path and monitor it like maniacs.
Great!
So, what’s wrong?
We assume we have planned for all the knowns and added some buffer for the known unknowns. But you cannot schedule for the unknown unknowns. We also assume we are good at estimating the effort of all activities in the different project phases. Moreover, we estimate and schedule everything upfront, at the very beginning.
Software creation is an inherently complex process. Given the multiple sources of potential change and the unknown unknowns discovered along the way, the project will never be executed as scheduled. In the process of creating, designing, developing and testing, new information and knowledge are constantly acquired.
Managers expect they can plan for all the variables in a complex project in advance, but they can’t. Nobody is that smart or has that clear a crystal ball.
Source: https://hbr.org/2003/09/why-good-projects-fail-anyway
Cone of uncertainty: variability in estimates

The Cone of Uncertainty indicates the degree of variability in estimates over the lifetime of a project.
Humans are bad at estimating, even when given historical data. Estimates are treated as the truth; they become a self-fulfilling prophecy. The whole project is built upon those estimates, and milestones are defined. Expectations are set and promises are given. It becomes a dysfunctional model. The project progresses, but almost no one dares to truly acknowledge that those initial estimates may be off. We don’t want to break the expectations. The whole project rests upon initial estimates that are taken as commitments. No empirical process is in place to get in touch with reality and adapt.
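The effect of estimate variability on a fully planned-ahead schedule can be illustrated with a small Monte Carlo sketch. The task list and the triangular error distribution below are purely hypothetical assumptions, not data; the point is that when individual tasks skew late more than early, the naive sum of estimates is almost never the realistic delivery date.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def simulate_schedule(task_estimates, n_runs=10_000):
    """Monte Carlo over a crude, assumed error model: each task takes
    between 0.75x and 3x its estimate, most likely 1x (right-skewed)."""
    totals = []
    for _ in range(n_runs):
        totals.append(sum(est * random.triangular(0.75, 3.0, 1.0)
                          for est in task_estimates))
    totals.sort()
    return totals

estimates = [5, 8, 3, 13, 2, 8]   # hypothetical task estimates, in days
totals = simulate_schedule(estimates)
p50 = totals[len(totals) // 2]
p90 = totals[int(len(totals) * 0.9)]
print(f"naive plan: {sum(estimates)} days, median: {p50:.0f}, p90: {p90:.0f}")
```

Under these assumptions, the median outcome already exceeds the naive plan, and the 90th percentile exceeds it by a wide margin – yet it is the naive sum that gets baked into the milestones and promises.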
So what happens in traditional sequentially phased product development?
Project managers track and update their schedule, pushing activities out to resources ad hoc. Metrics proliferate in waterfall because nobody really knows what is happening, so we try to measure everything.
- Friction between project and sponsor
- The blame game
- The typical “IT projects are always late” situation
- Difficulty actually changing anything about the situation: a lot has been invested in infrastructure and systems, development has been done, and rework means changing things already built
- Value is delivered at the very end of the project
The fallacy of gated phases, experts working in silos and hand-overs
Let’s ask the business owner to define the business need and a concept. Let’s ask the business analysts to translate that concept and deliver the business requirements. Let’s ask the visual designers to create a stunning design. Let’s ask the copywriters to write content and text labels. The functional analysts will create a functional specification based upon those requirements. Next, a technical analysis will be performed, followed by a technical design. Those specifications will be complemented with a set of non-functional requirements and an architectural design. Finally, all this will converge into a development specification. Let’s start development! Some time later, the testers start looking into the requirements to write test scenarios.
Great!
So, what’s wrong?
It’s not efficient. It’s very time-consuming. Handovers between experts working in silos are a source of misunderstandings and misinterpretations. The business owner or representative will never talk to the developer. Requirements are lost in translation. A change or a gap correction in the specification will be costly, and often not accepted. There is no ability to act upon change, resulting in missed opportunities.
It is also a missed opportunity for a creative product development process: the people working in the different phases of the project each bring their own view from their area of expertise, which influences the concept and the design of the product. Unfortunately, the sequential nature of the project does not allow incorporating all that feedback in a meaningful way.
So what happens in traditional sequentially phased product development?
- Lots of time (and money) wasted on handovers and alignment between teams
- A missed opportunity for a creative product development process
- Lack of common understanding
- No team alignment
- Loss of intrinsic motivation and work satisfaction
To sum up
A belief that an old industrial paradigm can be applied to software manufacturing. A belief that defining and planning the full project in advance will lead to a successful on-time delivery. After several decades of software development, many new technologies and practices have come into existence, but the traditional manufacturing approach to creating software products has proven unsuccessful.
What do you think?
Read more:
- http://brodzinski.com/2011/01/good-waterfall-better-than-bad-agile.html
- http://conversationswithandrew.blogspot.be/2006/06/its-all-about-version-20.html
- http://superwebdeveloper.com/2009/11/25/the-incredible-rate-of-diminishing-returns-of-fixing-software-bugs/
- http://www.infoq.com/resource/articles/scaling-software-agility/en/resources/ch02.pdf
- http://www.waterfall-model.com
- http://alistair.cockburn.us/Waterfall+is+a+late+learning+strategy.png
- https://hbr.org/2003/09/why-good-projects-fail-anyway
- https://hbr.org/2013/10/the-hidden-indicators-of-a-failing-project/