The Quality Iceberg Game


This is really two posts in one. I have an observation to share, and then a team exercise based on that observation.


Technical Risk Runs Deep

The observation comes from scrum expert Colin Bird. Colin is one of the most gifted agile architects in the world, and often rants about the mismatch between industry-standard quality practices and true technical risk. His point is this: “Why do we spend so much money on testing through the GUI, when 90% of the risk is underneath the GUI?” It’s an intriguing question.

A standard application can be divided into three sections: the unit or module level, the integration level where those modules interact, and the GUI that exposes the behavior of all that integration. Where do most of the defects come from? Below the GUI. But consider that large tech companies hire dozens, even hundreds, of testers, clicking through an application, as the primary countermeasure for technical risk. Yes, defects are found, but when you consider the salaries of those manual testers, the licensing fees for those automated GUI testing suites, and the performance cost of going through all the tiers, the cost of finding those defects can be staggering.

Instead, agile engineers know to invest in low-level quality techniques like unit tests (aka TDD), real-time peer reviews (aka pair programming), code consistency (coding standards), and proper design patterns (refactoring). Low-level quality techniques also address the integration of stand-alone components or services. These could include design-by-contract tests, or automated acceptance tests (using FitNesse or Selenium).
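To make the unit level concrete, here is a minimal sketch of what one of these low-level tests looks like. The `DiscountCalculator` class and its pricing rule are hypothetical, invented purely for illustration; the point is that each test pins down one behavior below the GUI and runs in milliseconds, with no clicking required:

```python
import unittest


class DiscountCalculator:
    """Hypothetical business rule: 10% off orders of 100 or more."""

    def discount(self, order_total):
        if order_total < 0:
            raise ValueError("order total cannot be negative")
        return order_total * 0.10 if order_total >= 100 else 0.0


class DiscountCalculatorTest(unittest.TestCase):
    # Each test exercises one rule at the unit level --
    # no GUI, no database, no multi-tier round trip.

    def test_no_discount_below_threshold(self):
        self.assertEqual(DiscountCalculator().discount(99), 0.0)

    def test_discount_at_threshold(self):
        self.assertAlmostEqual(DiscountCalculator().discount(100), 10.0)

    def test_negative_total_rejected(self):
        with self.assertRaises(ValueError):
            DiscountCalculator().discount(-1)
```

A suite like this (run with `python -m unittest`) catches the boundary and error-handling defects that a manual tester would otherwise have to hunt for by hand through the UI.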

Let The Team Decide Its Risk Management

As a team lead, agile coach, or ScrumMaster, how do you hammer home this point? As an Innovation Games Trained Facilitator, I am a big fan of collaboration games. During a session with a team at Cisco here in India, I adapted a game called Prune the Product Tree. The intent was to have the team explore technical practices on their project in a collaborative fashion. Here are the instructions:

1. Draw a triangle, divided into three tiers (UI, Integration, Unit).

2. Ask the team to list every quality technique they can think of.

3. For each technique, post a sticky note onto the tier it addresses.

4. Here’s the interesting part. After 5-10 minutes of listing all the techniques, ask this question: “Will all these techniques fit into a single iteration?” The team answers in unison: “NO!”

5. Then tell them to choose the quality techniques that will fit in the current sprint, while also removing the most technical risk and achieving the highest quality.


This final step is where the gold is mined. Team members have to negotiate which practices best address the quality iceberg. There will be strong opinions. The testers will prefer the GUI-driven techniques, because that is generally how they are trained. The agile types will ask for all the code-level techniques, even though they’re the only ones trained to do them. To help the group reach a reasonable consensus, you may announce some caveats (“Don’t worry about lack of training or tools for certain techniques. Pick the ones you think will work best, and we’ll get you the support you need.”).

Then you take a step back and declare: this iceberg is our “Definition of Done” for the next iteration. If there are problems, we can discuss adjustments during the next retrospective.