Theory of QA Bridging – Overview

Draft Version

I’ve often wondered: why do projects fail? Can we create a quantifiable formula that predicts an approach’s risk of failure or success? Can we build a quantifiable theory that explains why some approaches work better than others?

QA and successful project delivery are inextricably linked. If you can embed good QA throughout a project, the chances of success increase.

What follows is a short explanation of a QA theory I’m proposing, one I have actually put into practice at several client sites.

I call the theory QA Bridging. It has two key variables:

  1. TL.Fq – Time Latency & Frequency
  2. CL – Communication Loss

TL is the time between an action instigated at a node and the return of its output: in other words, the time it takes to see the result of an action. Frequency measures how often that feedback loop runs. An example of a TL node might be a developer writing code or a BA writing business requirements.

If Node(A) = Development Coding Activity, examples of items that incur Time Latency costs include:

    1. Code checked into build
    2. Testing against build (Unit, Automated)
    3. Integration testing
    4. Manual Test effort against Code developed
    5. Performance testing against latest build
    6. Live incidents

A project will have multiple nodes with associated Time Latency attributes.

Assertion: if we increase the frequency with which we see results by decreasing Time Latency, then quality goes up. Where possible we should always seek to reduce Time Latency and, with it, the associated project risk. At a simple level, Time Latency can be thought of as the bottlenecks in the system’s feedback loops.
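To make this concrete, here’s a minimal sketch of how the TL.Fq variable could be scored. Everything in it (the class name, the figures, the latency-over-frequency scoring) is my own illustrative invention rather than a calibrated model:

    from dataclasses import dataclass

    @dataclass
    class TLAttribute:
        """One feedback loop attached to a node, e.g. a unit test run."""
        name: str
        latency_hours: float        # time from action to seeing its result
        frequency_per_week: float   # how often the loop actually runs

        def risk(self) -> float:
            # Illustrative scoring only: risk grows with latency and
            # shrinks as the feedback loop runs more often.
            return self.latency_hours / max(self.frequency_per_week, 0.1)

    # Node(A) = Development Coding Activity, with feedback loops
    # like those listed above attached to it (figures assumed).
    node_a = [
        TLAttribute("code checked into build", 1, 20),
        TLAttribute("unit/automated tests against build", 2, 20),
        TLAttribute("manual test effort against developed code", 40, 1),
        TLAttribute("live incidents", 24 * 14, 0.1),
    ]

    print(sum(attr.risk() for attr in node_a))

Under this scoring, the attributes with slow, infrequent feedback (manual testing, live incidents) dominate the node’s risk, which is exactly the assertion: shorten the loop or run it more often and the score falls.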

CL is the loss of communication quality incurred by passing key information, such as requirements, through successive channels. In waterfall, these channels translate Business objectives -> Requirements -> Specification -> Testing. An analogy is Chinese whispers. A key premise of the theory is my observation that user requirements are almost impossible to define clearly up front.
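To put a rough number on the Chinese whispers effect, here’s a minimal sketch; the 10% loss per hand-off is an assumed figure for illustration only:

    # Each hand-off in the waterfall chain retains only a fraction
    # of the original intent; the losses compound multiplicatively.
    channels = ["Business objectives", "Requirements",
                "Specification", "Testing"]
    retention_per_handoff = 0.90   # assumed: 10% of intent lost per hand-off

    fidelity = 1.0
    for _ in range(len(channels) - 1):
        fidelity *= retention_per_handoff

    print(f"Intent surviving {len(channels) - 1} hand-offs: {fidelity:.0%}")
    # -> roughly 73% of the original intent survives three hand-offs

Even a modest loss per channel compounds quickly, which is why the theory treats the number of uncorrected hand-offs, not just the quality of any single one, as the risk driver.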

Example attributes of Communication Loss include:

  • Users think they know what they need, but are unable to articulate it precisely
  • ‘I said this but actually meant this’ scenarios
  • Users change what they want as they discover more about what they have stated (or haven’t)

We often hear that one of the main causes of project failure is ‘changing requirements’. If we accept that what is captured is inherently ambiguous, then we should either invest more in requirements capture or adopt an approach that attempts to align user expectations with the output of the process (or vice versa).

QA is ensuring that a product:

  1. Is delivered to sufficient quality (Verification)
  2. Meets customer expectations (Validation)

In my experience, a lot of time is spent on (1), yet the main reason projects fail is (2).

The theory is called QA Bridging because each Time Latency gap we create in order to check the quality of a result is, in effect, a QA bridge. By actively looking to minimise the size of all bridges we increase quality and reduce risk.
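As a sketch of how a bridge’s size might be scored by combining the two variables (again, the weighting is illustrative, not derived):

    def bridge_size(latency_hours: float, frequency_per_week: float,
                    communication_loss: float) -> float:
        """Illustrative score for a single QA bridge.

        latency / frequency approximates the TL.Fq cost, and the
        communication_loss factor (0..1) inflates it: slow feedback
        on a poorly communicated requirement is the worst combination.
        """
        return (latency_hours / max(frequency_per_week, 0.1)) * (1 + communication_loss)

    # A project's overall risk is then the sum over all its bridges.
    bridges = [
        bridge_size(2, 20, 0.05),     # CI build feedback, well-understood work
        bridge_size(160, 0.25, 0.4),  # end-of-phase UAT, ambiguous requirements
    ]
    print(sum(bridges))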

Application to Waterfall & Scrum: a simple example

If the theory is applied to Waterfall approaches, Time Latency costs are associated with:

  • Node (Requirements)
    • Attributes
      • Business Acceptance Test
      • User Acceptance Test
  • Node (Development)
    • Attributes
      • Test
  • Node (Test)
    • Attributes
      • Time to Find & Resolution (Defects)

All of the above TL attribute costs are naturally high in Waterfall (as demonstrated by the V-model).

Pair that with the Communication Loss variable and we can see that the Waterfall approach naturally has more risk associated with it.

If executed using Scrum, all of the above Time Latency costs are lower. Scrum also accepts that requirements will change and provides a number of opportunities throughout the life-cycle to check and remediate requirements. So there are multiple chances to correct the Communication Loss variable associated with requirements.
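Using the illustrative bridge_size scoring from the previous sketch, the comparison might be run like this; every latency, frequency and loss figure is an assumption chosen purely to show the shape of the result:

    def bridge_size(latency_hours, frequency_per_week, communication_loss):
        # Same illustrative scoring as the earlier sketch.
        return (latency_hours / max(frequency_per_week, 0.1)) * (1 + communication_loss)

    # Hypothetical figures: in Waterfall the acceptance bridges span
    # months and carry heavy communication loss; in Scrum the same
    # checks recur every sprint, correcting loss as it is found.
    waterfall = [
        bridge_size(6 * 160, 1 / 26, 0.40),  # UAT once, ~6 months after capture
        bridge_size(4 * 160, 1 / 26, 0.30),  # big-bang integration test phase
    ]
    scrum = [
        bridge_size(80, 0.5, 0.15),  # sprint review every two weeks
        bridge_size(2, 20, 0.10),    # CI + automated tests on each check-in
    ]
    print(f"Waterfall risk: {sum(waterfall):.0f}  Scrum risk: {sum(scrum):.0f}")

Whatever figures you plug in, shortening the bridges and adding correction points drives the Scrum total down by orders of magnitude, which is all this toy model is meant to show.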

The theory therefore states that Scrum is inherently less risky than Waterfall.

This theory isn’t about bashing Waterfall or promoting Agile approaches; it was created so I could objectively:

  • Measure risk of project failure associated with different approaches
  • Apply to existing projects and identify where significant risk resides
  • Apply to projects and identify where most effective action can be taken in order to increase delivery capability

The theory, once explained, is simple and obvious. What’s more, I’ve applied it to a number of projects and the results have been nothing short of amazing. I’ll provide details of a large-scale case study here <<link to future article here>>
