How do we create a project approach that ensures quality meets customer expectations? If we can understand the underlying mechanisms, we are more likely to succeed.
Note: See QA Bridging Theory for a bird's-eye view of the below.
This entry attempts to unpick what those fundamental variables are, and then provides the basis for a quantifiable formula for measuring the risk of achieving success.
I’ve helped deliver a number of troubled projects. Fundamentally, I find the pattern of risk to be the same: a lack of frequent end-user engagement throughout the lifecycle, and development activities with long time latencies.
So first of all let’s take a brief step back in time:
Henry Ford Doesn’t Work
Henry Ford popularised the factory line: people specialising in focused activities and doing their specialised bit in isolation. This was fantastic; the factory could produce record quantities of a product. The approach is still followed today and remains a solid way of mass-producing engineered products the world over.
When I look at the basis of Waterfall approaches I see a setup similar to Henry Ford’s. Waterfall and similar approaches encourage over-specialisation of skills (silos), with the product moving incrementally from one skill silo to the next. Each part of the line only needs to know the minimal amount about what happened before the product arrived. In effect, a production line has been set up with the goal of producing something.
This only really works if you have a clearly defined target state, and that is the essence of what I see as the problem with Software Engineering approaches. They appear to be modelled on the mass-production techniques created by Henry Ford – siloed skill sets and a sequential production line.
A Software Engineering project produces an abstract entity. It’s fuzzy, loose and prone to misinterpretation. This means the target state cannot be clearly defined upfront. But this is the aim of the requirements gathering phase, isn’t it...?
Rule 1: It is impossible, or really hard, for users to state upfront what they actually want.
It’s my experience that users are unable to state exactly what they want upfront, for a number of reasons. Sometimes they think they know what they want, but don’t really know until they see something. Requirements capture, no matter how hard we try, is ambiguous.
If users are stating requirements in a way that is imprecise and can be misinterpreted, then imagine what happens further along the virtual factory line as the product is being produced. Each successive segment of the line will have its own interpretation of the message being passed through the process. When this happens there is a high risk that what a user wants is not what a user gets.
Example 1: We have asked someone for something very specific and have been extremely clear... what results is something very different from what was asked.
Example 2: House building programmes – A couple will specifically state what they want; an architect will draw diagrams and build a 3D computer model. A clearly defined target state exists. Yet as the house is being built, the buyers realise they want to change X, Y and Z, as it will work better for them.
Example 2 is interesting. As an engineering discipline, house building is very mature and well established, with very strong requirements gathering and playback abilities (3D modelling). Yet it is still subject to user changes during construction. The customers naturally change what they want as the house is built, because they understand more about what they want as they see it take shape.
Rule 1 (Again): It is impossible, or really hard, for users to state upfront what they actually want.
Assertion: If we accept users cannot state clearly what they want, how can a process possibly hope to meet expectations?
Assertion: If we accept users cannot state clearly what they want, users will change requirements as they progress.
Assertion: Allowing users to view and feed back frequently will allow them to communicate more precisely about what they want.
Assertion: The longer we leave playback of requirements unchecked, the more potential there is for wasted effort (man days/time) spent on codifying and testing items that are based on imprecise requirements.
Assertion: Shared project understanding of requirements is not just the domain of a few, but of all the project participants.
The abstract nature of what is being produced means users are unable to clearly define and know what they want. Users tend not to be familiar with IT systems, it is not their core job, nor should it be.
Our responsibilities following on from Rule 1:
- Drop technical jargon, make the language of IT as accessible as possible
- Accept that what the user states is a starting point, not the definitive end state, and will need refinement
- Accept that the onus is on us to come up with ways of enabling that refinement to occur within our approach
- Engage the users throughout the process as frequently as possible
Very often I’ve seen projects cite ‘changing requirements’ as a reason for project failure or overrun. It’s my conclusion that if a user is not getting what they actually want, most methods of requirements capture are fundamentally flawed, or subject to significant misinterpretation.
Users are on our side; they have paid and want the system to be successful. If they become unhappy, then something has gone wrong in the process we are in charge of.
Rule 2: The longer it takes to produce a result related to an action (feedback), the higher the risk factor for failure.
Time burns money on IT projects. Time also increases the risk of getting something wrong if the requirements are imprecise. Waterfall approaches assume that the initial requirements captured are correct, allowing one chance to get delivery right. In my view this is a high-risk assumption.
An approach that feeds back regularly to the user will naturally reduce the risk of getting it wrong on delivery. More feedback opportunities in a timeline equate to better quality in meeting end users’ expectations.
To use an example: I recently asked a colleague to produce an artifact, and we emailed on and off for around two weeks (we were remote). We then happened to be in the office on the same day and sat together. We had a number of conversations and fairly quickly got to where we needed to be. Two weeks’ work was achieved in under two hours. The conversation was important, but the real catalyst was the speed at which we could interact.
Time latency, not the conversation, was the real underlying reason the quality and speed of development improved. As we decrease time latency dependencies, the number of interactions naturally increases – and this results in improved quality.
Graph: The effect on throughput (work done) of time latency. This illustrates that marginal increases in time latency have a crushing effect on the amount of work that can be done.
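The shape of that curve can be sketched with a simple, illustrative model (the figures below are my own assumptions, not measurements): treat each unit of work as requiring one feedback exchange, so throughput is simply the number of exchanges that fit into a working day.

```python
# Illustrative model: throughput = feedback exchanges that fit in a working day.
# All parameter values are assumptions chosen for illustration only.
def exchanges_per_day(latency_hours, work_hours=8.0, task_hours=0.25):
    """Number of feedback cycles possible per day, where each cycle costs
    the time to do the task plus the latency of getting a response."""
    return work_hours / (task_hours + latency_hours)

for latency in (0.0, 0.5, 1.0, 4.0, 24.0):
    print(f"latency {latency:>5.1f}h -> {exchanges_per_day(latency):6.2f} exchanges/day")
```

Even this toy model shows the crushing effect: moving from an instant face-to-face answer to a four-hour email turnaround cuts the possible feedback cycles per day from 32 to under 2.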
The ability to communicate instantly increases the quality of what is being produced. Conversations are important, but the reduced time latency between conversations is more important. Emails naturally have a time latency attached. Instant messaging improves on this. Conversations improve it further. Face to face, in person, is better still.
Assertion: If we reduce time latency, then the quality of associated dependencies will increase.
This means quality will increase if you are able to do more of ‘the something’ in a defined time period. Have you ever attempted to have a conversation on a telephone with an X-second delay? It gets very difficult very quickly. The conversation is cut short and the quality isn’t good.
Quality isn’t simply confined to requirements. It is any project artifact that can have an associated time latency.
Example 1: Build processes are key, but until recently some projects attempted them infrequently (every few weeks to months). The project would then incur an enormous number of issues; it was always problematic. As merges and builds were run more frequently, quality improved and wasted time decreased. Reducing time latency reduced risk and increased quality.
Example 2: Some clients attempt to utilise offshore capability. This is an immediate source of time latency risk. Steps that can be taken to reduce it include:
- Moving working hours so there is significant overlap between timezones
- Instant Messenger Vs Email
- Video Conferencing – ease of access for team in the co-located offices. This is ineffective if it’s on a different floor
- Bringing offshore onshore for a period (To build relationships and understand Ways of Working)
- Move an entire Scrum team offshore (So you have a separate Scrum team offshore)
- Considering moving offshore capability to onshore
Other examples of project artifacts could be test cycles, feedback to the end user, CI tests, performance tests, DevOps processes etc.
Why does this all matter?
I wanted to be able to enter into a project and create an objective tool for measuring risk of project success. The above description provides the foundation for doing that. If we can objectively measure then we can:
- Apply it to existing in-flight projects and use it as a tool for identifying areas of current project risk
- Use as a tool to identify why different project approaches (Waterfall, 6 Sigma, Agile etc) are likely to result in better outcomes
- Allow better critical thinking and structure to be applied to implementation of project approaches and principles
- Allow new project approaches to be modelled and tested that may result in a less risky outcome
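As a sketch of what such an objective measure might look like, here is a minimal, hypothetical risk score. The artifact names, target latencies and sample figures are all my own illustrative assumptions, not part of any established method: each feedback loop is scored by how many times slower it is than a chosen target, and the scores are averaged.

```python
# Hypothetical risk score based on the time latency of key feedback loops.
# Target latencies (in days) are illustrative assumptions, not doctrine.
TARGET_LATENCY_DAYS = {
    "user_playback": 10.0,  # demonstrate working software to end users
    "build": 1.0,           # merge and build the product
    "test_cycle": 1.0,      # run the regression tests
}

def risk_score(observed_latency_days):
    """0.0 means at or better than target everywhere; larger = riskier.
    Each loop scores how many times slower than target it is."""
    ratios = [
        max(observed_latency_days[loop] / target - 1.0, 0.0)
        for loop, target in TARGET_LATENCY_DAYS.items()
    ]
    return sum(ratios) / len(ratios)

# Illustrative comparison of two styles of approach:
waterfall = {"user_playback": 180, "build": 30, "test_cycle": 30}
agile = {"user_playback": 10, "build": 1, "test_cycle": 1}
print(risk_score(waterfall))  # prints 25.0 – far above target everywhere
print(risk_score(agile))      # prints 0.0 – at target everywhere
```

The point is not the particular numbers but the shape of the tool: once the feedback loops and their latencies are made explicit, different approaches can be compared objectively rather than by doctrine.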
My main initial aim is to break the habit of prescriptive thinking and the senseless implementation of written doctrine that results in project failure.
The theory also implies that there are other potential approaches that may be more effective than the current.
See QA Bridging Theory for an overview of the above.