I’ve seen automation fail and succeed across many different engagements. I’ve noticed some recurring patterns and have distilled them into the following prerequisite Automation Principles. Having these in place will at least minimise your risk of failure.
Failure patterns tend to be:
- Monolithic ‘one size fits all’ solutions – attempting to replicate a pattern that was successful elsewhere in a completely different environment
- Managers in charge of automation who aren’t technical enough, cannot understand the technology solutions in place and are being led by engineers who tend to work in silos
- A way of working that doesn’t engage Automation as far left as possible – if you are throwing work ‘over the fence’ to automation, that is a strategy for failure. This could be as simple as attempting to automate the manual test scripts, or developing a solution and then leaving the automation team to figure out how to automate it
- A solution that hasn’t factored in the environment (people, process, politics, technology, skills) – a good manager will assess the agile environment and map an appropriate way forward.
With the advent of iterative cycles, incremental development and frequent delivery …
I’m writing this article so individuals can see where they are on the BDD journey. Recognising where you are is important as it allows you to realise just how much further you can actually go.
I’ve been to another client site where I’ve been told they ‘are BDD’. I’ve seen it many times, and although the intention is good, there is a difference between ‘being BDD’ and ‘BDD syntax’.
Given – When – Then … seems deceptively simple, yet BDD was undoubtedly one of the most difficult approaches we implemented. For me the key was that it isn’t about the syntax. BDD is about:
- Getting different disciplines talking and communicating using a common accessible language. Pushing out the enemy – ambiguous statements and subjective interpretation. Building the right thing, preventing wasted effort on development and testing of the wrong thing.
- Building quality and testing into the incoming stream of project requirements
- Embedding rigour and testing into the most important part of the process – Incoming Requirements
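To make the syntax concrete, here is a minimal sketch of a Given – When – Then scenario expressed first in plain language and then as an executable check. The shopping-basket domain and all the names are purely illustrative, and no particular BDD framework is assumed:

```python
# A Given-When-Then scenario written in plain language first, then as an
# executable check. The shopping-basket domain is purely illustrative.
#
#   Scenario: Adding an item updates the basket total
#     Given an empty basket
#     When the customer adds a book priced at 10.00
#     Then the basket total is 10.00

class Basket:
    """Minimal illustrative domain object."""

    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    @property
    def total(self):
        return sum(price for _, price in self.items)


def test_adding_an_item_updates_the_total():
    # Given an empty basket
    basket = Basket()

    # When the customer adds a book priced at 10.00
    basket.add("book", 10.00)

    # Then the basket total is 10.00
    assert basket.total == 10.00


if __name__ == "__main__":
    test_adding_an_item_updates_the_total()
    print("scenario passed")
```

The value is not in the code but in the plain-language scenario above it: every discipline can read it, challenge it and agree it before a line of implementation is written.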
Being ‘Scrum’ is surprisingly subjective, as implementation is deliberately open to interpretation, which can lead to misrepresentation. This post attempts to take out some of that subjectivity by creating a Scrum Maturity standard that teams can objectively start to measure themselves against.
Agile is about delivery, first and foremost. Scrum is an approach adopted as a delivery mechanism. Sometimes it’s easy to lose sight of that and get too involved in the ceremonies, the sophistication and the maturity level teams are at.
I was asked to drop in and consult on a project that had just been kicked off.
How do we create a project approach that ensures quality meets customer expectations? If we can figure out the mechanisms behind this, we are far more likely to succeed.
Note: see QA Bridging Theory for a bird’s-eye view of the below.
This entry attempts to unpick what those fundamental variables are and to provide the basis for a quantifiable formula for measuring the risk of achieving success.
I was in a management meeting with a traditional client last year. They have a very ingrained top-down culture and were in what I would term the first stage of Agile transition: Mini Waterfall, Stage 1 Agile, Immature Agile or ‘Wagile’.
There were questions that seemed to illustrate the traditional way of thinking vs the current way of thinking.
Story points are a difficult concept to grasp for those who aren’t familiar with seeing Scrum done right. I recently had an ad-hoc conversation in which a number of PMs (… and an SM) stated that they expected the number of story points burnt to keep increasing as the team got ‘more advanced’.
I pointed out to a rather perplexed audience that the number of points actually stays consistent as the team becomes more advanced. Sometimes it’s easy to forget how the abstract can be counterintuitive.
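A hypothetical worked example of that counterintuitive point: because story points are relative estimates, re-baselined against the team’s own recent work, a team that genuinely speeds up still burns roughly the same number of points per sprint. The growth rates below are invented purely for illustration:

```python
# Hypothetical numbers illustrating why velocity plateaus rather than climbs.
# Story points are relative and re-calibrated against the team's own recent
# work, so as real throughput grows, the size of 'one point' grows with it.

def velocity(sprint):
    real_throughput = 10 * (1 + 0.1 * sprint)  # the team genuinely speeds up
    point_size = 1 + 0.1 * sprint              # but a 'point' is re-baselined too
    return real_throughput / point_size        # points burnt per sprint

for sprint in range(1, 9):
    print(f"sprint {sprint}: velocity = {velocity(sprint):.1f} points")
# prints 'velocity = 10.0' for every sprint: the team improves, the points don't
```

The team’s real output rises every sprint, yet velocity is flat – which is exactly what a stable, predictable velocity is supposed to look like.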
I’ve often thought: why do projects fail? Can we create a quantifiable formula that will predict the risk of failure and success in an approach? Can I create a quantifiable theory that will explain why some approaches work better than others?
QA and successful project delivery are inextricably linked. If you can embed good QA throughout a project, the chances of success increase.
What follows is a short explanation of a QA theory I’m proposing – and have actually put into practice at several client sites.
Team sizes: I’ve been around a number of projects, and yet again the size of some Scrum teams never ceases to amaze me. A recent problem project I landed on had 20 people in the stand-up – I observed the following:
- People talked, some clearly and many not so clearly – it was very hard to hear 70% of the team members
- Each team member gave an update – hardly any of the updates resulted in team engagement or interaction
- The board did not reflect reality
- Everyone was talking to the Scrum Master
- The team members looked disengaged and body language was poor
- A lot of things were ‘stuck in test’, mainly because the stories were not flowing through at a consistent rate
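One piece of simple arithmetic helps explain why a 20-person stand-up struggles: the number of pairwise communication paths in a team of n people is n(n−1)/2, so paths grow quadratically with head-count. The team sizes below are illustrative:

```python
def communication_paths(n):
    """Pairwise communication paths in a team of n people: n*(n-1)/2."""
    return n * (n - 1) // 2

for size in (5, 7, 9, 20):
    print(f"{size} people -> {communication_paths(size)} paths")
# 5 -> 10, 7 -> 21, 9 -> 36, 20 -> 190
```

A 20-person team has 190 possible conversations to keep aligned, nearly twenty times as many as a team of five – small wonder the updates above weren’t generating engagement.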