I’ve seen automation fail and succeed across different engagements. I’ve noticed some recurring patterns, and from them have distilled the following prerequisite Automation Principles. Having these in place will at least minimise your risk of failure.
Failure patterns tend to be:
- Monolithic ‘one size fits all’ solutions – attempting to replicate a pattern that was successful elsewhere in a completely different environment
- Managers in charge of automation who aren’t technical enough, cannot understand the technology solutions in place, and are being led by engineers who tend to work in silos
- A pattern of working that doesn’t engage Automation as far left as possible – if you are throwing work ‘over the fence’ to automation, that is a strategy for failure. This could be as simple as attempting to automate the manual test scripts, or developing a solution and then leaving the automation team to figure out how to automate it
- A solution that hasn’t factored in the environment (People, process, politics, technology, skills) – a good manager will assess the agile environment and map an appropriate way forward.
With the advent of iterative cycles, incremental development and frequent delivery, automation of testing is now trickier to get right. It requires a holistic approach to increase the chances of success – which means engagement and coordination across architecture, development, DevOps and the testing disciplines.
The following principles are deliberately technology agnostic. I’m not stating they will guarantee success, but they make a strong starting point and help set a project up for success.
All Automation will aim to follow the following principles:
- Accessible & Available – Easy to understand by all stakeholders in an instantly available format.
- Not introduce time and delivery risk – able to run many times, quick to run, reliable and not fragile (low associated maintenance). GUI/frontend testing is not the solution.
- Reduce Regression Testing effort – allow the Scrum teams to run at a constant forward rate, rather than building up manual regression testing debt
‘GUI/frontend testing is not the solution’ may appear counter-intuitive, but in my experience these types of tests introduce significant time risk to the delivery of a project and to team Sprints. You’ll get an initial impressive benefit, but it’ll quickly run out of steam and become an ongoing cumulative drag. Testing through a set of APIs has proven much more successful – reliable, resilient and faster.
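To make the contrast concrete, here is a minimal sketch of an API-level test. `OrderService` and its methods are hypothetical stand-ins for a real system’s API layer; the point is that the test exercises business logic directly, with no browser, selectors or screen-scraping involved, so it runs fast and doesn’t break when the screen layout changes.

```python
class OrderService:
    """A stand-in for the system under test's API layer."""
    def __init__(self):
        self._orders = {}
        self._next_id = 1

    def place_order(self, item: str, quantity: int) -> int:
        if quantity <= 0:
            raise ValueError("quantity must be positive")
        order_id = self._next_id
        self._next_id += 1
        self._orders[order_id] = {"item": item, "quantity": quantity}
        return order_id

    def get_order(self, order_id: int) -> dict:
        return self._orders[order_id]


def test_place_order_via_api():
    # No GUI involved: the business rule is exercised directly.
    service = OrderService()
    order_id = service.place_order("widget", 3)
    assert service.get_order(order_id) == {"item": "widget", "quantity": 3}
```

The same check driven through a GUI would need a running frontend, element locators and waits – all of which add the fragility and run-time cost described above.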
How do we make automation a success?
- Collaboration: Dev and QA. Stories are designed with automation built in. When a story enters a Sprint, the developers should be thinking: ‘How can I make this story automatable? I need to work with the Automation QAs while I develop the story, to make sure the relevant hooks are available so they can test the associated Acceptance Criteria.’
- Definition of Done: Stories should not be considered ‘done’ unless the team has the ability to run automated regression tests against them. Manual test effort creates future drag and impedes the team’s ability to deliver and to fix bugs frequently
- Fast running, low maintenance tests: Long-running tests are the enemy. GUI-based testing adds significant risk. API/integration hooks are the preferred method.
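What a developer-provided ‘hook’ might look like in practice: in this hypothetical sketch, the feature code accepts an injectable clock so Automation QAs can test a time-dependent acceptance criterion deterministically, instead of driving the GUI and waiting for real time to pass. All names here are illustrative, not a prescribed design.

```python
from datetime import datetime, timedelta


class TrialAccount:
    """Feature code for the story 'trial accounts expire after 14 days'."""
    TRIAL_DAYS = 14

    def __init__(self, created_at: datetime, now=datetime.utcnow):
        self._created_at = created_at
        self._now = now  # the hook: automation can supply a fake clock

    def is_expired(self) -> bool:
        return self._now() - self._created_at > timedelta(days=self.TRIAL_DAYS)


# Automation can now pin time and verify the acceptance criterion directly:
start = datetime(2024, 1, 1)
account = TrialAccount(start, now=lambda: datetime(2024, 1, 20))
assert account.is_expired()  # 19 days in: the trial has lapsed
```

The hook costs the developer one parameter, but it is the difference between a millisecond test and a test that cannot be automated at all – exactly the conversation the Collaboration principle asks Dev and QA to have while the story is in flight.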
Automation failure: It is my experience that there is considerably increased risk of failure or ‘drag’ when automation and developers work in silos. A pattern where automation lags behind development should be avoided to maximise automation’s chance of success.
Principle Questions to be answered
The requirements are designed to answer the following questions quickly:
- Can I produce a human-readable artifact of what the automation has executed? What are these tests doing?
- What is the coverage of the automation testing we have in place for system X?
- External stakeholder (e.g. from the business/UAT side): I want to know what the automated tests are doing
- External stakeholder: I want to see the results of the latest automation tests and be able to understand them
- Automation tests should be designed to run standalone and many times – ideally every time we produce a new build
- Has the regression testing effort been considerably reduced by the tests? A key requirement is that we are not building up forward regression testing debt
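The ‘human-readable artifact’ question is worth making concrete. In a real project that artifact would come from Cucumber feature files plus a living-documentation tool such as Pickles; the sketch below is only an illustration of the idea, walking a set of test functions and emitting their business-facing descriptions and outcomes as plain text a stakeholder can read. All test names and descriptions are invented for the example.

```python
def test_customer_can_reset_password():
    """A customer who forgets their password can request a reset link."""
    assert True  # placeholder for the real automation logic


def test_order_total_includes_tax():
    """An order total shown at checkout includes sales tax."""
    assert True  # placeholder for the real automation logic


def readable_report(tests) -> str:
    """Run each test and pair its outcome with its business description."""
    lines = []
    for fn in tests:
        try:
            fn()
            outcome = "PASS"
        except AssertionError:
            outcome = "FAIL"
        lines.append(f"{outcome}: {fn.__doc__}")
    return "\n".join(lines)


print(readable_report([test_customer_can_reset_password,
                       test_order_total_includes_tax]))
```

With this kind of report generated on every build, the ‘what do your tests actually do?’ question is answered in seconds rather than in hours of code archaeology.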
Overarching Principle Requirements for Automation
- All tests must have a description that is business-readable, or sufficiently abstracted to a level that can be understood by external stakeholders. BDD/Cucumber is the preferred syntax for wrapping automation logic
- Results and test reporting should be designed to be auto generated into different reporting formats e.g. Pickles allows you to do this
- Tests should be designed to be standalone and fired from build/scheduling tools, e.g. Jenkins. If they can only be fired by person X or team X they lose significant value. Standalone tests enforce a minimal level of maintenance discipline
- Tests should carry tags that describe the functional areas to which they relate
- Automation technologies should attempt to align with other technologies within the team – where technologies are similar, preference should be given to those already in use
- Where possible, automation should be heavily weighted to execute through API layers in preference to the GUI – these are less fragile, require less maintenance, execute faster and tend to exercise business level logic
- When a test fails, it should contain enough information for speedy diagnosis of the failure – that is, to identify what part of the system did not meet the test’s expectation
- A test should be easy to re-run: it should include preparation and clean-up for its own state, and when a test case fails it can easily be run on its own. (Whether tests are clustered together in packs with common setup and clean-up, or are individually idempotent, is a decision for the test’s designer.)
- Speed is a priority – running tests by themselves has to be fast; to win a fast feedback loop (or a low ‘mean time to system recovery’, a DevOps metric), speedy diagnosis and easy re-running of single test cases will help us rectify bugs discovered by automation tests
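Several of these requirements can be shown in one small sketch: functional-area tags, self-contained setup and clean-up so a single test can be re-run standalone (e.g. from Jenkins), and a failure message that points straight at the unmet expectation. In practice tags would come from your framework’s own mechanism (Cucumber `@tags`, pytest markers); the decorator below is a hand-rolled, illustrative stand-in, as is the `SettingsStore` scenario.

```python
import os
import shutil
import tempfile
import unittest


def tags(*names):
    """Attach functional-area tags so packs can be sliced and reported by area."""
    def wrap(fn):
        fn.tags = set(names)
        return fn
    return wrap


class SettingsStoreTest(unittest.TestCase):
    def setUp(self):
        # Each test prepares its own state, so it can run on its own,
        # in any order, fired straight from a build/scheduling tool.
        self.workdir = tempfile.mkdtemp()
        self.path = os.path.join(self.workdir, "settings.txt")

    def tearDown(self):
        # ...and cleans up after itself, leaving no residue for the next run.
        shutil.rmtree(self.workdir)

    @tags("settings", "persistence")
    def test_round_trip(self):
        with open(self.path, "w") as f:
            f.write("theme=dark")
        with open(self.path) as f:
            saved = f.read()
        # A descriptive message gives speedy diagnosis when the test fails.
        self.assertEqual(saved, "theme=dark",
                         "settings file did not persist the saved theme")
```

Because setup and clean-up live with the test, `test_round_trip` can be re-run in isolation as many times as needed – the ‘easy to re-run’ and ‘standalone’ requirements falling out of the design rather than being bolted on later.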
Problems and situations these principles will seek to avoid:
- ‘It takes 12 hours to run the test pack; something broke, I need to run it again’ :: Fragile and long-running – introduces delivery time risk
- ‘The test pack ran – 24% of the tests failed…. What do they do? ….. I don’t know, I’m going to have to look at them and get back to you in X hours/days’ :: If we cannot understand what they do easily and quickly, they are pointless
- ‘There was a problem in live. What is the coverage of your automation pack? I want to see it, I want to understand what you have automated ….. I’m going to have to look at them and get back to you in X hours/days’ :: Unacceptable – reporting must be visible, transparent and easily accessible
If the design and build of our automation packs are done in the right way from the outset, there are tools that will do most of the above heavy lifting. Pickles is a fantastic tool I have previously leveraged, and it helps give a full 360° view of work in progress.