Talking test automation

Big Thoughts

2 November 2012 • Written by Katrina Clokie

Organiser Katrina Clokie reviews the first WeTest workshop, held on 25 October, which focused on test automation.

The first Wellington Tester (WeTest) Workshop was attended by 17 testers from various organisations and the topic of test automation generated some lively discussion...

What is automation trying to achieve?

In my experience report, I spoke of working in an environment where test automation was the preferred approach in all cases; there was a focus on how the tests were executed rather than what they did. This led to a discussion about what we should be using automation for.

One purpose of automated testing – or indeed any testing – is to find bugs. We should constantly question the type of bugs found by automated testing and those that are missed. An automated test will only find the bugs that it is told to find and is therefore not comparable to the deductive reasoning of a human tester.

Automated tests should be doing the type of testing that automation is suited to, such as performance testing, form validation with a wide variety of user input combinations, or environment setup. Automation may also be a solution in situations where manual testing would be onerous or impractical.
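As a minimal sketch of the kind of data-driven check that suits automation, the example below uses pytest to run a single form-validation rule against many input combinations. The validate_email function and its expected results are assumed purely for illustration and are not taken from any product discussed at the workshop.

```python
import re
import pytest

# Hypothetical validation rule under test: accepts simple name@domain.tld addresses.
def validate_email(value: str) -> bool:
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[A-Za-z]{2,}", value) is not None

# One parametrised test exercises many input combinations cheaply;
# repeating these checks by hand on every release would be onerous.
@pytest.mark.parametrize("value, expected", [
    ("user@example.com", True),
    ("first.last@example.co.nz", True),
    ("", False),
    ("no-at-sign.example.com", False),
    ("spaces in@example.com", False),
    ("user@localhost", False),  # no top-level domain
])
def test_email_validation(value, expected):
    assert validate_email(value) is expected
```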

At the workshop, we spoke about how organisations usually start automating testing for practical reasons: to reduce risk, testing time and cost. However, there is a danger that success leads to increased and unnecessary adoption of automation.

Instead of automating to fix a problem or make a task easier, automation can become a mandated approach. At some point the seesaw starts to tip and the cost of maintaining an extensive automated suite exceeds its benefits. It's important to regularly audit automation suites to ensure that each test has a clear purpose, and to communicate to business stakeholders that not all automation is beneficial.

Ongoing maintenance and rot

I spoke about the high cost of maintaining automated suites in environments that change frequently. Without such maintenance, automated test suites can quite rapidly become useless. During a project that includes software development, there is usually a budgeted test effort that is expected to stay within the project timeline.

When automated tests start to fail after a release – perhaps due to changes in dependent products or environment – they lose their value. A test suite that is known to be unreliable is likely to obscure true failures.

This introduced the idea of test rot, referring to the state of a legacy automation suite at the start of a new development cycle. A period of neglect often results in failures that must be resolved by an initial test effort before verification of new functionality can begin. This concentrated effort is not necessarily less expensive than ongoing test maintenance, but it is easier to accept because such project expenditure is normal for an organisation. However, it is difficult to sell the need for continued upkeep between projects; one suggestion was to include the cost of ongoing maintenance in test estimates for a project.

The role of an automated tester

I shared a thought from James Bach in my talk: “Test automation is software development”. This led to a discussion about who takes ownership of writing automated tests and the skills required. In one organisation, the development team were very involved in producing the automated test suites, to the extent that a tester and a developer pair-programmed them.

This meant that each participant brought their core skill to the process to produce the best possible set of tests: developers know how to code and testers know how to test. We talked about the poor quality that can result in automated suites when non-technical testers are expected to write code. A script that does not test what it claims to, because of a lack of skill in writing it, may be harmful if it is the sole method of verification.
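The hazard of a script that does not verify what it claims can be made concrete with a deliberately contrived sketch; the apply_discount function and both tests below are hypothetical. The first check runs the code but asserts nothing, so it passes even though the calculation is wrong, while the second actually tests what its name promises.

```python
# Hypothetical function under test, written with a deliberate bug:
# it adds the discount instead of subtracting it.
def apply_discount(price: float, percent: float) -> float:
    return price + price * percent / 100

# A "test" that does not test what it says: it exercises the code but never
# checks the result, so it passes despite the bug. As the sole method of
# verification, it gives false confidence.
def test_discount_is_applied_in_name_only():
    apply_discount(100.0, 10.0)

# A check that verifies the claim in its name; running it under pytest
# fails and exposes the bug.
def test_discount_is_applied():
    assert apply_discount(100.0, 10.0) == 90.0
```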

We also discussed the rise of automated tester roles versus traditional testing roles, with the market now advertising positions along this distinction. Questions were raised about the wisdom of this approach, as testers usually work individually. Perhaps resourcing a team with only automated testers will result in tidy code with very poor coverage?

Since the workshop finished with several discussion threads left unexplored, we will focus on test automation at a future WeTest event. Feedback from attendees was very positive so we hope that these events will contribute to the continued growth of a vibrant testing community in Wellington.

The following people contributed to this article through their participation in the workshop: Aaron Hodder, Andrew Black, Anna Marshall, Damian Glenny, Daniel McClelland, Greg Finch, John Dutson-Whitter, John McElhiney, Katrina Edgar, Mike Talks, Nigel Charman, Oliver Erlewein, Owen Calder, Sean Cresswell, Thomas Recker, Till Neunast and Trevor Cuttriss.
