Test planning for the real world


19 February 2013 • Written by Damian Glenny

Following his presentation where he drew on his own test planning experience, Damian Glenny reviews Wellington's third WeTest Workshop


The third WeTest workshop on 7 February focused on test planning – how to do it well in a rigid environment and the value of finding out what needs to be done before planning for it.

The event was proof that if you stick 17 passionate testers in a small room with pizza and beer, good things can happen. A rousing discussion followed, ranging from test plans to test process and strategising.

Many attendees posited that one common use of test plans is as political capital; getting the test plan signed is often a contractual milestone, and it’s easy to let this fall by the wayside, particularly in an environment which doesn’t allow for change or the re-release of signed-off documents. A tester who isn’t averse to playing politician can often build smoother consensus during the test planning and agreement process than an apolitical one.

It was generally agreed that the tester responsible for the test plan, rather than the project manager, often takes on the responsibility for getting sign-off. Suggestions included having fewer signatories or grouping them under a single figurehead. While some people don’t sign because of a lack of interest or motivation, others withhold sign-off because they disagree with the content. One interesting point was that when someone disagrees with a test plan, that disagreement can provide useful information for more relevant planning.

However, even after a test plan is signed off, it may not actually be useful. When you write a test plan, it reflects your initial thoughts on how to implement a particular test effort. So what does it mean if you receive feedback that forces you to significantly alter the plan, particularly for political or “unnecessary” reasons? Will your test process change to accommodate the diversions? And if the document is signed off and ostensibly immutable, what good is the test plan at all? Is it actually ethical to write a useless test plan?

So if we can’t – or choose not to – show value through a formal test plan, what other routes can we take? There were murmurings of agreement over producing test artefacts such as mind maps that show test coverage – as discussed at the Lean Principles Workshop in December. But then the “M-word” (rhymes with “net tricks”) popped up to provoke a healthy debate. It was universally agreed that the bigger the document template, the less useful and more despised the end result would be (I showed a 42-page ‘template’ as an example of an object of shared derision).

Terminology is an issue when discussing such matters. For example, a “test strategy” can refer either to a project’s high-level test planning or to an organisation’s entire testing philosophy and process, which causes confusion unless the context is set first. After a few side conversations during the break, we resumed the debate and concluded that a test plan should be as specific as it needs to be and no more.

Specificity is one thing, but is anything so small that it doesn't require a test plan? We reckoned not, because you’re going to do the planning anyway, whether it’s written down or not. However, sometimes things are so small and move so fast that the information constantly changes, particularly in Agile environments. You can’t plan for that uncertainty, but you can react to it appropriately.

At this point, we took a detour to discuss approaches to test planning in Scrum. It was generally agreed that Agile solves most, if not all, of Waterfall’s problems, at least in terms of changing requirements. Done well, it solves those problems at the root.

On the heels of that conversation came the quote of the night: “Testing is the documentation layer of everything and makes up for a lack of clarity in everything else”. You see, other areas rarely provide enough information before the test plan needs to be created, and no real input means no real test planning. This is compounded by testing – in Waterfall, at least – generally being the last bulwark before things get pushed into the real world, which means the test documentation is normally the most up-to-date and comprehensive reference material available. We then completed the circle, suggesting that the test plan shows what success will look like.

(We noted here that risk registers are sadly underused and surrounded by an impermeable lack of process. Who's responsible? Who can access it? Why not everyone?)

With just enough time for one final debate, we jumped right back to Philosophy 101 – assumptions. It was pointed out that reasonable assumptions are necessary for everything and don’t need to be challenged; it’s the unreasonable assumptions that need to be clarified. There is clearly a trade-off to be made between succinctness and the number of assumptions left unstated.

Thanks to our sponsor Assurity, co-organisers Katrina Edgar, Aaron Hodder and Oliver Erlewein and all the participants. I look forward to seeing everyone at the fourth WeTest Workshop on 18 April with Michael Bolton.

Read Damian’s full experience report.
