As testers, one of our key roles is to identify and report issues that might impact the value of a product, says Craig McKirdy.
We often assume it is relatively straightforward to determine how our key stakeholders or users could be impacted by an issue, and we assign a severity and/or priority rating without giving it much thought. However, this is not always as simple as we might think: our business stakeholders will often have a very different opinion on the impact of an issue than our technical teams.
The process of raising, resolving and managing issues can be time-consuming, and that time is wasted if the issue doesn’t matter to our stakeholders. Eliminating waste in this process can lead to a better result for our stakeholders and less frustration for our testers and technical teams.
The project I am currently working on has already taken steps in this direction by holding a 15-minute daily review of issues raised the preceding day. The focus of this review is solely on confirming the assigned severity and priority. This has been very effective at ensuring that the right issues receive attention, that we stay constantly engaged with the business representatives and that they effectively receive a daily status update on testing.
In addition, as we moved closer to the test completion deadline, the issues went through another review to determine whether the assigned severities and priorities were still appropriate. Sometimes the review confirmed they were correct; sometimes they were increased to provide additional focus. Sometimes a decision was made to accept the behaviour and update the specifications – and, at other times, it was decided to close the issue without resolution.
The question remained, though: should all the issues we find go through this process? If we could identify early on that an issue was likely to be accepted with a specification update, or that it didn’t require resolution, then we could free up more time to focus on the important issues.
We decided to trial an early review of issues for one of our business functions under test. We were already using Visual Models and Session-Based Test Management as our test approach, so all our observations were being captured in the Visual Models. At the conclusion of each session, any obvious issues (i.e. those we considered to be High or Medium severity) were raised as normal.
For the other issues, a workshop was held every few days with the business representative and business analyst to review the findings from testing. Decisions were made on which issues needed fixing (and their associated severity/priority), which could be accepted as an update to the specification and which could be ignored.
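The routing described above – obvious High and Medium severity issues raised immediately, everything else held for the periodic workshop – can be sketched as a simple filter. This is only an illustrative sketch, not tooling from the project: the `Issue` and `Severity` names are hypothetical, and the article does not prescribe any automation.

```python
from dataclasses import dataclass
from enum import Enum


class Severity(Enum):
    HIGH = "High"
    MEDIUM = "Medium"
    LOW = "Low"


@dataclass
class Issue:
    title: str
    severity: Severity


def route_session_issues(issues):
    """Split a session's observations: obvious High/Medium issues are
    raised as normal; the rest are held for the next workshop."""
    raise_now = [i for i in issues if i.severity in (Severity.HIGH, Severity.MEDIUM)]
    hold_for_workshop = [i for i in issues if i.severity is Severity.LOW]
    return raise_now, hold_for_workshop
```

In practice, of course, the judgement of what counts as an "obvious" issue rests with the tester at the end of each session; the sketch only captures the resulting split.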
The trial itself was a success. Fewer issues being raised and managed meant less time spent on this activity. Issues requiring updates to a specification were grouped together and raised as a single item, again reducing the time required for issue management and making the update easier for the business analyst performing it.
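The workshop outcomes – fix, accept as a specification update, or ignore – and the bundling of spec updates into a single raised item could look something like the following. Again, this is a hypothetical sketch of the workflow, not code from the project; the outcome labels are my own.

```python
def triage_workshop(decisions):
    """decisions: list of (issue_title, outcome) pairs, where outcome is
    one of "fix", "spec-update" or "ignore".

    Returns the issues to raise for fixing, plus a single combined item
    covering all specification updates (or None if there were none).
    Ignored issues are simply dropped."""
    fixes = [title for title, outcome in decisions if outcome == "fix"]
    spec_updates = [title for title, outcome in decisions if outcome == "spec-update"]
    combined = None
    if spec_updates:
        combined = "Specification update covering: " + "; ".join(spec_updates)
    return fixes, combined
```

The point of the grouping is simply that one raised item, rather than many, reaches the business analyst who performs the specification update.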
An even greater success, though, was the higher engagement between testing and the business that occurred during this process. Because the workshops were focused on specific sessions, and the sessions themselves were focused on particular areas, they provided for greater discussion and understanding: for the business representatives, around how the product worked; for the testers, around how the end users would use the product. The Visual Models were updated with the new information and additional sessions were planned as a result. Ultimately, this gave all parties a higher level of confidence in the work undertaken.