Visualising test activities can lead to better collaboration and test coverage, richer reporting and less risk of product failure, says Katrina Clokie
Do you feel that testing in your organisation is transparent? That the activities of testers are well understood and widely appreciated?
In many organisations, testing is an activity treated with impatience and doubt. The genesis of this attitude is often the test reports that present the voice of testing to the wider organisation. When reporting is unclear or feels too simplistic, there is a tendency towards suspicion. How can people be confident that testers are finding the right bugs if they can’t understand what the testers are doing?
Although these negative feelings may be unfounded, a lack of confidence in the abilities of the test team to find important issues can indicate a bigger problem – that those outside of the test team do not understand what testing has happened, is happening and needs to happen, due to the nature of test reporting available.
Traditional test reporting often contains numbers and graphs that show test case and bug counts. This sort of report can feel informative, but may fail to communicate the rich information required for complete understanding. Take the following example.
Imagine you are standing in London. I ask you to spend 12 days touring Europe, visiting the Netherlands, Germany, Austria, Italy, Switzerland and France. You agree enthusiastically, plan a route and begin your journey.
On day four, I ask for a travel update. You’ve visited four of the six countries I requested and you’re currently in Italy. Fantastic! I’m feeling confident that you’re on track and you might even return to London early.
On day eight, I check in again. You’re still in Italy. What’s happened?
Things appeared to be going so well, and now it seems your progress has stalled. In fact, you had always planned to spend five nights in Italy, which you thought the most interesting country in Europe. I’m feeling annoyed, anxious and uninformed.
What has gone wrong here?
First, I didn’t know the details of your journey. Though I knew broadly where you were heading, the specific destinations weren’t visible to me.
Second, the reports you gave me were accurate, but they didn’t tell me everything I needed to know. When the report was good news, I was happy to take it at face value. When the report was bad news, I wanted to find out what was going wrong.
We could have avoided these problems. What I really needed to see was your map.
There are parallels between this example and the traditional testing process. We can solve the same types of problems in our own work by making our testing more transparent. Aaron Hodder has written about creating a visual model of product functionality using mind-mapping software.
There are many advantages to using this approach, which can fundamentally change the testing thought process and its output. Using a lightweight and flexible mind-mapping tool to capture test ideas allows for rapid thought across the breadth of the application. A test case-driven approach is designed to encourage thought in a specific area, often at the expense of seeing the whole.
By operating at a level that considers the product in its entirety, the tester can rapidly create a high-level coverage model for review.
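To make the idea concrete, here is a minimal sketch of how such a coverage model might be represented in code: a tree of named areas, each with a status, from which per-branch progress can be summarised. The structure, names and statuses are illustrative assumptions, not a prescribed tool or format.

```python
# Illustrative sketch (not a real tool): a test coverage mind map as a tree,
# so progress can be reported per branch rather than as one opaque count.
from dataclasses import dataclass, field


@dataclass
class Node:
    name: str
    status: str = "not started"  # assumed statuses: "not started" | "in progress" | "done"
    children: list = field(default_factory=list)


def summarise(node):
    """Return (done, total) counts over the leaf areas beneath a node."""
    if not node.children:
        return (1 if node.status == "done" else 0, 1)
    done = total = 0
    for child in node.children:
        d, t = summarise(child)
        done += d
        total += t
    return done, total


def report(root):
    """Print per-branch progress -- a textual stand-in for the visual map."""
    for branch in root.children:
        done, total = summarise(branch)
        print(f"{branch.name}: {done}/{total} areas tested")


# Hypothetical coverage model for a small shopping application
model = Node("Shop", children=[
    Node("Checkout", children=[
        Node("Card payment", "done"),
        Node("Vouchers", "in progress"),
    ]),
    Node("Search", children=[
        Node("Filters", "done"),
    ]),
])
report(model)
```

A report derived from the model itself, rather than maintained separately, stays consistent with the map that testers and stakeholders are already looking at.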
A mind map is visually appealing, and the format is accessible and familiar. In my experience, people are comfortable with the mind map format for presenting information and more likely to engage with a visualisation than with a wall of text.
This engagement encourages collaboration between testers and other areas of software development. In contrast, test cases tend to be stored in a hidden and monolithic repository. There is great advantage to being able to point at a map on the wall during conversation.
A model open to critique will result in more comprehensive test coverage than one developed in isolation. By being unafraid to publicise where we intend to test, we offer others the opportunity to engage in testing activities. Where the ownership of quality becomes shared among a wider audience, a higher-quality product results.
Where test coverage is visualised, we have a vehicle for transparent reporting. It is possible for management to understand the activities of their test team without investing a huge quantity of time in reading test cases. Progress and problems are visible. Improved communication between testing and other areas of the business will reduce the risk of product failure. By sharing ideas and opening them to scrutiny, testers will create trust in their relationships with the business.
Once management feels confident in their test team, the truth of testing can be reported clearly. The culture of the organisation can be changed by embracing visibility in the test process. Rather than an interrogative environment born of miscommunication, create one in which there is shared understanding.