There are many different definitions of software testing and many views on what responsible testing looks like in our industry. How you view the role of a tester informs what practices and artifacts you believe are valuable.
My preferred definition of software testing is from Cem Kaner: “Testing is an empirical investigation conducted to provide stakeholders with information about the quality of the product or service under test.”
Basically, I see testing as belonging in the information-gathering and delivery business.
Furthermore, I identify as a context-driven tester. Below are the seven principles of the context-driven school, along with my interpretation of what they mean to me:
“The value of any practice depends on its context” and “There are good practices in context, but there are no best practices.” Every testing problem is unique, so how we approach each testing problem is also unique. Learning as much as possible, and communicating your understanding of what you think the problem is, are therefore vital to effective testing. My choice of tools reflects this belief: I like tools that are adaptable and give me freedom. The more adaptable and less rigid a tool or process is, the more contexts I can use it in.
“People working together are the most important part of any project's context.” Knowledge exists in people's heads. It doesn't exist in the world in specifications or emails. Those are merely representations of knowledge. On a project, everyone has little pieces of the puzzle and no one person knows everything. Any practice that enhances social interaction – and any tool that helps me to communicate clearly what I think is going on – is a Good Thing™.
“Projects unfold over time in ways that are often not predictable.” Unfortunately, we can't schedule all our insights at the beginning of the project. Things change. At any given point in time, we must be willing to throw away everything we believed to be true and change tack. If we acknowledge that early on, then we can be vigilant about keeping the cost of changing and discarding to a minimum.
“The product is a solution. If the problem isn't solved, the product doesn't work.” This, to me, is all about really understanding the purpose of what we're testing and about seeing the whole. Software systems as a whole are greater than the sum of their parts. Every individual written requirement may be met, every user story may have passed a set of automated checks but, taken as a whole system, the software may completely fail to achieve what it was designed to do – or do it in less than ideal ways. Other emergent defects can occur when the software system is taken as a whole. Workflow inefficiencies, for example, often only become apparent when the entire system is viewed as whole.
“Good software testing is a challenging intellectual process” and “Only through judgement and skill, exercised cooperatively throughout the entire project, are we able to do the right things at the right times to effectively test our products.” The last two principles state that good testing requires skill and judgement to adapt and learn. Together, they remind us that, since things change and each situation is unique, we can't take a canned procedure, apply it to interchangeable human cogs on a factory line and expect good results.
In conclusion, central to the context-driven mindset is the idea that software development projects are complex and that it is better to accept that, rather than try to control it away. When we're developing software, we're solving novel and unique problems. When things change, it means someone has learned something. Learning leads us to better outcomes, so we should embrace that.
We should try to mold our practices to our environment, rather than try to mold our environment to our practices.
So if the principles of Context-driven Testing describe the world in which we find ourselves and how we ought to think about that world, are there any principles that describe how we should operate in those environments?
Three or four years ago, I encountered the principles of lean software development by Mary and Tom Poppendieck, based on the principles of lean manufacturing – and they really resonated with me. I've adapted them and applied them to testing. If the principles of Context-driven Testing describe the world and how we view it, then I believe these principles of lean testing describe how we may respond to that world:
Eliminate waste: I consider waste to be anything that doesn't add value, or anything whose cost of creation is hugely asymmetrical to the benefit derived. In a software project we have a limited amount of time and a nearly infinite number of things to test, so it should be our goal to squeeze as much as we can from every minute we have.
Amplify learning: As mentioned, software projects are volatile environments where change is constant. Information comes to us non-linearly and different people on the team are exposed to different information at different times. When we learn or discover something new, we need to make this new information as visible as possible.
Regularly revise: We can't schedule our moments of inspiration or schedule when new ideas occur to us. We are constantly learning, as are others on the project. We need to be able to adjust our outputs and actions with the changing landscape and, as the desires of the stakeholders change, so too should our test strategy. We must be willing to throw out everything we believed thus far.
Rapidly respond: Our role is to provide information to stakeholders to allow them to make quality-related decisions. Giving them information sooner allows them to make decisions sooner. Finding important bugs quickly allows them to be fixed in a timely manner. It aids with our credibility too. If someone taps us on the shoulder and asks "How did the testing go on the search module last week?", we want to be able to give them a satisfactory answer quickly.
Collaborate and communicate: We don't operate in a vacuum and I try to communicate what I'm doing and how I did it with anyone who's interested. This is essential for maintaining transparency and trust. Speaking of which...
Maintain transparency and trust: Our greatest asset for getting things done, and for doing our jobs to the best of our ability, is trust. A lot of testing theatre and wasteful activities are undertaken in environments with little trust to avoid penalties or criticism. I also believe in meaningful reporting. A lot of the time, when someone asks you for metrics or status reports in a particular format, it's the only way they really know how to ask "What's going on?". I try to keep everyone in the loop as much as possible. It also means that rather than giving a ‘pass / fail’, I try to pass on as much information as possible about how I tested something and why I think a bug may or may not exist. This means that any errors in my thinking or reasoning can be picked up by others. An environment of trust allows me to be transparent, and being transparent builds trust.
See the whole: This is about staying mindful of the problem the software is trying to solve, and about constantly questioning how the product you're testing fits with the values of the customers and other stakeholders.
To sum up, I want to be able to give and receive as much information as I can in the limited amount of time I have, and communicate it in a way that is respectful of others’ time and resources. These are my values and what I think constitutes responsible testing. However, the two biggest threats to these values, as I see them, are the prevalence of premature test scripting and the overuse of IEEE829-style documentation. I have seen the overuse of these two practices undermine good testing and good communication.
IEEE829-style documentation and premature test scripting
When I talk about IEEE829-style documentation, I mean any style of documentation that is overly wordy, template-driven or ceremonial in nature.
I'm not saying there isn't a time and place for these. Sometimes this style of documentation is necessary and useful, but the very beginning of a project when we know less than we will ever know again, when we have so much learning to do, is rarely the time or place. This style of documentation is a poor medium to communicate evolving understandings.
At worst, this style of documentation is often template-based and acts as a ceremonial deliverable to satisfy some checkbox.
But even at best, developing them requires a lot of time and effort and reading them and assimilating the information requires a lot of time and cognitive energy because any useful information is obfuscated within multi-thousand-word documents. It is difficult to extract the meaningful from the insignificant. How often do we see documents like this get written, signed off and then forgotten about on a shared drive, collecting virtual cobwebs?
To me, useful documentation is documentation that reflects my current understanding of the project in a format that makes it easy for me and others to retrieve that knowledge.
All in all, to crystallise our ideas so early when the cost of creation, editing and retrieval is so high and we still have so much to learn, makes little sense.
Below I offer an approach. It isn't necessarily the best approach for your context, or even a good approach for your context. It is simply an alternative I have used to overly formalised and prescriptive approaches.
With regards to premature test scripting, I certainly favour a more exploratory approach to testing. My argument against scripting too early is:
- At the beginning of a project, we know least about the project
- Skilled exploratory testers perform hundreds of what could be considered discrete tests in a session. Few are worth repeating
- Most tests performed are informed by the results of the previous test
- The usefulness of a test is often not known until it has been performed. You don’t know if a rock is worth lifting until you’ve lifted it to see what’s underneath
- Testers, when following a script, will deviate from the script
- Different testers interpret scripted instructions differently resulting in differences between testers, even when following the same script
- Test scripts don’t tell you what you may think they are telling you
Let's revisit the principles of lean testing I proposed and see how the two practices of test scripting and traditional heavy documentation creation stack up:
Eliminate waste: Creating test scripts and large testing documents is a very labour-intensive job. Time spent writing documentation is time not spent testing. Investing large amounts of time and resources upfront on artifacts that are likely to become obsolete is wasteful.
Amplify learning: Because information comes to us non-linearly and at unscheduled times throughout a project's lifetime, it doesn't make sense to crystallise information early on in a format that is difficult to edit and difficult to retrieve information from.
Regularly revise: As mentioned, it's important to always be willing and able to change our course and throw away what we previously believed to be true. Test scripts and traditional test documents are very difficult to edit and, based on the time spent creating them, it can be very painful to throw them away and start again based on new information. The linear format of a document makes revision difficult and test scripts by their nature do not allow flexibility in their execution.
Rapidly respond: When someone asks us how our testing is going, what we intend to test and what we've done so far, we want to give a quick response. Retrieval of information should be rapid. Also, if an issue arises or a fix for a defect is delivered, it is important that we have the ability to retest quickly and report our results efficiently. When testing and we discover that the map and the territory differ, our ability to be flexible and respond to that is important.
Collaborate and communicate and maintain transparency and trust: Lengthy Word documents and detailed test scripts obfuscate valuable information. Reading a document requires an investment of time and energy. This often means that our day-to-day activities are hidden and our mental model of the testing problem is hidden too. Traditional test scripting and reporting also hinder communication and transparency. If a manager asks me “What was the greatest issue you discovered in the last week?”, it is less valuable to say “Well, our pass to fail ratio is 89.5%” than to be able to quickly tell a story about the system under test, how you tested it and what you discovered about it. If I am to collaborate and communicate on a daily basis in a way that is rich and honest, I need a way to do so that requires little effort on the part of my audience.
So where to next?
If it’s time to move on from premature test scripting and large cumbersome test documents, then how do we fill that gap? What do we do instead?
Instead, let’s delve a little deeper into what’s being asked of us when others request test scripts and test documentation. Could it be that those are the only artifacts they know of that will deliver what they want? And what do people want? They want to know what you did, what your coverage model is and what you intend to do next.
This approach comprises three elements: the Heuristic Test Strategy Model, aspects of Session-based Test Management, and a visual model built with mind-mapping software. Together, these ensure that testing is performed systematically and intelligently, and that it is auditable, accountable and traceable.
Heuristic Test Strategy Model
The Heuristic Test Strategy Model (HTSM) by James Bach is a set of patterns and heuristics designed to spark ideas around developing a test strategy. It is available to download for free from James’ site. I was fortunate to discover the HTSM early on in my testing career and I can say without any doubt that it is the most useful tool I have ever used. I even carry a copy with me to client sites.
For the purposes of creating a Visual Testing Coverage Model (VTCM), I want to investigate and map out all the different aspects of the product, as well as all the ways in which a stakeholder may value the product. For this reason, I use the Product Elements and Quality Criteria categories for the VTCM.
Session-based Test Management
Session-based Test Management (SBTM) is a way to structure exploratory testing and offers a framework for reporting and measuring testing activities.
The important elements of SBTM are:
Charter: This briefly describes the agenda or mission of the testing session.
Time-boxed session: This is an uninterrupted period of time spent testing or pursuing the Charter. Session lengths are typically 60 to 120 minutes, but you can timebox in half or full-days at the expense of granularity of measurement.
Session Report: This reports what occurred during the test session and can include:
- Charter or mission
- Start and end times
- Tester’s name
- Notes on how testing was conducted
- List of any bugs found
- List of any issues encountered
- Time spent off charter (called ‘opportunity’ in SBTM)
- If necessary, percentage of time spent testing, time spent investigating bugs, time spent reporting bugs and time spent setting up data and test environments
The kind of information recorded can, and should be, tailored for the project. If a standardised reporting format is used, the reports can be parsed to aggregate data that is deemed useful. There are also many tools that support SBTM.
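As a hedged sketch of what parsing standardised reports might look like: the report layout, field names and percentages below are entirely my own invention, not the format of any real SBTM tool, but they illustrate how a consistent plain-text format lets you aggregate the task-breakdown data across sessions with very little code:

```python
# Sketch: aggregate task-breakdown percentages across plain-text
# session reports. The report layout is illustrative only -- real
# SBTM tools define their own formats.

def parse_report(text):
    """Extract the task-breakdown percentages from one session report."""
    breakdown = {}
    for line in text.splitlines():
        line = line.strip()
        for field in ("test", "bug", "setup"):
            prefix = field + ":"
            if line.lower().startswith(prefix):
                breakdown[field] = int(line[len(prefix):].strip().rstrip("%"))
    return breakdown

def aggregate(reports):
    """Average each task category across all parsed session reports."""
    totals = {}
    for rep in map(parse_report, reports):
        for field, pct in rep.items():
            totals.setdefault(field, []).append(pct)
    return {f: sum(v) / len(v) for f, v in totals.items()}

reports = [
    "CHARTER\nExplore the install wizard\nTASK BREAKDOWN\ntest: 60%\nbug: 25%\nsetup: 15%",
    "CHARTER\nExplore search filters\nTASK BREAKDOWN\ntest: 80%\nbug: 10%\nsetup: 10%",
]
print(aggregate(reports))  # {'test': 70.0, 'bug': 17.5, 'setup': 12.5}
```

The point is not this particular script, but that a standardised format turns a stack of narrative session reports into numbers you can answer questions with.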
The advantage of SBTM is that a tester has the freedom to apply their skills to actively investigate the product, while remaining accountable and keeping their work traceable. The advantage over pre-scripted testing is that the tester reports on what they actually did, as opposed to describing what they intended to do at some point in the past.
In practice, scripts are often obsolete by the time the software is delivered and, even under scripted conditions, testers will deviate from the script and two different testers may interpret the script two different ways. If important information is discovered under these deviations, this information is usually lost. SBTM allows rapid execution of tests and gives the tester the freedom to perform the next best action based on the information they have just learned.
Find out more about SBTM here:
Mind-mapping software
The testing process is not a nice, neat, linear process. Information comes to us non-linearly and often spontaneously. I need a way to capture this information as it arrives without being concerned about how to adjust it to a linear model (such as a test plan document written in MS Word). I also need a way to communicate non-linear, complex information in a way that is easy to understand and process. So far, I have found that mind-mapping software is the best-suited tool for these goals. Along the way, I have discovered a wonderful side effect of using mind-mapping software: unsolicited feedback... which I will go into later.
So, how do we begin?
1. Start planning the framework
I like to begin by arranging the elements of the Product Elements and Quality Criteria categories around a central node in the mind-mapping software. I then identify the properties I'm interested in, or may be interested in (after all, in these early days, who knows what may be important? The point is to stimulate thinking in ways you may not have considered), and begin the framework for the map:
2. Start learning and collecting information
Before setting sail for uncharted lands, we should gather as much information as we can. After all, we are in a state of ignorance about what we will encounter. There may be tales of great mountains, of bountiful oceans and of savage beasts. There may be scrolls written by soothsayers and there will be accounts written by previous travellers.
In other words, there may be specification documents, responses to RFPs, wiki pages, high-level descriptions and requirements documents. There may be project managers who can tell us great tales and business analysts with promises of bounty. We should begin mapping now so we have a frame of reference for our journey, but we should also fully expect to change it, adding details and removing wrong ones, once we get in there.
The act of mapping sparks new questions allowing you to ask more focused questions of the people around you. The map becomes the tool to make the map better. This is the beauty of an organic system. Give it some initial conditions (The Heuristic Test Strategy Model, for example) and it almost builds itself. Here is the same map after a conversation with a developer:
New parts have been added, we've learned a bit about the structure and the platforms and what is required to install it all. We were able to ask good questions because we could see where the silhouettes were that needed to be illuminated.
3. Begin touring
We feel we have a useful framework to begin the actual work of touring the product. We know where the general inlets are and where the major landmarks are. We have enough to make landfall. Now we get the lay of the land. "Oh, here's a river we didn't know about. Better map that so we can take a closer look later." "This mountain is a surprise. I wonder if anyone knows about it."
As an example, as I installed something, it asked me for a port number. “Interesting,” I thought. “I’d better map that as a test idea”:
Eventually your map will be fleshed out and you’ll be confident that it is a pretty good and useful model of the software under test.
4. Create test charters
With a little tweaking, you've almost automatically generated your test charters from your visual model and you can attach your test session reports directly to the model.
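To make the "almost automatic" part concrete, here is a minimal sketch, with an entirely hypothetical coverage model and charter wording of my own: if the visual model is exported as a nested structure (most mind-mapping tools can export to an outline or XML format that reduces to this shape), each leaf branch maps naturally onto one session charter:

```python
# Sketch: derive exploratory test charters from a nested coverage
# model. The model contents and charter phrasing are illustrative,
# not taken from any real tool's export format.

coverage_model = {
    "Installation": {
        "Windows installer": {},
        "Port configuration": {},
    },
    "Search": {
        "Filters": {},
        "Result ranking": {},
    },
}

def charters(node, path=()):
    """Yield one charter per leaf branch of the coverage model."""
    if not node:  # leaf node: turn the path into a session charter
        yield "Explore " + " > ".join(path)
        return
    for name, child in node.items():
        yield from charters(child, path + (name,))

for c in charters(coverage_model):
    print(c)
# Explore Installation > Windows installer
# Explore Installation > Port configuration
# ...
```

In practice the "little tweaking" is rewording each generated mission so it says what you actually want investigated, rather than just naming the branch.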
5. Keep adjusting the map
Your map is a model. As your model changes, so will your map. Parts that you thought were important will be deleted and parts you didn't know existed will rise to prominence. The map is a reflection of YOUR understanding of the product at a given time.
It also has other uses. You can use it to manage the testing process by assigning testers to a branch, letting them take ownership of all sub-branches. This lets you quickly assign work without having to go through requirements and specifications one-by-one and manually assigning individual tasks. You could colour branches that you're currently working on and shade in branches that have been completed. At a glance, you can see what has been done, what is being worked on and what is left to do.
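The at-a-glance view can be sketched in code too. In practice you would simply use the mind-mapping tool's colours and shading, and the branch names and status labels here are my own invention, but the underlying idea is just a status per branch and a tally over it:

```python
# Sketch: at-a-glance progress summary over coverage-model branches.
# Branch names and status labels are illustrative only; a real map
# would carry this as colours rather than data.
from collections import Counter

branch_status = {
    "Installation > Windows installer": "done",
    "Installation > Port configuration": "in progress",
    "Search > Filters": "todo",
    "Search > Result ranking": "todo",
}

# Tally branches by status to see what's done, in flight and left.
summary = Counter(branch_status.values())
print(dict(summary))  # {'done': 1, 'in progress': 1, 'todo': 2}
```

That one-line tally is the programmatic equivalent of glancing at a coloured map: you see the shape of the remaining work without reading any individual report.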
"Auditable", "traceable", "reportable" need not be synonyms for "prescriptive" and "cumbersome". I have offered one way of making the test management process a bit leaner, and there will be others that may suit your particular context.
This article is an updated version of Aaron's post on using mind-mapping software as a visual management tool (part 1) and was originally published on Aaron's blog site The Adventures of TestKiwi.