Assurity hosts industry leader Michael Bolton

Big Thoughts

22 July 2013 • Written by Aaron Hodder, Adam Howard & Katrina Clokie

In April, Assurity hosted Michael Bolton, industry leader in context-driven testing and co-creator of the Rapid Software Testing methodology.

During his week in Wellington, Michael taught the Rapid Software Testing course and a one-day session on Critical Thinking Skills for Testers. He also presented at the Test Professionals’ Network forum and a WeTest Workshop.

Aaron Hodder, Adam Howard and Katrina Clokie share their insights from these events, presented by such a prominent figure in the testing community.

Rapid Software Testing (15–17 April)

I’ve heard Michael Bolton described in many different ways – ‘testing superstar’, the oft-cited ‘testing specialist’, even ‘that singer with the funny hair’. But having been lucky enough to attend his Rapid Software Testing course, hosted by Assurity, if I were to describe him I could only use one word… ‘passionate’.

His passion for testing really shines through during the course, which is designed to promote methodologies and practices that enable software testing to be done more quickly and less expensively while delivering excellent results.

Developed in tandem with his associate James Bach, the course pays homage to the school of context-driven testing, to which they are both strongly aligned. But what struck me most of all was the realism with which Bolton approaches the idea of improving the way we do testing.

This is no mutiny, no cult, no revolution against the established order – although the fervour and passion inspired in those who had recommended the course to me may suggest otherwise. Instead, it’s a considered and practical approach to improving how we can go about testing software.

The major theme that runs through the ideas he presents is that testing is becoming rather too much like a game – that we spend too much time keeping score and competing against one another.

Testing, Bolton argues, ought not to be judged by the number of test cases we run or the number of bugs we find. The goal of testing should only ever be to find as many problems as possible in the time available.

One of the key distinctions here is the difference between a problem and a bug. Bolton suggests that quality is “value to some person who matters, at some time” and that a bug is something that threatens this value, according to the person who matters.

In that context, though, testers are not the people who matter, so we cannot truly identify bugs ourselves. Problems only become bugs when someone who matters – such as a project manager or business owner – decides that they threaten the value of the product.

Building on this, Bolton identifies testing as a lens that enables the project to see more clearly than it otherwise could. Testing is a learning activity; its purpose is to uncover as much information as possible and allow the business owner, project manager or whoever matters to make informed decisions.

This is not to say that the tester has no role in identifying bugs or threats to the project’s value. To be able to identify such potential problems, Bolton suggests we identify and utilise oracles to help us form judgments about the software.

Oracles are ways in which we judge a product and they are heuristic in the sense that they are fallible and not absolute. They act as a guide to enable us to infer the existence of a potential problem, but ultimately they can’t guarantee that an issue will undermine the value of the product. This is because they can’t encapsulate all the factors that must inevitably inform such a call.
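To make the idea concrete, here’s a rough sketch of my own – not taken from the course material – showing one kind of oracle in Python: consistency with a comparable product. Both functions and the sample values are invented for illustration; the point is that a mismatch flags a potential problem for a human to weigh up, not a verdict.

```python
# A sketch of a "consistency" oracle: compare the product's rounding
# against a comparable reference implementation. A mismatch suggests a
# potential problem -- it doesn't prove a bug, because the reference is
# itself fallible and the difference may not matter to anyone who matters.

def product_round(value: float) -> int:
    """The behaviour under test (hypothetical)."""
    return int(value + 0.5)  # naive half-up rounding

def reference_round(value: float) -> int:
    """A comparable product we lean on as an oracle (also fallible)."""
    return round(value)  # Python rounds halves to the nearest even number

for value in [0.5, 1.5, 2.5, 3.5]:
    ours, theirs = product_round(value), reference_round(value)
    if ours != theirs:
        # Flag for investigation; a human still decides whether this
        # threatens value to someone who matters.
        print(f"Potential problem: {value} -> {ours}, reference says {theirs}")
```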

This awareness of the fallibility of testing is another theme that runs through the Rapid Software Testing course. Bolton stresses that the goal of testing is only ever to find as many problems as possible in the time available, and he aims to maximise such returns – but in the knowledge that exhaustive testing will never be achievable.

And so he proposes that we make the best possible use of our time and that when we are engaged to test a product, we should spend as much time as possible doing precisely that.

This means he argues for less emphasis on creating lengthy and exhaustive test plans and reports – although he by no means suggests these be banished altogether. In particular, he disputes the sense of investing our precious time in creating test scripts.

Bolton argues that a test script can cause inattentional blindness in a tester, causing them not to see issues that may be right in front of them because they become reliant on the script they’re following.

As such, scripted testing ought to be used for what Bolton and Bach term ‘checking’ – the process of making evaluations by applying algorithmic decision rules to specific observations of a product – while manual testing time is better used in conducting structured exploratory testing sessions.
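As a rough illustration – again mine, not the course’s – a check is an observation fed through an algorithmic decision rule, something a machine can evaluate without human judgment. The discount function and expected values below are hypothetical.

```python
# A "check" in the Bolton/Bach sense: apply an algorithmic decision rule
# to a specific observation of the product. It yields a bare pass/fail,
# and it cannot notice anything it wasn't programmed to look for --
# spotting the unexpected remains the tester's job.

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical product code being checked."""
    return round(price * (1 - percent / 100), 2)

def check_discount(price: float, percent: float, expected: float) -> bool:
    """Observation (the product's output) + decision rule (==)."""
    return apply_discount(price, percent) == expected

print(check_discount(100.00, 10, 90.00))  # True: rule satisfied
print(check_discount(19.99, 50, 10.00))   # False: actual result is 9.99
```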

This places greater emphasis on the tacit knowledge of the tester – the things we know without necessarily knowing that we know them. These inform our understanding of the world and our interactions with it, guide our instincts and expectations and may alert us to the presence of a potential problem.

Bolton is quick to dismiss the common view of exploratory testing as an unstructured and non-repeatable activity that cannot be accurately reported upon.

In his argument for session-based exploratory testing, he stresses that each test session should be strictly time-boxed, should have a set mission or charter to govern its objectives, and should generate a reviewable session sheet of what was observed – followed by a debrief with an experienced test lead or manager to introduce accountability and trust into the exercise.

Such an approach to testing allows a more hands-on testing engagement – maximising the time spent actually testing the software, rather than writing test scripts that become a representation of the tacit knowledge a tester would use to guide their exploration.
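For a rough sense of what a reviewable session sheet might capture, here’s a loose sketch as a Python data structure. The field names are my own invention – real session-based test management sheets use their own format – but the elements are the ones described above: charter, time-box, observations and debrief.

```python
from dataclasses import dataclass, field

@dataclass
class SessionSheet:
    """Illustrative record of one time-boxed exploratory test session."""
    charter: str                        # mission governing the session
    tester: str
    duration_minutes: int = 90          # the strict time-box
    notes: list[str] = field(default_factory=list)  # what was observed
    potential_problems: list[str] = field(default_factory=list)
    debriefed_with: str | None = None   # lead/manager sign-off after debrief

sheet = SessionSheet(
    charter="Explore the payment flow for boundary-value problems",
    tester="A. Tester",
)
sheet.notes.append("Card expiry field accepts dates in the past")
sheet.potential_problems.append("Missing expiry validation?")
sheet.debriefed_with = "Test lead"      # accountability and trust
```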

Organising a testing effort in this session-based way also doesn’t discount the act of planning as a crucial testing activity. Initial survey and strategy sessions can be used to help familiarise yourself with the system under test and define the processes that will be utilised to test it.

Indeed, Bolton argues that such early engagement with the product is crucial in building a model of the software that then allows us to define our test approach and focus our energies on those areas of the system most important to the business or that present the greatest risk.

Such a model can take any form, but during the course we used mind-mapping software to great effect. It’s a technique I’d never employed before, but I quickly found it to be a wonderful method of visually representing the extent of a system and the relationships within it.

Presentation of such a model to a business owner can be extremely effective in demonstrating the vastness of a system. It also encourages them to make a call on those areas your testing should cover – building in an awareness of the inherent limits and helping the people who matter to define the test approach.

Reporting outcomes in a more efficient manner is another way that the Rapid Software Testing approach attempts to make the business of testing more efficient. Instead of long and wordy documents, Bolton argues for the construction of a simple narrative about the product, the testing and the value of that testing.

However, the real value in the course lies in the fact that Bolton is not selling solutions. The Rapid Software Testing course makes no claims to solve all your testing problems and help you produce perfect software.

Instead, Rapid Software Testing is a mindset and a skillset that helps make testing faster, cheaper and more effective. It is also a living thing. Bolton regularly points out that he and Bach continuously debate various elements and are constantly introducing new ideas.

The ideas behind Rapid Software Testing are consistently developing and evolving to keep pace with a rapidly developing industry.

This doesn’t mean that I’ll need to re-take this course in a few years to keep up with those developments. More than anything, you take a mindset away with you – a new way of looking at testing and the way we do things and a passion to improve the way we work.

That word ‘passion’ again. It’s what drives the course and makes Bolton such an engaging and inspirational teacher. He has a genuine love for what he does – as illustrated by the way he gleefully showed us his collection of software bugs found in the world’s hotels and airports.

Bolton’s passion is for software testing. For finding bugs and for finding them in the methods and processes we use to do that. While talking to him about having pride in what we do, he looked at me and said something that I think sums up the ethos of Rapid Software Testing and Bolton himself:

“You know what, I just think that we can do better”.

Test Professionals’ Network Forum (16 April)

As software testers, we’re usually called upon to discover threats to functionality and often to things like security and usability. Something we don’t hear much about though is ‘charisma’.

During Michael’s visit, he presented a thought-provoking and inspiring interactive session at the Wellington Test Professionals’ Network (TPN) that explored this concept and how it applies to testing.

Charisma as a quality attribute was first introduced in 2011 by Rikard Edgren in his Little Black Book on Test Design. It has since been incorporated into the heuristic test strategy model taught by Bolton and his colleague James Bach. 

The presentation started with a broad overview of the three pillars of the heuristic test strategy model (HTSM): Product Elements, Project Environment and Quality Criteria. After this brief introduction to the HTSM, we delved deeper to focus on how charisma can be considered a quality attribute.

As explained by Rikard Edgren, charisma is something that software can or should possess. When we look at a product, we often ask questions like, “Does it work?” and “Is it user-friendly?”. These relate to more functional attributes, but to ask, “Does the product have ‘it’?” – where ‘it’ is that X-factor or charisma that makes something stand out from the competition – is an equally valid assessment to engage in.

We then broke into groups to brainstorm what we thought charisma – with reference to computer software – might entail and what the elements of charisma were.

This generated some really interesting discussion and ideas. I was especially intrigued by one group’s idea of ‘tribe values’ – that an application’s ability to conform to the values of the tribe could give it this sense of charisma.

After we came back and put all the ideas on the wall, Michael outlined what Rikard Edgren had proposed:

  • Uniqueness: the product is distinguishable and has something no one else has
  • Satisfaction: how do you feel after using the product?
  • Professionalism: does the product have the appropriate flair of professionalism and feel fit for purpose?
  • Attractiveness: are all types of aspects of the product appealing to eyes and other senses?
  • Curiosity: will users get interested and try out what they can do with the product?
  • Entrancement: do users get hooked, have fun, in a flow and fully engaged when using the product?
  • Hype: should the product use the latest and greatest technologies/ideas?
  • Expectancy: the product exceeds expectations and meets needs you didn't know you had
  • Attitude: do the product and its information have the right attitude and speak to you with the right language and style?
  • Directness: are (first) impressions impressive?
  • Story: are there compelling stories about the product’s inception, construction or usage?

Since the TPN session, I've been ruminating on the idea of 'charisma' in relation to software. The most important insight was that the value of charisma isn't just applicable to the Apples and Googles of the software world.

That internal banking app you're testing? A charismatic application engages users and makes them happy to use it. A happy, engaged user makes fewer mistakes than someone who feels 'forced' to use your application and will likely respond better to the experience and your business as a whole.

As such, there are definite, tangible business outcomes to this idea of treating charisma as a quality attribute when we test software.

Critical Thinking Skills for Testers (19 April)

To round off the week, Michael presented a one-day workshop on Critical Thinking Skills for Testers – his definition of ‘critical thinking’ being “thinking about thinking with the aim of not getting fooled”.

In this workshop, Michael taught different ways of deconstructing claims made about testing and how to structure our own thinking to avoid traps. This leads to more careful thinking and more careful testing.

One part I found particularly interesting was the difference between system one and system two thinking, and the importance of being able to recognise which mode we’re in and to apply the right one to each situation.

System one thinking is fast, instinctual and emotional and often serves us well for everyday thinking when speed is important. However, it can also mislead us. This is when we need to move into system two thinking, which is slower, but more deliberate and logical.

I also found it very valuable hearing how to elicit hidden assumptions from the things people say. As testers, we often deal with multiple sources of truth and multiple interpretations of requirements and objectives. Being able to think clearly and recognise the differing interpretations and assumptions that may occur is a crucial skill.

By the end of the day, we had learnt about common thinking fallacies and had picked up tools and techniques for overcoming them, enabling us to think about testing problems in a more considered, logical way.

Everyone I spoke to was energised and enthusiastic. I highly recommend this workshop for anyone who wants to clarify and enhance their thinking, not just with regards to their testing work, but in their everyday life as well.

All in all, a whirlwind week for the man behind Rapid Software Testing. I think it’s fair to say he took Wellington a little by storm! It’s impossible to be around someone with Michael’s energy and passion and not have it rub off on you a little. Huge thanks go to Assurity for giving us – and the Wellington testing community – the chance to be inspired.

Adam Howard with Katrina Clokie and Aaron Hodder
