‘Heeling errors’ with visual models

Quick Thoughts

19 December 2014 • Written by Andrew Robins

Like good testers everywhere, I love re-using other people’s great ideas. So when I was asked to do a last-minute presentation on models to the TPN Christchurch, I immediately thought “Visual Models”.

There has been some excellent work done on these by members of our Wellington practice. I had a fresh audience in Christchurch, so my basic approach was to ‘rinse and repeat’ the visual modelling session that Katrina Clokie ran in September – with a few twists of my own.

The basic challenge was to form small teams and model a test approach for an object. Participants would then dot vote for the different models.

Teams of three were formed. Every participant was asked to think of a theme song, a mascot and a favourite cultural icon – and then, armed with this information, go in search of teammates.

I encouraged people to think about their team selection strategy. Were they going to go for diversity or were there more benefits in thinking about homogeneity? This part of the challenge was mainly intended as an icebreaker to get people talking and it served this purpose well.

The challenge was then introduced and I pointed out five different things people could achieve with their model:

  • They could model the tests that they intended to run
  • They could model the selection criteria for these tests
  • They could model the test coverage that they intended to deliver
  • They could model how they intended to evaluate their test results
  • They could model the decisions that they had made while modelling

This last point may seem a little obscure – but it is an important one. Any modelling process is a simplification process. Otherwise, there is no point in doing it. During the process of modelling, we all make decisions about which information we are going to pay attention to, and which we are going to ignore. That can be an important thing to describe and yet it is mostly hidden in the models that we produce.

Of course, any team that tried to model all five of these things was doomed to failure, so I also made it clear that some of my input was going to be helpful – and some unhelpful – and left participants to work out what that might mean.

Just to make things even more fun (mainly for me), each team was allowed to select up to five items to help produce their model. A marker pen counted as one item, as did a single sheet of paper. This was intended to get people thinking and taking the challenge seriously right from the start. Also, constrained resources are a feature of our work, and I felt this would be a good thing to work into the exercise.

The challenge itself was to model a test approach for an antique ‘Heeling Error Gauge’. To make things more interesting, I had printed out a lot of information about Heeling Gauges, without initially realising that these are two very different things. This was not actually a problem for the challenge, though, as the main purpose of including the documentation in the first place was to encourage participants to waste limited time.

Only one person per team was allowed to interact with the gauge (which was in a different room) and look at the documentation. They then had to explain the information they had to the other team members who produced the model.

Observations I made during the exercise were:

  • Most of the teams appeared to select less-experienced team members to be the ones who gathered the information. In general, this was a mistake, as I had deliberately selected an object that was difficult to describe – and almost certain to be unfamiliar to all participants
  • Most teams started by trying to draw a picture of the gauge, rather than a model of the gauge – let alone a model of the way they intended to test the gauge. It was clear to me that most participants were new to the concept of visual models – and as such were falling back on the familiar
  • Teams spent far too much time looking at the documentation – there was intentionally too much information there to be usefully analysed in the time available
  • No teams cooperated with each other – they all treated it as a competition, although I never explicitly described the challenge in that way. I just implied it

With 15 minutes to go, I introduced my final twist to the challenge and gave teams the option of working cooperatively – or continuing to work in isolation. If they chose to work in isolation, they had the advantage of still having access to the gauge. Otherwise, I was closing the doors on them.

Six teams out of eight took the option of working cooperatively (which they could have been doing all along as far as I was concerned) and losing access to the gauge, while two continued to work in isolation.

This was an important part of the exercise, as I wanted to show people the benefits of cooperation and make the point that part of what we need to do is to consider the way in which we are working. Yes, I had set the challenge up to look like a competition – but clearly in terms of achieving a better end result, cooperation was likely to be an effective strategy and, as challenge owner, I would have been very open to the idea of teams cooperating earlier than they did.

My observation was that the teams that worked cooperatively made very rapid progress, and there was a real hive of activity as people shared knowledge and ideas.

This session was probably very different from what most people were expecting when they turned up to the TPN – but engagement levels were pretty high and people seemed to enjoy the challenge.

In the end, the two teams that came out on top in the dot voting included a good contingent of testers from Assurity – plus some of our friends from JADE and Tait – so I wasn’t complaining about that result!
