Acceptance Criteria: Using Specification by Example helps you see the wood and some trees
The Assurity coaching team have helped clients achieve significant improvements in quality by adopting Specification by Example, a collaborative approach to discovering requirements. Nigel Charman describes how development teams can reach a better understanding of their business needs by discussing general acceptance criteria in addition to specific examples.
To build quality into a solution, you need to understand the requirements at a detailed level and then prove they have been met. In many software development projects, however, the requirements aren’t well understood by the development team, whether because of the system’s complexity or poor communication within the team. All too often, misunderstandings only surface after the software is built, when the financial damage has already been done.
The traditional response has been to gather all the requirements and complete the design upfront, on the assumption that requirements captured completely and correctly at the outset will hold over time. Experience tells us that few pieces of work have these attributes.
Specification by Example seeks to address these issues; one of its key practices is specifying collaboratively. Immediately prior to the development of a user story, the team discusses the story and elaborates a set of concrete examples that are later used as acceptance tests. This discussion typically involves the whole team, including the product owner, business people, testers and developers. By harnessing the skills and experience of the whole team, we discover missing requirements and examples. The business plays a vital role in explaining the context and can resolve questions on the spot, saving time. The team gains clarity, resolves ambiguities, reduces misunderstandings and develops a deep, shared understanding of the business need. A key differentiator of this approach is that it drives out these inconsistencies early in the process, immediately prior to development.
Behaviour Driven Development (BDD) is one technique from which Specification by Example emerged. The BDD approach uses a two-layer structure of stories and examples.
The examples (or ‘scenarios’ in BDD terminology) are deemed to be the acceptance criteria agreed by the team.
This two-layer approach is commonly used. For example, Gojko Adzic’s recent Specification by Example book notes that “refined examples can be used as acceptance criteria for delivery”.
By having the “acceptance criteria presented as scenarios”, this approach conflates examples (scenarios) with acceptance criteria. It is up to the reader to deduce general rules from the specific examples.
Martin Fowler highlights what is missing with this approach:
"Specification By Example isn't the way most of us have been brought up to think of specifications. Specifications are supposed to be general, to cover all cases. Examples only highlight a few points, you have to infer the generalizations yourself. This does mean that Specification By Example can't be the only requirements technique you use, but it doesn't mean that it can't take a leading role."
In short, this approach makes it “hard to see the wood for the trees”.
By reclaiming the term ‘acceptance criteria’ to describe the desired behaviour in general terms, we separate the acceptance criteria from the examples.
For instance, an acceptance criterion might be that “All orders of $50 or over receive free shipping”. Examples might be “a $49.99 order has to pay for shipping” and “a $50.00 order has free shipping”.
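Expressed in Gherkin, the criterion and its examples might read as follows (an illustrative sketch only; the feature name and step wording are assumptions, not from the original article):

```gherkin
Feature: Shipping charges
  # Acceptance criterion (the general rule):
  # All orders of $50 or over receive free shipping

  Scenario: Order just under the threshold pays for shipping
    Given my order total is $49.99
    When I check out
    Then I am charged for shipping

  Scenario: Order at the threshold ships free
    Given my order total is $50.00
    When I check out
    Then shipping is free
```

Recording the general rule alongside the scenarios keeps the acceptance criterion visible, rather than leaving the reader to infer it from the examples.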
This gives us a three-layer structure with an increasing level of detail:

1. Story
2. Acceptance criteria
3. Examples
It is important that each example relates to an acceptance criterion and that each acceptance criterion has corresponding examples.
When specifying collaboratively, we discuss examples of each acceptance criterion. By discussing ‘what-if’ scenarios, we discover important new examples. These new examples often don’t have corresponding acceptance criteria, prompting us to define new acceptance criteria. This may, in turn, lead to additional examples being created as we explore the new criteria.
Switching back and forth between the acceptance criteria and examples helps us discover gaps and gain a deeper understanding of the requirements. We start to see the wood and some trees.
Using this approach, we find that the barrier to entry for detailed discussions is lowered. We only have the discussion once we have the context; we discuss only one context at a time; and we then work through the complexities by comparing and contrasting examples. It is in this wider discussion and shared understanding that we see teams produce better solutions.
As an example, let's start from the ‘Search Courses’ feature in the Cucumber documentation:
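The feature reads along these lines (paraphrased from memory of the Cucumber documentation’s example; the exact wording and data there may differ):

```gherkin
Feature: Search courses
  Courses should be searchable by topic.
  Search results should provide the course code.

  Scenario: Search by topic
    Given there are 240 courses which do not have the topic "biology"
    And there are 2 courses, A001 and B205, that each have "biology" as one of the topics
    When I search for "biology"
    Then I should see the following courses:
      | Course code |
      | A001        |
      | B205        |
```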
The scenario title states that we can search by topic but doesn’t define the general criteria. Questioning what these criteria are leads us to discover other requirements and examples.
When specifying collaboratively, we would expect a team discussion such as:
The team adds:
The discussion continues:
The team adds:
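As a purely hypothetical illustration of the kind of example such a discussion can surface (the criterion and course data below are my assumptions, not the team’s actual output), a ‘what-if’ about letter case might yield:

```gherkin
  # New acceptance criterion discovered through a 'what-if':
  # Topic matching is case-insensitive

  Scenario: Search by topic in a different case
    Given there is a course A001 with the topic "biology"
    When I search for "Biology"
    Then I should see the course A001
```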
This dialogue illustrates how examples are elaborated from acceptance criteria. The examples stimulate ‘what-if’ discussions that generate more examples, and generalising from these leads to new or refined acceptance criteria (and sometimes to new stories).
In summary, seeking progressive refinement from stories through acceptance criteria to examples helps us understand the requirements better. Combining generalisation and specialisation approaches reveals both missing requirements and examples. And most importantly, this is all done before a line of code has been written.
This results in a solution that better meets business needs, first time.
In Part Two, we will look at how this approach improves the living documentation, enhancing the understanding of your specifications and making your system more maintainable.
Thanks to Assurity’s Todd Brackley, Gillian Clark and Gareth Evans for their input into this article.