Is it time to revise the Definition of Done?

Quick Thoughts

26 June 2016 • Written by Arpan Mandal

As a consultant, I see many different Agile teams in action at various client sites. One thing I regularly see hurting delivery is an outdated Definition of Done (DoD). Agile teams tend to create this checklist and then never revisit it, missing a great opportunity to improve their delivery.

From time to time, the DoD should be reviewed with continuous improvement in mind. In many of the teams I’ve worked with, I’ve found a couple of very important steps missing from it.

Here’s a recent DoD:

[Image: an example Definition of Done checklist]

As you can see, this DoD covers a code review, a design review, adding tests and so on. What it never states, and what is rarely practised as part of the delivery checklist, is a review of the tests themselves. I’ve often found that the purpose of writing tests is misinterpreted. Most teams do write unit tests to automate the testing of the developed code, but what about reviewing those tests? Good test design complements good code design. A design review results in a better design and mostly involves the Product Owner and developers. A code review aims for good code quality and mostly involves developers, who often forget to review the automated tests during that exercise.

Shouldn’t there be a test review to improve the quality of the tests (unit, acceptance, behaviour, integration etc)? I think adding this forgotten test review to the DoD checklist, and ensuring it’s done with the right people (testers), is crucial to good development practice.
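To make this concrete, here’s a minimal, hypothetical sketch in plain Python with pytest-style test functions (the function under test and the test names are invented, not taken from any client’s code) of the kind of thing a test review catches: a test that exists and passes, but says nothing about the behaviour the team actually cares about.

```python
# Hypothetical example: a unit test that ticks the "we have tests" box
# but would likely be flagged in a test review.

def apply_discount(price, percent):
    """Toy function under test, invented for illustration."""
    return round(price * (1 - percent / 100), 2)

def test_discount():
    # Weak test: it runs the code but asserts nothing about the business
    # rule, so almost any implementation would pass.
    result = apply_discount(100, 10)
    assert result is not None

def test_ten_percent_discount_reduces_price_by_ten_percent():
    # Reviewed version: descriptive name and an explicit expected value.
    assert apply_discount(100, 10) == 90.00

def test_full_discount_results_in_zero_price():
    # The kind of edge case a tester would typically ask about.
    assert apply_discount(50, 100) == 0.00
```

A reviewer, ideally a tester, would push the first test towards the second and third.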

Who

Developers are not testers. Therefore, a good test review should also involve a tester, as well as the Product Owner, SME and developers.

What

The review should include at least the automated unit tests, integration tests, behaviour driven tests and acceptance tests.
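To give a rough feel for the difference between these levels, here’s a hedged sketch in plain Python (the ShoppingCart domain and its behaviour are invented purely for illustration) of an acceptance-style test laid out in Given/When/Then terms, the kind of test a Product Owner and tester can read and challenge during a test review.

```python
# Hypothetical acceptance-style test written in plain Python with
# Given/When/Then comments; tools such as Cucumber or pytest-bdd express
# the same structure in feature files. All domain names are invented.

class ShoppingCart:
    """Toy domain object, invented for illustration."""

    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    @property
    def total(self):
        return sum(price for _, price in self.items)

def test_customer_sees_running_total_as_items_are_added():
    # Given an empty cart
    cart = ShoppingCart()
    # When the customer adds two items
    cart.add("notebook", 12.50)
    cart.add("pen", 2.00)
    # Then the total reflects both items
    assert cart.total == 14.50
```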

Bonus

An added benefit of a test review is that it sparks conversations about testing, which are different in nature from conversations about design or the technical nitty-gritty of code. These conversations tend to expose user scenarios and usability cases that were missed during design or coding. They also help refine the tests and acceptance criteria, and prompt design and code changes, all of which result in a better product.

The code review is an important step that appears in virtually every DoD. Sadly, a key aspect is often forgotten during the code review itself: the impact of the code change on the end user or customer.

For example, at one client recently, a change went through a routine code review and the code was confirmed to be working perfectly, yet no-one thought about its impact on the end user. A certain property was showing an incorrect date/time value, so the developers fixed it and tested the fix well. However, the customer was never advised that, once they upgraded to the new version of the software, all data loaded into the system needed to use UTC date/time values. Some of the old datasets therefore contained date/time values that needed converting to UTC, but no migration of that data had been prepared. The impact of an oversight like this is huge.
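To illustrate the gap (this is not the client’s actual code), here’s a minimal sketch of the one-off data migration that was never planned, assuming the old values were stored as naive local timestamps; the timezone, column name and row structure are all invented.

```python
# Hypothetical sketch of the data migration that was never planned:
# normalising historical date/time values to UTC so existing data matches
# the behaviour of the fixed code. The column name, timezone and row
# structure are all invented for illustration.
from datetime import timezone
from zoneinfo import ZoneInfo

LOCAL_TZ = ZoneInfo("Pacific/Auckland")  # assumed original local timezone

def to_utc(naive_local_dt):
    """Interpret a stored naive datetime as local time and convert it to UTC."""
    return naive_local_dt.replace(tzinfo=LOCAL_TZ).astimezone(timezone.utc)

def migrate_rows(rows):
    """Return the rows with their 'loaded_at' value converted to UTC."""
    return [{**row, "loaded_at": to_utc(row["loaded_at"])} for row in rows]
```

The script itself is trivial; the hard part, and the part that was missed, is recognising that it is needed at all, which is exactly what an impact assessment is for.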

Therefore, it’s important to assess the impact of the code or functionality change and capture it, either as part of the code, design or test review, or as a separate exercise in its own right. Adding an impact assessment to the DoD keeps this important exercise visible to the team and gives it a much better chance of getting done. It doesn’t matter when it’s done, as long as it is.

Continuous improvement is a fundamental principle of Agile, and for the DoD it can only be achieved by regularly reviewing and retrospecting on the checklist itself. A test review and an impact assessment can add an immense amount of value to any product development team’s delivery results.

 
