Testing web services presents challenges that are often addressed with automated testing tools, where the automated tests are data driven and share a common XML-based foundation.
However, by their very nature, data-driven automated tests are only as good as the data they are fed; flawed test data therefore yields an equally flawed, if not pointless, automated test suite.
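To make the data-driven idea concrete, here is a minimal sketch of a harness that reads XML test cases and runs each one against a service stub. All names (the XML elements, `service_under_test`, the attribute names) are illustrative assumptions, not taken from the project described here.

```python
import xml.etree.ElementTree as ET

# Hypothetical XML test data: each <case> element drives one automated test.
TEST_DATA = """
<cases>
  <case name="valid-order" input="42" expected="accepted"/>
  <case name="zero-quantity" input="0" expected="rejected"/>
  <case name="negative-quantity" input="-3" expected="rejected"/>
</cases>
"""

def service_under_test(quantity: int) -> str:
    # Stand-in for the real web service call.
    return "accepted" if quantity > 0 else "rejected"

def run_data_driven_tests(xml_text: str) -> list:
    """Run every <case> in the XML and report (name, passed) pairs."""
    results = []
    for case in ET.fromstring(xml_text).findall("case"):
        actual = service_under_test(int(case.get("input")))
        results.append((case.get("name"), actual == case.get("expected")))
    return results

if __name__ == "__main__":
    for name, passed in run_data_driven_tests(TEST_DATA):
        print(f"{name}: {'PASS' if passed else 'FAIL'}")
```

The point of the sketch is the dependency it makes visible: the loop is trivial, so the quality of the whole suite rests entirely on the quality of the `<case>` data.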
I encountered this on a large-scale web service implementation project involving complex business processes, where all test coverage information was gathered in spreadsheets. This caused great confusion within the team, developers and business analysts alike, and was an early sign that we were heading towards the compromised automated testing described above.
To rectify this, we freed the existing test model from its spreadsheet cells and moved to a visual representation of it. The effect was dramatic: we could identify gaps and duplicates at a glance, and test creation became very straightforward because the model reflected the complexity of the project. The improved visibility of the bigger picture also allowed us to highlight areas of concern for management purposes, and the model has proven very flexible in the face of changing requirements.
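The gap-and-duplicate check that a visual model makes obvious can also be sketched programmatically: treat the required coverage as the cross product of business processes and input conditions, then compare it with the cases actually recorded. The process and condition names below are hypothetical placeholders, as is the recorded-case list.

```python
from collections import Counter
from itertools import product

# Hypothetical coverage model: the full test space is the cross product
# of business processes and input conditions (names are illustrative).
PROCESSES = ["create-order", "cancel-order"]
CONDITIONS = ["valid", "invalid", "boundary"]

# Test cases as recorded, e.g. lifted out of spreadsheet rows.
RECORDED = [
    ("create-order", "valid"),
    ("create-order", "valid"),      # accidental duplicate
    ("create-order", "invalid"),
    ("cancel-order", "boundary"),
]

def analyse_coverage(recorded):
    """Return (gaps, duplicates) relative to the required coverage space."""
    required = set(product(PROCESSES, CONDITIONS))
    counts = Counter(recorded)
    gaps = sorted(required - set(counts))
    duplicates = sorted(case for case, n in counts.items() if n > 1)
    return gaps, duplicates
```

In a spreadsheet, the duplicate row and the three missing combinations are easy to miss; in a model, whether drawn or computed like this, they surface immediately.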
In conclusion, the visual modelling of test coverage has allowed us to build a very healthy set of data on which the web service automation test suite is now based.