Integration testing is latest WeTest focus


23 September 2013

The latest WeTest workshop in Wellington was presented by James Hailstone and focused on integration testing.

Participants at WeTest on 4 September were treated to a workshop on integration testing covering test environment planning, useful tools for analysing problems, and web services testing.

The presentation initially focused on test environment planning – specifically, the starting point of system testing with ‘single’ stack environments, moving to the end game of system integration testing (SIT) with ‘multi’ stack environments. What actually constitutes a test environment? What does the Project Manager need to know when sourcing the components for the test team? Quite often, people in the project team regard ‘test environment’ as just two words… something that can be magically conjured up. But the reality is far removed from this, with many dependencies and constraints that only become known over time. The sooner the project team uncovers these issues, the sooner a corrective course of action can be taken to move the project forward.

The presentation then touched on common issues that can crop up – blocked pipes, messages or data dropped on the floor, misconfiguration and so on – followed swiftly by the system monitoring tools used to analyse such events. Examples mentioned included using PuTTY to ‘tail’ logs, XMLSpy for XML validation and ASAP Utilities for Excel/CSV manipulation.
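
For anyone without PuTTY to hand, the ‘tail’ idea is simple enough to sketch. Below is a minimal, hypothetical Python equivalent of tailing a log while tests run – the log path and the ‘ERROR’ marker are assumptions for illustration:

    # A minimal Python equivalent of 'tail -f' for following a log during a test run.
    # The log path is illustrative - point it at your own application log.
    import time

    def follow(path):
        with open(path) as log:
            log.seek(0, 2)                # jump to the end of the file
            while True:
                line = log.readline()
                if line:
                    yield line.rstrip()
                else:
                    time.sleep(0.5)       # wait for the application to write more

    for line in follow("/var/log/app/integration.log"):
        if "ERROR" in line:
            print(line)                   # surface errors as they happen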

404 screenshots do not help anybody! Developers sometimes need additional help to track down the cause of a defect, particularly when they have 20 other defects to investigate at the same time. Pulling the relevant error log information and attaching it to the defect can provide just enough detail for the developer to produce a fix – and, in turn, a speedier turnaround.

Web services were discussed in terms of the types of XML testing that can be done. Examples include testing with valid, invalid, incomplete and malformed XML, as well as – raised in the post-presentation discussion – message flow and replay testing and missing or out-of-sequence message testing. The key idea here is that the earlier this testing can be done ahead of the main GUI/functional testing, the earlier integration defects can be raised and put on the board to be fixed.

An example ‘bread recipe’ XML message was shown as a way of testing the error-handling capability of a piece of software. The test is unscripted and not tied to a requirement, but by feeding in XML messages such as this, we can work out how robust the system is. Will the system choke completely, leaving the pipe blocked? Or will it skip the message, or move it to one side and carry on processing other valid messages?

The XML message is shown below for you to use in your own testing if you wish:

<recipe name="bread" prep_time="5 mins" cook_time="3 hours">
    <title>Basic bread</title>
    <ingredient amount="8" unit="dL">Flour</ingredient>
    <ingredient amount="10" unit="grams">Yeast</ingredient>
    <ingredient amount="4" unit="dL" state="warm">Water</ingredient>
    <ingredient amount="1" unit="teaspoon">Salt</ingredient>
    <instructions easy="yes" hard="false">
        <step>Mix all ingredients together.</step>
        <step>Knead thoroughly.</step>
        <step>Cover with a cloth, and leave for one hour in warm room.</step>
        <step>Knead again.</step>
        <step>Place in a bread baking tin.</step>
        <step>Cover with a cloth, and leave for one hour in warm room.</step>
        <step>Bake in the oven at 180°C for 30 minutes.</step>
    </instructions>
</recipe>
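
To give a rough flavour of the robustness check described above, here is a small, hypothetical Python sketch (not from the presentation) that parses a cut-down recipe message and shows a deliberately malformed variant being rejected:

    # A quick sketch of malformed-XML testing using Python's standard library.
    # The malformed variant simply drops the closing tag to simulate truncation.
    import xml.etree.ElementTree as ET

    valid = """<recipe name="bread" prep_time="5 mins" cook_time="3 hours">
        <title>Basic bread</title>
        <ingredient amount="8" unit="dL">Flour</ingredient>
    </recipe>"""

    malformed = valid.replace("</recipe>", "")   # truncated: no closing tag

    for label, message in [("valid", valid), ("malformed", malformed)]:
        try:
            root = ET.fromstring(message)
            print(label, "parsed OK:", root.get("name"))
        except ET.ParseError as err:
            print(label, "rejected:", err)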

We then moved on to WSDL contracts and how we can use them as another source of information in conjunction with the more traditional requirements documents that are familiar to us. The key point here is that web service contracts are authored by developers and describe how the system actually works. Information useful to testers held in these documents can include:

Field definitions

Field values

Server port configuration

Uncovering discrepancies between these documents and the requirements can yield defects before any actual coding takes place. This is called ‘static document analysis’ and is a very good way of finding defects ahead of the game. The sooner defects are on the table, the better it is for the project team, because developers, BAs and others can address them, we can retest them, and everyone inches that little bit closer to go-live!
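
As a hypothetical sketch of mining a WSDL for this kind of information, the example below uses the Python zeep library to dump a service’s operations, types and port details – the WSDL URL is a placeholder:

    # A sketch of static WSDL inspection using the third-party 'zeep' library
    # (pip install zeep). The WSDL URL is a placeholder - substitute your own.
    from zeep import Client

    client = Client("http://example.com/orders?wsdl")  # hypothetical endpoint
    client.wsdl.dump()  # prints bindings, operations, message types and ports

    # Field definitions and allowed values can then be compared by hand
    # against the requirements document to surface discrepancies early.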

Following the presentation, extensive discussion took place over a number of threads including:

Belief in the stub as a viable test method

Stubs are very useful at the outset of a project, where suitable systems are still to be built and defects can be found early. However, they are no replacement for the actual system, especially in the run-up to a go-live.

Stubbing can also be a good way of testing edge cases, particularly the performance of individual system components.
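
To give a flavour of how lightweight a stub can be, here is a minimal, hypothetical Python sketch (not from the discussion) that stands in for a downstream web service by returning a canned XML response:

    # A minimal stub for a downstream service, using only the standard library.
    # The port and response body are illustrative assumptions.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    CANNED = b'<response status="ok"><orderId>12345</orderId></response>'

    class StubHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            # drain the incoming request body, then always reply with the canned message
            self.rfile.read(int(self.headers.get("Content-Length", 0)))
            self.send_response(200)
            self.send_header("Content-Type", "application/xml")
            self.end_headers()
            self.wfile.write(CANNED)

    HTTPServer(("localhost", 8080), StubHandler).serve_forever()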

How do you predict serious defects?

Testing experience largely dictates how to predict when and where serious defects can appear. Unfortunately, as a tester, you don't win every battle with the project team to test a certain way. Some common themes definitely emerge, however, when the same technology is used on different projects (XML being one example).

Service virtualisation as enterprise service modelling, e.g. DPS

This can keep costs down on a project where the system needs an inbound/outbound connection but the actual service charges per transaction. However, you still need the real connections once system integration hits the project, to prove the system is capable. There can be a lot of system components in play – proxies, load balancers and web servers – that can't easily be replicated, and getting their configuration right with the web service connections is essential.

Developer psychology

Getting inside the head of a developer and seeing what makes him or her tick is a hard skill to master. Water cooler chats and informal discussions over a cup of tea can lead to a very productive atmosphere, and establishing some commonality (even talking about the weather or the weekend) can go a long way to improving testers’ relationships with their developers. Stating up front that the defects raised are no reflection on the quality of their work can also take the sting out of the process – they may then be more inclined to fix the issues rather than combat them.

Ping pong between developers and non-compliance

There can be two systems – system A and system B – with the defect raised applying to both. How do you know which one to fix? This can be tricky and depends on the requirement written at the time. If there is no requirement, then asking which system is easier, cheaper and quicker to fix is a good approach to resolving these kinds of issues. Ultimately, all a tester can do is inform the Project Manager of the situation at hand and give them the information they need to make a decision.

Tools

The tasty stuff! Everyone loves tools. The following tools were mentioned in the discussion:

SoapUI: This tool is free to use and well known in testing circles as the best all-round tool on the market for testing web services and XML messages against a system under development. It is also useful in a stub capacity, helping to catch integration defects earlier.

Squid and Fiddler: Squid is a caching proxy that lets you divert network traffic through your desktop machine – very useful when tracking down system errors at the network level. Fiddler lets you intercept and manipulate HTTP requests and responses – to trigger error responses, for example – and see how the system behaves under extreme conditions.
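
As a hypothetical sketch of routing test traffic through such a proxy for inspection – assuming Squid is listening on its default port 3128 – the snippet below uses the Python requests library:

    # Route test traffic through a local proxy (e.g. Squid on its default
    # port 3128) so requests and responses can be inspected in transit.
    # Uses the third-party 'requests' library (pip install requests).
    import requests

    proxies = {"http": "http://localhost:3128", "https": "http://localhost:3128"}
    response = requests.get("http://example.com/service/health", proxies=proxies)
    print(response.status_code, response.text[:200])  # first 200 chars of the body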

Tail for Windows: Lets testers get up close to serious logging output by following log files in real time as the system writes them. Defects that have logs attached aid the developer immensely and save time in fix analysis.

Custom tools: On any project, developers can build custom tools to help testers get through their workload more quickly – test data generation, bulk message sending/receiving, or many other automation tasks (a sketch of one such tool follows below). If you have a tester or developer with bash scripting skills on the team, all the better!
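
As a hypothetical flavour of such a custom tool, this sketch generates a batch of recipe-style messages and posts them to a system under test – the endpoint and message shape are assumptions:

    # A sketch of a custom bulk-message tool: generate N variations of the
    # recipe message and post them to the system under test. The endpoint is
    # a placeholder. Uses the third-party 'requests' library.
    import requests

    ENDPOINT = "http://localhost:8080/recipes"  # hypothetical system under test

    def make_message(n):
        return (f'<recipe name="bread-{n}" prep_time="5 mins" cook_time="3 hours">'
                f'<title>Basic bread #{n}</title></recipe>')

    for n in range(100):
        r = requests.post(ENDPOINT, data=make_message(n),
                          headers={"Content-Type": "application/xml"})
        print(n, r.status_code)  # a quick pass/fail view of the batch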
