The next stage of functional testing


20 May 2013 • Written by Adam Howard

Investigative and exploratory testing techniques are the next stage in the evolution of functional testing as a specialist field, says Adam Howard

Like most at Assurity, I work in testing. And yet, I’m sure I’m not alone in feeling some resentment towards the label ‘tester’ when it’s applied to my role. This isn’t because I don’t conduct software tests, or because I would prefer another role, but because the connotations of the word seem insufficient to accurately describe the service that I, and we as a company, provide.

Testing is a broad field with various specialities and distinct roles. For instance, a software vendor would be foolhardy to launch any system into live operation without first conducting specialised performance or security testing.

It seems to me, though, that functional testing is often regarded as the homogeneous blob left within testing once all of the specialised streams are separated out. I would argue instead that functional testing is a specialised field in itself, and that it has distinct specialisations of its own.

Let’s start by looking at the definition of a ‘test’ itself. Probably the most common understanding of a test is the ‘execution of a procedure designed to elicit a known response where the success or failure of the operation under test depends upon the observation of the expected result’.

This is the definition of testing that most closely aligns with the perception of functional testing, where a series of tests are developed to facilitate the observation of a known outcome that will prove or disprove a system’s competence in performing its desired function.

Central to the ability to conduct this sort of testing is the knowledge of all factors in the equation. To conduct testing in this manner, we have to know which inputs to feed into a system, as well as the outputs that such parameters should produce.
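
To make that concrete, here is a minimal sketch of a validation-style test in Python. The apply_discount function is hypothetical, invented purely to illustrate the shape of a known-input, known-output check; the essential point is that every value in the test is known before it runs.

    # A minimal validation-style test: known inputs, known expected outputs.
    # `apply_discount` is a hypothetical function, used purely for illustration.

    def apply_discount(price: float, percent: float) -> float:
        """Return the price after deducting a percentage discount."""
        return round(price * (1 - percent / 100), 2)

    def test_apply_discount_known_values():
        # Input (100.00, 10%) and expected output (90.00) are both known
        # in advance; the test passes only if the system reproduces them.
        assert apply_discount(100.00, 10) == 90.00

    def test_apply_discount_zero_percent():
        # Another fully specified case: no discount leaves the price unchanged.
        assert apply_discount(59.99, 0) == 59.99

Everything here is an invigilator's question with a known answer: the test can only confirm or deny what we already expect.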

This closely resembles the most commonly shared experience of the meaning of the word ‘test’ in our society – that of an academic test or exam to which we have all been subjected. In such tests, an examiner sets the questions in full knowledge of the answers. Whether you pass or fail depends on whether or not you can reproduce the desired solutions to a sufficient level of accuracy or detail.

This is a process that certainly holds value in software testing. To be able to repeatedly demonstrate that a system will correctly and consistently generate the expected and desired outcome when given a set of defined input parameters allows a vendor a measure of confidence in their software.

However, it shouldn’t be enough to give them full confidence in that software, or to constitute readiness for deployment. A test of this nature can only demonstrate that the subject knows answers that you, as the examiner, already know.

What about those answers which you don’t already possess? They cannot be tested this way because you have nothing to check the generated output against. This calls into question the aforementioned common understanding of the idea of testing. So how do we go about testing beyond the known or expected extents of a system?

For the answer, let’s return to our academic analogy. We have compared testing to sitting an exam, during which you’re tasked with reproducing known answers to questions that have already been solved. These are the kind of exams you will have sat right through school.

However, at university – particularly at post-graduate level – the measurement of academic merit changes. This is where students grapple with what we don’t already know, and where fresh research attempts to establish or extend unknown or partially known truths about our world.

Instead of exams, academic achievement at this level is attained through the completion of a thesis or dissertation that explores a defined topic. The student starts by setting forth a hypothesis, usually in the form of a statement or a question, and their research attempts to prove, disprove or answer it.

I would argue that a similar exercise is possible and crucial in software testing. Beyond simply testing whether or not what we know about the way in which a software system should behave remains true, we should be adding value and confidence by exploring the unknown elements of the software too.

A vendor can never truly understand or communicate their full system requirements, just as a developer can never be sure of all the ways in which their code can be executed. As such, when it comes to software, we are always working with unknowns.

Because of this, a key part of the functional aspect of testing should always be an investigation into those unknowns. Such investigation, just as with normal testing – or what we may now refer to as validation – can never provide exhaustive coverage and thus absolute confidence, but a carefully planned and executed investigation of targeted areas of a system can uncover incredibly valuable information.

For example, a social networking site such as Facebook may have a test designed to validate that you can crop a photo and use the selected portion of it as your profile picture. If your new mugshot displays as expected, then the test is considered to have passed.

However, an investigation into the consequences of such an action may identify unexpected defects. Perhaps the original photo has been replaced with the cropped version, or maybe setting a portion of the selected photo as your profile picture means all photos in that album are now publicly accessible.

These are not consequences that would necessarily be checked in a straightforward functional test of that action. But a responsible tester with an understanding of how the photo is stored and its accessibility managed, and a depth of knowledge of which other areas of the site such an action may affect, would be able to conduct such an investigation and find these problems.
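
To make the contrast concrete, the sketch below (in Python) sets the scripted validation alongside the investigative checks. The client object and every method on it – upload_photo, crop_photo, set_profile_picture, get_photo, get_album – are hypothetical stand-ins for whatever API such a site might expose; no real Facebook interface is implied.

    # All names below are hypothetical, invented purely for illustration.

    def test_crop_and_set_profile_picture(client):
        original = client.upload_photo("mugshot.jpg", album="Private")
        cropped = client.crop_photo(original.id, box=(0, 0, 200, 200))
        client.set_profile_picture(cropped.id)

        # The scripted validation: the single known, expected outcome.
        assert client.get_profile_picture().id == cropped.id

        # Investigative checks: side effects for which no expected
        # result was ever written down.

        # Was the original photo silently replaced by the crop?
        stored = client.get_photo(original.id)
        assert stored.dimensions == original.dimensions

        # Did promoting one photo quietly expose the whole album?
        assert client.get_album("Private").visibility == "private"

The first assertion is the pass/fail criterion of the traditional test; the remaining checks are the kind of questions an investigative tester poses precisely because no specification ever answered them.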

Such investigation requires a greater level of understanding of one’s field and subject matter – in the same way that a post-graduate university student would be expected to have a greater understanding of their subject than a school student in the same field.

Analysing a system sufficiently to plan and conduct such investigations requires both experience in testing – knowing where and how problems are likely to occur and manifest themselves – and an in-depth understanding of the system’s purpose, so that defects can be recognised where no expected result exists to compare against.

This kind of investigation is likely to identify a different set of problems or defects than those that would be identified by traditional validation.

As such, we have two distinct processes that are likely to return two different sets of information about a system’s ability to meet its users’ needs and be fit for purpose. These two sets of information are of equal importance, and we, as testers, are equally obligated to provide both.

This is an important distinction that allows us to continue the evolution of our craft. Vendors should be in a position where they can ask for investigative testing, as well as traditional validation. Testers should be aware that functional testing is not merely a stepping-stone en route to a technical specialisation or a leadership or management position, but a worthy and critical specialist discipline in its own right.

In a world of rapid development and enormously complex software systems that exist in a state of constant flux and evolution, traditional validation-based testing methods alone cannot provide the assurance needed for a system to be considered fit for purpose – and should no longer be expected to.

In the same way that testing has become an integral and valued part of any well-structured software development lifecycle, software investigation must now also be considered an integral and valued part of any well-structured testing strategy.
