
On Modelling

Jul 10, 2023

TLDR: Every time we test something we are testing from models. Modelling is a key skill for Software Testers. Errors cannot be identified without a model to compare them against. Quality Control cannot be conducted without a model.


“A map is not the territory it represents, but, if correct, it has a similar structure to the territory, which accounts for its usefulness…” A. Korzybski, Science and Sanity, 4th Ed., 1958, pp. 58-60 (quoted in: R. Bandler & J. Grinder, Patterns of the Hypnotic Techniques of Milton H. Erickson, M.D., Vol. 1, 1975, p. 181)

Human beings model the world. We learn by constructing models and then subjecting those models to tests in order to determine their validity. Human beings do this all the time. By the time humans reach adulthood they have constructed so many models that they are probably unaware of most of them, and may even be unaware of the modelling process itself.

When we are faced with something new which we don’t understand, a newfangled tap in the washroom for example, we will use the models that we have already developed which are similar to that situation; all our tap models. If they fail to make the tap work then we will use other models associated with the washroom. Having encountered hand driers with proximity sensors, we may wave our hands around to get the tap working. If we have been on a train or a plane then we may have encountered taps that work through foot pedals, and we would start to look for those. We have models for turning things on outside the washroom using buttons or levers, and we would try those too.

We use models and strategies all the time. We are experts at constructing models and strategies to apply to those models. We interact with the world through those models.

The software development process constructs software that works more often than it fails. This is in spite of us not doing what we are told is best practice, or even what we believe we should do: we don’t spend enough time on requirements, we don’t stabilise requirements, we don’t design, we don’t unit test, we don’t document, we don’t review. We often get away with it because modelling is a natural talent and software development is a process of modelling.

Unfortunately we also learn from experience, and if our experience of success includes not doing all these best-practice processes then that leads us to not do these processes again.

We have to understand what we do and what each step is for, in order to ascertain if we can miss them out in the situation we are in.

Modelling explicitly, and understanding our models, allows us to be pragmatic.

Modelling Software Development

Software development tries to create a product. It does this by engaging in a number of processes (requirements, design, coding, etc.). Each of these processes will create a model that will either be the final product or an input to a follow-on process.

There are constraints on the construction of a product: we need it in X days, it must cost no more than Y thousand, and we only have Z staff members. The skill of developing software is in the effective application of the strategies that have been learned, bearing in mind the constraints involved.

Modelling is a fundamental activity of the software development process.

To take the main processes from a minimal software life cycle:

  • Requirements
    • Requirements are modelled, possibly as text.
  • Design
    • The popular UML provides a range of diagrams: Class, Object, Component, Deployment, Use Case, Sequence, Collaboration, Statechart and Activity.
  • Coding
    • The program is modelled in code. We have a choice of language to model the system in, be it C++, Smalltalk or assembly. The code is a model. When we execute the system we need special programs, such as a source code debugger, to map that executable system back into the code model. (It is possible to formalise the design models above so that they are equivalent to a code model.)
  • Testing
    • Testing can use many of the development models and will apply strategies such as: loop once, loop twice, cover every statement, cover every predicate condition, cover every exception.
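As a minimal sketch of how these strategies translate into concrete test cases, consider a small hypothetical function (the function and its tests are illustrative, not from the original posts). Each coverage strategy named above maps directly to one or more inputs:

```python
# Hypothetical function under test: sums only the positive numbers in a list.
def sum_positives(values):
    total = 0
    for v in values:
        if v > 0:  # predicate condition inside the loop
            total += v
    return total

# Strategy "loop zero times": the loop body never executes.
assert sum_positives([]) == 0

# Strategy "loop once", covering both outcomes of the predicate condition.
assert sum_positives([5]) == 5
assert sum_positives([-5]) == 0

# Strategy "loop twice", with mixed predicate outcomes in one run.
assert sum_positives([3, -1]) == 3
```

Together these inputs also satisfy "cover every statement", because every line of the function is executed by at least one case.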

Each of the models produced in the development process is a refinement of a previous model, even if the previous model was never formally documented. A requirement is a refinement of the dreams and aims associated with the picture in the head of the person specifying the requirement.

Modelling For Process Improvement

The simplest way to improve a process is to analyse the errors that it lets slip through.

For every error not found by one of the previous quality control processes, ask: is there a strategy that, applied to one of the existing models, would have created a test case capable of identifying the error?

If the answer is yes then it may well have slipped through because timescales or staffing levels forced your hand and you simply didn’t have the time to apply that strategy to that model, or because the risk of not applying it was deemed low.

If the answer is no then we have to identify a model and a strategy that could have found it.

In both cases, assess the cost and time impact of adopting that model and strategy. The development process is one of trade-offs and compromises.

Modelling and Testing

In order to test a model we have to have some way of recognising the success or failure of a test; our testing has a goal, a reason for existing.

That reason is inherent in the model from which the testing is derived.

This means that it is difficult to use a model to test itself.

In software development this is typically not a problem: we rarely use the source code as the only model when testing the source code. We typically derive tests from a design model, a requirements model, a specific testing model, or even a model derived from the source code; essentially, any model which has the level of detail required for our testing.

Modelling is as fundamental an activity of testing as it is of development.

Test strategies are applied to models. This is for the purposes of test derivation, measurement of derivation and execution coverage, domain analysis, and risk analysis; the list includes almost every task that testers do.

Strategies typically evolve and are identified by thinking about errors that slipped through and identifying a strategy that could have found them.

Testing and Modelling using Exploration

Testing is exploration. In a mature testing organisation the expedition is well planned, and staffed with seasoned explorers. The planning will be done around a number of maps of the territory to be explored. The maps will show different levels of detail, because to show all the detail on one map would confuse the issue: one map will identify areas of population, another will provide seasonal rainfall statistics, and so on. Maps are very important. The explorer plans different routes through the maps to match the aims of the expedition. Perhaps they are trying to unearth hidden temples, and consequently will pick routes which take them through areas which are sparsely populated now, but in the past were densely populated. Effective exploration requires an understanding of the terrain to be explored.

Errors cannot be found without a model. Quality Control cannot be conducted without a model.

I have heard it said that “some testers never model” and “reviews are not conducted against a model”. These statements are false. In the absence of a defined and identifiable model, there will be an informal model, a model of understanding in the tester’s head.

A model is our understanding and knowledge of the system. The level of testing that can be done with no understanding and no knowledge of the system is zero.

Try this. Take a program whose purpose you don’t know. Make sure that the program presents all its information in a way that you cannot understand; if you don’t know Japanese, then test a Japanese program. If the information presented to you is obscure enough then you will find it impossible to build a model of it, and then you have no way to assess the correctness of any action. Remember that if you understand even the name of the program or its main purpose, then that is information that you will have assimilated into a model and will use during testing.

Note: you might make the application crash, and you probably know how to recognise an application crashing, even if you don’t speak the language. You spot problems like a crash because you have a model of crashing applications which is independent of any language model.

Reviews cannot be conducted without a model of whatever the thing being reviewed is supposed to represent. Review models are different from testing models.

A review will be conducted against a number of models:

  • The model of a well-formed document (does it have a title page? are the pages numbered?).
  • The syntax of the actual text.
  • The semantic model in each reviewer’s head of the items to be presented in which they have a vested interest.

There are at least as many informal models as there are people.

Modelling is a fundamental task in testing.

Quality Control is essentially the checking of a model against an implementation of that model.

A test is a specific situation with a predefined set of things to check against the model. The differences are errors, either in the model or the implementation.
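This idea can be sketched in a few lines of code. Here the model and the implementation are both tiny hypothetical pricing functions (invented for illustration); the test is a predefined set of situations, and any disagreement is an error in one side or the other:

```python
# A toy "model": our expected behaviour for a hypothetical discount rule.
# The model says: orders of 100 or more get 10% off; smaller orders do not.
def model_price(amount):
    return amount * 0.9 if amount >= 100 else amount

# The "implementation" under test, deliberately wrong at the boundary.
def implemented_price(amount):
    return amount * 0.9 if amount > 100 else amount

# A test: a predefined set of situations checked against the model.
# Each difference is an error, either in the model or the implementation.
def run_test(cases):
    return [a for a in cases if model_price(a) != implemented_price(a)]

errors = run_test([50, 99, 100, 101, 200])
print(errors)  # [100] - only the boundary case disagrees
```

Note that the check alone cannot say which side is wrong; deciding whether the model or the implementation is at fault is still a human judgement.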

Test Conditions as Modelling

Test Conditions are statements of compliance which testing will demonstrate to be either true or false. These are (in effect) test requirements.

Conditions serve different purposes. Some conditions will act as the audit reason for a particular test case, e.g. “The user must be able to create a flight.” The tester will create a test which creates a flight. Obviously there are more attributes to this case than that: what type of flight, what type of user, fully booked, partially booked, etc. These attributes are other types of conditions.

Some conditions are used to define a test’s attributes or preconditions. E.g. create flight of type local, create flight of type international.

Or are they….

This may be modelling that has not gone far enough.

The initial condition ‘create a flight’ is valid. When test conditions are our only modelling tool, we have to represent this as a condition. But it is also a program function (create flight), or an object method, or an entity event, or a business process. Consequently we should really have a model and a derivation strategy that says “there must be at least one test for each entity event” or “there must be at least one test for each object method”. In this case it is obvious that one test will not cover the condition, but with a rich model, such as an object or entity model, we have a list of properties or attributes, and these will have scoping variants (e.g. the attribute flightType has the variants international and local).

Basically, we use these context-rich models to give us the combination information that we require to construct test cases. Without this approach we will never know whether we have a valid or complete set of condition combinations.

Hierarchical models are appropriate for test grouping, e.g. tests related to business processes, program modules, program functions, etc. There is no reason why a test cannot be in more than one test grouping.
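A minimal sketch of that last point, using invented test and group names: if groupings are just sets of test identifiers, membership in several groups falls out naturally.

```python
# Hypothetical test groupings; every name here is illustrative.
groupings = {
    "business-process:booking": {"t_create_flight", "t_book_seat"},
    "program-module:flights":   {"t_create_flight", "t_delete_flight"},
}

# A single test appearing in more than one grouping is perfectly valid.
shared = groupings["business-process:booking"] & groupings["program-module:flights"]
print(shared)  # {'t_create_flight'}
```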

Hierarchical models are appropriate in derivation for hierarchical structures. (It is possible to list entities, attributes and events as a hierarchical structure, but this hides valid combination options and mixes ELH (entity life history) and ER (entity relationship) notations; we should really have a relationship section on the model.) For example:


  • Entity: Flight
  • Attribute: Type
  • Attribute: Start Airport
  • Event: Create
  • Event: Takeoff
  • Event: Land
  • Event: Delete
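To illustrate how such a rich model drives derivation, here is a sketch in which the Flight entity is given attribute variants and events, and a strategy of “at least one test for each entity event” is expanded across every combination of attribute variants. The variant values (and the airport codes) are invented for the example:

```python
from itertools import product

# A hypothetical rich model of the Flight entity: attributes with their
# scoping variants, plus the entity events.
flight_model = {
    "attributes": {
        "type": ["local", "international"],
        "start_airport": ["LHR", "JFK"],
    },
    "events": ["create", "takeoff", "land", "delete"],
}

# Derivation strategy: "at least one test for each entity event",
# expanded across every combination of attribute variants.
def derive_cases(model):
    names = list(model["attributes"])
    variant_lists = [model["attributes"][n] for n in names]
    return [
        {"event": event, **dict(zip(names, variants))}
        for event in model["events"]
        for variants in product(*variant_lists)
    ]

cases = derive_cases(flight_model)
print(len(cases))  # 4 events x 2 types x 2 airports = 16 cases
```

The point is not the sixteen cases themselves, but that the count and the coverage fall out of the model and the strategy, rather than having to be enumerated by hand as individual test conditions.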

Models for test derivation should be rich. This allows a derivation strategy to be created which can be used to gauge the completeness of the test products and the validity of the test products.

With rich models, test conditions become requirements which are used to check the completeness of the test derivation approach, rather than the audit reason for each test. That is, we don’t have to state ‘Create a flight’ as a test condition, because there is an entity event on Flight called create which we know we have to test, and it will apply to a variety of attributes on that entity. This holds unless there is no way to make the construction of test cases automatic with implicit cross-referencing of test conditions. Without this rich modelling, and without an implicit (or strategy-driven) approach to testing, a vast number of test conditions have to be created and maintained, and no guarantee of combination thoroughness can be achieved.

Error Guessing

Error Guessing is described in ‘Testing Computer Software’ by Cem Kaner [1]:

“For reasons that you can’t logically describe, you may suspect that a certain class of tests will crash the program. Trust your judgment and include the test.”

This quote suggests to me that there is an informal model in the tester’s head and that the subconscious is applying a strategy to the model which the tester is unaware of. The tester is only aware of the subconscious flagging the results of that check to the conscious as a nagging doubt.

If you do engage in error guessing then you should be aware that:

  • you have a model and an applicable strategy in your head that you are not using on the project, and possibly are not even aware of.
  • if your strategy does work then you should try to quantify it so that you can use it consistently.
  • if it doesn’t work then you should possibly change the model and strategy in your head.

[1] Testing Computer Software, Cem Kaner, Jack Falk, Hung Quoc Nguyen, 2nd Edition, 1993, International Thomson Computer Press

The above content was a slightly condensed version of five blog posts from 2002.
