In The Evil Tester Show podcast episode 8 I mentioned other people’s definitions of software testing and outlined my own. In this post I will dive a little deeper into those other definitions.
Note: this was originally written on 2019-03-05, but I accidentally left it in draft until a content review found it on 2020-06-21.
Posts in this “What is Software Testing?” series:
- What is Software Testing and Why do we Test?
- Some Definitions of Software Testing
- My Working Definition of Software Testing
A Definition That I Use
In my podcast episode I presented my current definition of testing as:
comparing models of the system with observed interactions of the system for the purpose of… evaluating the comparison and refining the model
This is a more specific version of the general “What is Testing?”, which I would phrase as:
“comparing models of the thing with the thing”
The key points for this are:
- a model that we have
- the thing we are ‘testing’
- a comparison process
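The three key points above can be sketched in code. This is a hypothetical illustration, not part of the definition: the “thing” here is a made-up function, and the “model” is simply our expectation of its behaviour.

```python
# A minimal sketch of "comparing models of the thing with the thing".
# Everything here is invented for illustration.

def the_thing(x):
    """The thing we are 'testing'."""
    return x * 2

def model(x):
    """The model that we have: we believe doubling equals adding to itself."""
    return x + x

def compare(inputs):
    """A comparison process: compare the model with observed interactions."""
    results = []
    for x in inputs:
        expected = model(x)      # what the model predicts
        observed = the_thing(x)  # what we actually observe
        results.append((x, expected, observed, expected == observed))
    return results

for x, expected, observed, match in compare([0, 1, -3, 10]):
    print(f"input={x} model={expected} observed={observed} match={match}")
```

When a comparison does not match, that is the interesting moment: either the thing is wrong, or the model needs refining.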
This was built, over years, by looking at definitions created by other people.
- Identifying what resonated
- Identifying what didn’t resonate
- Identifying what felt limiting
- Identifying what nuances I never wanted to lose
- Building on those definitions and changing the words, so that I could own and fully justify the definition, and avoid appeals to authority when defending it.
My comments below do not imply that the definitions I’m looking at are ‘wrong’. They are just not the ones that I use.
And these are not the only definitions out there. They are just some of the most quoted.
I’ve picked them to try and work back through the type of thought processes I used when looking at the definitions of others and building my own.
The definition I used is slightly more generic than:
ISTQB’s “Software testing is a way to assess the quality of the software and to reduce the risk of software failure in operation.”
The ISTQB’s definition talks about Software a lot. I tend to generalise to “System”, or the even more general “Thing”, because that focuses my attention on more than the deployed bits and bytes:
- interactions with the users
- interactions with other systems
- the deployed environments
- the process of deployment
I don’t think of it in terms of “assess the quality”. ‘Quality’ makes it all seem very Quality Assurance and Quality Control and Gatekeeper, etc. Sometimes I will want to:
- investigate risks
- look for errors of omission
- look for exploits
And all of those could be viewed as ‘quality’, but I’d prefer to avoid that word since it is ambiguous and often misinterpreted to mean that Quality then becomes the responsibility of Software Testing.
The aims of testing may not be to “reduce the risk of software failure in operation”: we might identify a risk and then choose to accept it, thereby not reducing it at all.
I appreciate that ‘assess’ can map onto ‘compare’, since you have to have a model of ‘quality’ in order to ‘assess’ whether the Software meets it, but I prefer not to use those words.
Cem Kaner’s Definition
“Software testing is an empirical, technical, investigation conducted to provide stakeholders with information about the quality of the product or service under test”
Again this talks about ‘the quality’. ‘Product or service’ is a little wider than Software, but I still prefer “System”.
“conducted to provide stakeholders with information” presupposes something about the reasons behind testing. That is less relevant to the definition I use. If I don’t have reasons to test, I won’t test. If I do have reasons to test then that doesn’t have to be part of my definition of testing.
This part of Cem Kaner’s definition does focus practitioners’ attention on the value of the testing we do, to make sure that it is a useful thing to do, but that is still independent of the act of Software Testing. Otherwise, if we don’t have stakeholders, or if they aren’t interested in the information, then even if we do the same activities, we would only define it as Software Testing if a stakeholder receives information.
I do think this communication is an important consideration when we do Software Testing. And I think it can exist independent of the definition. And so I don’t have that explicitly in my definition.
It would come to the fore when we consider “Why do we want to do Software Testing?”
Empirical implies experience and observation, and I have that in my “System” definition, but not in my higher-level generic definition. Empirical also implies experimentation, and that is a good thing to keep in mind.
I think I have probably abstracted this into the “comparing”, since we have many ways of comparing things: automatically, experimentally, theoretically, through review, etc.
Testing might be “Technical”, and it might not. It might be a simple comparison between a requirement “the title message must say ‘X’” and actually reading the title, which doesn’t seem very technical. But testing can be incredibly technical. I don’t want to restrict what the comparison process involves in my definition.
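The title comparison above could be sketched as a one-line check. This is a hypothetical illustration: the requirement text and the observed title are both invented, and in practice the title would be read from a page or window rather than hard-coded.

```python
# A deliberately simple, non-technical comparison: the requirement
# (our model) says the title must be 'X'; we compare that against
# what we observe. Both values here are invented for illustration.

REQUIREMENT_TITLE = "X"  # the model: what the requirement says

def observe_title():
    """Stand-in for actually reading the title from the system."""
    return "X"  # hypothetical observed value

def title_matches_requirement():
    return observe_title() == REQUIREMENT_TITLE

print(title_matches_requirement())
```

The comparison itself is trivial; the technical (or non-technical) work lives in how we observe the system and how rich our model of it is.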
“Empirical”, “Technical” and “Investigation” are good words to remind me that when I compare the model with the system I probably want some empirical comparisons, some technical comparisons, some investigative and exploratory comparisons.
I’ve simply gone more generic and abstract with my definition because that fits my modelling of the process better for me.
Why mention these other definitions?
I didn’t mention them to critique them or to say that they were wrong.
I have tried adopting these definitions before. These and other definitions.
None of them quite fit me and how I approached my work.
I have clearly built on them, and the lessons I learned from those definitions, but the definitions themselves don’t work as well for me.
I’m looking at them in more detail to try and point out that we need to create our own definitions that guide us, rather than adopting someone else’s definition and then ‘bolting’ stuff on.
Create your own definition that helps you focus.
Create your own definitions that you can own, and iterate over, as you continue to learn and adapt.