This blog post is a collation of micro-blog posts reflecting on slogans generated by The Evil Tester Sloganizer, and uploaded to LinkedIn, Facebook and Instagram. It covers topics such as: What does 'Completely' mean? What does the concept of "Good" mean on a project? Could it help if we thought Testing was all about automation?
The following slogans were all automatically generated by The Evil Tester Sloganizer. And I then thought about what they might mean, and posted my reflections on Instagram. I’ve collated them here because they might trigger thoughts for you.
“Be completely good to everyone!”
My Sloganizer threw me a short and very saccharine slogan “Be completely good to everyone!”
Is this a soft skills recommendation? Very often Soft Skills are associated with ‘getting along with people’ and ‘how to win friends and influence people’. And these are important topics.
There are usually discussions (disagreements, arguments, conflict) on projects as well. What does 'good' mean when there is a discussion?
What does ‘completely good’ mean? This is some sort of subjective quality statement.
And Quality is a relationship.
In this case it might be the relationship between the other person's view of our actions and our actions themselves. We don't have control over how they perceive and interpret our actions, so we are unlikely to achieve being 'completely good' to everyone.
Perhaps this is about ‘intent’. We can take responsibility for our intent. If our intent is to be ‘completely good’ then this might mean:
- treating everyone fairly
- ensuring people have the same opportunity to be heard
But everyone would have to decide for themselves what this means.
D&D has multiple concepts of good in its Alignment system: Chaotic Good, Lawful Good, Neutral Good.
Would Completely mean “all of these” or “one of these, fully adopted”?
One of the things I find interesting about Testing is identifying ambiguity in as many ways, places and forms as possible. Ambiguity leads me to questions and often leads me to assumptions.

Completely Good sounds perfectly 'nice' and sensible, but it is too generic and ambiguous to be actionable. If we ever find ourselves working with requirements or edicts like this, then identifying and questioning the ambiguity might be more useful than attempting to implement it.
“Testing is all about automation! lols.”
Clearly Testing is not all about automation. To say “Testing is all about automation” or “Automate all the testing” is a provocation.
And it’s a provocation on multiple levels.
Testing is not “all” about anything except Testing. The only word that we can use to describe the totality of Testing is Testing. Any other word that we add for the comparison is a constraint.
What I find interesting about automation is that when we think of automation from a testing perspective, we are viewing a subset of testing, or a nuance of our approach, or one perspective on automation.

Automated execution of flows through a system, exercising the system functionality and asserting on results, is not the totality of Automation. Automation also includes:
- use of tools
- deploying applications
- continuous integration
- and more…
Automation is independent of Testing. We do not need to say, or consider “Testing” when discussing Automation or Automating. Automation is a separate concept.
We can harness Automation as part of our approach to, or strategy for, testing. And when we do, we have nuanced “Automation” because the Automation that we harness is not the full scope of what Automation offers.
This limited perspective on Automation, through the lens of supporting testing, might lead us to equate Automation with Testing, because we are only looking at testing-specific uses of automation. It would also mean restricting Testing to the scope of things that we can automate, which would define Testing in terms of Automating.
Testing can only be “all about” Something, if we redefine Testing to mean the same as “Something”.
I can find value in the provocation to think about all the things we might be able to automate to support our testing process.
- we might identify tactics that support us in increasing coverage easily
- we might identify approaches that give us additional insight into the application and allow us to increase the scope of what we might test (e.g. tools to observe more deeply into the application and identify new sources of risk)
- we might identify portions of testing that we can’t automate, which we might not have been doing and can then expand our testing approach.
By thinking through the notion that “Testing is all about automation” we can identify many activities that do not fit this view and can gain a better understanding of what we think Testing might be.
“Be unpleasant to the people who live under the stairs that only appear when you are looking but not when anyone else is looking!”
One of the interesting things about Testing is when… you spot something that no-one else sees. When that happens it is worth unpacking, to try and understand what is going on.
- Was it just luck? How can you get more lucky?
- Did you happen to look in a different direction than other people? Was something misdirecting those other people?
- Did you deliberately try to observe something different when following a familiar path?
- Were you observing a different level in the technology stack? e.g. looking at HTTP requests in addition to the GUI
- Do you have a different configuration? What other configurations are likely to be encountered in the world?
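One way to make the "different level in the technology stack" point concrete: the protocol layer often carries information that the GUI never renders. The sketch below is a minimal, hypothetical illustration in Python (the raw response and header names are invented, not from any real application) of separating what a user would see from what an observer of the HTTP layer would see.

```python
# A minimal sketch using a hypothetical, hard-coded HTTP response to show how
# observing the protocol layer can reveal risk signals the GUI never renders.

RAW_RESPONSE = (
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: text/html\r\n"
    "X-Powered-By: LegacyFramework/1.2\r\n"  # invisible in the GUI, visible here
    "Deprecation: true\r\n"
    "\r\n"
    "<html><body>Welcome!</body></html>"
)

def split_response(raw: str):
    """Separate what the GUI shows (the body) from what it hides (status + headers)."""
    head, _, body = raw.partition("\r\n\r\n")
    status_line, *header_lines = head.split("\r\n")
    headers = dict(line.split(": ", 1) for line in header_lines)
    return status_line, headers, body

status, headers, body = split_response(RAW_RESPONSE)
print("GUI-level view:", body)            # all that a user looking at the page sees
print("Protocol-level view:", status)     # extra observations, extra sources of risk
print("Hidden headers:", headers)
```

In practice a proxy or the browser's developer tools would play the role of `split_response` here; the point is simply that two testers looking at the "same" page can be observing very different amounts of information.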
I see a danger in accepting the situation without analysis, and assuming “I’m a bug magnet” or “This always happens to me because I’m a Tester”. Our habitual process might work well, in which case we can analyse it to deliberately improve, and then we can communicate what we do to help other people on our team expand their options.
I like it when this happens, and I do the analysis.

But it is also important to remember that there are things in the system that we are not spotting but that someone else would. What do we have to do to look at the system the way that they do?
Note: One of the reasons I created the simple games and apps in my Compendium of Testing Apps was to encourage looking at systems in different ways when practicing our testing https://www.eviltester.com/page/tools/compendiumtesting/
“Simply do not fully utilise the Testing Quadrilateral then you’ll fail and fail again”
There are so many diagrams that are presented as “Must Use Models” in the world of Software Development.
There is a difference between a model and a diagram.
The "Test Automation Pyramid" is a model. But most of the time it is presented as a diagram.
Many different diagrams can be used to visualise the same model. They might not be as unique and different as the presenter thinks.
A diagram in isolation doesn't mean anything, because what it 'means' is the model it represents.

A model is a subset of a full representation, so we also have to explain what it doesn't mean, so that people understand the limits and risks of the model.
Of course, we don’t have to explain that. We can leave that as an exercise for the reader.
Therefore, every time we see a diagram, or hear someone summarise a new 'model'… build the model anew for yourself.

Deconstruct it. Reconstruct it. Interpret the diagram in as many ways as you can. What is missing? What does it imply? What if this word was used instead of that word? How many different ways can you interpret it?

And if it is presented to you as a model, then interpret it in as many ways as you can. What does it miss out? What has it generalised? Over-generalised? Excluded?
Your interpretation builds a unique model that you can use and understand.
My book “Dear Evil Tester” uses provocative questions and answers to help you build your own model of your understanding of Software Testing.