4 minute read - Tools

Q: when do we prefer manual testing over automated testing? A: Hmmm....

Oct 25, 2016

TLDR; As a simple rule, “I might not automate, if I have to create”. But I have done the opposite. I have automated when others would not. And I have waded in when others would have automated. To answer, we need to question our situation and build a deeper model of our actual aims, options and capabilities.

Q: when do we prefer manual testing over automated testing?

A: I have a problem providing a simple answer to this question…

And not just because “it depends”.

Also because “that’s not the model I use”. I don’t frame the question like that.

I don’t think in terms of “manual testing” and “automated testing”.

I think in terms of “testing”, “interacting directly with the system”, “automating predefined tasks, actions and paths”, “using tools”, etc.

Testing

If I’m testing then:

  • do tools already exist to help?
  • do I have abstractions that cover those areas?
  • do I have to create tools?
  • do libraries already exist?

I might automate heavily if I already have abstractions.

My testing might involve lots of automating if tools already exist.

I might not automate, if I have to create. Because if I have to create, the testing has to wait.

Don’t Automate Under These Circumstances…

So what about scenarios where you might expect “don’t automate” as my default answer:

  • something we need to do rarely
  • usability and “look and feel”
  • experimental features - surely we wouldn’t automate these
  • something still under development and changes frequently

Many people conduct stress and performance testing rarely, yet they generally automate it.

“I might not automate, if I have to create.”

Usability and “look and feel” testing have tools to detect standards non-compliance. I might very well use those. So I might automate portions of this type of testing early.
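
For instance, a minimal sketch of such a standards check, assuming the jsoup HTML parser and a hypothetical page fragment, that flags images missing the alt text accessibility standards such as WCAG expect:

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class AltTextCheck {
    public static void main(String[] args) {
        // Hypothetical page fragment; in practice you would fetch the real page.
        String html = "<html><body>"
                + "<img src='logo.png' alt='Company logo'>"
                + "<img src='banner.png'>"
                + "</body></html>";

        Document doc = Jsoup.parse(html);

        // Standards expect a text alternative for images;
        // flag any <img> without an alt attribute.
        for (Element img : doc.select("img:not([alt])")) {
            System.out.println("Missing alt attribute: " + img.outerHtml());
        }
    }
}
```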

Experimental features might fit into an existing process, and so we might already have a lot of the scaffolding we need to support minimal and speedy automating, in which case I might automate those.

I have automated features that were changing frequently, because we were testing the same ‘path’ through the application many, many times (every time it changed), we often found problems (every single release), and the effort of testing it by interacting with it was waste. So we didn’t interact with it until the automated execution could pass reliably. But that also depends on ‘what’ in the implementation changes, and how that change impacts the chosen approach to automating it.
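
One way that impact shows up: if the automated checks talk to the application through an abstraction, a change in the implementation means maintaining one class rather than every test. A sketch of that idea, assuming Selenium WebDriver and hypothetical element ids:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// A page-object style abstraction: the frequently changing locator details
// live in one place, so each release's UI changes mean edits here,
// not in every test that exercises this path.
public class CheckoutPage {

    private final WebDriver driver;

    public CheckoutPage(WebDriver driver) {
        this.driver = driver;
    }

    // Hypothetical element ids, for illustration only.
    public CheckoutPage applyPromoCode(String code) {
        driver.findElement(By.id("promo-code")).sendKeys(code);
        driver.findElement(By.id("apply-promo")).click();
        return this;
    }

    public String orderTotal() {
        return driver.findElement(By.id("order-total")).getText();
    }
}
```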

What’s the real question?

If I have to ‘create’ stuff to help me automate as part of my process, then I’m likely to weigh up the ‘cost’ of automating vs just getting on and interacting with it.

Are the assumptions behind the scenarios true?

  • Scenario: rare execution
  • Assumption: we would spend a lot of time automating something that we would rarely use

But perhaps we ‘rarely’ execute it because of the time and energy involved in setting up the environment and data, rather than the tooling. Perhaps we need to automate more, to allow us to ‘test’ it more frequently?
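
A sketch of that idea: script the environment and data setup so the ‘rare’ test becomes cheap to run. This assumes an in-memory H2 database reached via JDBC; the schema and data are hypothetical:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class TestDataSetup {
    public static void main(String[] args) throws Exception {
        // Hypothetical schema and data; an in-memory H2 database keeps the
        // sketch self-contained. Scripting this setup is what makes a
        // "rarely executed" test cheap enough to run often.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:h2:mem:testdb", "sa", "");
             Statement stmt = conn.createStatement()) {
            stmt.execute("CREATE TABLE accounts (id INT PRIMARY KEY, balance DECIMAL(10,2))");
            stmt.execute("INSERT INTO accounts VALUES (1, 250.00), (2, 0.00)");
            System.out.println("Environment seeded and ready to test");
        }
    }
}
```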

  • Scenario: experimental feature
  • Assumption: we write code to automate it that we throw away

But perhaps the feature is so experimental that the easiest way to interact with it is by automating it; otherwise we have to spend a lot of time building the scaffolding (a GUI) required for a user to interact with it.
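
A minimal sketch of that situation, assuming the experimental feature exposes a hypothetical HTTP endpoint and using Java’s built-in HttpClient:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ExperimentalFeatureProbe {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Hypothetical endpoint: the experimental feature has an API but no
        // GUI yet, so an automated call is the cheapest way to interact.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/experimental/recommendations?userId=42"))
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.statusCode() + " " + response.body());
    }
}
```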

  • Scenario: usability or “look and feel”
  • Assumption: this is subjective

But perhaps standards exist to guide us, and if we ignore those and all the existing tooling, then we create something so subjective that no-one else can use it.

  • Scenario: frequent change
  • Assumption: automating a scenario always takes longer than interacting with it, and maintenance in the face of change takes too long

But perhaps the GUI changes infrequently and the change takes place in the business rules behind the GUI. So really we are dealing with a single path, but lots of data. And perhaps we can randomize the data, and implement some generic ‘model-based’ rules to check results.
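
A minimal sketch of that approach: generate random order values and compare the system’s answer against an independent model of the business rule. Both functions here are hypothetical stand-ins; in reality one of them would drive the application itself:

```java
import java.util.Random;

public class RandomisedModelCheck {

    // Hypothetical business rule in the system under test:
    // 10% discount on orders of 100 or more.
    static double systemUnderTest(double orderValue) {
        return orderValue >= 100 ? orderValue * 0.9 : orderValue;
    }

    // Independent model of the same rule, used as the oracle.
    static double model(double orderValue) {
        return orderValue >= 100 ? orderValue * 0.9 : orderValue;
    }

    public static void main(String[] args) {
        Random random = new Random();
        for (int i = 0; i < 1000; i++) {
            double orderValue = random.nextDouble() * 200;
            double expected = model(orderValue);
            double actual = systemUnderTest(orderValue);
            if (Math.abs(expected - actual) > 0.0001) {
                System.out.printf("Mismatch for %.2f: expected %.2f, got %.2f%n",
                        orderValue, expected, actual);
            }
        }
        System.out.println("1000 randomised checks run against the model");
    }
}
```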

Scenarios have more nuances in the real world

Very often we discuss these ‘when should we automate’ questions as hypothetical scenarios. And if we do, we need to drill into the scenario in more detail in order to come up with an answer. One of the main benefits of the ‘hypothetically…’ exercise is asking questions and fleshing out a model to better explore the scenario.

  • I have automated when other people said I should not, and it saved time.
  • I have interacted manually using tools, when others said we should automate, and we found problems we would never have found if we had taken the ‘automate’ path so early.
  • I have manually engaged in stress and performance testing.

Sometimes, conducting a contrary experiment will provide you with ‘examples’ that make it harder to answer these scenarios without extensive model building.