

Test Management Summit 2015

I presented a session on "Successful Test Automation for Managers" at the Test Management Summit in London in April 2015. We discussed automation from a management perspective.


I attended the Test Management Summit on 29th April 2015 to present a session on Successful Test Automation for Managers.

Illustration by Herb Lebowitz from the cover of “Automation” by Carl Dreher

The focus of this session was a discussion about automation from a management perspective. This would allow me to draw upon the experiences of the managers in the room and create a lively discussion.

Part of the aim was to allow managers with less technical experience to draw upon the experiences of those with more.

I also wanted to understand the concerns that managers have with automation, and to allow those with more automation experience to help address those concerns and issues.


What do you do if you’re a non-technical manager and have to manage test automation? How do you know if your automation is working? How do you know if you (the manager) are getting in the way? Perhaps you should let the development manager deal with it. After all, automation is programming. What support do automation staff need from their management team? If you are a technical test manager, what advice would you give to a non-technical test manager?

Alan Richardson has been a technical tester working for non-technical managers, so he knows how managers and their ‘needs’ get in the way. Alan has also managed automation teams and knows what it takes to build test automation that works.

Bring your experience, and tales of success or woe, and we can learn lessons about managing test automation, and test automators.

Normally, when I present on automation I describe the experiences from a technical point of view.

In this session I wanted to discuss automation from a management point of view, and in particular to tackle the concerns of managers who may not have the technical knowledge to fully participate in the construction of automation.

Because of the nature of the Summit - we aim for discussion, but have to be prepared to lead the session as a fallback plan - I created quite a lot of slides: a set that I talked through to prepare the discussion and trigger thoughts and questions in the participants, and a set of ‘emergency’ slides, presented as mindmaps, which I could use to lead the discussion if necessary.

Fortunately the session had a lively crowd who were fully prepared to participate and contribute their questions, issues, concerns and experiences. This allowed me to act as facilitator and summariser, adding points that built on the discussion items raised and let the conversation flow into the next topic.



What questions do Test Managers have regarding automation? What concerns and issues do non-technical managers face with test automation?

I presented at the TMS to try and find out.

I’ve been an automator and a test manager so I’ve experienced both sides of the coin, but I’ve always had the benefit of technical and programming knowledge and experience. So while I had an initial list of areas that I thought current managers working, or about to work, with automation would want to discuss, I wanted to see what would happen on the day.

At this point I’m working from memory without notes, because I was facilitating a discussion, so let’s see what I remember…

The Test Management Summit is a discussion-based event, with the emphasis on the presenter triggering and facilitating discussion rather than doing all the talking. As a result, when you look at the slides, the first half were designed for me to talk over, and the remainder were held in reserve in case the discussion faltered.

In the event, I talked for 15 mins, and the discussion was lively and proceeded without requiring my reserve slides.

I made a distinction between the use of tools and automation, with automation having the characteristic of requiring no manual intervention during its execution.

I emphasized the need to have an understanding of how the automation works. If you don’t understand how it works, then you have made your success dependent on someone who does. This applies across the board:

  • If you use a commercial tool then you rely on the support team from the supplier
  • If you use open-source then you rely on the internet and the informal development team, but have the opportunity to involve yourself, if you have the skills
  • Similarly, if you bring frameworks or libraries into your automation and you don’t have the skill sets to read, and possibly amend, the code, then you limit how far you can understand or fix them.

Regardless, we need some way of deciding whether or not the automation approach supports our test approach.

As to “Who can use automation?”: if you don’t have the skills to move beyond automation as ‘magic’, then you can only use it if an interface has been added to the automation which allows your skill level to engage with it. For example, automation with one button to start it can be triggered by anyone, but if that is the level of your understanding then you won’t have the ability to intervene when something goes wrong.

This may, or may not, be managed as a risk in your organization.

Many people do not think through what they want from automation to identify if their approach to automation can meet their expectations, or to think through multiple approaches to achieving their aims. Too often we default to: “automate all the scripts”. I recommend thinking through your aims, and identifying options. This will also help you work out if your automation ‘works’ i.e. helps you meet your aims.

If you read the slides then you’ll see that I used the Carl Dreher book “Automation” as one of the references for the talk.

Carl Dreher’s book is out of print, but I like it, as an historical overview of various forms of automation from mechanical devices through to cybernetics and computing. There are various quirks in language that I enjoyed, particularly the use of “Automationist” rather than “Automator” to describe someone who works on automation.

I also referenced “The Art of Leadership” by Captain S.W. Roskill. Another out of print book, but I can relate to the early sections in Roskill’s book where he describes two traits of military leaders that we don’t always expect from leaders in business:

  • An expectation that military leaders will train and teach as they gain experience
  • An expectation that military leaders need technical skills to ’lead’ from the front and gain the respect of their troops

In order to balance this out for the business world, I think we make a distinction between ‘managers’ and ’leaders’. Therefore managers may need to engage with automation in a different way than leaders who exhibit the traits above.

We discussed:

  • Should we convert manual scripts to automation scripts?
  • What would you do in the event of …
    • Single staff member having all the knowledge
    • Immediate “No - can’t be done” to requests for work
    • Staff who are ‘doing their job’ but are finished in half the day
  • How do you know your automation is working?
  • How do you recruit technical staff?
  • What should a test manager do to improve their technical knowledge?
  • What should a test programme manager do to improve their technical knowledge?

Much of the discussion will remain private because it was provided by the participants themselves.

As add-on points during the discussion, I remember mentioning:

  • Use multiple layers of abstraction to support:
    • different “Who can use?” levels
    • use of automation in different ways, i.e. you can use the automation abstraction as a tool to support your testing, rather than only creating automation that runs without manual intervention
  • Recruit with hands on exercises, i.e. pairing, discussion
  • Automating manual scripts does not lead to effective automation because:
    • it does not harness the ability for data-driven automation (varying non-path-invariant data)
    • it does not harness the ability to use random data to explore assumptions inherent in equivalence class analysis
    • building on a point made by Adam Knight, it does not encourage ‘alternative’ thinking about how automation can help test the system in ways that the script analysis did not identify, e.g. API testing, ad hoc automation to support defect investigation
  • Coaching without technical knowledge
    • Ask open questions of the team (i.e. questions without yes/no answers)
    • Ask “How can we…” questions
  • Encourage alternative investigations, but constrain them to time or scope

Hopefully the slide deck will have some points that encourage you to evaluate your own use of automation in your test process, and try something different, or try to investigate in more detail.
