Do you want to improve the quality of your Test Automation code?

Code reviews can quickly help identify improvements to your Test Automation and are an effective way to share lessons learned from decades of experience.

I can perform remote or onsite code reviews of your Automated Execution code.

Experience-based code reviews provide a fast track for avoiding problematic approaches that lead to technical debt and to brittle, hard-to-change Test Automation.

You should be able to continue to improve and evolve your automated execution code. You don’t have to throw it away and start again.

The Code Review service has two main parts:

  • An initial deep dive into the code and execution logs to identify recommendations
  • A longer engagement to periodically review the code, report on new issues and changes

I typically work with Java, WebDriver, REST APIs, RestAssured, but I have also performed code reviews for other languages and technologies.

The deep dive can be performed without the longer engagement.

The deep dive can also be performed on site, working as part of the team, to discuss and agree improvements and to experiment with refactoring, creating a set of patterns that the team are comfortable with and understand the value of.

The ongoing engagement is a service where I am engaged for an agreed period of months and, at agreed points, periodically review the code base as it evolves. This can also involve online video sessions, or recorded videos, to discuss the findings.

Contact me if this is a service you are interested in.

Practices and Review Attributes

To provide some idea of how I operate, I have documented below some of the practices I use and some of the review attributes that I look for. This may also be useful to you if you want to review the code yourself.

Remote code reviews

  • review in small chunks, incrementally proceed through the code
  • repeat the code review until the code base is understood (otherwise changes might adversely affect the code)
  • abstraction layers should support readability, maintainability and ease of code creation
  • use tools e.g. CheckStyle, FindBugs
  • build standards and checklists as reviews mature
  • code formatting etc. should be enforced via IDE stylesheets and checked by static analysis tools

Periodic Group Reviews

Periodic group reviews can help provide a shared understanding of:

  • the code, the tests and the abstractions
  • the reasons for the code being the way it is
  • the technical debt, so that existing bad patterns are not replicated and are removed when found
  • new patterns and approaches that the team want to implement

Group reviews can be performed remotely, but are often a good basis for a Mobbing exercise.

Simple practices to look for with automated execution code

  • Review execution as well as code: are there intermittent failures?
  • Can tests be run independently? Or do they have to be run as a suite or depend on other tests?
  • Do they depend on data or create data? Creating data makes them easier to run in multiple environments and in isolation (see the first sketch after this list).
  • Does each test have one main focus? This isn’t an absolute, but it can make test failures easier to debug and isolate.
  • Assertions should only exist in @Test methods and not in the abstraction layers - abstraction layers may throw exceptions to halt test execution.
  • Abstraction classes should be small and not have too many methods. When too large, they are hard to maintain, and it is too easy to keep adding methods and lose the point of the class.
  • Are the Test classes well organised into packages to help coverage reviews?
  • Are the @Test methods named well to support understanding of the purpose of the test?
  • Do support classes which have no interface with the external world have unit tests? e.g. data generators
  • Has the code been kept as simple as possible? Automated execution code has to handle more use cases than application code, and keeping it simple helps: e.g. don’t use a library for a simple method, avoid annotations, avoid dependency injection frameworks, and favour passing in interfaces as parameters (see the second sketch after this list).
  • Does the code have a mix of styles and patterns as it evolved or has it stabilised into a clear set of patterns?
  • Is the code understood by everyone using it?
  • Is the code unique to the project, or have external libraries been pasted in instead of added as a dependency?
  • Is dependency management used?
  • Is everything in version control such that a simple checkout allows execution or is there more setup required?
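
For example, here is a minimal JUnit 5 sketch of a test that creates its own data, has a single main focus, and keeps assertions out of the abstraction layer; the UserApi and User classes are hypothetical stand-ins for your own abstractions:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

public class RenameUserTest {

    @Test
    public void canRenameAUser() {
        // the test creates the data it needs, so it can run
        // independently, in isolation, and in any environment
        UserApi api = new UserApi();
        User user = api.createUser("initial-name");

        api.renameUser(user.getId(), "renamed-user");

        // assertions live only in the @Test method
        assertEquals("renamed-user", api.getUser(user.getId()).getName());
    }
}

// abstraction layer: no assertions, but it may throw an exception
// to halt execution when the test cannot sensibly continue
class UserApi {

    User createUser(String name) {
        User created = callApplicationToCreate(name);
        if (created == null) {
            throw new IllegalStateException("user not created, cannot continue");
        }
        return created;
    }

    void renameUser(String id, String newName) {
        // hypothetical: call the application API to rename the user
    }

    User getUser(String id) {
        // hypothetical: call the application API to fetch the user
        return new User(id, "renamed-user");
    }

    private User callApplicationToCreate(String name) {
        // hypothetical: call the application API to create a user
        return new User("user-1", name);
    }
}

class User {
    private final String id;
    private final String name;

    User(String id, String name) {
        this.id = id;
        this.name = name;
    }

    String getId() { return id; }
    String getName() { return name; }
}
```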

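As a second sketch, favouring interfaces passed as parameters over annotations and dependency injection frameworks; DataGenerator and UserCreator are hypothetical names:

```java
// the dependency is expressed as a plain interface
interface DataGenerator {
    String uniqueUserName();
}

// the support class receives its collaborator as a constructor parameter,
// with no annotations or dependency injection framework required
class UserCreator {

    private final DataGenerator generator;

    UserCreator(DataGenerator generator) {
        this.generator = generator;
    }

    String newUserName() {
        return generator.uniqueUserName();
    }
}

public class InterfaceAsParameterExample {
    public static void main(String[] args) {
        // any implementation can be passed in, e.g. a simple lambda,
        // which also makes UserCreator trivial to unit test
        UserCreator creator = new UserCreator(
                () -> "user-" + System.currentTimeMillis());
        System.out.println(creator.newUserName());
    }
}
```
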
Simple practices to look for when using WebDriver

  • Is WebDriver itself used in @Test methods? If so, is the test completely unique? Most WebDriver usage should live in an abstraction.
  • Do the abstractions support literal code completion at the @Test level? This can aid test professionals, who are not always professional programmers, in writing test code.
  • Look for abstraction layers like Page Objects, Component Objects, Synchronisation Objects, User Objects etc.
  • Can the abstractions be re-used as libraries e.g. to support Exploratory Testing, or in Cucumber as well as JUnit? Try to avoid ‘frameworks’ and mandatory Base Test classes as these can make abstractions harder to re-use.
  • Locators should exist in an abstraction, unless this is the only place that part of the application is automated.
  • Waits should synchronise on application state, not on hard-coded times or implicit waits (see the Page Object sketch after this list).
  • Is it easy to switch between different browsers and cloud servers? Ideally this can be done in multiple ways, e.g. environment variables, method calls, property files, JVM properties (see the browser factory sketch after this list).
  • Is it possible to gain access to the underlying WebDriver object, or is it imprisoned in the ‘framework’? If you can’t gain access to it, then you may not be able to create ad hoc automated execution, e.g. to support exploratory testing.
  • Are locators clean and simple? If not then the application might need to change to support automated execution via the GUI.
  • Has ‘performance’ been prioritised over readability, ease of code creation and maintainability? Automated execution code through the GUI rarely has to focus on performance; maintainability and ease of creation are more important.
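
To illustrate the abstraction and synchronisation points above, here is a minimal Page Object sketch, assuming Selenium 4 (the Duration-based WebDriverWait); the LoginPage class and its locators are hypothetical:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

import java.time.Duration;

// WebDriver usage and locators live in the abstraction,
// not in the @Test methods
public class LoginPage {

    // locators are kept in one place in the abstraction
    private static final By USERNAME = By.id("username");
    private static final By PASSWORD = By.id("password");
    private static final By LOGIN_BUTTON = By.cssSelector("button[type='submit']");

    private final WebDriver driver;

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    public void loginAs(String username, String password) {
        // synchronise on application state, not Thread.sleep
        new WebDriverWait(driver, Duration.ofSeconds(10))
                .until(ExpectedConditions.elementToBeClickable(USERNAME));

        driver.findElement(USERNAME).sendKeys(username);
        driver.findElement(PASSWORD).sendKeys(password);
        driver.findElement(LOGIN_BUTTON).click();
    }
}
```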

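And a browser factory sketch for switching browsers via a JVM property with an environment variable fallback; the DriverFactory name and the property names are my own illustrative choices, and a real project might add remote or cloud options:

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

// e.g. run with: mvn test -Dbrowser=firefox
// or set the BROWSER environment variable
public class DriverFactory {

    public static WebDriver createDriver() {
        // JVM property first, environment variable second, default last
        String browser = System.getProperty("browser",
                System.getenv().getOrDefault("BROWSER", "chrome"));

        switch (browser.toLowerCase()) {
            case "firefox":
                return new FirefoxDriver();
            case "chrome":
            default:
                return new ChromeDriver();
        }
    }
}
```
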
Ideally the team should agree upon a base set of books that they believe are useful for their code base.

For Java, and writing code for automated execution, I would typically recommend:

Do you want to improve your Test Automation?

Code Reviews can help quickly. Let me know if you want my help.

Your details will only be used for responding to your request; they will never be shared, and this contact will not add you to our mailing list.