These are the answers given during an AMA session held on Discord on the 11th of December 2025, following a live LinkedIn video stream. The session focused on “Mastering Automatability for Test Automation”. The main theme is the concept of Automatability, which I view as the ability to automate: a personal skill that is more critical than reliance on specific tools. The discussion covers various topics, including how to separate automation problems from application design issues, dealing with slow UIs and non-automation-friendly third-party widgets, evaluating automation readiness, and addressing common architectural failings related to large-scale UI automation.
Discord And LinkedIn
This was a BrowserStack hosted event. The initial Q&A session started on LinkedIn with a conversation between Alan Richardson and David Burns.
The recording can be found here:
The session then moved to Discord. The BrowserStack Discord hosts many AMAs and interviews, so it is worth signing up to have a look.
Other AMA sessions include:
Join the BrowserStack Community on Discord and discover many more sessions, videos and conversations.
Q&A Session Questions and Summaries
I’ve listed the questions and summary answers. Full answers can be found in the podcast audio or video, or in the Discord AMA chat.
Q&A 1: Understanding Automatability for a First Automation Framework
Question: If I’m building my first test automation framework, what’s the one thing about automatability I should understand?
Summary of Answer:
The most important thing to understand is that automatability refers to your ability to automate. A strong ability to automate makes you less dependent on specific tools, making it easier to create workarounds and to choose from multiple tools. Developing experience in how to automate allows you to succeed more often, and means you are not reliant on a single tool being able to interact with your system, a reliance which makes workarounds harder. Automating is fundamentally about understanding what and how to automate, and practicing the application of that ability.
Q&A 2: Separating Automation Problems from Application Design Problems
Question: How do you separate automation problems from application design problems?
Summary of Answer:
If an issue causes problems when you are automating, I would call it an automation problem. While this problem might be triggered by an application design problem (such as a state-based system that is hard to track, or features that are harder to automate), the issue itself remains distinct. If the team cannot change the application design, they must figure out how to automate the application as it is. This might involve absorbing the issue, figuring out how to automate it at a different level (not end-to-end), or handling it through testing processes using observability tools like DataDog.
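For illustration, here is a minimal sketch of what “automating at a different level” might look like, using Playwright’s request context in TypeScript; the /api/orders endpoint, payload, and fields are hypothetical assumptions, not something discussed in the session.

```typescript
// A minimal sketch of dropping below the UI to automate at the HTTP level.
// The endpoint, payload, and field names are hypothetical placeholders.
import { test, expect, request } from '@playwright/test';

test('order total is calculated at the API level', async () => {
  const api = await request.newContext({ baseURL: 'https://example.test' });

  // Exercise the behaviour directly, avoiding the hard-to-automate UI flow.
  const response = await api.post('/api/orders', {
    data: { items: [{ sku: 'ABC-1', quantity: 2 }] },
  });

  expect(response.ok()).toBeTruthy();
  const body = await response.json();
  expect(body.total).toBeGreaterThan(0);

  await api.dispose();
});
```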
Q&A 3: Slow UIs and Testability/Automatability
Question: When dealing with slow UIs, is the slowness a testability issue, an automatability issue, or both?
Summary of Answer:
Slowness is likely both, and more, because it is also a usability/user experience issue. If the slow UI impacts the user experience, it is more likely to be addressed than if it only impacts testing or automation. In cybernetics terms, testers or automators must possess the “requisite variety” to handle the variety (slowness) in the system being tested, which means knowing how to synchronize, or potentially cleaning up the environment to improve speed. The focus would be on the impact of the slowness, rather than the slowness itself, and on whether the team or its tools can absorb the variation in response times.
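As a minimal sketch of absorbing that variation with Playwright in TypeScript (the page, selector, and timeout values are illustrative assumptions): a web-first assertion retries until the condition holds, rather than relying on fixed sleeps.

```typescript
// A minimal sketch of synchronizing on a condition so the automation can
// absorb a slow UI. Selector and timeout values are illustrative assumptions.
import { test, expect } from '@playwright/test';

test('search results eventually render on a slow UI', async ({ page }) => {
  await page.goto('https://example.test/search?q=widgets');

  // Web-first assertion: retries until the condition holds or the timeout
  // expires, absorbing variation in response times.
  await expect(page.locator('.search-result').first())
    .toBeVisible({ timeout: 30_000 });
});
```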
Q&A 4: Handling Non-Automation Friendly Third-Party Widgets
Question: How do you handle third-party widgets, like payment gateways, that are inherently not automation friendly?
Summary of Answer:
If a third-party widget is “not automation friendly” for one tactic (e.g., UI automation), it might become easier to automate by adopting different tactics, such as issuing HTTP requests using cookies collected from the UI. Teams may not need to automate the full flow of the widget, but instead focus on ensuring the widget is wired up correctly within their own application. This can involve only testing partway through the flow, or using a mock or stub in the environment so that the full widget flow doesn’t need to be tested constantly.
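One way the mock or stub approach might look in Playwright, as a hedged sketch: the gateway URL pattern, locators, and response shape below are hypothetical assumptions used only to show the shape of the technique.

```typescript
// A minimal sketch of stubbing a third-party payment widget so a test can
// check our own wiring without driving the full gateway flow. The gateway
// URL pattern and response shape are hypothetical assumptions.
import { test, expect } from '@playwright/test';

test('checkout handles a successful payment response', async ({ page }) => {
  // Intercept calls to the (assumed) payment gateway and return a canned
  // success payload instead of hitting the real widget.
  await page.route('**/payment-gateway/**', route =>
    route.fulfill({
      status: 200,
      contentType: 'application/json',
      body: JSON.stringify({ status: 'approved', reference: 'TEST-REF-1' }),
    })
  );

  await page.goto('https://example.test/checkout');
  await page.getByRole('button', { name: 'Pay now' }).click();

  // We only check that our application wires the result up correctly.
  await expect(page.getByText('Payment approved')).toBeVisible();
});
```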
Q&A 5: Evaluating Automation Readiness and Consultancy Frameworks
Question: How do you evaluate an application’s automation readiness during consulting? Do you follow a framework?
Summary of Answer:
I do not use a formal consulting framework. The closest methodology I use is the meta model from Neuro-Linguistic Programming, which involves asking questions to build a model of the client’s environment and processes, and comparing it to the reality they face.
- learn more about NLP and Testing in this paper I wrote.
Regarding automation readiness, an application is considered ready to automate “as soon as someone wants to automate it”. Readiness is judged by whether the client is prepared to commit to whatever it takes to automate the application at that specific point in time to achieve their desired outcomes, regardless of the application’s current state.
Q&A 6: Architectural Patterns Failing in Large-Scale UI Automation
Question: What architectural patterns do you see repeatedly failing when it comes to large scale UI automation?
Summary of Answer:
Recurring issues often stem from the team lacking the ability to automate and consequently blaming the tool for problems, rather than creating necessary workarounds. Common process anti-patterns, not strictly architectural patterns, include deploying differently into test and production environments (not using the same install process).
A major failure point is test data maintenance, especially trying to use production data or any data that the team does not control. Automating against specific data conditions without control over that data causes random test failures. This can be worked around by writing tests against data conditions rather than hardcoded specific data, and dynamically selecting the required data during execution.
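A minimal sketch of selecting data dynamically at execution time, assuming a hypothetical /api/customers endpoint and field names that are not from the session:

```typescript
// A minimal sketch of selecting test data dynamically at execution time,
// rather than hardcoding a specific record that might change underneath us.
// The /api/customers endpoint and field names are hypothetical assumptions.
import { test, expect } from '@playwright/test';

test('an active customer can view their dashboard', async ({ page, request }) => {
  // Find any record that satisfies the condition the test actually needs.
  const response = await request.get('https://example.test/api/customers?status=active');
  const customers = await response.json();
  test.skip(customers.length === 0, 'no active customer available in this environment');

  const customer = customers[0];
  await page.goto(`https://example.test/customers/${customer.id}/dashboard`);
  await expect(page.getByText(customer.name)).toBeVisible();
});
```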
Q&A 7: Prioritizing Testability and Automation in Sprint Planning
Question: If testability improves debugging and automation improves scale, how do we prioritize them during sprint planning?
Summary of Answer:
Prioritization can be based on what the team wants to achieve (the outcomes) by the end of the sprint, specifically focusing on the expected coverage from testing and automation. It is beneficial to plan for features that need extensive testing to be delivered early in the sprint. Ideally, testing and automating occur in parallel, and teams automate at lower levels (like unit level) to reduce the necessary coverage at the higher UI level. Issues often arise when teams are divided into isolated roles, creating process problems that hinder effective interaction and prioritization.
Q&A 8: Playwright and the Illusion of Reduced Automatability Design Needs
Question: Do modern frameworks like Playwright reduce the need for high automatability design or is that an illusion?
Summary of Answer:
It is an illusion. Frameworks like Playwright are designed to absorb application variability through features like retry mechanisms (for synchronization) and locator strategies (like visible text), which reduces the need for constant notification when minor changes occur. This absorption capability makes Playwright effective for agent-based automation where the goal is checking an end-to-end path and the final result.
However, this absorption can hide issues that a team might want exposed. Even when using Playwright, developers must still understand how to automate and structure their code using abstraction layers (like page objects, domain objects) to ensure long-term maintainability and efficiency.
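As a hedged sketch of the abstraction layer point, here is a minimal Playwright page object in TypeScript; the URL, labels, and flow are hypothetical assumptions.

```typescript
// A minimal page object sketch: an abstraction layer that keeps locators and
// flow logic out of the tests. Page URL, labels, and flow are hypothetical.
import { expect, type Page } from '@playwright/test';

export class LoginPage {
  constructor(private readonly page: Page) {}

  async open() {
    await this.page.goto('https://example.test/login');
  }

  async loginAs(username: string, password: string) {
    await this.page.getByLabel('Username').fill(username);
    await this.page.getByLabel('Password').fill(password);
    await this.page.getByRole('button', { name: 'Log in' }).click();
  }

  async expectLoggedIn() {
    await expect(this.page.getByText('Welcome')).toBeVisible();
  }
}
```

A test would then drive the flow through the page object rather than through raw locators, so a change to the login screen is absorbed in one place instead of rippling through every test.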
Q&A 9: Explaining Automatability as an Investment to Leadership
Question: How do I explain to leadership that improving automatability is an investment not a delay?
Summary of Answer:
The explanation depends on what is being improved. If improving automatability means increasing the team’s ability to automate, it can be presented as an investment in staff training. If it involves adding technical aids (like IDs in the UI), leadership might perceive it as a delay because they may not value UI execution coverage or may already be confident in unit-level automation. To convince leadership, the team could demonstrate the return on investment by showing the alternative world. This involves comparing the current reality to a scenario where improved automatability allows the team to do beneficial things they otherwise couldn’t, thereby highlighting the value gained.
Q&A 10: AI Agents Dealing with Dynamic Elements
Question: We are exploring AI agents for our teams, and I want to know: how does an AI agent deal with dynamic elements like rotating banners, third-party widgets, or A/B tests?
Summary of Answer:
How the agent deals with dynamic elements depends on how it works (e.g., building high-level BDD scripts or generating code). Agents often operate on first principles. If an agent uses a BDD approach, it works from a runtime specification and handles dynamic elements because it works from scratch for each execution, constantly aiming to fulfill the objective. For example, if an unexpected pop-up appears, the agent clears it and continues.
If the agent writes code, it uses what is often called “autohealing”. This process automatically amends the script based on the current application state, prioritizing the achievement of the objective regardless of whether the change is “right”.
Q&A 11: Early Signals of Flaky Features
Question: What early signals tell you that a feature will become flaky once automated?
Summary of Answer:
Early signals involve understanding the synchronization points of the page. A feature is likely to be flaky if the application is populated or amended over time by JavaScript and the automation tool is not synchronizing properly on the DOM buildup. If a page is being constantly updated in the background without clear visual indicators (like spinners, which are easy to synchronize on), flakiness is more likely.
The signal is the update process itself, particularly when it is non-deterministic (e.g., how totals are updated in a shopping cart). The core question is whether synchronization is required to prevent flakiness, and if it is difficult to synchronize, that is a strong signal. If necessary, automatability might be enhanced by adding an extra flag to the DOM to signal when the update is complete.
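A minimal sketch of synchronizing on such an added flag, assuming a hypothetical data-cart-updated attribute and cart page that are not from the session:

```typescript
// A minimal sketch of synchronizing on an explicit "update complete" flag
// added to the DOM as an automatability aid. The data-cart-updated attribute,
// locators, and expected total are hypothetical assumptions.
import { test, expect } from '@playwright/test';

test('cart total is read only after the background update completes', async ({ page }) => {
  await page.goto('https://example.test/cart');
  await page.getByRole('button', { name: 'Add to cart' }).click();

  // Wait for the application to signal that its non-deterministic background
  // update has finished before asserting on the total.
  await expect(page.locator('#cart')).toHaveAttribute('data-cart-updated', 'true');
  await expect(page.locator('#cart-total')).toHaveText('£20.00');
});
```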
Q&A 12: Layers to Focus on in Microservices for Automatability
Question: In microservices setups, which layers should teams focus on first to increase overall automatability?
Summary of Answer:
The foundational layer to focus on is the human understanding of the architecture and the requirements for automation. In microservices specifically, teams would typically focus on the interface layer and their ability to automate it while keeping the interface standard.
If microservices are communicating via HTTP messages compliant with a versioned standard, automation can be relatively easy. If interfaces are internal and change randomly, issues may arise, requiring attention to managing event-based queues if applicable. Strategies include using versioned interfaces or having a process to update automated coverage and abstraction layers when microservice interfaces change. It is crucial to avoid replicating interface objects (like payload objects) directly into test code, as this can prevent tests from spotting issues when fields are added to or removed from the application interface.
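One way to avoid replicating payload objects, sketched with Playwright’s request context in TypeScript; the service URL, endpoint, and field names are hypothetical assumptions.

```typescript
// A minimal sketch of checking a microservice payload against the raw JSON
// rather than deserializing into a shared payload class, so that added or
// removed fields are noticed. Endpoint and field names are hypothetical.
import { test, expect, request } from '@playwright/test';

test('order payload exposes exactly the agreed v1 fields', async () => {
  const api = await request.newContext({ baseURL: 'https://orders.example.test' });
  const response = await api.get('/v1/orders/12345');
  expect(response.ok()).toBeTruthy();

  const payload = await response.json();

  // Comparing the full key set means an unexpected new or missing field
  // fails the check, rather than being silently ignored.
  expect(Object.keys(payload).sort()).toEqual(
    ['customerId', 'id', 'status', 'total'].sort()
  );

  await api.dispose();
});
```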
Q&A 13: Budget-Limited Automatability Fixes for Fastest ROI
Question: For a team with limited budget which problems around automatability should we fix first to get the fastest return on investment?
Summary of Answer:
The fastest return on investment comes from enhancing the team’s ability to automate. This improvement allows teams to develop workarounds, find alternative solutions, and identify when to use different techniques. It is not about purchasing multiple or expensive tools. Instead, investment could be placed in training, practicing, exploring the capabilities of existing tools, and eliminating fundamental issues like test flakiness by fixing the root causes.
Q&A 14: Collaborating Earlier to Avoid Automatability Rework
Question: How can developers and testers collaborate earlier to avoid expensive rework on automatability issues?
Summary of Answer:
Collaboration is achieved by removing the barriers that cause people to be isolated into silos, such as separate programming, testing, or test automation teams. The core issue is often that a “development team” is defined only as a programming team, instead of encompassing responsibility for design, programming, product suitability, testing, and production.
Practical steps include involving the programming team in maintaining the automated execution. When programmers contribute, they often ensure data IDs are present, which removes many hard-to-automate issues typically found during end-to-end testing. Sharing the responsibility for maintenance ensures people understand and resolve related issues earlier.
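As a minimal sketch of the pay-off when programmers add test ids to the UI (the data-testid values and page below are hypothetical assumptions):

```typescript
// A minimal sketch of locating elements via test ids added by the
// programming team, removing brittle CSS/XPath paths from the automation.
// The data-testid values and page are hypothetical assumptions.
import { test, expect } from '@playwright/test';

test('submitting the form shows a confirmation', async ({ page }) => {
  await page.goto('https://example.test/signup');

  // Stable ids added by the programming team keep locators simple.
  await page.getByTestId('email-input').fill('person@example.test');
  await page.getByTestId('submit-button').click();

  await expect(page.getByTestId('confirmation-message')).toBeVisible();
});
```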
Q&A 15: Automatability in Continuous Delivery and Trunk-Based Development
Question: How should teams think about automatability when shifting towards continuous delivery and trunk-based development?
Summary of Answer:
This environment requires high automated execution coverage that runs quickly, often with features being merged multiple times a day. Automatability is achieved by ensuring the person or pair responsible for creating a feature is also responsible for adding automated execution coverage (unit tests). These tests demonstrate that the feature has been tested and provide future checks against accidental changes impacting the functionality.
Teams could focus on structuring unit tests at the domain level (e.g., focusing on users or orders) rather than strictly class level. This approach results in a degree of internal end-to-end flow tests without needing extensive external system testing. Furthermore, application architecture can be designed so that interfaces (like HTTP interfaces) can be tested primarily at the domain level, reducing the need for numerous actual HTTP calls.
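As a hedged sketch of a domain-level test rather than a class-level one, in TypeScript using the same test runner as the other examples; the Order and Catalogue types are hypothetical stand-ins for application code, not anything described in the session.

```typescript
// A minimal sketch of a domain-level test: it exercises an order flow through
// the domain objects rather than one class at a time or over HTTP. The Order
// and Catalogue types are hypothetical stand-ins for application code.
import { test, expect } from '@playwright/test';

class Catalogue {
  private prices = new Map<string, number>([['ABC-1', 10], ['XYZ-2', 25]]);
  priceOf(sku: string): number {
    return this.prices.get(sku) ?? 0;
  }
}

class Order {
  private lines: { sku: string; quantity: number }[] = [];
  constructor(private readonly catalogue: Catalogue) {}
  add(sku: string, quantity: number) {
    this.lines.push({ sku, quantity });
  }
  total(): number {
    return this.lines.reduce(
      (sum, line) => sum + this.catalogue.priceOf(line.sku) * line.quantity, 0);
  }
}

test('an order totals its lines across the domain objects', () => {
  const order = new Order(new Catalogue());
  order.add('ABC-1', 2);
  order.add('XYZ-2', 1);
  expect(order.total()).toBe(45);
});
```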


