TLDR: OpenCode is a free and easy-to-use AI coding agent. Coupled with free models and MCP servers, it can create test automation code quickly and easily.
I had two sessions using Grok to create Page Objects.
The recorded session was the second, and it did not go as smoothly as the first.
Overview Video
Curious about coding test automation with free AI tools? See how OpenCode generates page objects and WebDriver tests in real time. Also see the Chrome DevTools MCP Server in action so that the locators are pulled from the DOM, reducing manual effort in creating Automated Execution Code.
In this video I used Grok Code Fast 1 because it is currently free on OpenCode Zen, so anyone can repeat the experiment without additional requirements or funds.
Code
You’ll find the AI-generated code on GitHub:
- https://github.com/eviltester/ai-supported-testing-experiments/tree/main/opencode/create-page-object
The application under test:
Accelerating Test Automation with OpenCode and AI
I’ve been using OpenCode, a free, command-line interface for AI-driven code generation, to build page objects and tests for a WebDriver Java project. When configured to use Chrome DevTools MCP, the AI Tooling can pull down the DOM from the page under test and use it to create accurate locators and coverage flows.
What is OpenCode and Why Use It?
OpenCode is a completely free CLI tool for operating code agents. Running from the terminal, you can use AI models to generate code automatically. OpenCode supports a variety of model providers:
- OpenCode Zen (completely free for certain models, no billing details required)
- OpenRouter (paid credits, but many free models as well)
- Local models, run with tools like Ollama, keeping everything on your machine
Switching between models is easy, letting you experiment and find which AI delivers the best results on your codebase.
During my WebDriver code experiments I’ve used Grok Code Fast 1, Kat Coder Pro, and GPT-5 Nano.
GPT-5 Nano didn’t give good results, while Grok Code Fast 1 did better. Through OpenRouter, Kat Coder Pro (free), with occasional use of Claude Haiku 3.5, have been my main models.
Practical Use Case: Building Page Objects and Tests
The core of the demo is applying OpenCode to a Test Transformer application on Test Pages: a page where users enter text, which is then manipulated in various ways (e.g., reversed, Pig Latin, ROT13).
The page also uses cookies to track visits and the last used text. There is no 'transform' button; the text is transformed automatically as it changes.
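To assert against transforms like ROT13 deterministically, it helps to have a reference implementation in the test code. This is a minimal sketch of ROT13 (my own illustration, not code from the application or the generated page objects):

```java
// ROT13: rotate each ASCII letter 13 places; other characters pass through.
// Applying it twice returns the original text, which makes a handy sanity check.
public class Rot13 {
    static String rot13(String input) {
        StringBuilder out = new StringBuilder(input.length());
        for (char c : input.toCharArray()) {
            if (c >= 'a' && c <= 'z') {
                out.append((char) ('a' + (c - 'a' + 13) % 26));
            } else if (c >= 'A' && c <= 'Z') {
                out.append((char) ('A' + (c - 'A' + 13) % 26));
            } else {
                out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(rot13("Hello")); // Uryyb
    }
}
```

A test can then compare the page's output against `rot13(input)` rather than hard-coding expected strings.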
Using OpenCode
- Automatic Page Analysis: By integrating with Chrome DevTools MCP, OpenCode can launch a browser, analyze the DOM, and retrieve locators with high accuracy.
- Prompt-driven Code Generation: You can specify exactly what parts of the page you want covered. I keep prompts focused, asking for page objects that cover only relevant fields and results.
- Iterative Improvement: Results can be variable. Sometimes you get exactly what you need, sometimes less, sometimes more. But with targeted prompts and reviewing, you can guide the AI toward better outcomes.
MCP Configuration
OpenCode has a configuration folder in the user directory, so I amended the config, as per the instructions on the website, to use Chrome DevTools as an MCP Server.
- https://opencode.ai/docs#configure
- https://opencode.ai/docs/mcp-servers/#configure
- https://github.com/ChromeDevTools/chrome-devtools-mcp/
This allows the AI to use the browser and analyze the DOM:
{
  "$schema": "https://opencode.ai/config.json",
  "theme": "opencode",
  "autoupdate": true,
  "mcp": {
    "chrome-devtools": {
      "type": "local",
      "command": ["npx", "-y", "chrome-devtools-mcp@latest"],
      "enabled": true
    }
  }
}
Page Object and Test Creation: Challenges & Strategies
The generation process is interactive:
- Refining Results: Initial code may be too generic or incomplete. By referencing exact DOM elements (IDs, etc.), you can guide the AI for more precise methods and cleaner tests.
- Decoupling Structure from Tests: I had to guide the AI to improve the design by avoiding dependencies where tests need to know page structure, relying instead on abstractions implemented as page object methods.
- Handling Edge Cases: Randomized features (like word shuffling) require smarter assertions, e.g., comparing sets or arrays rather than specific string outputs.
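For the shuffling case above, the idea is to assert that the output contains the same words as the input, ignoring order. A minimal sketch (the helper name `isShuffleOf` is my own, not from the generated code):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class ShuffleAssertion {
    // A shuffle only reorders the words, so sort both word lists and
    // compare them instead of asserting an exact output string.
    static boolean isShuffleOf(String input, String output) {
        List<String> expected = new ArrayList<>(Arrays.asList(input.split("\\s+")));
        List<String> actual = new ArrayList<>(Arrays.asList(output.split("\\s+")));
        Collections.sort(expected);
        Collections.sort(actual);
        return expected.equals(actual);
    }

    public static void main(String[] args) {
        System.out.println(isShuffleOf("the quick brown fox", "fox the brown quick")); // true
        System.out.println(isShuffleOf("the quick brown fox", "fox the brown dog"));   // false
    }
}
```

Sorting the lists (rather than converting to sets) also catches cases where a word is duplicated or dropped.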
Reviewing and Refactoring AI-Generated Code
The AI output should always be reviewed and, when necessary, tweaked, which means it still helps to know what you are doing: learn how to code, and learn how to automate.
In the video, I check the validity of date-time assertions and tolerances, tidy up method design, and make small adjustments for clarity and robustness. AI can cover a lot quickly, but manual review remains essential.
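Date-time assertions are a common source of flakiness: asserting an exact timestamp fails whenever the test and the page capture time a moment apart. A tolerance-based check, sketched below with `java.time` (the helper name is my own illustration):

```java
import java.time.Duration;
import java.time.Instant;

public class TimeTolerance {
    // Rather than asserting an exact timestamp, check that the value the
    // page reports falls within a tolerance of the time the test captured.
    static boolean withinTolerance(Instant expected, Instant actual, Duration tolerance) {
        return Duration.between(expected, actual).abs().compareTo(tolerance) <= 0;
    }

    public static void main(String[] args) {
        Instant now = Instant.now();
        System.out.println(withinTolerance(now, now.plusSeconds(2), Duration.ofSeconds(5)));  // true
        System.out.println(withinTolerance(now, now.plusSeconds(10), Duration.ofSeconds(5))); // false
    }
}
```

Choosing the tolerance is a judgment call: wide enough to absorb page-load delay, narrow enough to still catch a genuinely wrong timestamp.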
Takeaways: When and Why to Experiment with OpenCode
- Speed and Coverage: OpenCode can save you significant time, especially for repetitive or DOM-heavy automation tasks.
- Variability: Different models (and even the same model, run twice) yield different results. Prompt engineering and manual review are key.
- Cost-Effectiveness: Thanks to the availability of free models, you can lower the barrier to entry for AI-powered test automation.
- Flexibility: Whether you want cloud-based AI or local execution, OpenCode can accommodate diverse workflows.
Final Thoughts
OpenCode is a powerful tool for testers and developers looking to speed up automation without a big cost outlay.
You need to know how to code and automate, so that you can effectively review the generated code and prompt for improvements.
By combining intelligent prompting with iterative review, you can build robust page objects and coverage tailored to your applications.
I think it is worth experimenting with OpenCode. Try different models.


