This particular case study demonstrates how I think about testing and incorporate automation into my test approach.
The scenario you face as a tester:
- You have a main web site
- You have a new mobile site
- You have a set of redirection rules that take you from the www site to the m. site based on the device
- And the device is identified by the user-agent header string
e.g. bbc.co.uk redirects to m.bbc.co.uk if you have the user-agent set to a mobile device.
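The rule in the example above can be modelled as a tiny decision function. This is a toy sketch of my own, not the site's actual implementation; the substring check is a stand-in for whatever device detection the real redirect rules use.

```java
// Toy model of the redirect rule: mobile user-agents go to the m. site.
// The isMobile heuristic here is an assumption for illustration only.
class RedirectRule {
    static String targetHost(String requestedHost, String userAgent) {
        boolean isMobile = userAgent.contains("Mobile") || userAgent.contains("Android");
        if (isMobile && requestedHost.equals("bbc.co.uk")) {
            return "m.bbc.co.uk";   // mobile devices get redirected to the m. site
        }
        return requestedHost;       // everyone else stays on the main site
    }
}
```

Testing the redirection then reduces to: for each user-agent, does the site's actual behaviour match this expected mapping?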
The first thought for testing?
- We need to get a bunch of devices to test this on.
And you probably do. But that limits the scope of your testing to a small subset of the user-agents out there in the real world.
- We could spoof the user-agent.
That makes it a technical test: we use the implementation details and check the scope of the implementation's coverage.
- Well, Chrome has the override settings where we could choose a different user-agent.
- We could have our debug proxy change the user-agent for us.
Great. Both of those would work, but they require us to do this manually, and it will be slow. We probably still want to do some of that though, to make sure the site renders and that the approach works.
Where will we find the user-agents?
We need an oracle source for our data set of user-agents. Fortunately there are sites out there that track what user-agents are in use:
I tend to use useragentstring.com
So I wrote some code. And I know about all the “testers shouldn’t code”, “testers don’t need to code”, “blah blah blah” discussions.
I can code. It increases my ability to respond to the variety of conditions on a project.
So I code.
I wrote a simple set of Java code that:
- Uses GhostDriver - the new headless driver wrapper around PhantomJS
- Visits useragentstring.com and scrapes off the user-agent strings
- Filters the user-agent strings to those that I consider ‘mobile’ devices
- Iterates over all those user-agents
- Creates a new GhostDriver with that user-agent and visits the www site
- Checks that I redirect to the mobile site
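The filtering step in the list above might look something like the sketch below. The keyword list is my own simplistic assumption, not the rule set from the actual code on github:

```java
import java.util.Arrays;
import java.util.List;

// Sketch of the 'mobile' filter: classify a user-agent string as mobile
// by substring matching. Real user-agent detection is messier than this;
// the marker list here is illustrative, not exhaustive.
class MobileUserAgentFilter {
    private static final List<String> MOBILE_MARKERS = Arrays.asList(
            "Mobile", "Android", "iPhone", "iPad", "BlackBerry", "Windows Phone");

    static boolean isMobile(String userAgent) {
        return MOBILE_MARKERS.stream().anyMatch(userAgent::contains);
    }
}
```

Each user-agent that passes the filter then gets its own GhostDriver session, with the user-agent set via the driver's capabilities, to visit the www site and check the redirect.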
You can find the code over on github:
Surely it would be faster to use direct HTTP calls?
- Faster to run, yes, but not necessarily faster to write.
- I can use WebDriver's findElements commands when scraping the page, and not have to remember how to parse XML in Java or download another Java library.
- I can use WebDriver to visit the site and have it handle all the redirection for me, rather than writing redirect-handling code for the Apache HTTP libraries.
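For comparison, a direct-HTTP version of the check could look like this sketch, using only the JDK's HttpURLConnection. The helper names are mine, and this is exactly the redirect-handling code that WebDriver saves me from writing:

```java
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch of the direct-HTTP alternative: send the request with a spoofed
// User-Agent, don't follow the redirect, and inspect the Location header.
class RedirectCheck {
    static String redirectTarget(String url, String userAgent) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setInstanceFollowRedirects(false);            // inspect the 3xx ourselves
        conn.setRequestProperty("User-Agent", userAgent);  // spoof the device
        conn.connect();
        String location = conn.getHeaderField("Location");
        conn.disconnect();
        return location;                                   // null if no redirect issued
    }

    // Pure check: does the redirect target look like the mobile site?
    static boolean isMobileTarget(String location) {
        return location != null && location.matches("https?://m\\..*");
    }
}
```

Note that a single 301/302 may itself redirect again, so a full implementation would have to loop until it reaches a non-3xx response, which is precisely the code WebDriver makes unnecessary.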
I want to get some automation done fast. That adds value. That augments my manual testing.
I tidied it up a little for release to github so it isn’t completely embarrassing, but hey ho, it added value. I’ll use it again. It looks pretty nasty, but it works.
Sometimes that’s the type of automation I write when I test.
But that wasn’t the requirement scope!
- True. It wasn’t.
- The requirement scope was small.
- Sometimes we have to explore.
- I look for external oracles and comparative sites and rules to help me evaluate if the requirements meet the actual user need.
- In this instance I found a lot of user-agents that the redirect rules didn’t cover.
But if it wasn’t in the requirements we can’t justify the testing!
- I can use a comparison with other sites’ handling of the user-agents (e.g. bbc or tfl)
- I can see if the gaps in the system under test are better or worse than theirs.
- BBC didn’t handle 1 user-agent I found,
- TFL didn’t handle 3,
- The system under test didn’t handle 100+
I use external oracles, as well as internal oracles. I use the competition to evaluate the system under test. I use multiple sources of information and look from multiple angles.
What would you do?
Do let me know how you would have done it differently.