TL;DR: RestMud has JUnit @Test unit coverage, functional integration testing, REST API testing with Jsoup and Gson, bots for multi-user and model-based testing, and Postman and GUI-based exploratory testing.
I’m getting RestMud ready for some workshops later in the year and this means:
- making sure a player can complete the maps I have created
- I’m OK with some bugs (because they are for testers), but I need them to complete because they are games
- making sure they can handle enough players
- I imagine that we will have a max of 20 or so people playing at a time
- making sure I don’t break existing games
- with last minute engine changes
- with new game maps
- with new game commands and script commands
As most of you reading this will realise - that means as well as developing it, I need to test it.
The basic game architecture is:
- Spark Web Framework
- Games/Maps are written as Java classes
- Games have custom functionality implemented using a set of internal DSL classes
- A ‘Game’ class is the main interface e.g.
processVerbNounForPlayer(verb, noun, player)
- Spark is my REST API - using JSON
- Spark is my Web Server for the HTML GUI
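To picture the central `processVerbNounForPlayer` interface, here is a minimal sketch. Only the method name comes from the architecture above; `Player`, `GameResult` and the `TinyGame` implementation are illustrative assumptions, not RestMud's real classes.

```java
// Hypothetical sketch of the central 'Game' interface described above.
// Only processVerbNounForPlayer is from the post; everything else is illustrative.
import java.util.HashMap;
import java.util.Map;

interface Game {
    GameResult processVerbNounForPlayer(String verb, String noun, Player player);
}

class Player {
    final String id;
    String currentRoomId = "1";
    Player(String id) { this.id = id; }
}

class GameResult {
    final boolean success;
    final String message;
    GameResult(boolean success, String message) {
        this.success = success;
        this.message = message;
    }
}

// A tiny two-room game: 'go n' moves from room 1 to 2, 'look' describes the room.
class TinyGame implements Game {
    private final Map<String, String> descriptions = new HashMap<>();

    TinyGame() {
        descriptions.put("1", "You are in room 1. An exit leads north.");
        descriptions.put("2", "It is dark here.");
    }

    public GameResult processVerbNounForPlayer(String verb, String noun, Player player) {
        if (verb.equals("look")) {
            return new GameResult(true, descriptions.get(player.currentRoomId));
        }
        if (verb.equals("go") && noun.equals("n") && player.currentRoomId.equals("1")) {
            player.currentRoomId = "2";
            return new GameResult(true, "You go north.");
        }
        return new GameResult(false, "You can't do that.");
    }
}
```

An interface this narrow is what makes the layered testing below possible: everything from unit tests to walkthroughs can drive the game through one entry point.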
If you are interested in text adventures then I can recommend Jeff Nyman’s series of blog posts on interaction and testing with an Inform text adventure game.
How would you test that?
For a moment, I’d like you to think about how you would test that.
- Would you exclusively automate it?
- Would you exclusively explore it?
- Would you script extensively and follow scripts?
- Where are the risks?
- What tools would you use?
My Test Approach
Here is what I do at the moment.
Automated Unit Testing
I’m using traditional headings here so it fits the general understanding of ‘how we test stuff’.
- I have JUnit @Test methods for game domain classes
- I have JUnit @Test methods for internal DSL classes
The above misses out:
- REST API
- Web API
- Integrated Game Class Testing
- Testing the Games themselves
- Multi-user interaction
So I guess I have to do more than this.
Automated Integration Testing
I have ‘games’ and a ‘game engine’. I want to make sure that when I instantiate the game engine with a game I can play it, so I have @Test methods to check the functional integration between the main internal packages.
- I have JUnit @Test methods which instantiate the engine with ‘test’ game snippets to check that the basic engine works and can process verb/noun combinations
But that misses out… a bunch of stuff we already mentioned.
Automated Functional Testing of Game Conditions
I want to make sure that combinations of game conditions work, e.g. that in room 12, when you examine the key, the troll appears and growls at you.
But I don’t want to play the whole game to get to that point; I just want to make sure that that set of conditions works. And some conditions mean that other conditions don’t fire (there are multiple paths through the game), so I don’t want to test all of this end to end.
- I have JUnit @Test methods to play the Game in small chunks
- instantiate the game
- setup the player state and game state
- issue verb,noun combinations through the game interface to check that the conditions work
By using the game interface I don’t have to issue or parse JSON, because I’m working with internal game objects. I’m testing just below the REST and Web APIs.
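A condition check of this shape might look like the sketch below, using plain assertions so it stands alone. All class and method names here are my own illustrative assumptions; only the ‘examine the key in room 12 and the troll appears’ scenario comes from the text.

```java
// Illustrative sketch of testing one game condition 'just below the API':
// set up state directly, issue one verb/noun, assert the outcome.
// All names here are assumptions, not RestMud's real classes.
import java.util.HashSet;
import java.util.Set;

class ConditionTestSketch {
    static class PlayerState {
        String roomId;
        final Set<String> inventory = new HashSet<>();
    }

    // Stand-in for the game: examining the key in room 12 summons the troll.
    static String processVerbNoun(String verb, String noun, PlayerState player,
                                  Set<String> roomContents) {
        if (verb.equals("examine") && noun.equals("key") && player.roomId.equals("12")) {
            roomContents.add("troll");
            return "A troll appears and growls at you.";
        }
        return "Nothing happens.";
    }

    public static void main(String[] args) {
        // Arrange: put the player straight into room 12 with the key - no need
        // to play the whole game to reach this point.
        PlayerState player = new PlayerState();
        player.roomId = "12";
        player.inventory.add("key");
        Set<String> roomContents = new HashSet<>();

        // Act: issue the verb/noun through the (stand-in) game interface.
        String message = processVerbNoun("examine", "key", player, roomContents);

        // Assert: the troll condition fired.
        if (!roomContents.contains("troll")) throw new AssertionError("troll should appear");
        if (!message.contains("growls")) throw new AssertionError("unexpected message: " + message);
        System.out.println("condition check passed");
    }
}
```

Setting up state directly in the ‘arrange’ step is what avoids playing the whole game to reach each condition.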
Automated Functional Testing of Game Play
I want to make sure that a player can complete the game, therefore I create a ‘walkthrough’ test. Again, this is still in the ‘game’ rather than the REST and Web API.
This works for single player, non random games, where:
- commands are deterministic
- I start in the same location
These instantiate the game, so it is in default state and then they start to play the game by issuing commands:
```java
successfullyVisitRoom("1", walkthrough("we start in room 1", "look", ""));
successfully(walkthrough("I always examine signs on walls", "examine", "ahint"));
successfullyVisitRoom("2", walkthrough("north leads into room 2", "go", "n"));
successfully(walkthrough("oh oh, it is dark here", "look", ""));
successfully(walkthrough("amend the url to go back south /go/s", "go", "s"));
successfullyVisitRoom("1", walkthrough("to get back to room 1", "look", ""));
successfullyVisitRoom("3", walkthrough("east leads into room 3 - east room", "go", "e"));
```
I think these are pretty readable, and I use high-level methods to create a ‘Test DSL’ for writing them.
At the moment I have one ‘walkthrough’ test per game.
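One way helpers like `successfully` and `successfullyVisitRoom` can be layered over the game is sketched below. The helper names mirror the walkthrough above; their internals, and the stand-in game, are my assumptions rather than RestMud's actual Test DSL.

```java
// Illustrative sketch of a 'Test DSL' layer like the walkthrough helpers above.
// The helper names mirror the post; their internals are my assumption.
import java.util.ArrayList;
import java.util.List;

class WalkthroughDslSketch {
    static String playerRoom = "1";
    static final List<String> issuedCommands = new ArrayList<>();

    // Stand-in game: 'go n' moves room 1 -> 2, everything else stays put.
    static boolean play(String verb, String noun) {
        issuedCommands.add(verb + "," + noun);
        if (verb.equals("go") && noun.equals("n") && playerRoom.equals("1")) playerRoom = "2";
        return true;
    }

    // walkthrough: a named step that issues a command and reports success
    static boolean walkthrough(String comment, String verb, String noun) {
        System.out.println("STEP: " + comment);
        return play(verb, noun);
    }

    static void successfully(boolean result) {
        if (!result) throw new AssertionError("step should have succeeded");
    }

    static void successfullyVisitRoom(String roomId, boolean result) {
        successfully(result);
        if (!playerRoom.equals(roomId))
            throw new AssertionError("expected room " + roomId + " but was " + playerRoom);
    }

    public static void main(String[] args) {
        successfullyVisitRoom("1", walkthrough("we start in room 1", "look", ""));
        successfullyVisitRoom("2", walkthrough("north leads into room 2", "go", "n"));
    }
}
```

Because every command funnels through one `play` method, the same DSL can record commands as a side effect, which is what enables the CSV output described next.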
Rest API Walkthrough Testing
The ‘walkthrough’ test above uses a ‘Test DSL’.
This also writes out a CSV file with all the commands that are entered.
I have a REST API test which reads the file and sends the requests to the REST API; this outputs the requests and the responses.
At the moment I review the output rather than automatically assert against it.
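A replay loop of this shape might look like the following minimal sketch. The CSV column layout (comment, verb, noun) and the URL scheme are my assumptions, not RestMud's actual format, and the sketch prints the URLs rather than sending them so it stands alone.

```java
// Illustrative sketch of replaying a walkthrough CSV against a REST API.
// The CSV layout (comment,verb,noun) and the URL scheme are my assumptions.
import java.util.ArrayList;
import java.util.List;

class CsvReplaySketch {
    // Turn one CSV row into the request URL a replayer would send.
    static String urlFor(String csvLine) {
        String[] fields = csvLine.split(",", -1);   // comment, verb, noun
        return "/player/p1/" + fields[1] + "/" + fields[2];
    }

    public static void main(String[] args) {
        // The walkthrough test would have written rows like these:
        List<String> csv = List.of(
                "we start in room 1,look,",
                "north leads into room 2,go,n");

        // Replay: a real replayer would send each URL with an HTTP client
        // and log the request and response for review.
        List<String> sent = new ArrayList<>();
        for (String line : csv) {
            String url = urlFor(line);
            sent.add(url);
            System.out.println("would send: " + url);
        }
    }
}
```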
In the future, I will re-use the walkthrough test but the Test DSL will have a backend that uses the REST API rather than the game API.
The Rest API is very similar to that described in my Tracks REST API Testing Case Study but I’m using Jsoup and Gson as my HTTP and JSON parsing libraries.
This is semi-automated at the moment with a blast of messages which I review.
REST API Automating
I decided to build a ‘bot’ to ‘test’ the REST API, i.e. some code that wanders about the game doing stuff. This will help me flush out ‘unexpected’ game conditions, and I can randomise various inputs on the API to see if it works.
I’m doing this because:
- the REST API Walkthrough demonstrates that the game can be completed through the REST API
- I want to ‘test’ other conditions through the REST API
At the moment I have created a very ‘stupid’ bot:
- it is unaware of the game it is playing, so if there are puzzles it won’t solve them, unless by accident
At the moment my bots are a collection of ‘strategies’:
```java
myFirstBot.addActionStrategy(new WalkerStrategy().canOpenDoors(false));
myFirstBot.addActionStrategy(new AllDoorCloserStrategy().setWaitingStrategy(
        new RandomWaitStrategy().waitTimeBetween(500, 2000)));
myFirstBot.addActionStrategy(new RandomDoorCloserStrategy());
myFirstBot.addActionStrategy(new RandomDoorOpenerStrategy());
myFirstBot.addActionStrategy(new RandomTakerStrategy());
myFirstBot.addActionStrategy(new RandomExaminerStrategy());
myFirstBot.addActionStrategy(new RandomUseStrategy());
myFirstBot.addWaitingStrategy(new RandomWaitStrategy().waitTimeBetween(0, 100));
```
You can probably guess what the different strategies do.
I plan to add game-specific strategies so that the bots can solve problems and not get ‘stuck’ in one room - which currently happens on one of the maps.
This allows me to simulate a user that doesn’t know what they are doing and wanders around pulling things and taking stuff in different order.
Bots are a form of model-based testing. The collection of strategies is the ‘model’ that the bot uses to interact with the application, and the bot implements a traversal strategy (currently: randomly choose a strategy and execute it).
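The strategy-collection idea can be sketched in a few lines. The `addActionStrategy` fluent style echoes the snippet above; the `ActionStrategy` interface and the random traversal implementation are my own assumptions about how such a bot might be wired.

```java
// Illustrative sketch of the bot-as-strategies idea: the strategy collection
// is the 'model', the traversal is 'pick one at random and execute it'.
// The fluent addActionStrategy style echoes the post; the internals are assumptions.
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

interface ActionStrategy {
    String act();   // returns a description of what the bot did
}

class Bot {
    private final List<ActionStrategy> strategies = new ArrayList<>();
    private final Random random = new Random();

    Bot addActionStrategy(ActionStrategy strategy) {
        strategies.add(strategy);
        return this;   // fluent, like myFirstBot.addActionStrategy(...)
    }

    String takeTurn() {
        // traversal: randomly choose a strategy and execute it
        ActionStrategy chosen = strategies.get(random.nextInt(strategies.size()));
        return chosen.act();
    }
}
```

Because strategies are independent objects, adding a smarter, game-specific behaviour is just another `addActionStrategy` call rather than a change to the bot itself.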
At the moment I don’t really care what the bots do. I’m checking for no server errors and no exceptions in the bot or the server. I also use them for background load when I’m performing exploratory testing through the GUI or the REST API.
Multi-User REST API Testing
I was thinking of using JMeter or Gatling for performance and stress testing.
Instead I decided that I wanted a more flexible strategy which re-used the existing REST API testing work.
I could probably re-use my API abstractions but decided it would be easier and faster just to make sure that my bots were threadsafe and that I could start up multiple bots.
I introduced a ThreadedBot which I can start and stop; it autonomously ‘plays’ the game in its own thread, interacting with the other users in the game.
It is quite annoying to play alongside the bots as they have a habit of closing doors that I have just opened.
But in this way I’ve been able to use the GUI and play the game with 150 bots running in the background using the API every second or so.
I need to introduce a few more strategies and reporting capabilities in the bots but it is a fairly low tech but scalable approach to automated simulation of multiple users.
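A start/stoppable bot of this kind might be structured like the sketch below. The `ThreadedBot` name comes from the text; this minimal thread-and-flag implementation is my assumption, with a counter standing in for the REST calls a real bot would make.

```java
// Illustrative sketch of a start/stoppable ThreadedBot. The name comes from
// the post; this minimal implementation is my assumption.
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

class ThreadedBotSketch {
    private final AtomicBoolean running = new AtomicBoolean(false);
    private final AtomicInteger turnsTaken = new AtomicInteger(0);
    private Thread thread;

    void start() {
        running.set(true);
        thread = new Thread(() -> {
            while (running.get()) {
                turnsTaken.incrementAndGet();   // a real bot would call the REST API here
                try {
                    Thread.sleep(10);           // pause between actions
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        });
        thread.start();
    }

    void stop() {
        running.set(false);
        try {
            thread.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    int turns() { return turnsTaken.get(); }

    public static void main(String[] args) {
        ThreadedBotSketch bot = new ThreadedBotSketch();
        bot.start();
        try { Thread.sleep(100); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        bot.stop();
        if (bot.turns() == 0) throw new AssertionError("bot should have taken turns");
        System.out.println("turns taken: " + bot.turns());
    }
}
```

Scaling to many users is then just constructing and starting more bots; the shared state they act on (here, nothing; in the game, the REST API) is what must be threadsafe.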
Sharp-eyed readers might notice a similarity between the ‘strategy’ approach and the ‘screenplay’ pattern. There are certainly lessons for me to learn from the screenplay pattern for readability, but I’m refactoring my way to better bots through usage.
I released a ‘walkthrough’ of the RestMud public single player game.
I certainly didn’t write 30 pages of Walkthrough - that would be madness given that the game has a tendency (nay, obligation) to change.
Instead I expanded my ‘Test DSL’ to output markdown as it executes, and report some of the output from the game.
```java
dsl.walkthroughStep("\n## Walkthrough\n");
successfullyVisitRoom("1", walkthrough("we start in room 1", "look", ""));
successfully(walkthrough("I always examine signs on walls", "examine", "ahint"));
dsl.walkthroughStep("\n## Room 2 is dark\n");
successfullyVisitRoom("2", walkthrough("north leads into room 2", "go", "n"));
```
This allows me to have an executable which:
- checks that a user can complete the game
- outputs a CSV of commands to replay via the REST API
- generates a markdown file which I can process through pandoc to create a PDF
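A markdown-emitting DSL of this shape might look like the sketch below. `walkthroughStep` mirrors the snippet above; the accumulation into a string, and the way each command is reported, are my assumptions.

```java
// Illustrative sketch of a Test DSL that emits markdown as it executes.
// walkthroughStep mirrors the post; the rest is my assumption.
class MarkdownDslSketch {
    private final StringBuilder markdown = new StringBuilder();

    // free-form markdown, e.g. section headings between groups of steps
    void walkthroughStep(String text) {
        markdown.append(text).append("\n");
    }

    // each executed command is reported into the walkthrough document
    boolean walkthrough(String comment, String verb, String noun) {
        markdown.append("- ").append(comment).append(" (`").append(verb)
                .append(" ").append(noun).append("`)\n");
        return true;   // a real DSL would execute the command against the game
    }

    String document() { return markdown.toString(); }
}
```

The same test run then produces three artefacts at once: pass/fail results, the replay CSV, and a human-readable walkthrough document.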
I experimented with outputting a GraphViz diagram at the same time but the map layout was not good enough so I ended up creating them manually in draw.io for my own purposes to support my exploratory testing and thinking through the game mechanics.
I haven’t mentioned exploratory testing yet, but I explore this system all the time.
When I write the code I use TDD and explore scenarios.
I write games which are designed to explore the game DSL and create different use cases for the game.
I use Postman to interact with the REST API, and I use the game through the GUI in Chrome. (The game HTML is such that I view cross browser rendering and interaction as low risk.)
I still have a lot to do.
- Dependency injection of a REST API abstraction to re-use the walkthrough @Test instead of the CSV
- Increasingly diverse and clever bots
- I chose Jsoup because I can also re-use it for headless browser interaction
- I will add WebDriver into the mix as a GUI abstraction so I can switch between JSoup and Browsers
- Technical REST API testing - headers, formatting, malformed requests, etc.
- Bots running from multiple machines
- Bot testing against a cloud based deploy of the game rather than my local machine
- more of all of the above - because while I’m doing well on code coverage metrics I know that my coverage of usage models is low
- Testing the Admin interface
But of course, I do not need to do all of this prior to my workshops.
Hope this gives you some ideas that you could use in your own testing and automating. And if you have any questions or comments then feel free to let me know.