16 minute read - Exploratory Testing

A retrospective critique of an exploratory testing session

Nov 29, 2016

These are the notes for a critique of this Exploratory Testing Session on YouTube

You can watch the critique below on YouTube.

Introduction

I picked Google Search because I thought:

  • obvious software
  • simple to understand
  • at a high level, just an input and a button
  • I didn’t expect bugs to distract, so I could focus on thought process and execution

I was wrong.

Note: Time stamp links in the header are links to the YouTube video at the point where the retrospective notes discuss that section.

0:21 Reflection is an important part of learning https://www.youtube.com/watch?v=b3izXqERlqo?t=21s

I want to do more “Let’s Explore” videos because this is a way for me to reflect on how I test.

And reflection is an important part of the exploration process and of our own learning process, because it allows us to build a model of our approach, and we can step back from that model to look for gaps, weaknesses and areas to improve.

0:29 How do you know when you are done? https://www.youtube.com/watch?v=b3izXqERlqo?t=29s

When conducting exploratory testing, one of the questions that comes up a lot is “How do you know when you’re done?” One way is to set a time limit. It doesn’t mean you’re ‘done’ when you hit the time, but you’ll have a much better idea of: “do you want to continue?”, “what do you do next?”, “how much more time do you want?”, “are you still finding the process valuable?”, “what else is there to do?”

And part of the process here was that I’m thinking things through as I test - which is hard.

I often use the Timer from Google because I don’t have to install anything and I know it is there all the time.

0:54 Timer https://www.youtube.com/watch?v=b3izXqERlqo?t=54s

And 15 minutes was pretty much just to control the length of the YouTube video. It’s a bit tight for an exploratory testing session.

1:11 Session note formats https://www.youtube.com/watch?v=b3izXqERlqo?t=1m11s

And you can see I had already started my notes before the session, to write down my aims - to frame my intent.

Intent isn’t just there to ‘control’ my testing, it is so that when I deviate, and I will deviate, I recognise it as a conscious choice and I know what my intent in deviating is, and I know what I’m coming back to. Intent is the top level node in my decision tree, I’m still allowed to branch off.

I normally write down the date and time, but you can see the date in the Evernote title ‘20161110’ - I always write my dates like this now YYYYMMDD because they are easy to sort and Evernote keeps versions and tracks the time in the Note info anyway.

Time stamps are very important when note taking - but I haven’t used any in this video because I’m recording it.

1:19 Google.com redirects https://www.youtube.com/watch?v=b3izXqERlqo?t=1m19s

One of the first things we notice is that ‘google.com’ redirects. I assume it is based on an IP list, but I don’t know. I’m not testing that but I make a note of it as an interesting ‘feature’ that I might want to test in the future. And I’m also expanding my model of what Google Does.
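As a rough illustration, a sketch like this (in Python, using the third-party requests library; the exact URL and the assumption that the redirect shows up as a plain HTTP 3xx response are mine, not something verified in the session) is one way to observe that redirect outside the browser:

```python
# A minimal sketch, assuming the redirect is visible as a plain HTTP 3xx
# response (it may also depend on IP address, cookies or JavaScript).
import requests

response = requests.get("http://google.com", allow_redirects=False)
print(response.status_code)               # e.g. 301
print(response.headers.get("Location"))   # where google.com sends us
```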

We can also see some ‘stuff’ in the URL that Google has added - what does it do? is it important? I don’t know. Does that contribute to the issue we find later? I don’t know. But it is part of the ‘unknown’ set of things in my model and I don’t want to lose those, or discount those, or ignore those, in case they are important.

I make a brief note.

I ignore spelling because I can fix that later. And I tend to make a lot of notes as I test.

I also write my notes using Markdown so I can process them and format them later if I need to.

2:08 Testing Concepts https://www.youtube.com/watch?v=b3izXqERlqo?t=2m08s

One of the concepts I have in my high level testing model is the notion of Observation and Interrogation.

  • Observation - having the ability to spot things as they happen
  • Interrogation - drilling into something deliberately to understand it

And here I’ve opened the dev tools to allow me to observe the DOM, and I have the ability to observe it flash and change as the JavaScript manipulates the DOM and changes it as I type and use Google.

I’m also Interrogating the input field to see how it is constructed in the DOM and I can see the type of Element it is and the attributes.

  • Points of Note:
    • Google changes rendering as I type but not much change in the DOM when that happens. I think that is interesting and I’d really like to explore that in more detail to understand it.
    • We have predictive text and a proposed-search mechanism, which I assume is using Ajax calls in the background, and which adds more complexity to this ‘simple’ search process. None of that is very visible in the DOM view but it is in the rendering view, so I have to try and Observe both parts of the screen.
    • the input has a lot of fields, some are pretty simple, but some I don’t know and I do need to learn those and fill in gaps in my knowledge
    • I also find it interesting that the input field has a class and a style attribute so there might be a lot more dynamic interaction to test for the rendering process and that might require more cross browser testing than I’d expect from first glance.

I think it is important to note down areas for future research.
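For readers who want to try the ‘Interrogation’ part outside the browser, here is a minimal sketch in Python (using the third-party requests and beautifulsoup4 libraries). It assumes the interesting attributes are present in the initial HTML rather than added later by JavaScript, which may well not hold for Google:

```python
# A minimal sketch: fetch the page and list the attributes of any input or
# textarea elements, to compare against what dev tools shows in the live DOM.
import requests
from bs4 import BeautifulSoup

html = requests.get("https://www.google.com").text
soup = BeautifulSoup(html, "html.parser")

for field in soup.find_all(["input", "textarea"]):
    print(field.attrs)   # e.g. name, maxlength, spellcheck, class, style
```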

3:05 Copy and paste information into your notes https://www.youtube.com/watch?v=b3izXqERlqo?t=3m05s

I often copy and paste parts of the system into my notes:

  • HTML,
  • Error messages,
  • Information Messages,
  • HTTP requests,
  • JSON calls.

All of that I find useful for reviewing later and copy and paste really doesn’t take a lot of time as I’m testing.

3:25 Deliberately go off track https://www.youtube.com/watch?v=b3izXqERlqo?t=3m25s

I allow myself to get distracted by the ‘spellcheck’ attribute - mainly for my education, but I’m also trying to demonstrate that we are allowed to deviate, and that the learning process isn’t just ‘what will I test’ but is to help us understand ‘what is this thing I am testing’.

And so much of what we need to know is just a web search away.

4:17 Spelling Shocks https://www.youtube.com/watch?v=b3izXqERlqo?t=4m17s

Normally I don’t have any problem making spelling mistakes, but here I can’t seem to type correctly, or mistype consistently, or even remember the thing that I’m supposed to be typing incorrectly.

The important point about this deviation is that I tried to do it quickly. Saw that it wasn’t adding value. Added it to my research list.

5:15 Presupposition Analysis https://www.youtube.com/watch?v=b3izXqERlqo?t=5m15s

The process I go through when I look at the system I think of as ‘pre-supposition analysis’. It’s a technique I learned from psychotherapy, where we use the surface information - what someone says to us, what the system says to us. The code at this point is telling me “I’m an input field. I allow you to type 2048 characters. I will submit this to the server in a form with the form param named ‘q’”. It’s telling me all this stuff.

And that presupposes that it isn’t lying to me. In order for this information I’m seeing to be true, the server must be able to handle inputs of 2048 characters, and anything I type into this field is valid. And the more HTML I know, the more presuppositions I can see there: auto completion, combobox functionality, ARIA relationships between the text in the input field and a relevant item in the combobox.

So at this point I start exploring the presuppositions. Because I may as well start with a confirmation process. Because if the presuppositions aren’t true then I can’t really trust what I’m seeing at a surface level and I’ll need to use other techniques, fairly quickly to build a model of the system.

The first presupposition I explore is the field name, and I check that the search is only honoured when the name is ‘q’. I don’t really know if the message that was sent to the server honoured that, so I’m still working at a high surface level and trusting the system until it gives me a reason not to trust it.
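A hedged sketch of what exploring that presupposition could look like programmatically - the /search URL, the ‘q’ parameter and the idea that a 200 status means ‘handled’ are all assumptions here, and Google may treat scripted requests differently from browser ones:

```python
# A minimal sketch: does the server cope with the 2048 characters that the
# maxlength attribute implies it should?
import requests

query = "a" * 2048   # 2048 comes from the maxlength seen in the DOM
response = requests.get("https://www.google.com/search", params={"q": query})
print(response.status_code, len(query))
```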

6:00 Counterstrings https://www.youtube.com/watch?v=b3izXqERlqo?t=6m00s

The maxlength is the obvious immediate exploration point. I’m not drilling into this because I have a heuristic that says “Big things” or anything like that. I’m being guided by the system and the system says I handle 2048, so I say let’s check that presupposition.

I generate the string to type in as a counterstring, which is a string of a given length where the string itself contains markers telling you the position you have reached in the string. I first learned this from James Bach. And this tool I’m using is one that I wrote for me, because all Jedi create their own light sabers.
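Here is a minimal sketch of the counterstring idea in Python - this is not the tool used in the video, just an illustration: every marker character sits at the position given by the digits immediately before it, so a truncated paste still tells you how many characters survived.

```python
def counterstring(length: int, marker: str = "*") -> str:
    # Build from the end backwards so the final marker lands exactly at 'length'.
    result = ""
    position = length
    while position > 0:
        token = f"{position}{marker}"
        if len(token) > position:
            # Not enough room left at the front; keep only the tail of the token.
            token = token[-position:]
        result = token + result
        position -= len(token)
    return result

print(counterstring(30))         # *3*5*7*9*12*15*18*21*24*27*30*
print(len(counterstring(2048)))  # 2048
```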

6:46 Reactions https://www.youtube.com/watch?v=b3izXqERlqo?t=6m46s

Part of the reason I made sure to capture the web cam in the video was to capture facial expressions as I tested, because it’s important to realise that our emotional reaction to the testing and the system gives us clues to our models.

When we are surprised, when we are confused, then our model doesn’t match the system in some way.

At this point I’ve found a problem.

And I have the issue that I’m trying to think it through, and talk about it at the same time, so I start to miss things from this point.

7:40 Double Checks https://www.youtube.com/watch?v=b3izXqERlqo?t=7m40s

I make a note about it, and double check my inputs. Normally I would timestamp my observation but I haven’t done that here.

The presupposition approach will lead you to many of the heuristics that you find online. So I kind of prefer this, as it works better for me. I model presuppositions, then explore them with questions. And much of my testing process I can frame as: I have this model, I’ll ask a question of the system to see if my model matches the system or not.

8:29 Binary Chops and Off By One https://www.youtube.com/watch?v=b3izXqERlqo?t=8m29s

And I possibly should have binary chopped it, but I assumed that perhaps it was an ‘off by one’ type of error and that by putting in a slightly smaller string it would go through the system. Because at this point I don’t know what I’m seeing.

I don’t know if it is:

  • length related
  • content of string related

I’m assuming it is length related but I don’t know if it is:

  • near to 2048 or
  • wildly out, in which case a binary chop to 1024 would be a better approach

But I was brought up in the good old days of “off by one” errors and “Boundary Value Analysis” and those habits are hard to shake.

So I tried “2000”

And then I backtrack to a binary chop strategy.
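For anyone who wants to see the binary chop as code, here is a hedged sketch - the URL, the ‘q’ parameter and using a 200 status as the ‘accepted’ signal are all assumptions, and a monotonic length limit is exactly the hypothesis under test:

```python
# A minimal sketch of binary chopping the query length.
import requests

def accepts(length: int) -> bool:
    # Assumption: a 200 response means the request was processed.
    response = requests.get("https://www.google.com/search",
                            params={"q": "a" * length})
    return response.status_code == 200

def largest_accepted(low: int, high: int) -> int:
    # Assumes 'low' is known to be accepted and behaviour is monotonic in length.
    while low < high:
        mid = (low + high + 1) // 2
        if accepts(mid):
            low = mid
        else:
            high = mid - 1
    return low

print(largest_accepted(1, 2048))
```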

8:54 Miss an important piece of information https://www.youtube.com/watch?v=b3izXqERlqo?t=8m54s

And this is interesting for me.

This is where reflection is very important. Or pairing is very important.

Because I completely miss the information the system just gave me.

I’m so focused on exploring the length and looking for the ‘error’ that I don’t observe that the request has been accepted and processed by the system.

Even though the system rejected my input when I typed it, when I go back using the browser back button, at one point in there the request is issued in an acceptable form.

And this is a key addition to my model of Google Search but I miss it.

At this point in my reflection I think Google makes requests in different ways, with different values and they are processed differently. And I think at this point 1024 is accepted with asterisks when it is issued as a GET request in the URL. I haven’t tested that supposition. But I suspect the search might issue a POST request and it might be treated differently.

And I think Google might issue requests multiple times, either for predictive text or some other reason, but at this point in my testing the model that I’m working with is incomplete, and my observation of the system in front of me is incomplete, and I’m pretty much just going to pot for a while. It takes me a little while to realise it and I’m essentially exploring blind.
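That supposition about GET versus POST is untested in the session, but as a sketch it could be probed along these lines - the URL, parameter name and methods are assumptions, and the real form may add other parameters that change the outcome:

```python
# A minimal sketch: does the same long query behave differently as a GET
# (like the URL seen after pressing 'back') and as a POST (like a form submit)?
import requests

query = "a" * 1024
as_get = requests.get("https://www.google.com/search", params={"q": query})
as_post = requests.post("https://www.google.com/search", data={"q": query})

print("GET :", as_get.status_code, len(as_get.text))
print("POST:", as_post.status_code, len(as_post.text))
```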

9:15 Realisation https://www.youtube.com/watch?v=b3izXqERlqo?t=9m15s

At this point I seem to realise that something has happened on the screen and so either ‘the request’ or ‘part of the request’ has been accepted.

9:34 Stop to reflect https://www.youtube.com/watch?v=b3izXqERlqo?t=9m34s

And I stop.

I have to step back and reflect because what I thought was happening isn’t happening.

I started exploring length, but 1024 failed and then passed, so perhaps it isn’t ‘just’ length.

I’m confused.

9:49 Ruling out https://www.youtube.com/watch?v=b3izXqERlqo?t=9m49s

I confirm to myself that it seems to handle 1024 as a GET, so I now have a value that ‘might’ still be length related but probably isn’t. But the front end system has a way to issue a 1024-character request in such a way that the back end system can’t process it.

I try and rule out ‘ ’ and ‘*’.
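A sketch of that ‘ruling out’ step, keeping the length fixed and varying only the content, so any change in behaviour points at content rather than length (again, the URL, the parameter and the characters to try are taken from the session or assumed):

```python
# A minimal sketch: same length, different filler characters.
import requests

length = 1024
for filler in ["a", " ", "*"]:
    response = requests.get("https://www.google.com/search",
                            params={"q": filler * length})
    print(repr(filler), response.status_code)
```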

10:20 Dogged Pursuit https://www.youtube.com/watch?v=b3izXqERlqo?t=10m20s

And then for some reason I’m still hung up on length.

Watching this back I don’t think length is the problem here, and if I was to investigate it now, I wouldn’t be exploring length or binary chop.

But when we are in the zone of testing we sometimes miss things.

10:40 Expanding Model https://www.youtube.com/watch?v=b3izXqERlqo?t=10m40s

I do start expanding my model of Google Search from observation of the front end, because I can observe that Google has done something before I asked it to ‘do’ something, and that Ajax request issuing might be part of the problem. But I clearly don’t have an adequate model of how Google Search actually works.

10:57 Take Stock https://www.youtube.com/watch?v=b3izXqERlqo?t=10m57s

At this point I have to take stock of where I am.

I suspect that if I hadn’t been recording this then I might have eased up on the time pressure and stepped back to clarify my model of Google and build a plan of attack.

But I make a different decision here. I decide to expand my observation abilities by bringing up a proxy. And that is an important step in allowing me to expand my model.

I think if I had stepped back first, then my time in the proxy would have been more valuable.

But, in terms of a ‘video’, adding another ‘tool’ into the mix might add more value for the viewer, and I know I was conscious of ‘making a video’ as well as ‘testing’.

And this is an interesting point, because I wrote down my intent for the test session as ‘learn and test’, but I also had the intents of ‘capture video’, ‘make the video entertaining’ and ‘explain as I test’. The clearer we are about our intent, the more focus we will have on the parts of the model we use and the parts of the system we observe.

Any areas of unclarity can influence us in unexpected and, in this case, detrimental ways.

11:14 Bad start to Proxy https://www.youtube.com/watch?v=b3izXqERlqo?t=11m14s

Again we see that issuing the GET request is different from the form request, but I haven’t realised that, so my observation in Fiddler is off to a misinformed start.

11:23 Confused https://www.youtube.com/watch?v=b3izXqERlqo?t=11m23s

But I’m confused at this point, watching the video, because the request I’m seeing in Fiddler there, doesn’t match the request that I’m seeing in the browser window.

At this point I really want Alan to step back and build a better model of the interaction between Google frontend and the messages it sends.

But he doesn’t. He seems so focused on making the next 4 minutes a valuable video watching experience that this is really a ‘tool’ demo rather than an effective testing demo.

The one thing he does get right is using ‘compare’ in Fiddler to try and compare the requests sent through.

But even this he gets wrong when he takes the next step of issuing a request through the composer.
I find this whole section hard to watch.

It’s true that we haven’t yet ruled out ‘length’ as a problem but Alan is still hung up on it.

13:21 Composer usage https://www.youtube.com/watch?v=b3izXqERlqo?t=13m21s

I think using Composer to send requests and amend them is useful for exploring any hypothesis about the ‘length’ of the request.

And using Composer to resend the requests that fail is a useful thing to try.

13:45 Messed up https://www.youtube.com/watch?v=b3izXqERlqo?t=13m34s

But at this point Alan is really messed up and hasn’t realised it.

For some reason, the request Alan wants to resend is not the request that was copied into the composer - I don’t know how that happened. But it did.

And then Alan tries to put that request in, but copies the RAW HTTP request into the GUI parsed view rather than the RAW view.

I’m going to be kind to Alan and assume it was time pressure from the video recording but this section is just painful and doesn’t add value to his test session or his model of the system at all.

15:00 Fiddler Highlights Error https://www.youtube.com/watch?v=b3izXqERlqo?t=15m00s

I think this is interesting. I’ve never noticed the composer in Fiddler highlighting a request in error - that’s useful to know the feature exists, but it would also be useful if Fiddler provided some clue as to what was wrong with the request.

And clearly I’m trying to blame the tool for my inability to use it.

15:55 Saved by the bell https://www.youtube.com/watch?v=b3izXqERlqo?t=15m55s

And saved by the bell. The pain is over.

In retrospect, it wasn’t ‘that much time to be confused’ - only about 10 minutes. And I hope in a ‘real testing scenario’ I wouldn’t have been suckered into that limited set of thinking loops. But I might have been.

So a lesson is to ensure that we force ourselves to step back from what we are doing - often.
And I noticed that I didn’t make as many notes as I normally do. I would have expected me to write down notes about:

  • what I was confused about
    • why does a GET request work but a form submission doesn’t?
    • does ‘back’ do a GET request which I didn’t see?
    • why does Google ‘act’ on my request in the input before I hit search?
      • and what is that request?
  • decisions
    • going to bring up Fiddler to increase observation

These types of observations, when I make a note of them, help me step back and reflect as I test, and I skipped those steps in this recording to make it more interesting. So next time I might not worry so much about making the ‘raw’ video ‘fast and interesting’, because it interferes with my testing process; the raw video might have to stand as a ‘record’ and the commentary might be the only video that I promote.

16:36 Keypoints https://www.youtube.com/watch?v=b3izXqERlqo?t=16m36s

One of the keypoints is that we used presupposition analysis to guide the testing and exploration. And we used ‘presupposition analysis’ as a ‘confirmation guide’ to build a model of the system.

I probably will investigate this ‘defect’ and understand how Google search works - I might record that, I might not. But it’s probably a useful exercise for the reader/viewer to try this and use whatever tools you need to, to get a handle on it.

What else?

And I will do more “Let’s Explore” videos so subscribe to the channel and use the comments to let me know what else I should explore.

I’ve had a few comments on the video and so I’ll revisit this particular session to address those comments in a future post and video.