Subscribe to the full blog feed using RSS
TLDR; All SauceCon 2020 talk replays are online. My talk “Automating Tactically vs Strategically” has a preview and has been added to Evil Tester Talks.
TLDR; April content contains links to free books and new podcasts.
TLDR; A collection of tips for presenting online, shared on an episode of The Evil Tester Show podcast: get a decent microphone.
500+ videos of experience, crammed into one tiny blog post.
TLDR; I have a collection of on-demand conference talks and webinars available for only $10.
With online conferences becoming the en vogue delivery mechanism, I realised that I already have an online and on-demand conference.
TLDR; Coverage requires some sort of model. We can organise code to support review against a mental model, and some models are executable. Other models we compare against the output of execution.
I was asked a series of questions: How can we document what an automated test does and covers without adding a lot of overhead? How do we know what is not covered by automation?
TLDR; March content contains links to free books and resources, and a new course on LinkedIn.
TLDR; Observation in real time. Interrogation after the act. Bringing Interrogation closer to Observation can help detect issues during a process. The depth of Observation and Interrogation changes depending on our knowledge of the system and technology. And we may not be done, if our observation was limited.
When I test I make a distinction between Observation and Interrogation. I’m going to explain what that means, and show you a hands-on example of how that distinction helps me improve my testing and the scope of my testing.
TLDR; Learning effective synchronisation strategies makes your automated execution more reliable.
One of the most important skills I have developed for automating is learning how to synchronise. We often spend time working on this during automation consultancy engagements because it is a fast way to improve trust in the execution.
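To illustrate one common synchronisation strategy, here is a minimal poll-and-retry wait sketched in Python. The names (`wait_until`, `condition`) and the timeout values are my own illustration, not taken from the post or any particular tool; most automation libraries provide an equivalent built-in.

```python
import time

def wait_until(condition, timeout=10.0, poll_interval=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` elapses.

    Returns the truthy value on success; raises TimeoutError otherwise.
    This is the idea behind 'explicit waits': synchronise on an observable
    state change rather than sleeping for a fixed, hoped-for duration.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"condition not met within {timeout} seconds")
        time.sleep(poll_interval)

# Example: wait for a flag that some other part of the system would set.
state = {"ready": False}
state["ready"] = True  # simulate the condition becoming true
wait_until(lambda: state["ready"], timeout=2.0, poll_interval=0.1)
```

Synchronising on a condition, rather than a hard-coded sleep, is usually what makes the difference between flaky and reliable automated execution.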
TLDR; Testing uses models to target the system, and our information is constrained by the models we use and build. We can introduce variation to increase the possibility of finding information related to bugs. We have to take care not to develop false confidence.