
Software Testing Book Reviews

Feb 3, 2000

I used to have a lot of book reviews on the site. I decided to delete most of them because books are such a subjective topic. Every book is perfect for someone at the right time. I don’t want to put anyone off finding the right book at the right time. What follows are extracts from deleted book reviews, for any segments that have longer-term value.

Not all the books I have reviewed in the past are listed. Many of my book reviews didn’t have any useful information; those I have deleted.

Software Testing Techniques by Boris Beizer

My favourite testing book. Study this to become a better tester.

I remember reading this book early in my University career and being particularly struck by the discrepancy between what I was being taught about testing on the course and what was being presented to me by this book. This book made testing seem like a technical discipline with a lot of very strong techniques.

The introduction alone is worth the price of admission. Over the years, many of the hard lessons that I’ve learned and tried to represent as simple axioms are here, and they are presented simply. The evolution of the tester’s mind is here in the 5 phases of a Tester’s Mental Life. So much that I had to learn the hard way is presented here.

The majority of Software Testing Techniques takes the reader through the various types of models that the tester can use to describe the system under test and then the various techniques that the tester can apply to those models. The models increase in complexity and the reader carries through the skills and understanding from one model to the next.

This is a book that I studied. I read it, and read it again. I spent a year working through the book and doing every exercise, and thinking deeply about each presented technique.

I don’t know of a single tool which embodies the techniques presented in this book and yet these are the techniques which testers use to derive tests and ensure that they understand and can measure the coverage of their testing.

Black Box Testing by Boris Beizer

This is a case study of using structural techniques to model and test a set of requirements. The drawback for me was that the requirements relate to the American tax system, so I lack the context. I assume that the title is a bow to popular word usage, as Boris Beizer is on record as preferring the terms behavioral and structural testing. This is a pragmatic book that uses well-established theory to demonstrate the construction of tests that provide a measurable level of coverage. This is not woolly testing.

The book provides an overview of: Graphs and relations, Control-flow testing, Loop testing, Data-flow testing, Transaction-flow testing, Domain testing, Syntax testing, Finite-state testing, Tools and automation.

I have written a small Perl script which may help if you test in this way.

A good follow-on from the theoretical Software Testing Techniques (which remains my favourite software testing book).

The Design of Everyday Things by Donald A. Norman

As software development professionals we have to grapple with complexity every day: the complexity of the business, the requirements, the documentation that is produced, the set-up of test environments, the planning of test dependencies, and the test data. It is worth bearing in mind that “the principles of good design can make complexity manageable”. The principles of good design do not only apply to the final product; they apply at every stage of the product’s development, including the design of our testing artefacts and automated execution.

The principles of good design are summarised in the book as visibility, conceptual model, mappings, and feedback.

  • visibility, so that the user can tell the current state;
  • a conceptual model which reflects the user’s model;
  • mappings which relate what is required to how it is done;
  • feedback, so that the user knows what has occurred.

All of these are principles that we should keep in mind when reviewing software designs, but also for process improvement in the software development process itself.

  • Does this system design tell me what I need to know?
  • Do my test progress reports allow people to know what is done and still to be done?
  • Do my defect reports allow others to identify the problem and rate its seriousness?

In Chapter 2 there is a seven-stage model of action:

  • Form the goal
  • Form the intent
  • Specify an action
  • Execute the action
  • Perceive the state of the world
  • Interpret the state of the world
  • Evaluate the outcome

This maps very well on to the T.O.T.E. model (Miller is the stated reference). Presented in this form, it also maps very well on to the process of testing: from the initial requirements we identify test conditions, which we use to construct tests. Tests involve sequences of steps which we execute, determining the success of each step. This requires examining the system state before evaluating whether the test passed or failed.
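The execute/perceive/interpret/evaluate portion of the model can be sketched as a tiny test-step loop. This is my own illustration rather than anything from the book, and all names (the Counter system, run_step) are hypothetical:

```python
class Counter:
    """A toy system under test."""
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1

def run_step(system, action, expected):
    """One test step: execute, perceive, interpret, evaluate."""
    action(system)                 # specify and execute the action
    observed = system.value        # perceive the state of the world
    return observed == expected    # interpret it and evaluate the outcome

# Goal: check that incrementing twice leaves the counter at 2
counter = Counter()
run_step(counter, Counter.increment, 1)
passed = run_step(counter, Counter.increment, 2)
```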

A common theme in the book is, unsurprisingly, that humans make mistakes.

It is always worth remembering, when reporting defects, that the cause-effect link in a defect situation is at first only perceived by the tester; it becomes a real cause-effect link to others when they investigate it. The book cautions us against drawing erroneous conclusions from perceived cause-effect links and then engaging in inflammatory communication.

Running throughout the text is the theme of models.

People have models in their heads about how they expect something to work and that is how they approach the doing of a task, by consulting their model on how to achieve what they desire. This is a common theme that I have used in relation to testing, that all our testing is done through models either explicitly or in our heads.

The book makes the point that we also use real world models to help guide us and provide us with stimulus but also explores the relation of models to memory. And one reason at least for documenting the models that we as testers use when testing, aside from coverage measurement and communication, is so that we remember them.

Software Testing Fundamentals by Marnie L. Hutcheson

The section in Chapter 2 relating to “picking the correct quality control tools for your environment” provides encouragement and advice on:

  1. automate your record keeping
  2. improve your documentation techniques
  3. use pictures to describe systems and processes
  4. choose appropriate methods and metrics that help you and/or the client

Chapter 3 starts slowly but explains some useful rules:

  1. state the methods you will follow, and why
  2. state assumptions

then goes on to examine some methods of organising test teams.

Chapter 4 discusses the “Most Important Tests (MITs) Method”.

MITs, as I understood Marnie’s explanation of it:

  1. Build a test ‘inventory’ of all the stuff you know: assumptions, features, requirements, specs, etc.
  2. Expand the inventory into ‘tests’.
  3. Prioritise the inventory and related tests
  4. Estimate effort
  5. Cost the effort and negotiate the budget, as this dictates the scope of the inventory you can cover
  6. Define the scope - an ongoing activity
  7. Measure progress
  8. Reflect on what happened to allow you to improve

I’ve paraphrased it above as Marnie does not use those exact words and the italic words are my summary keywords of the approach.

The Craft of Software Testing by Brian Marick

I had a quick look around Brian’s website to see if the important elements from the book had made it there, and I found a few papers that mirror the book (all available at exampler.com/testing-com/writings.html).

Also read the “Testing For Programmers” course notes on the web page - visit the bottom of the ‘writings’ page for the links.

You can download Appendix B, the Generic Test Requirements Catalog, from Brian’s web site.

A few hints and tips that I pulled out of the text when reading the book:

  • Gain an awareness of the types of problems that your test approach will not find
  • Review path coverage after the test analysis, rather than driving test analysis from path coverage
  • Create tests by predicting faults - using general rules abstracted from common errors
  • Errors with test requirements: overly general, too small (i.e. missing requirements)
  • Use missing code coverage as a pointer to missing clues

JUnit Recipes by J. B. Rainsberger and Scott Stirling

Guidelines include:

  • “don’t test it if it is too simple to break”
  • “don’t test the platform”
  • “try the different techniques out and see which you prefer”
  • “test anything in which you do not already have confidence”

Many tidbits based on experience, of which I have only selected 4 that stood out for me on initial reading:

  • testing floating-point values with tolerance levels
  • abstract test cases - http://c2.com/cgi/wiki?AbstractTestCases
  • have JUnit automatically build test suites “return new TestSuite(MyNewTest.class);” (and other ways of automatically building suites)
  • suite or higher level setups rather than just at testCase level
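The floating-point tidbit guards against binary rounding error; JUnit’s assertEquals takes a delta argument for exactly this purpose. A minimal sketch of the idea, in Python rather than JUnit, with illustrative names:

```python
def assert_close(expected, actual, tolerance=1e-9):
    """Fail unless actual is within tolerance of expected,
    mirroring JUnit's assertEquals(expected, actual, delta)."""
    if abs(expected - actual) > tolerance:
        raise AssertionError(f"{actual} is not within {tolerance} of {expected}")

# 0.1 + 0.2 is not exactly 0.3 in binary floating point,
# so an exact comparison fails where a tolerance comparison passes
assert (0.1 + 0.2) != 0.3
assert_close(0.3, 0.1 + 0.2)
```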

Testing Computer Software by Kaner, Falk, Nguyen

“…find and flag problems in a product, in the service of improving its quality. your reports of unreliability in the human-computer system are appropriate and important…You are one of few who will examine the full product in detail before it is shipped.”

‘Testing Computer Software’ contains a lot of very direct opinions from the authors, which you will see presented as authoritative is’ms (this is X). These may distance the reader if the reader currently holds a very different mindset - which I think happened to me on first reading. So if it happens to you, don’t switch off, don’t skim. Analyse your response. Read this book in a better way than I did.

“Always write down what you do and what happens when you run exploratory tests.”

Chapter 1 starts with an overview of ‘exploratory’ testing and a possible strategy that an experienced tester might adopt. A ‘show’ don’t ‘tell’ approach to explaining software testing.

1st cycle of testing

  1. Start with an obvious and simple test
  2. Make some notes about what else needs testing
  3. Check the valid cases and see what happens
  4. Do some testing “on the fly”
  5. Summarize what you know about the program and its problems

2nd cycle of testing

  1. Review responses to problem reports and see what needs doing and what doesn’t
  2. Review comments on problems that won’t be fixed. They may suggest further tests.
  3. Use your notes from last time, add your new notes to them, and start testing

“the best tester is the one who gets the most bugs fixed.” I now read that as “the best tester finds the bugs that matter most”.

Chapter 2 sets out the various ground rule axioms so the reader doesn’t have to learn them the hard way e.g. “you can’t test a program completely” “you can’t test it works correctly” etc.

Problem tracking (chapter 6) pulls no punches in its description of the ‘real’ world:

  • “Don’t expect any programmer or project manager to report any bugs”
  • “Plan to spend days arguing whether reports point to true bugs or just to design errors.”

Fortunately the chapter contains a lot of advice as well:

  • Each problem report results from an exercise in judgement where you reached the conclusion that a “change is worth considering”
  • Hints on dealing with ‘similar’ or ‘duplicate’ reports (and how to tell them apart)

From Chapter 12 I learned the very important lesson that the test plan can act as a tool as well as a product. That alone was worth the initial time with the book: it clarified a lot of thoughts in my head and helped me approach the particular project I worked on at the time in a different way, incrementally building up my thoughts on the testing and making my concerns and knowledge gaps visible.

Coders at Work by Peter Seibel

Some of the things that came across strongly from the book for me:

  • work at your craft constantly,
  • read code to learn from other people,
  • write readable code – in some cases literate programming
  • keep things simple,
  • build code to ship products
  • build incrementally and functionally to learn quickly
  • do not over-engineer
  • keep learning “If you don’t understand how something works, ask someone who does.”
  • Use your app and have it running all the time.
  • Good programmers program for fun outside of work - not just during work
  • Think in subsets of languages and libraries – learn those subsets thoroughly and understand their limitations – use them based on appropriateness, complexity, applicability, and how well it lets you communicate
  • Ship it – re-factor it
  • Use static analysis
  • Hire for smarts and practical ability rather than random and abstract problem solving
  • Simplify

I also learned a few tips about naming conventions and commenting to allow scanning of code to pick up errors e.g. customerName and customerNames would instead have names like customerName and listOfCustomerNames to allow human reading/scanning to see the difference.
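The naming tip in practice, sketched in Python with hypothetical names:

```python
# Easy to misread when scanning: the only difference is a trailing 's'
customerName = "Ada"
customerNames = ["Ada", "Grace"]

# Harder to misread: the collection announces its shape in the name
listOfCustomerNames = ["Ada", "Grace"]

assert customerName in listOfCustomerNames
```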

The various interviewees have strong opinions about programming languages, but I don’t remember any of them having found their perfect language.

Automating doesn’t get covered very often by the interviewees, and those that do mention it cover it pragmatically – using it to automate the ‘clever’ code to guard against people changing it without understanding it.

Many of these programmers thought of themselves as writers – hence the emphasis on readable code and reading others’ code.

When not programming, these people spend their time learning about and thinking about programming. The 10,000 hours to expertise comes through strongly in these interviews, and they act as mentoring sessions. All of the interviewees have taken apart frameworks and other people’s code to see how it works, and they have all worked closely with other very good people, so they had many opportunities to learn. Even if you think the people you work with do not have the ‘greatness’ you want to learn from, you can pick code from open source projects to read - but you have probably underestimated some of the people you work with.

All these very strong developers talk about simplicity, readability, and writing. They identify subsets of their languages and use those subsets well.

Working Effectively with Legacy Code by Michael C. Feathers

“Legacy code” describes “code without tests”, so you can apply the approaches presented at any point in a project where you discover that the code does not have tests.

“Cover and Modify”; cover code with tests, and then modify it.

The ‘algorithm’ for code change that Michael Feathers presents early in the book has 5 points:

  1. Identify Change Points
  2. Find Test Points
  3. Break Dependencies
  4. Write Tests
  5. Make Changes and Refactor

The ‘seam’ chapter - chapter 4, freely available from InformIT - describes one of the fundamental approaches that Michael uses: the identification of places “where you can alter behaviour in your program without editing in that place”.

And every ‘seam’ “has an enabling point, a place where you can make the decision to use one behaviour or another”

The small chapter describing this should seem fairly natural to testers who have experience of thinking through environments: working out what to split out and mock at different levels, what to replace with alternatives, or how to inject a monitoring mechanism into an existing app. Seeing the same thought processes applied to code helped me understand mocking and TDD better when I first read this chapter.
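One kind of seam is the object seam, where the enabling point is the choice of which object you pass in or construct. A rough Python sketch of the idea (illustrative names, not an example from the book):

```python
class MailSender:
    """Production behaviour: would talk to a real mail server."""
    def send(self, message):
        raise RuntimeError("no mail server in the test environment")

class RecordingMailSender(MailSender):
    """Alters behaviour at the seam without editing the calling code."""
    def __init__(self):
        self.sent = []

    def send(self, message):
        self.sent.append(message)

def notify_users(sender, users):
    # The choice of 'sender' is the enabling point for this seam:
    # the body of notify_users never changes
    for user in users:
        sender.send(f"hello {user}")

fake = RecordingMailSender()
notify_users(fake, ["alice", "bob"])
```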

I learned a name for an approach to testing that I had adopted before but hadn’t identified as a special case: ‘characterisation tests’, tests which represent the behaviour of the system “as is”. The concept has served me well when doing exploratory testing on applications I don’t know: first ‘learn’ the application through characterisation testing, then perform specific ‘question’-oriented testing.

I generally treat legacy regression tests as ‘characterisation tests’ rather than as ‘real’ tests, meaning that they may tell me something about an ‘as is’ state of the application at some point in time, but they probably don’t ‘test’ the system in terms of asking any ‘questions’ about it. This provides me with a sense of doubt that I value.
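A characterisation test simply records what the code does now, whatever that is. A hypothetical sketch (the function and its values are invented for illustration):

```python
def format_price(pence):
    """Imagine this is legacy code whose intent is undocumented."""
    return "GBP %.2f" % (pence / 100)

# Characterisation: run the code, observe the output, then pin that
# observation down as the 'expected' value. The test documents the
# as-is behaviour; it makes no claim that the behaviour is correct.
assert format_price(150) == "GBP 1.50"
assert format_price(0) == "GBP 0.00"
```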

