Effective Software Testing in Modern Software Development
What is modern software development? Pretty much anything we do now.
And testing has to have the flexibility to cope with that.
In this talk I consider: Agile, DevOps, CI, Continuous Delivery, etc. But I ignore buzzwords and lump it all together as Modern Software Development, and explain the thought processes and approaches that allow testing to morph as required to thrive as part of an integrated Software Development System.
Update: 20th June 2018 The official video has now been released on InfoQ.
EFFECTIVE SOFTWARE TESTING FOR MODERN SOFTWARE DEVELOPMENT
Modern Software Development requires a highly customised and tailored process, built around the needs of the team which creates it and the organisation it supports. Despite the uniqueness of our process, we give it standard names like “Agile” or “DevOps”, and we adopt general concepts like “Automation Pyramid”, “ATDD”, “BDD”, “TDD”, “Continuous Integration”, “Continuous Delivery”. We have to interpret all those generic concepts for the specific project we work on.
In this session, we will look at the modern software development process, and how Testing fits into it, to support us: customise the process, adopt new tools, increase the amount of automated execution, mitigate risk, and quickly deliver software which works.
Areas to cover:
– Using Risk to customise development approaches
– How testing fits into modern software development
– How to mitigate risk with tooling and testing
– Using Abstractions Effectively to mitigate risk and increase flexibility
– How to Architect systems to effectively support software development
I put this talk together because people seem to ‘argue’ or ‘discuss’ whether we need Software Testing or Software Testers when we engage in Modern Software Development practices.
I wanted to try to present - in 40 minutes - my models for how I think about the System of Development that we use, how thinking about Testing changes depending on the approach to development, and how it also fundamentally stays the same.
I explain a little about ‘why’ I think Software Testing became a very documentation-heavy and ineffective process under waterfall. To me this demonstrated that the Software Testing process is built in and around the Software Development process. The more we try to separate Testing from other Software Development activities like programming, design, analysis, management, release, etc., the more we add to Testing, and the more confused the nature of testing becomes.
But then we came to think of these additional trappings as Testing:
- Test Strategy Document (independent of Development Strategy)
- Test Plan (independent of Project Plan)
- Pre-Written Test Cases
- Pre-Written Test Scripts
- Requirement Cross Reference Matrices to the above artifacts
None of the above are ‘Testing’; they are a result of the Software Development process in place, the way the team was structured, and the way the development was implemented.
Unfortunately, when we moved to more Modern Software Development processes, we often brought the above trappings with us, when they may no longer have been appropriate. And then we faced calls of “we don’t need testing”, rather than “we don’t need pre-written test cases and scripts”.
We should have built an effective and unique approach to testing into the unique development approach adopted on each project.
In this talk I try to approach the model building of Software Testing from a Software Development perspective.
In the talk I cover:
- perhaps people have never experienced Good Testing
- a Tester is someone who sees opportunities for testing that other people miss
- Modern Software Development is hard and requires constant reflection and customisation to be effective
- Feedback is an important part of that reflection process
- We carried forward legacy concepts, instead of applying the unchanged core principles of Testing to the new approach to Software Development
- We accept ‘concepts’ with names as normal, even when they are detrimental e.g. Technical Debt
- We confuse tools with concepts, with the risk that we think we have implemented the concept when all we have done is implement a tool, e.g. confusing Jira with Communication, or Jenkins with Continuous Integration
- One way to build a contextual test process is to model the risks on a project and decide how to detect and mitigate those risks.
- Observation is important to help us verify particular conditions
- We also test to increase our uncertainty and expand our models
- and more…
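The risk-modelling point above can be made concrete. A minimal sketch, with entirely hypothetical risk entries and field names, of what a contextual, risk-driven test model might look like in code: each project risk is scored, and records how testing will detect it and how the process will mitigate it, so the highest-scoring risks drive the test approach first.

```python
from dataclasses import dataclass


@dataclass
class Risk:
    """One project risk, with how testing detects it and how we mitigate it."""
    description: str
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    impact: int       # 1 (minor) .. 5 (critical)
    detect: str       # how testing will reveal this risk
    mitigate: str     # how process or tooling will reduce it

    @property
    def score(self) -> int:
        # Simple likelihood x impact scoring; real projects may weight differently.
        return self.likelihood * self.impact


# Hypothetical example risks for illustration only.
risks = [
    Risk("Payment API returns malformed responses", 3, 5,
         detect="contract tests against the API in CI",
         mitigate="abstraction layer isolating the API client"),
    Risk("Regression in checkout flow after refactor", 4, 4,
         detect="automated end-to-end checkout journey",
         mitigate="run the journey on every merge"),
]

# Highest-scoring risks shape the test approach first.
for risk in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"[{risk.score}] {risk.description} -> detect: {risk.detect}")
```

The point is not the scoring formula, which is deliberately crude here, but that the test process falls out of the risk model rather than out of a templated set of documents.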
If I have not communicated the points well in the above summary then hopefully I explain them better in the full talk.