Historically, testing has often taken low priority for many software projects. If a team had dedicated testers, they were often siloed from developers, waiting for work to be tossed over the wall.
The widespread adoption of Agile methodologies has largely erased the meaningful boundaries between testing staff and the rest of the team. At Pentalog, for instance, testers are included in Agile software teams by default.
Nevertheless, when we look at the broader industry, we still see confusion about how to actually test software. In this article, we examine two common myths and explain why they can safely be debunked.
Let’s start at the beginning. Many software engineering training programs don’t offer software testing as part of the curriculum. As a result, many students who may otherwise develop an interest in the discipline never have the opportunity to do so.
So it should come as no surprise that several misconceptions surround software testing strategies, especially who is responsible for creating the strategy and how it should be carried out.
Considering how important deploying high-quality software is, both to your customers and users and to your reputation, the topic of software testing strategies deserves careful attention.
Myth No. 1: QA Bears Responsibility for the Testing
First and foremost, this assumption is troubling because software quality — including creating a comprehensive test strategy — is everyone’s job.
Not to be confused with the test plan (i.e., a document that details how to check that you’ve met functional or nonfunctional requirements on a project), the test strategy is a high-level, organizational overview of the methodologies and environments that will be used for testing.
Because the testing strategy is built on organizational objectives and reaches beyond a single project, many stakeholders must be involved in creating the strategy. Aligning stakeholders helps ensure that all objectives are addressed and that the strategy stays up to date.
If your organization maintains a “Team QA Should Do It” approach, there are still a few things you can do to move toward a more collaborative way of creating high-quality software.
Begin by acknowledging that there is a problem with the current process. You can’t expect changes if your team doesn’t know changes are needed. Then, specify how the current process impacts quality, both internally and for end users. Make it personal, and explain to the team how every role can contribute to the testing strategy.
When a team is prepared to take ownership of its testing strategy, begin by scheduling a series of workshops to discuss how to create the strategy, define product objectives and make the strategy actionable.
Myth No. 2: Automation Will Simplify My Testing Strategy
Sorry, but no.
There is no disputing that automated testing is a lifesaver, especially for those repetitive, time-consuming tests that involve multiple steps and require large-scale environments.
However, other types of tests in your strategy, such as exploratory and user acceptance testing, are cost-prohibitive to automate and deliver better results with human intervention.
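To make the contrast concrete, here is a minimal, hypothetical sketch in Python of the kind of repetitive check that automation handles well: the same validation rule exercised across many input combinations. The `is_valid_username` function and its rules are illustrative assumptions, not taken from any specific product.

```python
# Hypothetical repetitive check: validating a username rule across many
# inputs. Running these cases by hand before every release would be
# tedious and error-prone; automated, they form a sub-second regression suite.

def is_valid_username(name: str) -> bool:
    """Illustrative rule: 3-16 chars, alphanumeric or underscore, starts with a letter."""
    if not (3 <= len(name) <= 16):
        return False
    if not name[0].isalpha():
        return False
    return all(c.isalnum() or c == "_" for c in name)

# Table of inputs and expected outcomes, checked in one pass.
cases = {
    "alice": True,
    "al": False,                            # too short
    "9lives": False,                        # starts with a digit
    "a_very_long_username_indeed": False,   # too long
    "bob_42": True,
}

for name, expected in cases.items():
    assert is_valid_username(name) == expected, name
```

Exploratory and acceptance testing, by contrast, depend on human judgment about how the product feels to use, which no table of expected values can capture.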
When a team starts working on a new product, they often want to dive right in and turn every manual test into an automated test, specifically a UI automated test. But when the testing strategy is heavily focused on UI automation, it can actually slow down development, because UI tests are brittle when they are used to test everything.
This slowdown frustrates the team every time the application changes, which can lead the team to shy away from automating other tests to avoid further disruption to development.
If your team is hyper-focused on UI automation tests, the tests frequently fail for “unknown/framework/driver” reasons, and every additional test increases the time needed to maintain the existing suite, you are stuck in the ice cream cone anti-pattern.
This happens because you have too many end-to-end tests and not enough unit tests, which causes the automated tests to run incredibly slowly and frequently fail.
The best way to resolve this is to invest in clean architecture training so you can show the team why it is detrimental to automate tests only at the system’s boundaries. Then ask the team to create a testing strategy guideline that inverts the ice cream cone into a proper testing pyramid, so you can avoid future slowdowns.
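As a sketch of what pivoting toward the pyramid means in practice, the hypothetical Python example below checks a pricing rule directly as a unit test instead of driving a browser through an entire checkout flow. The `apply_discount` function and its discount rule are invented for illustration; the point is the shape of the test, not the business logic.

```python
# Hypothetical pricing rule: the kind of logic often verified only through
# a slow, brittle end-to-end UI test (add item, open cart, read the rendered
# total). In a testing pyramid, the rule itself is covered by unit tests.

def apply_discount(subtotal: float, loyalty_years: int) -> float:
    """Illustrative rule: 5% off per loyalty year, capped at 20%."""
    if subtotal < 0 or loyalty_years < 0:
        raise ValueError("subtotal and loyalty_years must be non-negative")
    discount = min(0.05 * loyalty_years, 0.20)
    return round(subtotal * (1 - discount), 2)

# Unit tests: milliseconds to run, no browser, no driver, no flaky waits.
assert apply_discount(100.0, 0) == 100.0   # no loyalty, no discount
assert apply_discount(100.0, 2) == 90.0    # 10% off
assert apply_discount(100.0, 10) == 80.0   # capped at 20%
```

A thin layer of UI tests on top then only needs to confirm that the screens are wired to this logic, which keeps the end-to-end suite small and the whole pipeline fast.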
7 Best Practices for Software Testing Strategy
Creating an effective software engineering testing strategy doesn’t have to be complicated. Here are seven best practices to help your team overcome common roadblocks and implement a testing strategy that works:
- Make your testing strategy explicit and easy to find.
- Involve the entire team in creating the strategy.
- Separate high-level strategy from the details.
- List the main business objectives that the testing strategy tries to achieve.
- Don’t include technical objectives as the target for your testing strategy.
- Don’t list specifics such as tools or frameworks.
- Don’t include too much detail; one or two screens will do.
Looking to learn more? Check out this PentaBar video to learn how testers and developers can work together to create a testing strategy that drives quality and efficiency for the business and for the end users.