
Myths about Testing Strategies in Software Engineering

Andrei Gavrilă
Head of Product Development & Agile Coach

I decided to speak at a PentaBAR event, and then write this article, on testing strategies in software engineering and the common problems that block us in the quest to craft good ones.

Throughout my career, I have identified a set of anti-patterns that tend to recur again and again. These anti-patterns come from the stories we tell ourselves about what needs to be done and by whom. Understanding them is therefore one of the best ways to improve how we approach this topic. I hope you will enjoy the information below.

First, let’s see what a strategy is.

Since this is a myth-busting article, I’d better start by saying what a testing strategy in software engineering is not. A test strategy is not a document.


Why do I start with this idea? Because if you search for “testing strategy” definitions on Google, you will find that after the words “test,” “testing,” “strategy,” and “software,” the word “document” is the one that appears the most.

There are some problems with this association:

  • By themselves, documents have limited added value in software development. This is why the second Agile value is: “Working software over comprehensive documentation.”
  • Creating documents is not a good motivator for most developers I know (in the Scrum sense of the word, meaning everyone on the development team).
  • Documents do not promote team collaboration; they suggest a single document owner who delivers it by writing it and incorporating feedback through various review phases.

So, if not a document, then what is a testing strategy in software engineering?

One of the definitions you will also find via Google is:

“A set of guiding principles that determine the test design and how the software testing will be done. The objective of the testing strategy is to provide a systematic approach to the software testing process in order to ensure quality.”

It’s a pretty good definition, but I don’t really agree with the phrase “ensure quality.” To “ensure” means to make certain that something will happen, and no testing strategy can make quality certain.

I think it would be best to say that we will significantly increase our chances to improve product quality by having a testing strategy, especially if it focuses on building quality before and during development, not after.

What I do like about the definition is the idea of a guideline. What is great about this choice of words is that a guide does not generate value on its own, but it supports a value creation process. And this, in my opinion, is what strategies are all about.

Before digging deeper into what a testing strategy is and how to create one, let us first analyze the word “strategy.”

What is a strategy?

When I ask people who attend my training sessions for contexts in which they have heard the word “strategy,” most of the answers are the following:

  • Marketing strategy
  • Strategy games
  • The winning strategy
  • Chess strategy
  • The season’s strategy
  • War strategy
  • Sales strategy
  • Investment strategy

I think it is easy to spot some patterns or groups in the list above. Each time I ask this question, people mention sports activities (especially team sports) like football and basketball and business activities such as marketing, sales, investment, and product.

What do these examples have in common? They usually involve medium or large teams trying to achieve an important objective in a complex world: in sports, winning a game or the season; in business, achieving the most important business objectives.

To achieve their goal, teams need a certain degree of alignment; the whole team needs to be pointed in the right direction and apply a set of generic rules or actions to succeed.

If a team has direction but little guidance, it will lack alignment and function as individuals instead of a cohesive group. But if it has too much guidance, the team becomes constrained and has little freedom to adapt to the complex world it inhabits. I am always worried about inflexibility in a complex world, and I think you should be too.

Let’s take a basketball game as an example and consider how coaches create a match strategy. They start with their objective: winning the game. Sometimes the objective may be a draw, or even losing to the opposing team by only a small margin. Once the objective is clear, the coach identifies the other team’s strengths and weaknesses to decide which actions to take to achieve that goal.

Here are some examples of the choices a basketball team may undertake:

  • Shoot three-pointers because the team has a good three-point percentage and the opponent has a shorter average height.
  • Use the crowd to put pressure on the opponents for the first 15 minutes and get a good advantage for the rest of the game.
  • Play defense in the first part of the game to tire the opponent and press in the second part.
  • Play as aggressively as the rules allow to force mistakes, again using crowd pressure.
  • Focus on blocking the other team’s most valuable player.

As you can see, the items listed above are generic but also clear. They do not precisely describe how to play aggressively or how to block the other team’s most valuable player. The ideas above are general enough to provide direction but leave space to adapt the strategy.

My definition of strategy

To conclude: a strategy is a guide or blueprint for reaching an objective, highlighting the actions you can take and the choices you can make.

Now that we know what a strategy is and how to create one, let us dive into testing strategies and their myths.

Creating a Strategy with the Team

Myth #1: The QA Role is Responsible for Creating the Testing Strategy in Software Engineering

Simply put: the entire team is responsible for creating a test strategy.

Where does this myth come from?

Let’s return to our first Google search, where most of the results specifically target a QA audience. I would say that 7 out of 10 are primarily addressed to QA roles.

Few software engineer learning tracks mention testing strategies. Most speak about certain testing types like unit testing, integration testing, and performance testing but don’t put this into the context of a larger testing strategy that improves the chances of having a better-quality product.

Most QA certification paths focus on the creation of different testing “documents” (again “document”) as part of the QA role.

What does it imply?

The test strategy risks omitting the types of actions and choices that are less visible or familiar to a QA role.

Is my team living under the shadow of this myth? What are the symptoms?

  • There is no testing strategy in place, or if one exists, it is the same as the test plan – a document that usually focuses on how to check that you’ve met functional or non-functional requirements.
  • The only role familiar with the testing strategy term is the QA manager.
  • The software engineering testing strategy focuses more on activities that validate the product/feature/acceptance criteria and less on testing from other significant business/architectural concerns like usability, scalability, performance, performance under load, etc.

How should we overcome this problem?

  • State it as a problem. If you haven’t accepted it, you can’t overcome it.
  • Start with the symptoms, then review the implications. Do you want to live with them?
  • Ask the team if they are ready to take ownership as a group and not through a single role.
  • Schedule multiple workshops as a team to discuss a strategy, product objectives, and how to draw actions/choices.
  • Every role can contribute to the testing strategy.

 

Myth #2: My Test Strategy is Simple – I am Converting the Existing and Future Manual Test Cases into Automated Test Cases

Where does this myth come from?

It builds on the previous myth. Most of the products I know don’t start with automation, and this is normal: everybody wants to build features, not automated tests, especially during the POC, MVP, or even early-adoption phases.

While I am not happy with this, it has an economic motivation. If you want to test product-market fit, your priority isn’t making the product bullet-proof for later scalability. The value of automation comes later, and in the beginning everybody wants to focus on immediate value. This creates debt, in this case testing debt.

Therefore, most new products start with little emphasis on automated testing. When the product reaches market validation and development starts to slow down due to manual regression testing, the first idea we might have is to take all the manual tests and turn them into automated tests, specifically UI automated tests.

I highly recommend having TDD and ATDD as part of the testing strategy from the start of building a new product. Don’t compromise!
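As a minimal sketch of what test-first development looks like in practice, here is a TDD-style example in Python. The business rule (a 10% discount on orders of 100 or more) and the function name are hypothetical and only for illustration; in TDD, the checks at the bottom would be written first and would fail until the rule is implemented:

```python
# A minimal TDD-style sketch. The business rule (10% off orders of 100
# or more) and the function name are hypothetical examples.

def apply_discount(order_total: float) -> float:
    """Return the payable amount after the volume discount (business rule)."""
    if order_total >= 100:
        return round(order_total * 0.9, 2)
    return order_total

# In TDD these checks are written first and fail until the rule exists.
assert apply_discount(99.99) == 99.99   # below threshold: no discount
assert apply_discount(100.0) == 90.0    # at threshold: 10% off
assert apply_discount(200.0) == 180.0
```

Note that such a test runs in milliseconds and needs no UI, which is exactly the kind of testing debt that is cheap to avoid from day one.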

What does it imply?

Because the testing strategy gravitates heavily around UI automation tests, it slows down development in the medium term: UI tests are brittle when used to test everything. This causes frustration every time the application changes, and it risks the entire team declaring that test automation is not for them.

Is my team living under the shadow of this myth? What are the symptoms?

  • The testing strategy exists around one thing: UI Automation Tools – Selenium/Cypress.
  • Only the QA works on automation.
  • The tests are brittle, and with every additional test added, the time to maintain the existing solution grows.
  • Tests increasingly fail due to “unknown/framework/driver” causes.
  • Team confidence in automated tests decreases, even to the point of not being bothered that some tests are not passing.
  • The tests are arranged into an ice-cream cone rather than a testing pyramid.

How should we overcome this problem?

  • State it as a problem. If you haven’t accepted it, you can’t overcome it.
  • Start with the symptoms, then review the implications. Do you want to live with them?
  • Invest in clean architecture training to explain why it’s detrimental to automate testing of the system only at its boundaries. Talk about the testing pyramid and its failure mode, the ice-cream cone.
  • Ask the team to create a testing strategy guideline that will pivot the ice-cream cone back into a normal testing pyramid.
  • Be careful to include actions/choices in your testing strategy that focus on non-functional requirements too.

 

Dos and Don’ts in Testing Strategy in Software Engineering

  1. Have an explicit, easy-to-find testing strategy.
  2. Have a testing strategy created by the entire team.
  3. Keep the details that explain how to fulfill your strategy (something we usually call Tactics) separate.
  4. List the main business objectives the testing strategy tries to achieve.
  5. DON’T include technical objectives as the target of your testing strategy.
  6. DON’T list specifics like tools or frameworks.
  7. DON’T have a testing strategy longer than one or two screens of reading. If it is more than that, there’s too much detail.

 

Examples of Things to Have and Not to Have in Your Testing Strategy

  • All regression tests are done manually after a development impact analysis. Although I don’t like doing regression tests manually, this item is valid in your testing strategy because it respects the DOs and DON’TS above.
  • All the major application UI flows will be tested automatically. Valid point. It gives us a good orientation on how to test UI flows.
  • All the major application UI flows will be tested automatically with Selenium. Too much detail: it contains references to tools or frameworks, which belong in the realm of Tactics. Stating that all major application UI flows will be tested automatically is enough, giving us flexibility in the choice of tools.
  • No business rules will be tested through UI automation tools. Valid point. It gives guidance in the choices we make. We don’t test business rules through UI automation tools, so we will need to test them differently. If you want to learn more about why I discourage testing business rules through UI automation tools, please watch my video.
  • All business rules will be tested automatically by unit tests. Valid point. It adds information to the previous point and clearly states where the business rules will be tested.
  • All the major business flows will be tested automatically at the service application layer level. Valid point. It adds even more information to the previous two points on where to test business rules and flow.
  • Performance testing will be done automatically twice a day on the most visited 10 pages. What do you think – is this a good fit for your testing strategy?

This is my perspective on creating testing strategies and why participating is everyone’s responsibility. If you want to give me feedback or discuss further, feel free to add a comment here.

