Building on the observations from my previous post, I wanted to provide a specific example of how this might look when your software team builds and tests a feature. Let’s walk through the various stages of a two-week sprint and what the test engineer’s role, aims, and priorities should be in each of them:

i. Concept stage

This stage takes place on day one, when the feature is still little more than an idea. The team doesn’t have much clarity on the feature (around 10%). They know about it, but don’t know exactly how it’ll be built. The test engineer’s most important job in this stage is to thoroughly understand, and potentially help define, what success means to the product owner. Some companies call this acceptance criteria.

In any case, I prefer using the guiding question, “What is winning?” Figuring out the answer and the corresponding key metrics will be the whole team’s main focus during the concept stage.

ii. Mocks and designs stage

This stage also happens early in the sprint (usually on day one), when the feature has been defined a bit more clearly (around 30%). The test engineer’s understanding of the feature will change how they test it, and understanding the desired user behavior will inform the testing flow.

The product owner might define winning as having the user take a certain action. If that’s the case, the test engineer must help determine how to get the user there. As the team starts preparing mockups, some flows or concepts might not make sense. It’s the test engineer’s (and the team’s) job to call out mockups that don’t follow the user flow or don’t lead to winning.

The test engineer doesn’t necessarily act in a quality assurance capacity here, but still contributes as a team member. They must help identify different user profiles and different ways the app will be used (i.e., personas). The most important thing for the test engineer in this phase is still one of understanding: to grasp the feature as a whole rather than individual ticket elements.

iii. Tickets stage

Software tickets make up the bulk of the sprint; this stage can run anywhere from day 1 to day 10. The tickets make the feature much clearer (around 60%), and as the feature becomes more concrete, the right testing methods become clearer too. While software engineers work through the tickets, test engineers build small tests to ensure the feature is being built properly. The test engineer should also build pipelines to enable faster feedback.

That’s a lot of information, so I recommend that the test engineer start off by building literally one test that runs after every commit or change. I’d recommend building the happy path scenario first, then moving on to the more detailed tests.
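By way of illustration, that single per-commit test might look something like the pytest sketch below. The base URL, endpoint, and response shape are hypothetical stand-ins, not Flipp’s actual API, so treat it as a template rather than a recipe.

```python
# test_happy_path.py: a minimal per-commit check (endpoint and data are hypothetical).
import os

import requests

# In CI this would point at a test or staging environment (assumed env var).
BASE_URL = os.environ.get("STAGING_URL", "https://staging.example.com")


def test_flyer_listing_loads_and_has_content():
    """Happy path: the flyer listing loads and returns at least one flyer."""
    resp = requests.get(f"{BASE_URL}/api/flyers", timeout=10)

    # The page/API must load without a server error...
    assert resp.status_code == 200

    # ...and the core content the user came for must actually be there.
    assert len(resp.json()) > 0
```

Hooked into the pipeline so it runs on every commit, this one test gives the fast feedback mentioned above, and the more detailed tests can be layered on afterwards without slowing that first signal down.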

When the test engineer tackles those more detailed tests, they should also identify other paths: What happens if the user rotates the screen? What happens if they cancel? What happens if they have a strange feature combination? What about the potential critical failures of the feature, such as making sure the page loads without the dreaded 500 error, or ensuring the action on the web page gets recorded accurately in the database? Some of these other paths will be frequent, others less common, so the test engineer has to prioritize which ones to test (a sketch of a couple of these checks follows the criteria below). I recommend evaluating based on these two criteria:

What is the impact on revenue or key business metrics? The test engineer should make it easy for people to get where they want to go. For example, at Flipp, we want to make it as easy as possible for users to get to their desired flyer. Any obstacle takes away from quality.
Does it affect retention? The test engineer must ensure the app doesn’t crash and that users don’t exit for any unplanned reason. They should also consider ways of bringing users back in, or encouraging them to re-open the app.
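As promised above, here is a rough sketch of what a couple of those “other path” checks might look like in the same pytest style. The endpoints, payloads, and the idea of reading the recorded action back through an API are assumptions made for illustration, not a description of Flipp’s real system.

```python
# test_other_paths.py: sketches of two non-happy-path checks (hypothetical endpoints).
import os
import uuid

import requests

BASE_URL = os.environ.get("STAGING_URL", "https://staging.example.com")


def test_expired_flyer_does_not_500():
    """A stale or unusual path should degrade gracefully, never throw a server error."""
    resp = requests.get(f"{BASE_URL}/flyers/expired-flyer-id", timeout=10)
    assert resp.status_code < 500  # a 404 is acceptable here; a 500 is not


def test_clip_action_is_recorded():
    """The action taken on the page must show up accurately on the backend."""
    item_id = str(uuid.uuid4())  # hypothetical identifier for this test run
    resp = requests.post(f"{BASE_URL}/api/clippings", json={"item_id": item_id}, timeout=10)
    assert resp.status_code in (200, 201)

    # Read the record back (here via the API; a direct database check works too)
    # and confirm the stored action matches what the user did.
    stored = requests.get(f"{BASE_URL}/api/clippings/{item_id}", timeout=10)
    assert stored.status_code == 200
    assert stored.json()["item_id"] == item_id
```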

The most important thing for test engineers here is to prioritize what they find based on the definition of winning, and to identify risks throughout the feature based on the usage, impact, and probability of each scenario. How many people will the ticket affect? How deep is the problem? If the app does fail, will it be difficult to recover?

iv. Quality assurance and testing

Quality assurance and testing take place from days 5 to 10 (all the way until release). The feature should be 90-100% clear at this point. As tickets are resolved, test engineers should look at the strategy they developed in the previous stages and start executing on their plan. They will work with software engineers to build more clarity or to resolve bugs.

The most important thing here is for test engineers to make sure the parts of the feature that relate to winning are in good shape. Nice-to-haves and should-dos aren’t going to be fixed unless there’s extra time.

v. Release/deployment

Release takes place on the final day (a hypothetical day 10). The test engineer’s job is to ensure the release goes smoothly. Prior to release, they should test the feature on a replica of the production system. We have a specific post-release checklist we use to keep things consistent; I won’t bore you with the details.
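One lightweight way to do that pre-release pass is a small smoke script pointed at the replica, covering only the critical paths. The replica URL and the paths below are made up for the example; in practice this could just as easily be the existing test suite run against a different base URL.

```python
# smoke_release.py: a tiny pre-release smoke check against a production replica.
# The URL and the list of critical paths are hypothetical examples.
import sys

import requests

REPLICA_URL = "https://prod-replica.example.com"
CRITICAL_PATHS = ["/", "/api/flyers", "/search?q=milk"]


def main() -> int:
    failures = []
    for path in CRITICAL_PATHS:
        resp = requests.get(f"{REPLICA_URL}{path}", timeout=10)
        if resp.status_code >= 400:
            failures.append(f"{path} -> {resp.status_code}")

    if failures:
        print("Smoke check failed:", ", ".join(failures))
        return 1

    print("Smoke check passed on all critical paths.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```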

As the feature launches, test engineers should keep a close eye on the data to confirm it’s in a healthy state, and watch the crash logs. We’re human, so we all miss things sometimes, even after testing. There might be events even the most careful test engineers don’t anticipate.
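To make “keep a close eye on the data” a bit more concrete, here is one possible shape for a post-release check: a small script that scans a crash-log export for entries tied to the new feature and flags a spike. The file format, field names, and threshold are all assumptions for the sake of the example.

```python
# post_release_watch.py: scan a crash-log export for crashes tied to the new feature.
# The file format, field names, and threshold below are hypothetical.
import json
import sys

CRASH_LOG_EXPORT = "crashes.jsonl"   # assumed: one JSON object per line
FEATURE_TAG = "new_flyer_feature"    # hypothetical tag for the new feature
ALERT_THRESHOLD = 5                  # crashes tolerated before someone investigates


def main() -> int:
    count = 0
    with open(CRASH_LOG_EXPORT) as f:
        for line in f:
            event = json.loads(line)
            if event.get("feature") == FEATURE_TAG:
                count += 1

    if count > ALERT_THRESHOLD:
        print(f"{count} crashes tagged {FEATURE_TAG}: investigate before it spreads.")
        return 1

    print(f"{count} crashes tagged {FEATURE_TAG}: within tolerance.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```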

As software development evolves, QA and testing should evolve to keep up with it. At Flipp, we constantly reinvent everything — including the way we test our software. If you haven’t seen Part I of this post yet, check it out here.


Are you a talented Test Engineer and team player? Do you want to help us reinvent QA at Flipp? Check out our current job postings.