Catch those Bugs: 6 Rules of Thumb for QA in a Development Project

At OutSystems we always have quality in mind when delivering new versions of the Agile Platform. As a Quality Assurance engineer, I've come to the conclusion that robust software involves a strong dedication not only from QA professionals but also from developers themselves. This is why our QA team works closely with all the Development teams, making sure that most bugs are found and dealt with way before a release, while the new features are being developed.

This is a difficult task. It involves creating, maintaining, and using a complex testing infrastructure (for which we use our own dogfood). It also means that everyone puts effort into finding bugs throughout the product as it changes, regardless of holding an "official" QA or Development role. It's easy to make mistakes along the way, or to do things less effectively than we could. Here are 6 rules of thumb that we like to keep in mind when doing QA work in a development project.

Test as soon as possible, but not too soon

In other words, test a feature as soon as it is available for testing, but not while it doesn't even exist. Trying to test something whose implementation hasn't even begun is a mistake we see now and then. It usually happens because something holds the development of the feature back while there are people available to start producing automated tests. A lot of false assumptions get baked into these tests, and later on, when they can finally be run against something real, we find they don't fit. We end up throwing them away and starting over, which is counterproductive. However, don't confuse this with the design of the tests: defining functional tests on paper while designing the feature is always a good idea. It will help you find problems earlier and validate the requirements at the beginning, which takes us to the next rule...

Prioritize what you test

In a perfect world, we would develop and run tests for every single possible scenario of our feature. Unfortunately, in most situations it is only possible to cover a smaller subset: we can't foresee all the scenarios, some are difficult to reproduce, or we simply have time constraints. As a point of reference, it is commonly accepted that good test code coverage for most software products lies at 70-80%. It's easy to go gung-ho when we start testing a new feature and, before we know it, we're building tests for complex and uncommon scenarios before the frequent ones are even covered.

That's why it's important to prioritize at the beginning. Start with the main use cases addressed by the newly developed capabilities. What will the most common situations be for the user? What basic scenarios must not fail, no matter what? Fully understanding what the feature will be used for is a must before testing, so discuss the requirements with the design team and build the tests upon those. It will help ensure that the time you spend testing is directed towards the most important aspects of your feature!
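One way to make this prioritization explicit is to score candidate test scenarios by how often users hit them and how bad a failure would be, then write and automate tests for the highest-scoring ones first. A minimal sketch (the scenario names and numbers are made up for illustration):

```python
# Rank candidate test scenarios by a simple risk score:
# frequency (how often users hit the scenario) times impact
# (how bad a failure would be). All values are illustrative.

scenarios = [
    {"name": "publish simple app",    "frequency": 9, "impact": 9},
    {"name": "publish huge app",      "frequency": 2, "impact": 7},
    {"name": "publish with bad auth", "frequency": 5, "impact": 8},
]

def risk_score(scenario):
    return scenario["frequency"] * scenario["impact"]

# Cover the highest-risk scenarios first; leave the long tail
# for when (and if) there is time.
prioritized = sorted(scenarios, key=risk_score, reverse=True)
for s in prioritized:
    print(s["name"], risk_score(s))
```

The exact scoring model matters far less than having the team agree on one and rank the scenarios before writing any tests.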

The one who tests shall not be the one who develops

Don't get me wrong, there are certain tests that can and should be written by whoever develops a feature (unit tests are obvious cases of this), but you should refrain from having the developer produce all the tests for a feature they've worked on. In a nutshell, they will be biased because they know the feature inside out: the feature was built to reflect what they envisioned after gathering the requirements. What ends up being tested is their interpretation of the functionality and the extreme scenarios they predicted and tackled in the implementation.
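For the avoidance of doubt, the developer-written unit tests mentioned above can be as simple as this, using Python's built-in unittest module (the slugify function is a made-up example, not part of any product):

```python
# A minimal developer-written unit test using Python's unittest.
# The function under test, slugify, is a hypothetical example.
import unittest

def slugify(title):
    """Turn a page title into a URL-friendly slug."""
    return "-".join(title.lower().split())

class SlugifyTest(unittest.TestCase):
    def test_basic_title(self):
        self.assertEqual(slugify("Agile Platform"), "agile-platform")

    def test_extra_whitespace(self):
        # The developer covers the edge cases they anticipated...
        self.assertEqual(slugify("  Hello   World "), "hello-world")

# Run with: python -m unittest <module_name>
```

Notice how these tests only cover what the author already anticipated; that is exactly the bias the rest of this rule is about.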

If you bring in other people to do the testing, however, they will have a different notion of the feature. The simple fact that they don't know it as well will make them pose more "what if I...?" questions that the developer never thought of. This alone guarantees that a wider array of scenarios is covered in the testing phase. It will also help you validate the design implementation, by having someone with a different point of view question the functionality of the feature, given the specs.

Catching bugs is a must in a Software Project
(wearing shorts is optional)

Decide: To auto-test or not to auto-test?

We often have to evaluate how far we are going to go when testing a feature. Should we just test it manually, or should we produce automated tests that run constantly and assure us the feature doesn't get broken in the future? Of course every feature should have auto-tests, but usually some tests cover the main use cases while others target uncommon and tricky scenarios. It's easy to lose sight of the real usefulness of the uncommon cases in the grand scheme of things, which can lead us to treat them all with the same level of importance.

It's all about the return on investment (ROI). And when we talk about ROI, we mean it over the whole lifetime of the test and the feature. How much time will it take to create the auto-test? How much infrastructure does it need? Would we be comfortable with just doing it manually? And what about running the test? How complex, time-consuming, and resource-demanding will it be in the automated testing infrastructure? And the maintenance? Will it be easy for someone to approach this test when it fails (even provided we build it right)? Will it be prone to produce hard-to-solve false failures? And finally, what about the use case it covers? Is this a common scenario, or one of those exotic situations that we're not even sure will ever happen out there?

We try to take all of this into account. Surely, it is generally desirable to have auto-tests that cover all of the functionality, but we have to be pragmatic and understand that an auto-test that covers a not-so-common scenario and will be hard to create and maintain has the potential to cost us more than just sticking to something simple.

It is not easy to decide how far we should go in each scenario. It takes a deep understanding of the testing infrastructure, the product being built, and the way everyone works during development and maintenance. In my opinion, only experience and continually asking the questions I've laid out allow us to make the decisions that effectively maximize the ROI of our tests.

Keep those tests in check

There's a temptation in every development project to keep delaying the "tidying up" part. As the code changes, we start to see automated tests failing (both old and new) and everyone starts making I'll-deal-with-this-later decisions. This "later" can end up being at the end of the project, when the product has already become far too unstable and it is harder to know where to start.

The key to this is discipline: keep those tests in check! The ideal setting would be a testing infrastructure that immediately warns a developer of broken tests after a change to the code. However, it is very hard to have complete test suites that are extremely fast and run effortlessly at any given time, so there are alternative ways to approach this problem. A good strategy is to implement short milestones: the team agrees that every two or three weeks the code should reach a stable state with no test failures. This makes everyone conscious of checking and doing what's necessary to avoid breaking older tests after altering the code (which is the best time to do it). If you can't implement such a rule because of the nature of the project, consider having a "bug fix day" every two weeks, in which the whole team spends the day dealing with the top identified bugs.

Bring everyone onboard

This one relates to the previous rule: make sure the team understands that product quality is up to everyone. It is very important that everyone commits to this and that assuring quality is not delegated to a smaller group inside the team (even if some members focus more on QA). Relieving designers and developers of this responsibility can lead to tests that are not aligned with the use cases and are difficult to implement and maintain, because the product was not built to be tested! Sometimes it is hard to bring everyone onboard, but rest assured that high quality can only be achieved when everyone contributes to it, not just the QA team, so all team members must understand and abide by this!

In Conclusion

As I said, assuring quality in a development project is not an easy task. I try to remind myself of these rules of thumb because they cover specific parts of designing, implementing, and working with tests that we tend to forget. This kind of consistency and discipline in the way QA work is done is crucial to delivering robust and efficient products that address the right use cases for your users.

At all times, be sure to QA your own QA processes.