How to Run Software Tests Without Losing Your Damn Mind
Software testing! Such a great, time-saving, bug-catching thing to do.
It’s nice for your users. It’s nice for your engineering team. It’s nice for your QA team.
In our experience, many product owners or teams lose their minds over test coverage. They either set these expectations themselves or face pressure from leadership insisting that 95% test coverage is the way.
For most software products, this is not the way.
Yes, Testing is Important
As mentioned above, testing is important. Well-planned tests catch bugs before an update reaches users, which greatly improves the user experience, and therefore revenue.
Tests also save a lot of time. Manually clicking from screen to screen in search of bugs is neither fun nor a good use of time. Automating the testing process before you ship is an ideal solution.
The Problem with Minimum Testing Coverage
Many teams have KPIs that measure testing coverage. “We need 97% testing coverage” is a common metric we hear.
High testing coverage is good in theory, but here’s the thing: You can’t put a specific number on it if you’re interested in keeping your sanity.
When you or your manager declares a specific number, what ends up happening is you spend hours on just a few pieces of code. This generally falls in line with the 90/10 rule: about 10% of your effort produces 90% of the test coverage you need. The remaining 10% of coverage? That will cost you the other 90% of your effort (and likely make you chuck your computer out the window).
How to Run Software Tests Without Throwing Your Computer Out the Window
To pull this off, you need to change your mindset about tests (and maybe your leader’s in the process). Being smart about testing still lets you accomplish your goal: quality, as-bug-free-as-possible software.
To put this “smart testing” into action, you’ll need to create a software testing strategy. This strategy will outline what you test, when you test it, and which KPIs actually matter.
Here’s how to create a software testing strategy.
Determine Where in the Software Lifecycle You Are
If you’re still figuring out your codebase, it’s a waste of time to write meticulous tests. While test-driven development (TDD) is a nice idea, it often flies in the face of agile practices. Requirements change, and that’s okay.
If you’re maintaining core business code where downtime means lost revenue, it’s almost impossible to write too many tests.
Your software probably lies somewhere between these two extremes, and time for testing should be allocated accordingly.
Identify Worthwhile Software Tests
Identify the most important areas of the codebase. Aim for 100% coverage of those.
Identify the most complicated areas of the codebase (things that have too many edge cases to test manually each time). Aim for 100% coverage of those too.
Wait for bugs to occur. If something breaks once, there’s a good chance it will break again. These areas of the codebase are good candidates for increased coverage; that way, you don’t waste time writing tests for the straightforward areas.
Focus on Quality
To reiterate, XX% test coverage is a meaningless metric. Instead, focus on quality.
Make liberal use of “code-coverage-ignore.” Setting up automated continuous integration (CI) that ensures a certain percentage of code coverage is great. It reminds your team of their testing intentions as the project moves forward. However, it can be annoying to force tests for parts of the codebase that don’t need it, just to satisfy some arbitrary metric.
Luckily, most code coverage calculation tools have a mechanism for ignoring a section of code for the purposes of calculating your coverage percentage. Utilize your tool’s settings to do this!
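As a sketch of how this looks in practice: Python’s coverage.py, for example, excludes any block marked with a `# pragma: no cover` comment from the coverage percentage by default. The function names below are hypothetical.

```python
def fetch_dashboard_data(user_id):
    """Core business logic: keep this covered by real tests."""
    return {"user_id": user_id, "widgets": ["sales", "traffic"]}


def debug_dump(data):  # pragma: no cover
    # Rarely-used debugging helper. Excluding it from the coverage
    # calculation means nobody writes a throwaway test just to
    # satisfy an arbitrary percentage.
    for key, value in data.items():
        print(f"{key}: {value}")
```

Other tools offer the same escape hatch under different names (Istanbul’s `/* istanbul ignore next */`, JaCoCo’s exclusion filters), so check your tool’s documentation.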
Make sure you’re testing things under your control. End-to-end integration tests are great, but it’s frustrating to see a test run fail because an external API you rely on is down.
Leverage user stories. To create valuable tests, turn user stories into tests.
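For instance, a user story like “As a shopper, I can apply a discount code at checkout” maps directly onto tests. The checkout logic below is hypothetical, just to show the shape; the test functions are runnable with pytest.

```python
# Hypothetical checkout logic, for illustration only.
def apply_discount(total, code):
    """Apply a discount code; 'SAVE10' takes 10% off the total."""
    if code == "SAVE10":
        return round(total * 0.9, 2)
    return total


# User story: "As a shopper, I can apply a discount code at checkout."
def test_valid_code_reduces_total():
    assert apply_discount(100.0, "SAVE10") == 90.0


def test_unknown_code_leaves_total_unchanged():
    assert apply_discount(100.0, "BOGUS") == 100.0
```

Each passing test is proof that a story your users care about still works.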
Run Your Tests!
This can’t be stressed enough. You can write millions of tests, but they don’t matter if you never run them. Better yet, in your strategy, allot a certain amount of time to creating and running tests, e.g., “every month, we will dedicate 10 hours to testing.”
If you can create a software testing strategy that includes worthwhile, quality tests and then dedicate time to run them, you will be in a better position than chasing an arbitrary coverage number.
Need help crafting a software strategy? Our team can help. Let’s chat.