I’ve been noticing a disturbing trend recently that I’m going to call the myth of 100% automated testing. Clients I’ve worked with and articles I’ve read are talking about test automation as a solution that can replace manual testing. The idea is that true Continuous Integration/Continuous Delivery (CI/CD) requires a fully automated test suite. Supposedly, this allows developers to deploy features to customers with no human interaction. Let’s talk about why that isn’t a great idea…
Digital teams are (rightfully) adopting DevOps approaches at an exponential rate. The 2019 State of DevOps Report put out by Puppet categorized 93% of its respondents as either “Medium” or “High” in their DevOps evolution. Google search trend data for the term “DevOps” shows a more than sixfold growth in interest over the last five years.
This adoption of DevOps brings a focus on automation in the development and delivery process. A big part of that is automation of the test suite. As a result, many are targeting 100% coverage in their automated test suite. I’ve even seen a few articles like this one that predict the demise of the manual QA function. I’m going to tell you why manual testing isn’t and shouldn’t ever go away completely.
One caveat: I’m all for automated testing. Please don’t read this as an argument for manual testing over automated testing. In testing (and in all processes), there’s a place for automation and a place for manual action. Let’s just not get carried away with automating ALL THE THINGS!
What is Test Automation?
Test Automation is the process of coding or using tools to create test cases that run automatically. Think of the classic example of writing unit tests for each function in your code. Each time the code is built, these tests ensure that changes haven’t broken any of those functions. Frameworks and tools can be used to write automated tests covering everything from individual code functions to front end layout and end-to-end user actions. Some of these allow you to do so graphically!
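To make the classic unit-test example concrete, here’s a minimal sketch in Python. The function and its tests are hypothetical; the point is that a build server can run tests like these automatically on every change:

```python
# product.py -- a hypothetical function under test
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# test_product.py -- unit tests the build runs automatically (pytest style)
def test_typical_discount():
    assert apply_discount(100.0, 20) == 80.0

def test_no_discount():
    assert apply_discount(59.99, 0) == 59.99

def test_invalid_percent_rejected():
    try:
        apply_discount(100.0, 150)
        assert False, "expected ValueError"
    except ValueError:
        pass  # invalid input is correctly refused
```

If a later change accidentally breaks `apply_discount`, the next build fails immediately instead of the bug reaching customers.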
Include automated tests in your build or deployment pipelines to run them every time you do either of those things. These tests will ensure that your newest changes haven’t inadvertently broken anything elsewhere in the codebase. Automated tests run much more quickly and predictably than manual ones, creating many benefits:
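Wiring the tests into the pipeline can be as small as a single workflow file. Here’s a hypothetical GitHub Actions sketch (file path, action versions, and tool choice are all illustrative, not prescriptive):

```yaml
# .github/workflows/ci.yml -- hypothetical; runs the suite on every push
name: CI
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install pytest
      - run: pytest   # the build fails if any automated test fails
```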
Benefits of Automated Tests:
- Enabling more frequent runs – Try running a full manual regression suite every time a developer builds the code…
- More consistent results – Automation eliminates human error.
- Enabling performance and load testing – Again, try manually testing a load of 10,000 concurrent users of your site…
- Cheaper test runs – Less manual tester time means lower costs.
The modern digital product team simply needs to incorporate automated testing in its workflow.
Can we achieve 100% automated test coverage?
Well, yes and no.
Test coverage has long been a problematic metric. At first, it sounds like a great idea to measure test coverage as a percentage of code or discrete user functionality that has tests written for it. In practice, however, it’s easy to game this system.
Take this hypothetical example from my consulting website, JanaitisEngineering.com. At the bottom of the homepage, I have a call to action and contact form that looks like this:
Great! Now, to make sure my automated test suite covers this form, let’s automate a test that a user can always contact me via my homepage. I’ll use a tool like Selenium to create a test that fills in all of the fields and clicks the send button. If I call this one of the ten primary functions of my website, I may say that this test covers 10% of my functionality.
But here’s the problem: What if my designer tweaks the CSS elsewhere on the site, and all of a sudden my send button now looks like this:
The white text on the grey button is barely readable. Of course, my automated test can still fill in all of the fields and submit the form, but any human tester would flag this as a bug. While my automated test did give me 10% coverage of the functionality, it did not catch all bugs that could occur in that 10%.
You can have automated tests that cover 100% of the functionality of your site or app, but that does not mean they will catch 100% of the possible bugs. So 100% coverage in your automated test suite is a great goal, but there are important caveats:
- Just covering a piece of code or functionality is not enough. Tests must be thoughtfully written so they cannot easily pass with code that you wouldn’t want going out to customers. Remember that test coverage does not measure the quality of your tests.
- It is impossible (or prohibitively expensive) to write automated tests for some code or functionality. Track test coverage as a metric to measure continuous improvement, but don’t expect perfection (100% coverage).
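The first caveat is worth seeing in code. In this hedged Python sketch (the tax function is hypothetical), both tests execute the same line of code, so a coverage tool credits both equally. Only the second would actually fail on a wrong result:

```python
# Hypothetical function under test
def total_with_tax(subtotal: float, rate: float = 0.08) -> float:
    return round(subtotal * (1 + rate), 2)

# Weak test: executes the code (earns "coverage") but asserts nothing useful.
def test_total_runs():
    total_with_tax(100.0)  # passes even if the math is wrong

# Thoughtful test: pins down the behavior we actually require.
def test_total_is_correct():
    assert total_with_tax(100.0) == 108.0
    assert total_with_tax(0.0) == 0.0
```

Both tests produce the same coverage number; only one protects your customers.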
The role of manual testing in a DevOps environment
In that last example, we saw a case where an automated test passed a bug that a manual tester would have caught. The problem is that automated testing can only detect failure cases that you’ve imagined already. Many teams make the mistake of building out their test suite by writing tests that would have caught bugs that have already been reported in production. This is an inherently reactive posture.
The problem is that automated testing can only catch failure cases that you’ve imagined already.
Bugs in software development usually crop up when you least expect them. They often come from interactions with various pieces of the site or app, when one change over here breaks something over there. No developer ever tries to introduce a bug, so it stands to reason that bugs are usually the result of unforeseen failure modes. A second set of eyes and the flexible mindset of a human being can help here.
Computers run well-defined tasks much more quickly and with fewer errors than humans. This is the main advantage of automated testing. But human testers have a few significant advantages over computers as well:
- Humans can “think outside of the box.” We can catch little things that just don’t look right, even if they are outside of the exact test case that we are running. This is especially important for UI and layout bugs.
- Certain functionality can only be run by humans. Many systems use third-party providers to manage billing. Say you’re trying to test paying a bill. You likely need to create a fresh bill each time the test is run. Imagine your third-party vendor does not include hooks to automate the bill creation process. In this case, there is no way to run the payment test repeatedly without a human in the loop, creating new bills to be paid. I’ve come across many examples like this. Complexity or business constraints make it impossible or too expensive to write automated tests for some pieces of functionality.
The ability to think creatively, respond flexibly, and catch things you didn’t know you were looking for is the craft of the manual tester. Computers will never be able to mimic these attributes completely, and this is why I don’t see the QA function going anywhere anytime soon.
Focus on a test strategy that includes the best of both worlds
So what is one to do? Automated testing promises to reduce the time it takes to run a full regression of your site or app. At the same time, there are plenty of bugs that only manual testing will catch. The answer is to do both.
I’m a big proponent of running a regression of the site or app each time you release to customers. I always suggest the following to my clients:
- Start with a manual regression suite of test cases that gives a level of comfort that no major functionality is broken.
- Then, take a look at where automated tests can help save time in this suite by testing all of the minor edge cases, or running repetitive tests quickly.
- Finally, add in the type of tests that cannot be run manually to catch other classes of errors. These would include unit tests written in code or performance/load tests.
- Repeat. This is an iterative process and you can always improve.
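As a sketch of the second step, a parametrized table of edge cases is the kind of repetitive testing a machine sweeps cheaply on every build. The validator below is hypothetical; the pattern is the point:

```python
# Hypothetical input validator whose edge cases we want swept automatically
def is_valid_username(name: str) -> bool:
    return 3 <= len(name) <= 20 and name.isalnum()

# A table of edge cases: cheap for a machine to run on every build,
# tedious for a human to re-check by hand.
CASES = [
    ("bob", True),         # minimum length
    ("ab", False),         # too short
    ("a" * 20, True),      # maximum length
    ("a" * 21, False),     # too long
    ("user name", False),  # whitespace not allowed
    ("user42", True),      # digits allowed
]

def test_username_edge_cases():
    for name, expected in CASES:
        assert is_valid_username(name) is expected, name
```

Freeing human testers from tables like this leaves them more time for the exploratory work only they can do.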
Remember that some errors won’t be found by automated tests. If you eliminate all manual tests from your regression suite, you didn’t remove these errors; you just shifted the testing burden to your users. Use automation to speed up your process, but don’t replace your QA function by turning all of your users into beta testers!