credit: https://www.youtube.com/watch?app=desktop&v=f3b8J51XtIU

Much like in the book Thinking, Fast and Slow, testing often falls into one of two buckets: ‘fast’ and ‘slow’. Fast, unconscious-like testing consists of quick checks, with little or no thinking to determine whether a test should pass or fail. Fast tests click through test case steps with haste and obviously complete quickly, but focus only on what is necessary to complete the test. Slow testing is where the computer, or human, takes a bit more time to measure, compare, or consider how to execute the test case, and to decide whether a test should be marked pass or fail. It is critical to know when to test fast and when to test slow.

It used to be obvious when to test software quickly and when to test it slowly. When the code was old and mostly unchanged, simple checks would do. When the code was freshly modified, or the design was new, slower testing was needed. That's all changed with modern apps. Most apps are now ‘complex’ and continually changing. Modern apps also reuse many components across the application, even across applications and across platforms, so one small change can have a dramatic impact on quality. Today’s apps are also far more critical to the business, the customer, and the overall experience of a product. Yet most testing is still focused on going as fast as possible.

Testing fast manually might mean clicking the login button as quickly as possible to verify it takes the user to a login screen. How can testing this fast be dangerous? A human tester might not notice that the layout on the page is incorrect, and won’t try flipping the screen to landscape to see if the page layout still looks reasonable. If they click as fast as they can, they might click the button before the page is fully ready to accept input and report an uninteresting corner-case failure mode, click it twice, or miss a bug where the user leaves the app to answer a call, re-enters the app, and finds the login button non-functional because of an internal timeout. Testing fast can miss many things and introduce errors from the testing methodology itself, in addition to missing bugs in the product.

Testing fast with automation can be far more dangerous. Embarrassingly, test infrastructure is only now realizing that it is best to wait for the page and its elements to render before trying to interact with them, but there is progress. Machines can respond far more quickly than a normal human: they can attempt to interact with page elements programmatically before a user could, and before a user would even see the element. The worst form of this is ‘monkey testing’, where some folks use heuristics, or ‘AI’, to generate as much clicking as possible, producing false positives, duplicate failures, and often tests that never happen in the real world. It is naive to think that the primary goal of automation is to be fast. The key value of automation is to free more expensive human brains to test ‘slower’, and to add repeatability to a test, not simply to ‘test faster’. Modern UI automation can and should slow down and act more like a real-world user. Modern API testing should slow down and execute API calls with the same sequencing and timing as they are invoked in real-world situations. Worse, this desire to automate ‘fast’ introduces test ‘flakiness’ and false positives, which require expensive human time to investigate, the very thing automation was created to avoid! Remember, the cost of automated test execution, even if it is slow, is orders of magnitude less than the human time it takes to debug and investigate failures because tests were executed in fast mode. Slow down and be careful, because accidents are far more expensive to clean up.
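The fix for impatient automation is the explicit-wait pattern: poll until the element is actually ready, then interact. A minimal sketch of that pattern in plain Python, where the `login_button_is_clickable` predicate and the simulated render delay are hypothetical stand-ins for a real driver's element checks:

```python
import time

def wait_until(predicate, timeout=10.0, poll_interval=0.25):
    """Poll `predicate` until it returns True or `timeout` seconds elapse.

    This mirrors what explicit waits in UI drivers do: rather than
    clicking the instant the script reaches the step, give the page
    time to become interactable.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(poll_interval)
    return False

# Hypothetical usage: a login button that only becomes clickable
# after a simulated 500 ms page-initialization delay.
page_ready_at = time.monotonic() + 0.5

def login_button_is_clickable():
    return time.monotonic() >= page_ready_at

if wait_until(login_button_is_clickable, timeout=5.0):
    print("clicked login safely")
else:
    print("gave up: element never became ready")
```

The point is not the helper itself but the posture: the test spends a fraction of a second waiting instead of generating a flaky failure that costs a human an hour to triage.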

The solution to long-running test automation suites is not faster test steps; it is parallelization. Modern testing should borrow from modern DevOps: containerize the environments and execute the tests in the cloud. Too often testers just aren’t familiar with the cloud, so their tests are executed one after the other, meaning it can take hours, even days, to get test results. Optimizing every test step for speed is folly and only adds to the problems. Simply parallelize test execution in the cloud. Fast test automation actually ends up being very slow, and very expensive. Slow down, learn a little bit about containers, and parallelize your tests.
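As a toy illustration of the payoff, here are independent test cases run serially versus in parallel using Python's standard library. The half-second 'tests' are simulated stand-ins; in practice each would be a containerized suite running on a cloud grid:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_test_case(case_id):
    """Stand-in for one independent UI/API test (simulated 0.5 s of work)."""
    time.sleep(0.5)
    return (case_id, "pass")

cases = list(range(8))

start = time.monotonic()
serial_results = [run_test_case(c) for c in cases]
serial_seconds = time.monotonic() - start

start = time.monotonic()
with ThreadPoolExecutor(max_workers=8) as pool:
    parallel_results = list(pool.map(run_test_case, cases))
parallel_seconds = time.monotonic() - start

# Serial wall time grows with the number of tests; parallel wall time
# stays near the duration of the single slowest test.
print(f"serial:   {serial_seconds:.1f}s")
print(f"parallel: {parallel_seconds:.1f}s")
```

Each individual test can afford to be slow and careful; total suite time is governed by how many run at once, not by how fast each one clicks.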

Testing slow brings added benefits. Not only are slow manual tests more reliable, ultimately saving human time, but they also test ‘better’. When testing manually, slower testing means the human brain can be leveraged for what it is great at: ‘thinking’. Testing slowly and carefully means different aspects of product quality can be assessed, and questioned, during test execution. It really is a tragedy that teams create thousands of test cases and rush human beings through them as quickly as possible. Not only are mistakes made, but actual ‘testing’ is dramatically diminished, as the humans don’t have time to consider the context of the test, explore around test failures, or question the verification steps of a test case. I’ll admit it: I’ve managed test cycles in the past where we tried to execute them as quickly as possible, only to have production issues show up with customers because we were accidentally verifying results that were out of date with recent product changes. It is far better to have fewer tests, executed slowly, than many more tests executed with urgency. Slow down to avoid looking goofy, and don’t waste that precious and expensive human brainpower.

Test automation should be fast, right? Nope. Do you want to purchase something that was tested fast, or slow? Do you think quality will tend to be higher if the testing is performed fast, or slow? If a test automation script is written to simply find the login button on a page and click it, that should be pretty fast, or there are fundamental problems in the automation script. Many automation engineers today think ‘great’ tests only test one thing atomically and can click as fast as possible. Automation should actually move at the speed of your end-users to be sure the product works correctly for them, not for bots trying to buy PlayStation 5s. Even that aside, if the test script doesn’t do any basic verification of the size of the login button, the login button could be 1x1 pixels and invisible to the user, yet your test automation declares ‘Pass!’. I remember a time when we ran tens of thousands of automated WebKit (Chrome) test cases, almost all of them passing, only to release the build to humans who immediately realized that nothing but a white screen was rendered to the end-user. Maybe the tests should have slowed down and done some additional basic checking that the pixels were rendered. The best modern automated test suites and test case designs test more than just the simple functional aspect, and do as much as they can to verify that several aspects of the application are performing as expected, even though that means more time coding up the test case and more time executing it. Automating tests ‘fast’ is like driving fast: it can be fun for the moment, but it is inefficient and dangerous to you and to others. Slow down.

It is worth noting that AI-based testing approaches use a lot of computing power. AI-based test cases are, by definition, going to execute more slowly than classic script-based test cases because they are analyzing far more data. Yes, some of this compute can also be sent out to the cloud and executed in parallel, but there is still more compute happening than in a non-AI scripted test case. These AI-based approaches are much like the manual testers mentioned above: they are slower because they are doing more thinking. This additional machine thinking means fewer false positives and more robust test execution as the application's user interfaces, APIs, and flows change over time. AI-based testing approaches also mean less human time to create the tests in the first place. Expect AI-based test execution to be slower; if it isn’t, there probably isn’t much ‘AI’ in there in the first place. It is better to give humans and machines the time to consider the tests.

Lastly, consider that 100% test coverage is impossible, even in the simplest applications or even functions. Frankly, the less coverage a tester claims, the more experienced and knowledgeable they tend to be. They know what is not tested! Combinatorics alone should make it obvious. If a test case has something like 10 steps, and at each step in the UX the user can make one of 70 different inputs such as text, taps, or swipes, there are 70^10, roughly 2.8 quintillion, possible variations of that sequence: far more than any team could ever execute. When you add the 101st test case, you are increasing the test count by 1%. When you add the 1,001st test case, you are adding 0.1% more tests. Test counts aren’t the right metric, and getting there by adding tests quickly and running them fast quickly encounters diminishing returns. Running more tests faster may feel good, but it is a fool’s errand.
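The arithmetic behind that claim, using the 10-step, 70-inputs-per-step figures from above:

```python
inputs_per_step = 70
steps = 10

# Every distinct 10-step sequence of inputs is a distinct path
# through the app, so the path count is 70 multiplied by itself
# 10 times.
variations = inputs_per_step ** steps
print(variations)  # 2824752490000000000, about 2.8 quintillion

# Even a generous 10,000-test suite covers a vanishing sliver
# of that input space.
fraction_covered = 10_000 / variations
print(f"{fraction_covered:.1e}")
```

Against a space that size, the difference between 1,000 fast tests and 10,000 fast tests is statistical noise; what each test verifies matters far more than how many there are.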

The lesson is that testers, and their products, are far better off with fewer very robust, thorough, slow tests than with thousands of simple and fast tests written in a bid to add coverage. Test what you can, as reliably as you can, and as thoroughly as you can. Go slow.

— Jason Arbon

CTO test.ai

