Checking vs Testing

jason arbon
5 min read · May 4, 2020


Too much energy, creativity, and time are spent debating ‘testing versus checking’ and ‘manual versus automated’. Testing philosophers wander town to town, blog to blog, webinar to webinar, sowing confusion and angst. The motivation is not the pursuit of Truth. They are squeezed by modern trends in software testing and are trying to preserve their role as experts from a bygone era rather than embrace and extend modernization. The tragedy is that these debates really do confuse practitioners and detract from the art and practice of software testing.

Those senior testers, who dress to stand out, need people to revere them. They don’t believe people will appreciate their intellect if they are associated with rote, repeatable testing. No one will pay for their seminars or training. This view is both wrong and demeaning to the very audiences they are appealing to. Ironically these testing philosophers are far better at testing software than they are at Socratic debate.

Let’s do some deep research and Google definitions for ‘test’ and ‘check’:

test: “take measures to check the quality, performance, or reliability of (something), especially before putting it into widespread use or practice.”

check: “examine (something) in order to determine its accuracy, quality, or condition, or to detect the presence of something.”

Literally, ‘check’ is in the definition of ‘test’ :) Is there really any great chasm between these two words? No. The flak thrown up in this debate suggests that ‘checking’ is mindless, rote test execution, while less-structured ‘testing’ is somehow a more creative activity: far more intellectually challenging, difficult, and valuable.

The reality is that most test cases labeled ‘simple checking’ require different, and perhaps more mature, testing skill sets to manage. These testers need to stay alert to changing requirements, understand how to optimally order test execution, interpret the meaning of tests written by others, and question the test steps and verifications as they execute each pass. The more creative and exploratory “Testing” could just as easily be described as simpler, less structured, less formal, more ad hoc, and less professional. But let’s not roll in the same mud: the reality is that both types of testing are incredibly important and draw on different testing skills. Moreover, none of these testing skills are ever fully mastered, because requirements and the software itself are continually changing, testing is never really complete, and our profession is still evolving.

Automation vs Manual

Our quirky deep thinkers also create a strawman around automated test scripts being simple ‘checking’. They insinuate that test automation cannot be as smart or as effective as a human tester because the software simply repeats the same checks over and over, and only verifies things the script’s author thought to check when it was written. This implies that a human tester, of course, always executes tests with creativity (we wish!) and that automation only automates what humans could have done anyway. They often ignore the speed, cost, and repeatability of automated testing. Their goal is larger than simply spreading fear, uncertainty, and doubt. These evangelists are working to preserve the perception that their human thinking reigns supreme in the face of possible automation: a field they aren’t fluent in, and one that would distract their followers from the single-threaded belief that only humans can ‘test’. Only they know how to test.

If you look into it, most of these sages have done very little test automation, it was a very long time ago, and/or it wasn’t all that sophisticated. They are intimidated by automation and must attack it. Any value not attributed to them is a threat to their fiefdoms.

There are really two types of test automation: regression and generative. Regression is what most people think of when they think ‘test automation’. These are test scripts, often written to reproduce the steps that human testers execute frequently, with the intent that the humans can focus on other work and that the testing can be done faster and cheaper. At first glance, this automation might seem to remove or reduce human judgment and creativity from test execution, but sophisticated test automation engineers will do things such as vary the execution environment during test cycles, programmatically vary the test inputs, check for system-level correctness at every step in ways a human could not measure or perceive, and even automatically decide which tests, if any, need to be run based on code changes. This ‘simple’ automation in the hands of great test automation engineers can do far more testing than an infinite number of creative humans with a keyboard.
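To make that concrete, here is a minimal sketch in Python (using pytest) of how even a ‘simple’ regression script can vary its inputs and sanity-check its environment on every pass. The login function, its rules, and the APP_ENV variable are hypothetical stand-ins for a real system under test, not any particular product’s API:

```python
import os
import random

import pytest

# Hypothetical system under test: a stand-in for whatever the
# regression suite actually exercises.
def login(username: str, password: str) -> bool:
    return bool(username) and len(password) >= 8

# Programmatically varied inputs: each run draws fresh boundary
# values instead of replaying one hard-coded 'check'.
def password_cases():
    base = "hunter2hunter2"
    yield base                           # known-good value
    yield base[:7]                       # just under the length limit
    yield base + random.choice("!@# ")   # randomized trailing character

@pytest.mark.parametrize("password", list(password_cases()))
def test_login_with_varied_inputs(password):
    # A system-level assertion a human might not repeat on every pass:
    # confirm the environment the test believes it is running in.
    assert os.environ.get("APP_ENV", "test") in {"test", "staging"}
    expected = len(password) >= 8
    assert login("alice", password) == expected
```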

I’ve yet to notice a single monk aware of the second flavor: generative test automation. They can be forgiven, as they spend most of their time building rhetoric instead of testing. Generative test automation is often the best use of time for experienced and capable test engineers with an automation bent. Rather than reproduce test cases that humans have been executing, this type of test automation analyzes the software’s specifications, implementation, or the application itself and automatically generates test coverage. Examples range from web crawlers looking for errors general enough to be reused on any website, to test code that parses API definitions of input parameters and generates valid and invalid input tuples, to parsers of live-site usage analytics that automatically re-run user flows, to code that can ‘diff’ the entire product state between two builds, and even to modern AI reinforcement-learning-based testing approaches where a test bot figures out ‘all’ possible ways to execute a given test. Most critics who think of test automation as simply ‘checking’ have little exposure to these more sophisticated approaches, which can create test input variations and validations humans haven’t dreamed of, and couldn’t if they tried.
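As a small, hedged illustration of one example above, generating valid and invalid input tuples from a parameter definition, here is a Python sketch. The spec format and the create_user endpoint are invented for illustration; a real implementation would parse something like an OpenAPI document instead:

```python
import itertools

# Invented, illustrative parameter spec: in practice this would be
# parsed from an API definition rather than written by hand.
spec = {
    "age":  {"valid": [0, 18, 120], "invalid": [-1, 999]},
    "name": {"valid": ["a", "x" * 64], "invalid": ["", "x" * 10_000]},
}

def generate_cases(spec):
    """Yield (inputs, should_succeed) tuples: every all-valid
    combination, plus each invalid value paired with otherwise-valid
    ones so failures are attributable to a single parameter."""
    params = sorted(spec)
    # All-valid combinations: the call should succeed.
    for combo in itertools.product(*(spec[p]["valid"] for p in params)):
        yield dict(zip(params, combo)), True
    # One invalid value at a time: the call should be rejected.
    for bad_param in params:
        for bad_value in spec[bad_param]["invalid"]:
            inputs = {p: spec[p]["valid"][0] for p in params}
            inputs[bad_param] = bad_value
            yield inputs, False

# Hypothetical endpoint under test.
def create_user(age, name):
    return 0 <= age <= 130 and 0 < len(name) <= 256

for inputs, should_succeed in generate_cases(spec):
    assert create_user(**inputs) == should_succeed, inputs
```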

Why does anyone listen?

Why does anyone listen to these sophists? There seem to be two tester personas who seek out these contrived arguments. First are the semi-experienced, intellectually open folks seeking to learn all aspects of their profession. It is great that there are different voices and that they can be heard, and most of these listeners are smart enough to take note and move on. Interestingly, you might expect the newest recruits to the testing profession to be vulnerable, but they are often too overwhelmed doing the job of testing to have time for philosophical debate. The segment of the testing world that is most concerning, and most stunted by this self-serving propaganda, is the mid-career testers who think they have hit a plateau in professional growth, or who feel their jobs are threatened by automation. Rather than improve themselves and their profession, this rhetoric lets them lazily plateau in good conscience. The impact on their personal career growth aside, the real impact is felt by the people who use the software these testers should have been testing while they were instead embroiled in a manufactured debate.

What software testers need are people in the community who are fantastic, fanatic, and friendly testers who spend their time sharing how they do a mix of testing, checking, verifying, asserting, affirming, evaluating, inspecting, probing, questioning, automating, scripting, and executing software. (Just wanted to get ahead of future debates.) What software testers don’t need are self-interested, calculated debates about nothing.

Yes, I get the irony of writing this :)

— Jason Arbon. CEO @ test.ai
