Testers Don’t Test Anymore

jason arbon
7 min read · Apr 20, 2018

“…rise of the Software Verifier”

Have you noticed that software these days is a bit more delicate than you’d like it to be? More delicate than it used to be? Do you feel the urge to close all open apps or restart your phone? Why does my 6-year-old know how to force-quit apps like a gunslinger? The reason is that testers don’t test anymore.

Modern software testing doesn’t have much to do with testing. Testing is the art and science of taking a piece of software and trying all sorts of ways to break it. You used to expect a great tester to huddle at their desk in the morning, find some bugs, and spend their lunch break telling their developer and product manager buddies about all the ways they just saved the company and the universe from destruction. This doesn’t happen much anymore. OK, the testers never really did sit at the same table as the developers. If the testers aren’t testing much anymore, what exactly are all those software testers doing?

Modern software testers spend the day with Twitch on one screen while, on the other screen, they verify the app still works like it did yesterday. They are re-checking login, search, and cart functionality on the new build that their development teams produce each day, and sometimes every hour. Every new version has only a few small changes, but key functionality could have been broken accidentally. The testers barely have enough time to verify that the build is basically working before a new version is ready for testing the next day. I got a preview of the impact of rapid continuous integration when the Android test team at Google had 15 testers manually walking through the exact same list of tests every day. It makes sense — what value is an obscure bug when the basic operations of the application might be toast?

If the tester can code, the situation is worse. The automation tester tries to automate the verification of basic functionality so they can, theoretically, have time to actually test later. But the automation itself is too flaky and provides too little coverage to trust its results. All this focus on test code means another tester isn’t testing the product — they are futzing with new code that customers and the business don’t care about. No one talks about “Test Club”.

Testers used to test. I’m an old man, and my first creative act as a software tester was to set up a system to automatically and continually reboot a Windows CE device (yes, that is a thing). After a hundred cycles overnight, the device failed to boot ever again without clearing the NVRAM. Most updates worked, but I was looking for those that didn’t. My friend, who went to a fancier school than I did, also found a great bug that week by leaving a programming book on the wireless keyboard and checking it in the morning. Turns out, the operating system would die after you typed the letter X ten thousand times. Both bugs were deemed important and fixed, and we were both rapidly promoted. We were testing 80% of the time, and only 20% of the time were we verifying that basic functionality still worked. We were bug hunters. We were testers.

Modern software development has changed all that. The development tools and languages are much easier to use, more powerful, and produce far more software — which means there is a lot more software to be tested. Agile processes mean there are new builds every day, and the team expects to ship a new release every ‘sprint’, leaving barely enough time to verify basic functionality and a couple of new ‘stories’ before another cycle begins. A recent fad is to not have testers on projects at all — why have them around when they are expensive and just doing repetitive work anyway, right? DevOps has become so sophisticated that there is little fear of bugs. DevOps teams can now deploy in increments, monitor logs for misbehavior, and push a new version with fixes so fast that only a few users are ever affected. Modern software development has squeezed the testers out of testing.

Features are more important than quality when teams are moving fast. Frankly, when a modern tester finds a crashing bug with strange, goofy, or nonsensical input, the development team often just groans and sets the priority of the bug to a level at which it will never actually get fixed. The art of testing and finding obscure bugs just isn’t appreciated anymore. As a result, testers today spend 80% of their time verifying basic software features, and only 20% of their time trying to break the software.

Based on search queries, interest in software testing appears strongest in the regions of the country producing complex and critical software platforms (WA, CA), medical software (the DC area), and financial software (NY). But even their interest in software testing is fading fast…

Why should we care? Software powers most of our lives now. As I was typing this brilliant prose, I was also battling an Alaska Airlines app that had crashed five times since I got to the airport. I sheepishly went to the desk to get a printed boarding pass, only to restart the device one more time while in line, and yes! The app loaded the virtual boarding pass. Not trusting the app, phone, or network, I took a screenshot to use at the gate — thankfully, that worked. It might be related to the fact that the phone had started consuming all my free storage; the recommended fix for that, of course, is a full restore from scratch. We need to do something about software quality.

What to do about all this? The fix is a pretty obvious one. Software Verification is important. Software Testing is important. But they are very different jobs. We should just call things what they are and split the field in two. Software testers, who spend their day trying to break large pieces of important software, and software verifiers, who spend their time making sure apps behave as expected day-to-day, should each be recognized for what they are actually doing. The world needs to see the rise of the “Software Verifier”.

Software verifiers focus on understanding what the software should “basically” do. What do users expect? What does the business require? Which features are most important? What data, contexts, and conditions are most expected in real-world usage? Then they put intelligent and efficient plans into motion to verify the software’s behavior. Software verifiers verify that the software ‘works’; there is no expectation that they are testing, and they are rewarded for the software just plain working.

With software verifiers around, software testers can unapologetically focus on breaking software. They can be rewarded for pushing the limits of the software, celebrated for finding rare but devastating bugs and corner cases, and free to focus on techniques and tooling specific to breaking software. Software testers can concentrate on conditions that may happen only 1% of the time but might cause loss of life, money, or data — that is the domain of the software tester. Software testers can and should explore the app hunting for bugs.

How do you know the right mix of software verifiers versus software testers your project needs? If your project is moving quickly and can detect and deploy fixes rapidly in production, consider a split of 80% verification and 20% testing. If your project can kill humans or the business if it fails, ships infrequently, or is very complex, consider 80% testing and 20% verification. If your software stands somewhere in the middle, use your judgment between these two extremes. Too often, teams building dangerous, complex, and slow-to-build software transition to agile and do only verification. Too often, low-risk apps have testers trying edge cases all day while basic functionality is routinely broken. Be smart and divide resources appropriately between verification and testing. For those who want a formula, try this:

Verification_Percent_Rule = Average(Complexity, Danger, Speed)

where:
Complexity := 1-100, where 1 ~= Windows Operating System, 100 ~= Hello World
Danger := 1-100, where 1 ~= Can kill people, 100 ~= Flappy Bird
Speed := 1-100, where 1 ~= Ship yearly, 100 ~= Ship continuously

Credit: Thanks to Tom Flynn for the idea of quantifying this.

Calculate a quick estimate of how many Verifiers vs Testers you should have on your project.
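For those who would rather poke at the numbers in code, here is a minimal sketch of that rule in Python. The function name, the input checks, and the example scores are my own illustrative assumptions; only the averaging of the three 1-100 scales comes from the formula above.

def verification_percent(complexity: float, danger: float, speed: float) -> float:
    """Estimate what percent of quality effort to spend on verification.

    Each score uses the 1-100 scales defined above:
      complexity: 1 ~= Windows Operating System, 100 ~= Hello World
      danger:     1 ~= can kill people,          100 ~= Flappy Bird
      speed:      1 ~= ship yearly,              100 ~= ship continuously
    The remainder (100 minus the result) is the suggested share for testing.
    """
    for name, score in (("complexity", complexity), ("danger", danger), ("speed", speed)):
        if not 1 <= score <= 100:
            raise ValueError(f"{name} must be between 1 and 100, got {score}")
    return (complexity + danger + speed) / 3

# Hypothetical example: a simple, low-risk app that ships continuously.
pct = verification_percent(complexity=80, danger=90, speed=100)
print(f"Verification: {pct:.0f}%, Testing: {100 - pct:.0f}%")  # Verification: 90%, Testing: 10%

Running it for that hypothetical fast-moving, low-risk app lands near the verification-heavy end of the guidance above; push the scores toward 1 and the recommendation flips toward testing.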

To address the baby elephant in the room — what happened to quality assurance (QA) in all this? QA is more a process than a job. QA is the planning of both testing and verification and mapping those back to the business goals. Often this role is fulfilled by product managers and/or testing leadership in collaboration with development and business management. QA is making sure that the right amounts of verification and testing are happening and interpreting the results to advise the business functions on risk. QA is an often misused, controversial and confusing job title. So it is best to let this one fade into history.

If you are a tester doing verification work all day, describe yourself as a verification expert so people understand and can better value your work. If you are a test manager, give your reports the titles they deserve — and reevaluate how much testing versus verification your team actually does. Reward verifiers for maintaining fast and efficient methods of verifying expected functionality. Reward software testers for finding scary or hard-to-find bugs in the system. Don’t blend or confuse the two professions and skill sets — they are very different, and you ignore this at your peril. Please do this soon, and do it carefully, so you don’t mess up my flights between Seattle and San Francisco.

It is a new world where the Software Verification role has risen, and the Software Tester is almost extinct. The key is to recognize these are two separate roles with different values, methodologies, and goals.

— Jason Arbon, CEO @test.ai, convincing AI to help humans with our software verification and testing problems.
