Turning Testers into Machine Learning Engineers

jason arbon
Sep 13, 2018


Software testers should welcome this new world of AI-driven testing. Most testers worry that their jobs will be replaced, or that the AI will have all the fun. In reality, the transition to AI-driven testing is an opportunity for testers, if they seize the moment: they can easily make their jobs more interesting and more profitable. What does this new world of testing look like?

Let’s start with a quick description of the coolest and most highly paid jobs at Google: working on Search and AI. What do these folks do all day? Here is a clue: the engineers working on web search are called “Search Quality Engineers”. What those engineers do all day is come up with tests. Yes, test cases. These search quality engineers comb through the results of the latest version of the search engine, looking for ways the results are less than perfect. That sounds a lot like testing: looking through test data for ‘bugs’, or problems that need fixing.

When they spot a problem, they come up with an idea for a fix, often just adding more emphasis to a ranking signal, such as how frequently the search term appears in a document. The change is often just a value or two in a configuration file. They then spend the next few days ‘testing’ the idea by re-running a large set of search queries through the new version of the engine and combing through the data again to see whether the results got better or worse. Notice they aren’t writing a whole lot of code. This sounds a lot like software testing, just at a larger scale, with results that are more quantified and scientific.
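To make that loop concrete, here is a minimal, hypothetical sketch in Python. The documents, features, weights, and quality metric are all toy stand-ins invented for illustration, not anything from Google’s actual system:

```python
# Toy sketch of the search-quality loop: tweak one ranking weight,
# re-run a judged query set, and compare a quality metric.
# Everything here is invented for illustration.

DOCS = {
    "doc_a": {"term_frequency": 0.9, "page_rank": 0.2},
    "doc_b": {"term_frequency": 0.4, "page_rank": 0.9},
    "doc_c": {"term_frequency": 0.1, "page_rank": 0.1},
}

# Human-judged expected results: in effect, the test cases.
JUDGED = {"running shoes": ["doc_a"]}

def rank(weights):
    """Rank all docs by a weighted sum of their ranking signals."""
    def score(doc):
        return sum(weights[k] * v for k, v in DOCS[doc].items())
    return sorted(DOCS, key=score, reverse=True)

def precision_at_k(results, ideal, k=1):
    return len(set(results[:k]) & set(ideal)) / k

def evaluate(weights):
    return sum(precision_at_k(rank(weights), ideal)
               for ideal in JUDGED.values()) / len(JUDGED)

baseline = {"term_frequency": 1.0, "page_rank": 1.0}
tweaked  = {"term_frequency": 1.5, "page_rank": 1.0}  # the "fix": one value

# "Testing" the idea: did the change make results better or worse?
print(evaluate(baseline), evaluate(tweaked))  # 0.0 -> 1.0 on this toy data
```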

What are those AI researchers at Google doing all day while they earn the big bucks? As you might suspect: they are testing all day. The basic process of machine learning is simply a matter of having a lot of test input examples (say, pictures of dogs and cats), knowing the expected outputs (“dog”, “cat”, or “CatDog”), and giving them to a machine to learn from on its own. Sure, these machine learning engineers try different algorithms and tweak the weights of the training mechanism, but by and large this is simply testing. These engineers spend most of their time testing these software systems and algorithms, with only the occasional creative thought, followed by lots and lots of testing. Notice again, these engineers aren’t usually writing a whole lot of code, and when they do, they are often writing additional testing code. These highly paid “AI” engineers are really just very sophisticated testers.

Training a neural network looks a whole lot like ‘Testing’
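For instance, here is a minimal sketch of that “inputs plus expected outputs” process using scikit-learn. The two-number features are made up and stand in for real pictures of cats and dogs:

```python
# Toy sketch of the machine-learning process described above:
# example inputs, expected outputs, and a held-out "test" split.
# The features are invented numbers standing in for real images.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Inputs (say, [ear_pointiness, snout_length]) and expected outputs.
X = [[0.9, 0.2], [0.8, 0.3], [0.2, 0.9], [0.1, 0.8],
     [0.85, 0.25], [0.15, 0.85]]
y = ["cat", "cat", "dog", "dog", "cat", "dog"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=0, stratify=y)

model = LogisticRegression().fit(X_train, y_train)

# Most of the day-to-day work is this part: run the held-out test
# cases and comb through where the model disagrees with expectations.
print(model.score(X_test, y_test))
```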

So why aren’t traditional software testers paid as much? Why is software testing so different? So painful? Because we don’t have a common set of shared algorithms and tools for basic software testing. Those Search and AI engineers have large, sophisticated, shared algorithms, infrastructure, and tooling. Software testers have just never had the money, time, and expertise to build a similar environment for testing. The AI and Search engineers have access to, and contribute to, open source libraries like scikit-learn and TensorFlow, which are really just glorified testing infrastructure. TensorFlow even has interactive visual interfaces like TensorFlow Playground. Nothing like this really exists today for software testing.

TensorFlow Playground

A key point that causes a lot of fear, uncertainty, and doubt among testers is the thought that they not only need to learn to be great programmers but also need to understand how all this “AI” works. What people miss is that you don’t have to be a great programmer to be a great Search or AI engineer. You do need the basic skills of data analytics and statistics, but a few quick online courses can ramp most technical folks up on these topics enough to become competent machine learning engineers. You do *not* need to know how all this “AI” magic works to apply it to testing problems. It is helpful, but not necessary. Much like my dad doesn’t need to know about TCP/IP to use a web browser, or need an electrical engineering degree to drive a Tesla, testers can use these tools to apply AI to their testing problems.

All that said, it would be even better if there were a suite of shared algorithms, datasets, and tools specifically designed for testing problems. With such a suite, testers could easily step into this new world of AI-driven testing without needing to know the mechanics of the underlying AI. Testing-specific tools need to be built to shield testers from the complexity of the training mechanisms and let them focus on what they are great at: designing test cases. The humans should design the test cases, and the machines (AI) should do the hard work of figuring out how to execute them.
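As a sketch of that division of labor, a test case could be expressed purely as human intent, as below. The format here is hypothetical, invented for illustration; the AI-driven framework that would figure out how to execute each step is deliberately not shown:

```python
# Hypothetical sketch: the human designs the test as pure intent.
# An AI-driven framework (not shown, and not a real library) would
# be responsible for figuring out *how* to perform each step.
guest_checkout = {
    "name": "guest checkout",
    "steps": [
        "open the app",
        "search for 'running shoes'",
        "add the first result to the cart",
    ],
    "verify": [
        "cart shows 1 item",
    ],
}
```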

At test.ai we are quietly working to open source our standard image classification system for identifying screens (login, search, shopping carts, etc.) and individual elements (username and password fields, search text boxes, shopping cart buttons, and so on). And we are working to bring this AI capability to every test automation framework, freeing testers from wrestling with the mundane work of CSS selectors, magic IDs, and XPaths.
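To illustrate the difference, here is a hypothetical sketch contrasting the two approaches. `classify_elements` is an invented stand-in for a pre-trained visual classifier, not an actual test.ai API:

```python
# The traditional way: a locator tied to the app's internal markup,
# which breaks whenever the DOM changes.
LOGIN_XPATH = "//div[@id='hdr']/form/button[2]"

def classify_elements(screenshot):
    """Stand-in for a visual classifier returning labeled regions."""
    return [
        {"label": "username_field", "box": (10, 40, 200, 70)},
        {"label": "login_button",   "box": (10, 90, 120, 130)},
    ]

def find(screenshot, label):
    """The AI way: ask for an element by what it *is*, not by where
    the markup happens to put it in this release."""
    return next(e for e in classify_elements(screenshot)
                if e["label"] == label)

print(find("login_screen.png", "login_button"))
```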

Search and AI engineers use tools like LabelBox and Supervisely to label their data. Testers need similar tools to make labeling their test data easy. At test.ai we are building such tools, focused on software testing, to make it easy for testers to become machine learning engineers. Behind the scenes, the system automatically builds classifiers and trains networks to recognize the different parts of an application; the tester need only tell the machine what each element or screen is called. And just as Search and AI engineers have shared image-recognition systems for cats, dogs, sunsets, and people, the test.ai system has pre-trained classifiers for the things testers care about: product categories, home pages, login buttons, and so on. Software testers just need the right tools to become machine learning engineers themselves.
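In practice, “telling the machine what each element or screen is called” can amount to little more than a label file. The format below is hypothetical, not test.ai’s actual schema:

```python
# Hypothetical label data: screenshots mapped to screen and element
# names. Tooling would turn labels like these into classifier
# training data; the tester never touches the training code.
labels = [
    {"image": "screens/0001.png",
     "screen": "login",
     "elements": [
         {"label": "username_field", "box": [10, 40, 200, 70]},
         {"label": "login_button",   "box": [10, 90, 120, 130]},
     ]},
    {"image": "screens/0002.png",
     "screen": "shopping_cart",
     "elements": [
         {"label": "checkout_button", "box": [30, 400, 290, 450]},
     ]},
]
```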

Those Search and AI engineers also have visual tools to connect data pipelines and easily mark parts of the application as rewards for reinforcement learning. Testers need something similar to transition to a world of machine learning. test.ai has built simple drag-and-drop user interfaces that let testers compose a sequence of test steps and verifications. All the hard work of training the AI, tens of thousands of times, to execute the test case is hidden from the tester; they don’t need to know how it works to be a machine learning engineer.
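For the reinforcement-learning piece, “marking parts of the application as rewards” boils down to something like the sketch below; the screen names and reward values are invented for illustration:

```python
# Hypothetical reward function for a reinforcement-learning agent
# exploring an app: the tester marks goal and failure screens, and
# the agent learns step sequences that reach the goal.
def reward(observed_screen):
    if observed_screen == "order_confirmation":
        return 1.0   # the tester marked this screen as the goal
    if observed_screen == "crash_dialog":
        return -1.0  # and this one as a failure
    return 0.0       # everything else: keep exploring

print(reward("order_confirmation"))
```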

Combined Test Data Labeling and Test Composing UI (test.ai)

Lastly, all those Search and AI engineers have fancy query and visualization tools to see what the AI is actually doing. Software test engineers need similar tools. Again, test.ai has built intuitive reporting infrastructure to abstract away the details of mining the gigabytes of data from each test run.

AI Test Result Reporting (test.ai)

Just like the Search and AI machine learning engineers have access to a toolset that lets them focus on their job, testers now have access to a similar suite of tools for their profession. The combination of test-specific tooling for labeling, training, and reporting, now means any software tester can be a machine learning engineer.

Until the machines are actually sentient, the key role for humans is the design of these tests. There is still plenty of room for work that only a machine learning engineer with a testing background can accomplish. Teaching the machines what is good or bad behavior in an app is still a human activity. Analyzing the ‘correctness’ of the AI-driven tests and understanding how to apply the results to business decisions still needs human wetware. The takeaway is that AI-driven testing lets humans focus on the things only human minds can do and leaves the drudgery to the machines. Ironically, the encoding of that human expertise becomes training data for future machines to mimic, and eventually replace, that human intuition.

Yes, there really will be fewer “Software Testers”, because so many of them will transition to machine learning engineer roles that happen to focus on software quality instead of driving cars or finding cats and dogs in photos. AI will fundamentally change how all software testing is performed, and that is a good thing. As a software tester, if you play your cards right, you can have a lot more fun, have a cooler title, and finally make the big bucks.

— Jason Arbon, CEO test.ai
