AI Broke Test Automation

jason arbon
Aug 9, 2023


Amazingly, generative AI has had little impact on the world of test automation. Microsoft has added impressively smart AI to production versions of Office and Windows. Google is rolling out a whole new generative AI-based search engine. Is software testing just that much more complex? Are the existing approaches to test automation just that much better? Probably not.

More than manual software testers, test automation engineers are inclined to ignore AI. Many of these engineers are just learning Java and existing test frameworks in the hope of eventually joining the engineering team that builds the buttons. Those who are experienced in test automation pride themselves on their depth of knowledge in languages like Java or Python, or their expertise in applying test frameworks like Selenium, Appium, or Playwright. These engineers have seen AI as a curiosity, a mysterious black box, an overly sophisticated form of computer science that requires years of experience and vast amounts of computing to leverage. Test automation engineers are generally happy about that perception, because it lets them hide in their sandbox of expertise, but generative AI just broke that sandbox in several ways.

The manual tester in the cubicle next to them, their manager, and even their grandmother can now simply ask an AI to generate Java/Selenium automation. Test automation now looks easy. Much like manual testers complained about test automation, test automation engineers now protest that generated code is ‘too simple’, that ‘it needs meticulous human curation’, or ask ‘who is going to check the AI output?’ People are questioning why their skills are still needed. Even if their arguments are sound, maybe they should just be checking the output of the AI. AI is getting smarter faster than human testers can learn new skills; isn’t it just inevitable that machines will generate all the tests anyway? Maybe the manual tester should really be the one validating the results of the AI-generated automated test code?

Microsoft is already releasing generative AI for software testing, without the permission of software testers. The software testing industry has been talking about shifting left for years, but now developers with real AI tooling are shifting right, and quickly. The trend now seems stronger for developers doing more testing than for testers becoming more developer-like. Developers never really wanted to do the testing, but if it takes less time to generate some OK tests than to hire and talk to a team of testers, things might be about to change.

AI is fast. Not just a little faster, but more like 10X, or even 100X, faster at generating test code than an experienced human with a compiler. The cost? Perhaps 1/1000th the cost of that experienced human. If the generated test automation is wrong 1%, or even 10%, of the time, the cost/benefit may still be interesting.

Obvious Tip: When it comes to test code generation, many of the errors are simply due to the AI assuming the wrong version of the language or testing library. Just tell the AI up front in your prompt which versions you are using.
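
For example, a version-pinned prompt might look something like this (a sketch only; the URL, element id, and version numbers are placeholders for whatever your project actually uses):

```python
# Hypothetical prompt sketch: stating exact tool versions up front keeps the AI
# from guessing and generating code against the wrong Selenium/pytest API.
PROMPT = """\
Environment: Python 3.11, Selenium 4.11, pytest 7.4, Chrome.
Write a pytest test that opens https://example.com/login, submits the login form,
and asserts the element with id 'dashboard-header' is visible.
Use selenium.webdriver.common.by.By locators, not the old find_element_by_* helpers.
"""
```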

About all that bad test code AI generates: most existing test code isn’t all that stable or well-architected itself, but we’ll ignore that for now. Like many manual software testers, most test automation engineers have only dabbled with generative AI for test code. Maybe they have toyed a bit with AI for input variations. But in reality, testers avoid finding out the answers they don’t want to hear. Most of the generated test code is pretty good. It’s amazing that so many developers seem to be happy and excited about generated test code; testers, not so much. Bias? Only human testers can write real, reliable test code, right? Most testers haven’t tried simple things like asking the AI to check the generated code for errors or asking the AI to fix its own generated code. If they did, they probably wouldn’t like the test results. They often prefer to quickly declare “it doesn’t work” and head back to their normal routine of writing page objects or tackling flaky tests.

Tip: When asking generative AI for test code, immediately ask it to think through the logic of its answer again, and fix any logical or code errors — you will be amazed at how well this works.
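
A minimal sketch of that two-pass flow, assuming the OpenAI Python SDK (any chat-style client works the same way); the model name and prompts here are placeholders:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# First pass: ask for the test code.
messages = [{"role": "user", "content":
             "Write a pytest/Selenium 4 test for the login page at https://example.com/login."}]
draft = client.chat.completions.create(model="gpt-4", messages=messages).choices[0].message.content

# Second pass: feed the draft back and ask the model to re-check its own logic.
messages += [
    {"role": "assistant", "content": draft},
    {"role": "user", "content": "Re-read the test above. Walk through its logic step by step, "
                                "find any logical or code errors, and return only the corrected code."},
]
fixed = client.chat.completions.create(model="gpt-4", messages=messages).choices[0].message.content
print(fixed)
```

The same trick works in a plain chat UI: paste the generated test back in and ask the model to review and fix it before you ever run it.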

When the few brave, optimistic souls in test automation start experimenting with generative AI, the experience is often fraught with frustration and error. AI is the worst nightmare for many test automation engineers:

  1. Variable: It often produces different outputs for the same input.
  2. Qualitative: The output is often ‘qualitative’, not a boolean, yes/no, or pass/fail value.
  3. Fuzzy Responses: Processing the output requires parsing human language instead of well-typed values on a class instance.
  4. Persnickety: Even if you demand that the AI give you a JSON response, it is moody and will still sometimes change the JSON structure or add extra commentary and break your parsing. The Horror!

Ironically, these are the types of things software test automation engineers worry about in product code. These are also the things that they demand product code be able to handle. As we all know, some defensive coding, input validation, retries, and a few try/catches can go a long way.
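
A minimal sketch of that defensive style (call_model here stands in for whichever client function actually talks to your model, and the 'verdict' field is a hypothetical schema):

```python
import json
import re
import time

def parse_ai_json(raw: str) -> dict:
    """Defensively pull a JSON object out of an AI reply that may wrap it
    in markdown fences or surround it with extra commentary."""
    match = re.search(r"\{.*\}", raw, flags=re.DOTALL)  # grab the outermost {...}
    if not match:
        raise ValueError("no JSON object found in AI response")
    return json.loads(match.group(0))

def ask_for_json(prompt: str, call_model, retries: int = 3) -> dict:
    """Call the model and retry when the reply doesn't parse or is missing
    the fields the test actually relies on."""
    for attempt in range(retries):
        try:
            data = parse_ai_json(call_model(prompt))
            if "verdict" not in data:        # validate the structure you depend on
                raise KeyError("missing 'verdict' field")
            return data
        except (ValueError, KeyError):
            time.sleep(2 ** attempt)         # back off, then ask again
    raise RuntimeError(f"AI did not return usable JSON after {retries} attempts")
```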

To date, other than cost, the key complaints about test automation were its slow pace and lack of coverage. Generative AI has the opposite problems: it is too fast, and it is too easy to generate a lot of it :)

Those few brave engineers who keep trying to leverage generative AI in their test automation after that first encounter will benefit greatly. The field of test automation engineers in general will likely narrow, but the market demand for experts with GenAI experience will grow exponentially in the coming years. The test automation engineers who aren’t broken by generative AI, despite all the reasons they should be, will thrive in this new world of software testing. But many will be broken along the way.

If anyone is left, the next post will cover how AI has broken Test Tools and Marketing.

— Jason Arbon
