AI QuickStart for Testers

Here’s a quick-start guide for software testers who want to incorporate AI into their workflow right away.
Important:
- If you’re concerned about privacy or security when using AI, don’t let that stop you. Refer to the Privacy and Security section below.
- If a $30/month price tag is a barrier to adopting AI to enhance and future-proof your career, check out the Cost section below.
Starting Now is Key
The most important step is to start now. This might be your last, best chance to get ahead.
AI tools like ChatGPT are currently more accessible, affordable, and effective than ever. Use ChatGPT for everything: drafting emails, creating test cases, analyzing reports, and even brainstorming during your lunch break.
If you’re unsure how to use it, don’t waste time on classes or tutorials. Treat it as your personal assistant and just start asking questions.
If you can’t figure out how ChatGPT can help with your work as a tester, you likely haven’t tried enough — or may be resisting as a defense mechanism. Without learning soon, you risk being left behind. Starting is super easy — just follow the guidance below.
Key Things to Keep in Mind
When using AI/LLMs as a tester, here are the key things to keep in mind:
- Context: Help the AI help you by sharing what’s happening. Clearly explain what you’re trying to accomplish and how the AI can assist. Provide as much data as possible — this could include test cases, product code, logs, screenshots, or even team photos. If the AI’s responses seem incomplete, it’s often because it lacks sufficient context. Don’t hold back — equip it with the details it needs to give you the best possible answers.
- It is a “ChatBot” — chat with it. Don’t treat AI like a search engine. Have a conversation. Simple questions lead to simple answers. Instead, ask complex questions, request detailed explanations, generate test cases, and seek iterative feedback. Dive deeper, asking why at every step. Within a conversation, the AI builds on everything you’ve shared so far, much like a person would.
- Truthiness and Accuracy: Many testers worry about LLM/AI output accuracy due to stories about hallucinations or errors. Remember, testing questions are often straightforward. If you provide clear instructions, AI can deliver highly relevant answers. Complex or ambiguous prompts lead to unpredictable responses.
- Speed and Cost Efficiency: AI is often orders of magnitude faster and cheaper than doing the same work manually. Don’t hesitate to experiment and explore its potential — the time you invest in learning how to use AI will repay itself quickly in efficiency gains.
- It’s Smarter Than You Think: Approach AI as though it’s smarter than you. It might occasionally misinterpret your instructions, but with the right prompts, it often provides superior solutions.
Advertisement: The easiest way to use AI for test coverage is with Checkie.AI. Simply provide your website’s URL, and you’ll receive free automated test results powered by AI.
Pick an AI / LLM ChatBot
All the major chatbots are useful for software testing, and no single one stands out as best specifically for testing purposes. When you hear that one is better than another, the advantage is usually momentary. For almost all testing tasks, they offer the same core value.
Generate Bugs
As a tester, your goal is to find and document bugs, and AI makes this incredibly simple:
- Take a Screenshot: Capture anything — a webpage, text document, product requirements, or even network traffic.
- Upload and Prompt: Ask the AI: “Check this screenshot for any bugs. If there are any, please list them. If not, don’t list any.”

Tip: As a tester, always include the option for the AI to return "no issues" in your prompt. This helps ensure the AI doesn’t hallucinate or invent problems where none exist, leading to more accurate and reliable results.
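No code is required for any of this; pasting into the chat window is enough. But if you’d prefer to script the same check, here’s a minimal sketch against an OpenAI-style chat API. The model name, the screenshot path, and the OPENAI_API_KEY environment variable are assumptions; adapt them to your provider.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Base64;

// Hedged sketch: send a screenshot plus the bug-check prompt to an
// OpenAI-style chat API. Model, path, and env var are assumptions.
public class ScreenshotBugCheck {
    public static void main(String[] args) throws Exception {
        String image = Base64.getEncoder()
                .encodeToString(Files.readAllBytes(Path.of("screenshot.png")));

        String body = """
            {"model": "gpt-4o",
             "messages": [{"role": "user", "content": [
               {"type": "text", "text": "Check this screenshot for any bugs. If there are any, please list them. If not, don't list any."},
               {"type": "image_url", "image_url": {"url": "data:image/png;base64,%s"}}
             ]}]}
            """.formatted(image);

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.openai.com/v1/chat/completions"))
                .header("Authorization", "Bearer " + System.getenv("OPENAI_API_KEY"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```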
The AI will try its best to find bugs, but it’s up to you to review them and decide which ones might be interesting or useful. Who can argue with quick, “free” bugs that you can take credit for? 😊
In this case, the AI identified several issues — even spotting an inconsistency in the font!

We’ve just found our first bug with AI! Try asking the AI if there are any bugs or issues with the webpage or application you’re testing — it’s embarrassingly easy, incredibly fast, and let’s be honest, what tester wouldn’t want “free” bugs? 😊
Generate Tests
A key responsibility of a test engineer is creating test cases, and AI can definitely help with this — especially the latest models, which are smarter and capable of processing images and large amounts of data or context.
Let’s generate some test cases for the Google homepage!
- Take a screenshot of the webpage (or whatever you’re working on). This could be a picture of text, documentation, product requirements, or even network traffic. Alternatively, you can provide the raw text from these documents as well.
- Upload and Prompt: Provide the AI with the relevant screenshot or text, and then ask a simple question like, “Please generate test cases for this webpage.”

The AI delivers exactly what you ask for — faster than you could have typed, let alone thought of, the test cases yourself.

The list of tests goes on, and you might be thinking, “I could have come up with these myself.” Well, yes, but remember — we only asked a simple question. Now, let’s take it a step further and ask the AI to generate tests that testers might have missed but are still useful.

If you’re a skeptical tester or genuinely curious about the difference between your own thought process and the AI’s, stop here and ask yourself the same question. Then compare your results.
Also, remember to ask the AI to explain its reasoning — it’s always a great way to gain more context and insights, especially for AI-driven testing tasks.
Take a few minutes and come up with your own list.
The AI responds with over 16 test cases; here are 3 of them:

The test cases are pretty interesting — I know I didn’t think of some of them. For instance, the AI suggested testing different interactions during page load. Take a couple of minutes to consider what other interactions might be worth exploring yourself.
Now, let’s ask the AI again, this time specifically requesting explanations for its reasoning.

And the AI answers with:

Here are 2 of the 16 suggested test scenarios targeting issues caused by interactions during page loading. The test cases are not only interesting but also come with insightful technical reasoning about why these scenarios might cause problems before the page fully loads.
Great testers might be thinking, “Wait a minute, the Google homepage loads too quickly for me to even test these scenarios!” Silly AI doesn’t always account for the realities of the physical world. But when in doubt, let’s ask the AI for help — even in figuring out how to execute these test cases effectively.

Just when you think the AI might be getting too creative, it surprises us with insights that many testers might not be familiar with. And guess what — these scenarios actually occur on slow networks!
Of course, we can keep asking “how” and “why” as many times as needed. Great testers will continue to prompt the AI for more test cases, different types of test cases, ways to verify functionality, and explanations for why the AI finds them interesting or useful.
AI isn’t just a test case generator — it’s a helpful partner that enhances our testing suites and even educates us on the mechanics of executing different types of test cases.
Generate Test Automation
AI can assist with traditional test automation, but many automation engineers remain skeptical — especially after brief interactions. This hesitation is understandable, as test automation engineers have invested significant time and money learning to code, and they may feel anxious about how AI could impact their role.
The good news? AI can’t fully handle all aspects of automation yet, but it can significantly accelerate your scripting work.
Code Generation: Let’s start by asking AI to generate the test code needed to automate a search execution on the Google homepage. It’s as simple as asking!
Keep in mind, whenever you leave something relevant out of the prompt, the AI will make assumptions. In this case, we didn’t specify the programming language or test framework, so the AI defaulted to what it sees most often — Java and Selenium. We could have just as easily asked for Python, Playwright, or any other combination to suit our needs.
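The AI’s actual output isn’t reproduced here, but it looked roughly like the following reconstruction (a sketch for illustration only; your chatbot’s answer will differ):

```java
import org.openqa.selenium.By;
import org.openqa.selenium.Keys;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;

public class GoogleSearchTest {
    public static void main(String[] args) {
        // Launch a local Chrome session (Selenium Manager resolves the driver).
        WebDriver driver = new ChromeDriver();
        driver.get("https://www.google.com");

        // Locate the search box by its name attribute and submit a query.
        WebElement searchBox = driver.findElement(By.name("q"));
        searchBox.sendKeys("software testing");
        searchBox.sendKeys(Keys.ENTER);

        System.out.println("Page title after search: " + driver.getTitle());
        driver.quit();
    }
}
```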

Yes, the code is missing try/catch logic. The AI is aware of this and generates code primarily for clarity and demonstration purposes, not as production-ready code.

How did the AI know to find the element using name="q"? While we can’t know for sure, it’s likely because the AI has seen many demo examples of automation code that use this exact approach. However, when testing a page the AI has never encountered, you may need to provide HTML snippets or other hints to help it generate the correct selectors.
But wait — it didn’t use page-object-models (POM) or any of the libraries we typically rely on for setup, cleanup, or environmental parameters in our existing tests. To address this, you’ll need to share that information with the AI, and it will happily incorporate your existing assets.
In fact, the AI recognized that the POM was missing search functionality and extended it. Pretty cool. Pretty smart.

And, it updated the test script to use the new Page Object Model.

The AI not only leveraged our POM but also extended it to be more useful in other automation scripts.
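To make that concrete, here’s a hedged sketch of what the extended page object and the rewritten test might look like. The names GoogleHomePage, open, and searchFor are hypothetical; your own POM will differ.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.Keys;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

// Hypothetical page object, extended with the search behavior the AI
// added when it noticed the POM had no search method.
class GoogleHomePage {
    private final WebDriver driver;

    GoogleHomePage(WebDriver driver) {
        this.driver = driver;
    }

    void open() {
        driver.get("https://www.google.com");
    }

    // The method the AI added to cover search functionality.
    void searchFor(String query) {
        driver.findElement(By.name("q")).sendKeys(query + Keys.ENTER);
    }
}

public class GoogleSearchPomTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        // The rewritten test shrinks to a few page-object calls.
        GoogleHomePage home = new GoogleHomePage(driver);
        home.open();
        home.searchFor("software testing");
        driver.quit();
    }
}
```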
If we want the code to be “production ready” and avoid flakiness or false negatives/positives, we can simply ask the AI to make it so. Adding try/catch logic is just one example of how the AI can improve robustness. Here’s a great hack: ask the AI to generate “robust and verbose code.”
This approach is perfect for test code. It makes the scripts less likely to break and ensures that, when they do, it’s easy to understand what the code was trying to do and what actually happened — crucial for those stressful moments when you’re debugging in a hurry.
Let’s try that and see if the updated code is better suited for a test infrastructure.
Tip: Ask AI to make the code “robust and verbose” when working with test scripts.
You can see the AI has added a lot of useful logging and try/catch blocks. This code is likely more robust and verbose than some of the code you’ve written recently — go ahead, check for yourself if you’re feeling brave! 😊
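As a hedged reconstruction of that output (the exact code will vary), “robust and verbose” typically looks something like this, with explicit waits, logging, and failure handling:

```java
import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.Keys;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class RobustGoogleSearchTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            System.out.println("Opening Google homepage...");
            driver.get("https://www.google.com");

            // Wait explicitly instead of assuming the element is ready.
            WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
            WebElement searchBox =
                wait.until(ExpectedConditions.visibilityOfElementLocated(By.name("q")));
            System.out.println("Search box found; submitting query...");

            searchBox.sendKeys("software testing" + Keys.ENTER);
            System.out.println("Search submitted. Title: " + driver.getTitle());
        } catch (Exception e) {
            // Verbose failure output: what we were doing and what went wrong.
            System.err.println("Search test failed: " + e.getMessage());
            throw new RuntimeException("Search test failed", e);
        } finally {
            driver.quit();
        }
    }
}
```

The explicit wait and the log lines are exactly what make failures easy to diagnose in a hurry.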

OK, but only a human tester knows the specifications, right? Or the test plans? Simply attach them to the prompt, and you can ask the AI to add more test cases, identify gaps in coverage, and even double-check that you’re verifying the correct behavior in the code.
If you have existing test code, you can upload it to the AI and prompt it to perform a code review, make the code more robust, add comments, refactor it, or suggest improvements. AI isn’t just great for generating new code — it’s also incredibly helpful for fixing up and enhancing old code.
Test Planning
AI can assist in creating or reviewing test plans and strategies.
- Take a Screenshot: Capture the webpage (or whatever you’re working on), specifications, a basic description, or any context related to what you want to test.
- Upload and Prompt: Share the details with the AI and ask a simple question like, “Can you create a test plan or test strategy for this application?” The AI will generate a structured and insightful plan to guide your testing efforts.

Experienced testers might think the AI’s test plans are superficial or too high-level to be useful. But compare your current test plans with what the AI suggests — you might just find you’re missing a section.
Keep in mind that AI responses are limited by the number of words it can return in a single response, and it’s generally trained to be concise. However, turning its output into a deep and comprehensive test plan is simple — just ask the AI to elaborate and provide more details for each section individually.

And, you can keep getting even more details…

Simply stitch these subsections together, and you’ll likely have a more detailed and thorough test plan in just 15 minutes — far better than what you might have today.
If you already have existing test plans, upload them to the AI and ask if anything is missing or if it has suggestions for improvement. The AI can help refine and enhance your plans quickly and efficiently.
Requirements/Specifications
Some software testers feel blocked without formal requirements. If that’s the case, follow the same process as above: share any related documents, screenshots, or context you have, and ask the AI to reverse-engineer a product specification or product requirements document. This can provide a solid starting point for your testing efforts.

Similar to test plans, if you’d like more detail on any subsection of the reverse-engineered specifications or requirements, simply ask the AI to expand on each. It can provide the depth and clarity you need.
Privacy and Security
Concerns about using Large Language Models (LLMs) are understandable, especially among testers. However, dismissing AI without thorough consideration may mean missing out on valuable tools. Some testers privately admit to using LLMs despite corporate IT restrictions, though this approach isn’t advisable.
It’s important to note that most cloud-based LLM providers, particularly those offering paid plans, have implemented strict data privacy measures. They typically do not use your data for training unless you explicitly permit it. Paid API versions generally ensure that your data remains excluded from their training datasets, as their business reputation depends on maintaining trust. While early implementations faced challenges, the industry has rapidly evolved to meet enterprise standards, prioritizing data privacy and security. The significant financial incentives and potential risks have driven providers to handle your data with utmost care.
There are many options for leveraging AI while addressing privacy and security concerns:
- Trust the AI Companies: Review the privacy and security policies of AI providers, then intelligently opt in or out based on your comfort level. For many, this is sufficient. However, note that U.S. federal regulations may require companies to retain data temporarily (~90 days) for potential government access. Criminals and highly sensitive operations should obviously avoid public cloud services.
- Public and Private Clouds: Providers like Azure, Amazon, and Google Cloud offer both public and private corporate clouds to host LLMs. Private clouds function as isolated environments, either within public cloud data centers or on your own premises, ensuring exclusive access. These setups can meet even the strict security requirements of U.S. government spy agencies. Many large enterprises already have such configurations, but testers may not always be aware. Check with your IT or development teams for access.
- Self-Hosting: For maximum privacy and security, you can run AI locally. Tools like Ollama make it easy to host open-source or public models directly on your machine. With just a few clicks and some storage space, you can run AI completely offline. The data never leaves your device, and it’s free. (A minimal sketch of talking to a local model follows this list.)
- Cost: LLM costs have dropped over 95% in the past 18 months. Many providers offer free or affordable options, including chatbots. It’s surprising to see professionals spend hours writing articles or books about AI while relying on free versions. While free versions are slightly behind paid options in terms of intelligence, speed, and features, they’re often sufficient for most tasks. However, for professional use, investing $30/month in a paid plan to boost productivity is a small price to pay and often well worth it.
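For the Self-Hosting option above, here’s a minimal sketch of talking to a locally hosted model through Ollama’s local REST API (it listens on port 11434 by default). The model name llama3 is just an example; you’d pull and start it first with Ollama’s command-line tool.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Minimal sketch: query a locally hosted model via Ollama's REST API.
// Assumes Ollama is running and the "llama3" model has been pulled.
public class LocalLlmDemo {
    public static void main(String[] args) throws Exception {
        String body = """
            {"model": "llama3",
             "prompt": "Generate 5 test cases for a login form.",
             "stream": false}
            """;

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:11434/api/generate"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        // Everything stays on localhost: the prompt and the answer
        // never touch an external network.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```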
IP, Digital Rights, and Ownership
Note that the terms of service of major LLM providers generally assign you ownership of the content their models generate from your prompts. This policy aligns with their own self-interest, ensuring you don’t need to worry about the code, text, or images being owned by someone else — you own the output.
You might wonder whether the generated content, such as text or images, is derived from other people’s work. Like most human thought and creativity, LLM outputs build on the collective knowledge of humanity — they stand on the shoulders of giants. You can safely assume these models have ingested vast amounts of information from books, websites, songs, and other media.
That said, you shouldn’t worry much about using the content generated by the LLM. Several major providers have publicly committed to indemnifying paying customers against copyright claims over generated text, images, or sounds, and their legal teams and substantial resources stand behind those commitments.
The real protection lies in the transformative nature of the output. The AI modifies and interprets the information it processes, creating derivative works. Just as Google is legally allowed to display snippets of websites on search results or artists can be inspired by others’ techniques, AI is within its rights to generate derivative content.
Concerns / Problems
If you’re waiting for AI to be perfect, you’re going to miss the train. Remember, humans are far from perfect too.
If you’re waiting for AI to be perfectly consistent or repeatable, consider this: humans, including testers, are not perfectly consistent or repeatable either — especially when multiple testers are working on the same task or running the same test case! Don’t let this hold you back. In fact, this variation can actually be valuable if you’re clever enough to capture its benefits. 😊
— Jason Arbon
CEO/Founder @ https://checkie.ai