The Rise of TestOps
How do the most well-funded, sophisticated software test teams on the planet test everything? They don't! There is a new way to test that delivers great coverage, finds important bugs early, and ensures that core functionality works on every new release. This new way isn't often written about because it feels like a capitulation to some, but it is a direct response to the reality of shipping high-quality software every month, every week, even every day. The key to delivering modern software with new features, without breaking existing functionality, is “TestOps”.
“TestOps”, much like DevOps, is the convergence of testing with operations and development. We know most testing today is too slow and unreliable to keep up with development and operations — even in the best of situations. Yes, DevOps often handles API testing and unit testing that quickly verify isolated portions of a program. Still, we all encounter major bugs and regressions in production software. What has been missing is quick, automated, end-to-end testing where the entire product is tested to make sure that it all works together and for the end-user.
The key to successful TestOps is a combination of principles and, often, the application of Artificial Intelligence and Machine Learning. The principles outline what to test, and AI finally ensures that test automation is robust enough for operational testing.
The TestOps principles are:
- Only automate the easiest, most reliable, and most important tests.
- Execute them as often as possible.
- Alert to any failures.
Only the most important tests should be automated, but not at any cost. Modern TestOps teams automate only the ‘easy’ and important tests. If a test is difficult to automate, resources will be spent on it that could have gone to other test cases. Difficulty and complexity are the enemies of reliability, and TestOps test cases need to be extremely reliable because they will be executed often and with a lot of visibility within the development and operations teams. For example, if you are testing a shopping app and sign-in is obviously important but difficult to automate reliably, it is far better to build very reliable tests for other aspects of the application that are easier to test. This can seem counterintuitive, but test reliability matters more than exactly what is tested, because flaky test failures randomize the entire team.
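To make this concrete, here is a minimal sketch of what an ‘easy’, reliable, and important TestOps check might look like, assuming a hypothetical shopping app. The base URL, endpoint, environment variable name, and response shape are placeholders for illustration, not a real service or any team's actual tests.

```python
# test_smoke.py: a minimal TestOps-style smoke test (hypothetical shopping app).
# The URL, endpoint, env var name, and JSON shape are placeholders only.
import os

import requests

BASE_URL = os.environ.get("SHOP_BASE_URL", "https://shop.example.com")

def test_home_page_loads():
    # The storefront should respond successfully on every build and deployment.
    response = requests.get(BASE_URL, timeout=10)
    assert response.status_code == 200

def test_catalog_search_returns_results():
    # A core flow that is easy to verify reliably: searching the catalog
    # returns at least one product.
    response = requests.get(f"{BASE_URL}/api/search", params={"q": "shoes"}, timeout=10)
    assert response.status_code == 200
    assert len(response.json()["results"]) > 0
```

Note that there is no sign-in step here; the hard-to-automate flow is deliberately left out in favor of checks that pass and fail for the right reasons.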
Like the unit and API tests written by developers, TestOps tests should run as often as possible, and by many parts of the development and operations teams within their own processes, to catch bugs as early as possible. Too often on agile and modern app teams, test automation runs as an afterthought, a background process that only testers monitor. Ideally, TestOps tests can execute on developer machines, but at a minimum they should execute on every new official ‘build’ or deployment, including production for monitoring. TestOps tests should strive to run everywhere to get the maximum value from the test automation investment. Fewer tests, running often and reliably, is a great thing.
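One hedged way to get to “run everywhere” is to give every stage a single entry point. In this sketch the file name, test path, and SHOP_BASE_URL variable are assumptions; the same script could be invoked from a developer laptop, a CI build step, or a scheduled production monitor, and it fails the calling pipeline on any smoke failure.

```python
# run_smoke.py: one entry point for every environment (sketch, not a mandate).
# The tests/smoke path and SHOP_BASE_URL variable name are placeholders.
import os
import subprocess
import sys

# Default to a staging target so a bare local run never hits production by accident.
os.environ.setdefault("SHOP_BASE_URL", "https://staging.shop.example.com")

# Run only the small, reliable TestOps suite and stop at the first failure so
# the calling build, deployment, or monitor fails immediately.
result = subprocess.run(["pytest", "tests/smoke", "--maxfail=1", "-q"])
sys.exit(result.returncode)
```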
Gone are the days of traditional test result dashboards; no one has the time to look at them. Not even testers want to stare at grids of green and red boxes, with donut charts and meaningless pass-percentage summaries. People only want to know what has broken that warrants blocking the next release. Because TestOps only tests key functionality, by definition the development and operations teams should want to know immediately when these tests fail. If a test failure doesn't warrant a text message to the development lead, or waking up some operations folks, the test isn't important enough to be included in TestOps. TestOps failures should be shared in real time with custom emails, text messages, or more formal escalations through tools like PagerDuty.
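As one possible implementation of real-time alerting (a sketch assuming pytest; the webhook URL is a placeholder to be swapped for your chat tool, SMS gateway, or incident-management integration), a conftest.py hook can page someone the moment any TestOps check fails:

```python
# conftest.py: push an alert the instant a TestOps check fails (sketch).
# ALERT_WEBHOOK is a placeholder; point it at your chat, SMS, or paging tool.
import requests

ALERT_WEBHOOK = "https://hooks.example.com/testops-alerts"

def pytest_runtest_logreport(report):
    # Only the actual test body matters here (not setup/teardown), and every
    # TestOps test is, by definition, important enough to interrupt someone.
    if report.when == "call" and report.failed:
        requests.post(
            ALERT_WEBHOOK,
            json={"text": f"TestOps failure: {report.nodeid}"},
            timeout=5,
        )
```

Because the suite is small and reliable, every one of these alerts is worth acting on; that is what keeps the team trusting the signal.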
TestOps is only now a realistic option thanks to the introduction of AI into the world of testing. Most TestOps teams are leveraging AI for test development, test execution, or test verification steps. Traditional test automation is often hard-coded to specific user interface elements or specific user flows, and these scripts are fragile: they break whenever the look or flow of the application changes. What's more, nearly every team deploys on multiple platforms such as iOS, Android, desktop, web, even embedded devices, which requires a different script for every platform and adds complexity to every test case. AI changes all that by making test execution more resilient across platforms, UX changes, even differences between staging environments and A/B flights. Even better, AI is enabling folks who haven't spent years debugging Java or Python code and learning frameworks to write far more reliable tests, more quickly. It is difficult to get tests reliable enough for a TestOps world with traditional tooling; AI is enabling a new, smarter type of testing.
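For contrast, this is the kind of brittle, hard-coded script described above (a hypothetical Selenium-style sketch; the URL, element IDs, and XPath are invented). Every locator is tied to one specific build of one specific platform's UI, which is exactly the coupling that AI-based element recognition tries to avoid:

```python
# A deliberately fragile, traditional UI script (hypothetical selectors and URL).
# Rename a field, reorder the DOM, or ship a different layout on mobile vs. web,
# and each hard-coded locator below breaks even though the user-facing flow
# still works.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://shop.example.com/login")
driver.find_element(By.ID, "signin-email").send_keys("user@example.com")
driver.find_element(By.ID, "signin-password").send_keys("not-a-real-password")
driver.find_element(By.XPATH, "//div[3]/form/button[1]").click()
driver.quit()
```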
What about traditional testing, and how does TestOps relate? First, TestOps tests most often can and should be written by skilled testers. TestOps is a new way to think about testing. If a team is starting from scratch, it is likely best to focus on TestOps first. Many large-scale, high-visibility projects have shipped recently with only a TestOps approach. Teams with existing manual or automated tests should prioritize a TestOps approach now and, only after TestOps is deployed to some level of satisfaction, return to automating lower-priority test cases, more difficult test cases, or manual test cases, and continue to run those in the background. It's just not worth interrupting the latest agile sprint or a continuous deployment for a lower-priority bug. Focus on TestOps first.
TestOps has organically appeared across large and small organizations over the past 18 months. Some teams use the term; most simply realize this is the best way to test modern applications. If your team is capable of writing incredibly simple and reliable tests using traditional programming environments, frameworks, and test infrastructure and services, you are one of the lucky few; in fact, you are a unicorn. For everyone else, the good news is that AI-powered test automation that even non-programmers can train is a great leveling factor, and all of us can now quickly jump to AI-powered TestOps.
Note: test.ai is rolling out a Community Edition of our AI-based training system for TestOps. Connect with this new community and technology to bring TestOps to your team: https://www.test.ai/product/aitf
— Jason Arbon