Near-Future of Testing: Relative Quality

Jason Arbon
Mar 21, 2023


How Does the CEO of Hired or LinkedIn Feel About Home Page Performance?

Test results are often viewed as dull and uninspiring, with dashboards of pass/fail results numbing the minds of those who encounter them. Simple summaries of test pass percentages lack context and meaning for most team members. A key issue with software test results is that they are arbitrary measures of quality. However, the near future of software testing will make test results engaging and relevant to everyone on the team and in the business by providing relative context.

One challenge is that test results contain numbers but are not truly quantitative. Pass percentages depend on how many tests have been written at any given point, and the totals are often dominated by test cases that are easy to write or permute. Interpreting these results typically requires a tester, or someone with a vested interest in quality, to contextualize the data. Even then, interpretations are highly opinionated, rarely rigorous, and easy to dismiss when prioritizing work items.
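
To see how a raw pass percentage misleads, consider a minimal sketch with hypothetical suite names and numbers: a flood of easy, auto-generated tests can bury a failing critical flow.

```python
# Illustrative arithmetic with hypothetical suites and counts:
# an overall pass percentage hides where the failures actually are.
results = {
    # suite name: (passed, total); cheap-to-generate tests dominate the count
    "generated_ui_permutations": (990, 1000),
    "checkout_flow": (4, 10),
}

passed = sum(p for p, _ in results.values())
total = sum(t for _, t in results.values())
print(f"overall: {100 * passed / total:.1f}% pass")  # 98.4% looks healthy

for suite, (p, t) in results.items():
    print(f"{suite}: {100 * p / t:.1f}% pass")  # checkout_flow is at 40.0%
```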

Unlike fields such as civil engineering and aeronautical engineering, which have established sophisticated, repeatable standards for quality, software engineering lacks absolute measures. Some might say code coverage is a solid metric; it isn't. Not all code paths matter equally to the product, some are easier to hit than others, and the fact that the first 80% of code coverage costs 20% of the effort while the last 20% costs 80% hints that a linear percentage is the wrong metric. This makes it difficult for companies to integrate test results into their decision-making processes regarding product releases.

How do we get teams to understand and care about quality? We make it relative to the competition: either another app in the market or an app built by another internal team. I once worked with an app team and couldn't get them to care about their app's load time, which was around 5 seconds. Crickets. Then I ran the same test on their competitors' apps and showed them their app was 2.5X slower than the next-worst app. The meeting went from blasé to "I told you it was slow!", "Wow", and "We need to share this with management". The mobile engineers said it wasn't possible to make the app much faster without a full rewrite, and even then they weren't sure how much better it would be. Within two weeks, the team had let go of two engineers and decided to simply drop their native mobile client and point users to their mobile web page instead. How's that for impactful data? Perhaps too impactful…

If CEOs are told their website is slower than the competition's, they take notice. Similarly, engineering managers pay attention if their product has bugs that competitors' products don't. However, current testing methods don't produce these metrics: time and money constraints make it hard to thoroughly test a team's own app, let alone their competitors' applications.
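
As a concrete illustration, here is a minimal sketch of a relative load-time check. The URLs are hypothetical placeholders, and timing a bare HTTP fetch with requests is a simplifying assumption; a real benchmark would drive a full browser (e.g., Playwright) and control for network variance.

```python
# Minimal sketch of a relative performance check across competing sites.
# URLs are hypothetical; swap in the apps you actually want to compare.
import time

import requests


def timed_fetch(url: str) -> float:
    """Wall-clock seconds for a single page fetch."""
    start = time.perf_counter()
    requests.get(url, timeout=30)
    return time.perf_counter() - start


def median_load_time(url: str, runs: int = 5) -> float:
    """Median load time over several runs to dampen network noise."""
    samples = sorted(timed_fetch(url) for _ in range(runs))
    return samples[len(samples) // 2]


SITES = {
    "our_app": "https://example.com",
    "competitor_a": "https://example.org",
    "competitor_b": "https://example.net",
}

times = {name: median_load_time(url) for name, url in SITES.items()}
fastest = min(times.values())
for name, t in sorted(times.items(), key=lambda kv: kv[1]):
    # Report each site relative to the fastest: the "2.5X slower" framing
    # that makes the number meaningful to a CEO.
    print(f"{name}: {t:.2f}s ({t / fastest:.1f}x the fastest)")
```

The point is not the harness itself but the framing: a multiplier against the competition is legible to anyone, while an absolute 5-second number is easy to shrug off.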

AI-powered testing promises to revolutionize how we assess software quality by making test results for all apps publicly available. By enabling test cases to be reused across similar applications, AI-native test infrastructure will make comparisons of relative quality cost-effective. Imagine a brave new world where people care about test results and are actually motivated to fix bugs.

Teams will appreciate test results that demonstrate superior performance compared to competitors. Rather than being seen as the “bad news bears”, testing teams and test results can actually accelerate release cadences and reduce engineering costs. CEOs and engineering managers may even prioritize test data over feature completion in their Jira work items.

AI will soon enable a new era of quantifying application quality, replacing the traditional grids of pass/fail results and percentages with meaningful data and motivating context. Teams will not only care about testing; they might even celebrate the testers who generate these valuable insights, which show both what needs to be fixed and what is going well.

At Testers.ai, we are working on the near future of testing and relative measures of quality.

Join us and sign up for the beta!

— Jason Arbon
