Testing: Uniquely Human

jason arbon
3 min read · Aug 28, 2023


In the age of artificial intelligence (AI) and machine learning, there remains a profound need to understand the unique capabilities of human testers. While AI can handle vast amounts of data at unparalleled speeds, it still struggles with the intuitive, emotional, and contextual nuances that only humans possess. Testers say all the time that there are differences, but there are rarely any public examples of exactly what testing is uniquely human. Lots of talk, not a lot of data. Let's change that!

Why is Understanding Human Testers’ Unique Abilities Important?

1. Beyond Algorithms: Human testers often find bugs and anomalies that might be overlooked by AI or standard functional testing. These bugs often stem from contextual or situational understanding, or an intuitive grasp of user behavior that a machine might miss.

2. Human Intuition: Testers have a knack for identifying “out-of-the-box” issues based on personal experiences, cognitive biases, and cultural backgrounds that AI hasn’t been trained on.

3. Emotional Context: AI cannot (yet) fully understand human emotions, feelings, or the subtleties of user experience like a human can. For example, a webpage might technically work as expected but could still be perceived as frustrating or confusing from a user perspective.

4. Complex Interactions: Even on popular websites like Google, YouTube, or Microsoft, humans interact with web pages in intricate, sometimes unpredictable ways. These complex scenarios can be challenging for AIs to replicate or fully understand.

A Call to Help

By gathering real-world examples of tests and bugs that human testers believe are unique to their skill set, we can:

  • Understand the human tester’s value better.
  • Measure and test these hypotheses.
  • Potentially discover that AI can do more than we currently believe, and possibly train it to be better.

To help make this a reality, I’ve created a public spreadsheet. Yes, a spreadsheet: Humanistic Testing Examples. It is super simple: testers just pick a tab for any given website and add rows for things they try, issues they find, or even questions they ponder that they think only humans are likely to come up with. I tried to make it as simple as possible, and as public as possible.

I’ve personally spent a couple of hours doing a scan of the Google.com website from a humanistic perspective, avoiding many of the ‘easy’ things for AI/automation like functional testing or automated accessibility/security scans. If you are a real tester nerd, you might be curious to see that I found quite a few ‘nits’ on Google’s website. Google might be curious to know too :).

How can you contribute?

  • Examples of tests or bugs that you believe only a human could identify.
  • Unique testing scenarios, strategies, or bugs noticed on popular platforms like Google, YouTube, and Microsoft (or add any new ones you like!).
  • Insights on the differentiation between AI and human capabilities.

By sharing simple lists of humanistic tests and issues, we aim to:

  • Quantify the differences between AI and human testing capabilities.
  • Understand the potential training data required to enhance AI capabilities.
  • Propel discussions on whether AI can ever truly replicate or surpass the human testing experience.
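Once the spreadsheet has entries, the first "quantify" step above could be as simple as comparing the sets of issues each side found. A minimal sketch in Python, with made-up issue labels purely for illustration:

```python
# Hypothetical sketch: comparing issues found by human testers with
# issues found by AI tools. The issue labels below are invented
# examples, not real entries from the spreadsheet.

human_issues = {
    "confusing empty-state copy",
    "logo feels off-center",
    "doodle alt text unhelpful",
    "search feels slow on mobile",
}
ai_issues = {
    "search feels slow on mobile",
    "broken link in footer",
}

found_by_both = human_issues & ai_issues       # set intersection
uniquely_human = human_issues - ai_issues      # set difference

# Jaccard similarity: overlap relative to everything found by either side.
jaccard = len(found_by_both) / len(human_issues | ai_issues)

print(f"Found by both: {len(found_by_both)}")
print(f"Uniquely human: {len(uniquely_human)}")
print(f"Jaccard similarity: {jaccard:.2f}")
```

Even a crude overlap number like this would move the conversation from "lots of talk" toward a little bit of data.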

As we delve deeper into the digital age, it’s essential not to overlook the unique strengths and insights human testers bring to the table. By fostering a collaborative environment, I aim to bridge the gap between AI and human capabilities, ensuring that the future of testing is both advanced and empathetic.

After we get samples of the humanistic testing, let’s see what the AI bots find, and measure the differences.

Join other testing nerds in helping understand what makes us uniquely human and let’s redefine the boundaries of testing together with a little bit of data!

–Jason Arbon
