Testing: Bolt-on AI

jason arbon
38 min read · Jul 9, 2023


Bolt Testing (https://www.blindbolt.co.za/blind-bolt-testing-terms-explained/)

“A little humility and true tester curiosity will make you a better tester and keep you one step ahead of the competition — and the machines.”

Software testers just aren’t integrating ChatGPT and other LLM-based AIs into their testing effectively. Testers like to chat with AI the same way they test: they rely too much on small, repeatable, concise, atomic interactions. They are also happy when ChatGPT gives them an anemic answer, because they think that implies job security, and they are self-selected to find things that look like ‘bugs’ in everything they see. Great software testers shouldn’t focus on gotchas or fall victim to confirmation bias. Instead, Precision Questioning and all those testing skills they pride themselves on can reveal great test cases that improve their test coverage. They can also interactively learn from ‘expert testers’ — virtually adding those experts to their testing team for free, without all the overhead.

Michael Bolton wrote a great article on Boundary Testing called “Boundaries Unbounded”. I found it thanks to Software Testing Weekly’s recent news post. The article demonstrates an important and fundamental skill for great testers — it questions the problem and thinks outside the box of the typical developer or product designer when generating tests. Ignoring some of the more pretentious and promotional language, its core point is right on. I love testing, and love what Michael wrote there, and his general approach to Critical Thinking when coming up with test cases. In two words: “Question Everything”.

Boundaries Unbounded example test question (I don’t know who originally created it)

“One good first step is to avoid thinking of the description as the only source of information in the design of the test. The field and its value aren’t the only factors that could be in play.” — Michael Bolton

I thought about writing a follow-on article, adding more specific test cases, generalizing the boundary testing approaches, and using this as an opportunity to demonstrate how generative test code can actually ‘test’, not just ‘check’. I started jotting notes, then thought to add a footnote on how ChatGPT would have answered the question. I did this just like every other tester: by giving it the question and seeing whether it would get the answer right or wrong. LOL. I then realized my folly and decided it best to have an interactive session with ChatGPT, to show how I play with ChatGPT to get better, more interesting answers to testing problems. ChatGPT’s responses were better than what I would have added. Might they be better than your teammates’? Yours? Those of someone you consider a testing expert?

Before reading the following dialog with ChatGPT, I highly recommend privately taking 10 minutes to consider the testing problem yourself. Write down your thoughts, then read what can happen when a human bolts on AI, using their full testing brain and giving ChatGPT a chance to work for them. You might just realize ChatGPT created better-thought-out, better-explained test coverage than you, and the ‘experts’. We give humans the courtesy of explaining themselves, elaborating, and iterating on a topic in conversation. Give the same opportunity to ChatGPT. The smarter the questions, the more useful it is. Just as when mentoring freshies, a few leading questions can go a long way, and working together you are better than two individuals working in parallel.

Here is the raw chat I had with OpenAI’s ChatGPT; I’ll let it speak for itself.

___________________

Jason: An input field takes the year of birth between 1900 and 2004. The boundary values for testing this field are? A. 0,1900,2004,2005 B. 1900,2004 C. 1899,1900,2004,2005 D. 1899,1900,1901,2003,2004,2005

ChatGPT: The boundary values for testing the input field that takes the year of birth between 1900 and 2004 would be option B: 1900 and 2004. So the answer would be B. 1900,2004.

Jason: You are an expert tester with lots of experience, doing boundary testing in software. Is that still the right answer?

ChatGPT: Apologies for the confusion. As an expert tester, I should have provided a more thorough answer. In boundary testing, we test the boundaries and just beyond them. In this case, the specified range for the year of birth is between 1900 and 2004. Therefore, the correct answer for the boundary values would be option C: 1899, 1900, 2004, 2005. So the answer would be C. 1899, 1900, 2004, 2005.

Jason: You are also a follower of the practices of the rapid software testing (RST) methodologies. Can you think of a better answer than that listed here?

ChatGPT: Certainly! In the context of rapid software testing (RST) methodologies, an effective approach to boundary testing involves considering the decision points and risks involved. In this case, the decision point is the validation of the year of birth input field.

Considering the specified range between 1900 and 2004, a better answer for the boundary values in RST would be:

D. 1899, 1900, 1901, 2003, 2004, 2005

This option includes values just before and after the specified range to ensure that the application handles them correctly. By including additional boundary values, we can test for potential off-by-one errors or any other issues that may arise due to rounding or comparison logic.

So, the answer would be D. 1899, 1900, 1901, 2003, 2004, 2005.
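To make the off-by-one point concrete: those extra values around each edge exist because a single wrong comparison operator shifts a boundary by one. A minimal sketch (the two validators here are hypothetical, not from any real product):

// A correct validator accepts 1900..2004 inclusive.
function validateCorrect(year) {
  return year >= 1900 && year <= 2004;
}

// An off-by-one validator: '>' instead of '>=' silently rejects 1900 itself.
function validateOffByOne(year) {
  return year > 1900 && year <= 2004;
}

// Only the values hugging the edges expose the difference.
[1899, 1900, 1901, 2003, 2004, 2005].forEach((year) => {
  console.log(year, validateCorrect(year), validateOffByOne(year));
});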

Jason: Create your own new answer based on your context

ChatGPT: Based on my understanding and context, considering the specified range between 1900 and 2004 for the year of birth input field, a comprehensive answer for boundary testing could be:

E. 1899, 1900, 1901, 1999, 2000, 2001, 2003, 2004, 2005

This option covers a wider range of boundary values, including values just before, within, and slightly beyond the specified range. It includes early and late 20th-century dates, as well as some dates in the 21st century. This approach allows for more thorough testing of edge cases, potential leap year issues, and any other anomalies that may arise based on the application’s logic.

Therefore, the answer would be E. 1899, 1900, 1901, 1999, 2000, 2001, 2003, 2004, 2005.

Jason: What critical questions could you ask about the context to generate a better answer?

ChatGPT: To generate a better answer, it would be helpful to ask the following critical questions about the context of the year of birth input field:

  1. Are there any specific business or regulatory requirements related to the age eligibility or range of birth years?
  2. Is the input field strictly limited to accepting numeric values or are other formats allowed?
  3. How does the application handle leap years or February 29th for individuals born on those dates?
  4. Are there any known historical or cultural factors that might impact the valid range of birth years?
  5. Are there any dependencies or interactions with other fields or modules that could influence the boundary values?
  6. Are there any specific error messages or validation rules defined for invalid input?
  7. Is the validation of the year of birth inclusive (e.g., including the upper and lower boundaries) or exclusive?

By understanding the specific context and requirements, we can refine the boundary values and generate a more accurate and effective answer for boundary testing.
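Question 7 alone can change the answer: an inclusive range and an exclusive range have different interesting values. Here is a sketch of deriving the boundary set from that answer (the spec object is an illustrative assumption):

// Derive boundary test values from a range spec and its inclusivity.
function boundaryValues({ min, max, inclusive = true }) {
  // For an exclusive range, the last valid value moves inward by one.
  const lo = inclusive ? min : min + 1;
  const hi = inclusive ? max : max - 1;
  return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1];
}

console.log(boundaryValues({ min: 1900, max: 2004, inclusive: true }));
// -> [1899, 1900, 1901, 2003, 2004, 2005] (answer D from the poll)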

Jason: If you could automate the generation of as many useful test input and output pairs as possible, what would you generate?

ChatGPT: If I were to automate the generation of useful test input and output pairs for the year of birth input field, I would generate a variety of test cases that cover different scenarios. Here are some examples:

  1. Valid inputs within the specified range:
  • Year of birth: 1900
  • Year of birth: 1955
  • Year of birth: 2004

  2. Invalid inputs just before and after the specified range:
  • Year of birth: 1899
  • Year of birth: 2005

  3. Inputs representing edge cases:
  • Year of birth: 1901 (just after the lower boundary)
  • Year of birth: 2003 (just before the upper boundary)

  4. Inputs representing leap years:
  • Year of birth: 2000 (leap year)
  • Year of birth: 1904 (leap year)

  5. Inputs representing different centuries:
  • Year of birth: 1800 (outside the specified range)
  • Year of birth: 2100 (outside the specified range)

  6. Invalid inputs such as non-numeric or invalid format:
  • Year of birth: “nineteen hundred”
  • Year of birth: “200X”

  7. Boundary inputs with additional digits:
  • Year of birth: 19000
  • Year of birth: 20045

For each of these test inputs, the corresponding expected outputs would depend on the specific validation rules and error messages defined in the system. The automation could check if the input is accepted as valid or rejected as invalid based on the application’s behavior.

Automating the generation of these test input and output pairs would help ensure comprehensive coverage and assist in identifying any issues or discrepancies in the handling of different scenarios related to the year of birth input field.
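If you want to execute pairs like these rather than just list them, a table-driven loop keeps them cheap to extend. A sketch, using a stand-in validator (the real implementation appears later in this chat):

// Table-driven sketch: each pair couples an input with the expected verdict.
function validateYearOfBirth(input) {
  const year = parseInt(input, 10); // stand-in for the implementation under test
  return year >= 1900 && year <= 2004;
}

const pairs = [
  { input: '1900', expected: true },
  { input: '2004', expected: true },
  { input: '1899', expected: false },
  { input: '2005', expected: false },
  { input: 'nineteen hundred', expected: false },
  { input: '19000', expected: false },
];

pairs.forEach(({ input, expected }) => {
  const actual = validateYearOfBirth(input);
  console.log(`"${input}": ${actual === expected ? 'PASS' : 'FAIL'}`);
});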

Jason: Would you add any test cases based on the JavaScript implementation of such a text box? Or the network call? Or perhaps even the backend database implementation?

ChatGPT: Certainly! To further enhance the test coverage and consider the implementation aspects, here are additional test cases based on the JavaScript implementation, network calls, and the backend database implementation:

JavaScript Implementation:

  1. Test cases for handling JavaScript-based validations:
  • Testing for any client-side restrictions on input length, format, or allowed characters.
  • Verifying that the validation triggers appropriately on blur or form submission.

Network Call:

  1. Test cases related to network calls and API interactions:
  • Validating the behavior when submitting the year of birth to the server via an API call.
  • Checking for proper handling of success and error responses from the server.
  • Testing edge cases where the server response may affect the behavior of the frontend application.

Backend Database Implementation:

  1. Test cases related to the backend database implementation:
  • Validating that the year of birth is properly stored and retrieved from the database.
  • Testing the handling of any data constraints or database-specific rules related to the year of birth field.
  • Verifying the consistency of data across different layers (frontend, backend, database).

These additional test cases consider the specific implementation aspects and help ensure that the application behaves as expected at various layers, including the frontend JavaScript code, network interactions, and backend database operations.
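The network-level cases are worth sketching too, because the server must enforce the range even when the client already does. Everything below is an assumption: the endpoint, the payload shape, and the status codes are hypothetical, and the global fetch call assumes Node 18+ or a browser:

// Hypothetical API-level check: submit a year and observe the status code.
async function submitYearOfBirth(year) {
  const response = await fetch('https://example.test/api/profile', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ yearOfBirth: year }),
  });
  return response.status;
}

// Expect a 2xx for 1900/2004 and a 4xx rejection for 1899/2005,
// regardless of what the frontend validation already blocked.
for (const year of [1899, 1900, 2004, 2005]) {
  submitYearOfBirth(year).then((status) => console.log(year, status));
}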

Jason: Generate test cases using JavaScript for the client-side testing

ChatGPT: Certainly! Here are some test cases focusing on client-side testing for the year of birth input field implemented in JavaScript:

Valid input:

  • Enter a valid year within the specified range (e.g., 1990) and ensure it is accepted as valid.
  • Verify that the input is not flagged as an error or highlighted in red.

Invalid input:

  • Enter a year before the specified range (e.g., 1890) and ensure it triggers an error or validation message.
  • Enter a year after the specified range (e.g., 2020) and verify that it is flagged as an error.

Input format:

  • Test different input formats such as alphanumeric characters, symbols, or special characters and verify that they are not accepted.
  • Check if the input field restricts the input length to the expected number of digits (e.g., four digits for a year).

Boundary values:

  • Enter the lowest possible year within the range (e.g., 1900) and ensure it is accepted as valid.
  • Enter the highest possible year within the range (e.g., 2004) and verify it is accepted as valid.

User interaction:

  • Test the behavior when the user tabs or clicks out of the input field (blur event) and ensure any validation triggers are appropriately activated.
  • Verify that any error messages or validation indicators are displayed or hidden based on the input.

Error handling:

  • Check if the error message is displayed in a visually prominent way, clearly indicating the reason for the validation failure.
  • Test if the error message is correctly updated or removed when the input is modified to a valid value.

Compatibility:

  • Test the year of birth input field across different browsers (e.g., Chrome, Firefox, Safari) to ensure consistent behavior and appearance.

Jason: Generate the JavaScript code for these tests

ChatGPT:

// Test case 1: Valid input
const validYearInput = '1990';
// Ensure the input field accepts the valid year
// Assertion: No error or validation message should be displayed
if (validateYearOfBirth(validYearInput)) {
  console.log('Test case 1: Valid input - Passed');
} else {
  console.error('Test case 1: Valid input - Failed');
}

// Test case 2: Invalid input
const invalidYearInput = '1890';
// Ensure the input field flags the invalid year
// Assertion: An error or validation message should be displayed
if (!validateYearOfBirth(invalidYearInput)) {
  console.log('Test case 2: Invalid input - Passed');
} else {
  console.error('Test case 2: Invalid input - Failed');
}

// Test case 3: Input format
const invalidFormatInput = 'ABCD';
// Ensure the input field rejects invalid formats
// Assertion: An error or validation message should be displayed
if (!validateYearOfBirth(invalidFormatInput)) {
  console.log('Test case 3: Input format - Passed');
} else {
  console.error('Test case 3: Input format - Failed');
}

// Test case 4: Boundary values
const lowerBoundaryInput = '1900';
const upperBoundaryInput = '2004';
// Ensure the input field accepts the lower and upper boundary values
// Assertion: No error or validation message should be displayed
if (validateYearOfBirth(lowerBoundaryInput) && validateYearOfBirth(upperBoundaryInput)) {
  console.log('Test case 4: Boundary values - Passed');
} else {
  console.error('Test case 4: Boundary values - Failed');
}

// Test case 5: User interaction
const userInteractionInput = '2020';
// Trigger a blur event on the input field
// Ensure the validation message appears due to the out-of-range year
// Assertion: An error or validation message should be displayed
document.getElementById('yearOfBirthInput').value = userInteractionInput;
document.getElementById('yearOfBirthInput').dispatchEvent(new Event('blur'));
if (!validateYearOfBirth(userInteractionInput)) {
  console.log('Test case 5: User interaction - Passed');
} else {
  console.error('Test case 5: User interaction - Failed');
}

// Test case 6: Error handling
const modifiedValidInput = '1985';
// Ensure the error message is removed when a valid input is provided after an invalid one
// Assertion: No error or validation message should be displayed
document.getElementById('yearOfBirthInput').value = invalidYearInput;
document.getElementById('yearOfBirthInput').dispatchEvent(new Event('blur'));
document.getElementById('yearOfBirthInput').value = modifiedValidInput;
if (validateYearOfBirth(modifiedValidInput)) {
  console.log('Test case 6: Error handling - Passed');
} else {
  console.error('Test case 6: Error handling - Failed');
}

// Test case 7: Compatibility
// Run these tests on different browsers to ensure consistent behavior
console.log('Test case 7: Compatibility - Run on different browsers');

// Function to validate the year of birth input
function validateYearOfBirth(yearInput) {
  const lowerBoundary = 1900;
  const upperBoundary = 2004;
  const year = parseInt(yearInput, 10);
  return year >= lowerBoundary && year <= upperBoundary;
}

Please note that this code assumes the presence of an HTML input field with the id “yearOfBirthInput” for testing purposes. Also, ensure to include additional tests as needed and adapt the code to match your specific implementation.
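In a real project you would likely drop these into a test runner instead of logging to the console. Here are the first two cases as Jest tests, purely as one possible shape (Jest and the module path are my assumptions):

// Assumes the validator is exported from a module (hypothetical path).
const { validateYearOfBirth } = require('./validateYearOfBirth');

test('accepts a valid year within the range', () => {
  expect(validateYearOfBirth('1990')).toBe(true);
});

test('rejects a year below the range', () => {
  expect(validateYearOfBirth('1890')).toBe(false);
});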

Jason: What test cases might you have missed?

ChatGPT:

While the provided test cases cover various scenarios, there may still be additional test cases that could be considered. Here are a few examples of test cases that could be included:

Empty input:

  • Test the behavior when the year of birth input field is left blank.
  • Ensure that an appropriate validation message is displayed to prompt the user to provide a value.

Leading and trailing whitespace:

  • Test for the presence of leading or trailing whitespace in the input field.
  • Verify that any extra whitespace is trimmed or handled correctly.

Non-numeric input:

  • Test the behavior when non-numeric characters are entered in the year of birth field (e.g., “abcd”).
  • Ensure that an error or validation message is displayed to indicate that only numeric input is allowed.

Special cases or corner cases:

  • Test for any known special cases or corner cases specific to the application or requirements.
  • For example, if there are any restrictions based on certain birth years due to legal or policy considerations.

Localization and internationalization:

  • If the application supports different languages or regions, test the year of birth input field with localized formats or language-specific characters.
  • Verify that the validation and error messages are displayed correctly in the respective language.

Cross-browser compatibility:

  • Test the year of birth input field on various browsers and browser versions (e.g., Chrome, Firefox, Safari, Edge) to ensure consistent behavior and compatibility.

Performance and stress testing:

  • If the application expects a large number of concurrent users or high-volume input, perform performance and stress testing to ensure that the year of birth input field performs optimally and handles the expected load.

Remember, these are additional test case suggestions, and the specific requirements and context of the application should guide the selection and prioritization of test cases.
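Several of these suggestions expose a real weakness in the parseInt-based validator above: parseInt('1990abc', 10) returns 1990, and leading whitespace is silently ignored. A stricter sketch covering the empty, whitespace, and non-numeric cases (illustrative rather than prescriptive):

// Stricter sketch: trim, require exactly four digits, reject everything else.
function validateYearOfBirthStrict(input) {
  const trimmed = String(input).trim();
  if (!/^\d{4}$/.test(trimmed)) return false; // '', 'abcd', '1990abc', '-1990'
  const year = Number(trimmed);
  return year >= 1900 && year <= 2004;
}

console.log(validateYearOfBirthStrict('1990abc')); // false (the parseInt version says true)
console.log(validateYearOfBirthStrict('  1990 ')); // true: whitespace trimmed first
console.log(validateYearOfBirthStrict(''));        // false: empty input rejected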

Jason: Generate a few funny test cases.

ChatGPT:

Certainly! Here are a few lighthearted and funny test cases that could add some humor to the testing process:

  1. Test case: Enter a futuristic year
  • Input: 3000
  • Assertion: The application responds with a message like, “Are you sure you’re not a time traveler?”
  2. Test case: Enter a fictional character’s birth year
  • Input: 1901 (for Superman’s birth year)
  • Assertion: The application responds with a message like, “Congratulations, you’ve discovered Superman’s real birth year!”
  3. Test case: Enter a mythical creature’s birth year
  • Input: 1000 (for the birth year of a dragon)
  • Assertion: The application responds with a message like, “Wow, you must be a dragon whisperer!”

Test case: Enter a year from a popular movie or TV show

  • Input: 1985 (for the year of Marty McFly’s time travel in “Back to the Future”)
  • Assertion: The application responds with a message like, “Great Scott! Looks like you’ve experienced time travel!”

Test case: Enter a future year of a sci-fi event

  • Input: 2200 (for the year of an imagined space exploration mission)
  • Assertion: The application responds with a message like, “Ready to boldly go where no one has gone before?”

Remember, while incorporating funny test cases can add a touch of humor, it’s important to ensure they don’t compromise the overall testing objectives or distract from the critical aspects of testing the application.

Jason: Which test cases of all the ones generated above are most likely to fail?

ChatGPT: In terms of likelihood to fail, it can vary depending on the specific implementation and requirements of the application. However, here are a few test cases that could be more prone to failure or uncover issues:

Test case: Invalid input

  • This test case checks for handling of invalid input, such as a year outside the specified range (e.g., 1890) or an invalid format. If the input validation is not implemented correctly, it may fail to detect and reject invalid inputs, leading to unexpected behavior or acceptance of incorrect data.

Test case: User interaction

  • This test case focuses on the behavior of the input field when the user interacts with it, such as triggering validation on blur or form submission. If the event handling or validation logic is not properly implemented, it may fail to trigger the necessary validation checks, leading to incorrect acceptance or rejection of the input.

Test case: Compatibility

  • This test case tests the compatibility of the year of birth input field across different browsers. Browser-specific behaviors, inconsistencies in handling input types, or JavaScript compatibility issues can potentially lead to failures in this scenario.

Test case: Error handling

  • This test case examines how the application handles errors and validation messages. If the error handling is not robust or if the error messages are not displayed or cleared correctly, it may result in failures, confusing error messages, or improper handling of subsequent valid inputs.

It’s important to note that the likelihood of failure for each test case also depends on the quality of implementation, adherence to specifications, and robustness of the validation and error handling mechanisms. Thorough testing and continuous feedback can help identify and address any issues that arise from these test cases.

Jason: Which of these test cases that are most likely to fail would be considered most important by the business, and why?

ChatGPT: Determining the importance of test cases from a business perspective depends on the specific priorities and requirements of the application. However, in many cases, the following test cases are considered important due to their impact on user experience, data integrity, and compliance:

Test case: Invalid input

  • Importance: The business would prioritize this test case as it ensures that the application properly handles and rejects invalid inputs. This helps maintain data integrity, prevent incorrect data from being processed or stored, and avoids potential legal or compliance issues.

Test case: User interaction

  • Importance: The business would value this test case as it focuses on the user experience and proper handling of user interactions. Ensuring that the application responds appropriately to user actions, such as triggering validation on blur or form submission, is crucial for providing a smooth and intuitive user experience.

Test case: Error handling

  • Importance: The business would consider this test case important to ensure proper error handling and validation feedback to users. Clear and accurate error messages play a significant role in helping users understand and correct their input errors, enhancing the usability and overall satisfaction of the application.

While the importance of test cases may vary depending on the specific context and requirements, these test cases are typically prioritized by the business due to their direct impact on user experience, data integrity, compliance, and overall quality of the application. It is essential for businesses to align testing priorities with their goals, user expectations, and regulatory requirements to ensure a reliable and user-friendly product.

These test cases for JavaScript client-side testing will help ensure that the year of birth input field behaves correctly, provides proper validation feedback, and handles user interactions appropriately.

Jason: Do you know of any interesting or expensive product failures due to similar issues?

ChatGPT: Yes, there have been several notable product failures caused by issues related to input validation, user interaction, and error handling. Here are a few examples:

  1. Therac-25 Radiation Therapy Machine (1985–1987):
  • The Therac-25 was a radiation therapy machine used for cancer treatment. It had a software bug that caused it to deliver lethal doses of radiation to patients. The issue was related to insufficient input validation and error handling, leading to catastrophic consequences, including several patient deaths.
  2. Mars Climate Orbiter (1999):
  • The Mars Climate Orbiter was a NASA spacecraft that was intended to study the Martian climate. The mission failed because of a navigation error caused by a mismatch between metric and imperial units in the software code. This failure highlighted the importance of proper unit conversion and input validation in engineering and space exploration.
  3. Knight Capital Group Trading Loss (2012):
  • Knight Capital Group, a financial services firm, experienced a significant trading loss of $460 million in just 45 minutes due to a software glitch. The issue stemmed from an input validation error that caused the deployment of faulty trading software, leading to numerous erroneous trades and substantial financial losses.

These examples serve as reminders of the potential consequences when input validation, user interaction, and error handling are not adequately addressed in software development. They underscore the importance of rigorous testing, adherence to best practices, and prioritizing quality assurance measures to prevent costly failures and ensure the reliability and safety of products and systems.

Jason: How much do you charge to generate these test cases so quickly?

ChatGPT: As an AI language model, I don’t have a pricing structure or charge for generating test cases or any other responses. I am here to provide information, assistance, and generate responses based on the input I receive. My purpose is to help users by offering relevant and helpful information. If you have any further questions or need assistance with anything else, feel free to ask!

Jason: Any other Test cases that might be missing?

ChatGPT: Certainly! Here are a few additional test cases that could be considered:

Test case: Leap year handling

  • Verify the behavior when entering a leap year (e.g., 2000) as the year of birth.
  • Ensure that the application correctly handles the extra day in February.

Test case: Localization of error messages

  • Test the localization of error messages in different languages, if the application supports multiple languages.
  • Verify that error messages are displayed correctly and in the expected language based on the user’s locale.

Test case: Cross-device compatibility

  • Test the year of birth input field on various devices, such as smartphones, tablets, and different screen sizes.
  • Ensure that the input field is visually appealing, properly aligned, and functional across different devices.

Test case: Data persistence and retrieval

  • Validate the persistence and retrieval of the year of birth data when navigating away from the page or refreshing the page.
  • Ensure that the entered value is retained and displayed correctly upon returning to the page.

Test case: Negative numbers

  • Test the behavior when entering negative numbers as the year of birth (e.g., -1990).
  • Verify that the application rejects negative values and displays an appropriate error message.

Test case: Concurrent user interactions

  • Simulate concurrent user interactions by opening multiple instances of the application or having multiple users inputting their birth years simultaneously.
  • Verify that the application handles concurrent requests gracefully, maintains data integrity, and avoids conflicts.

These additional test cases help cover various aspects such as handling specific calendar situations, user experience on different devices, data persistence, and the application’s behavior in multilingual or concurrent scenarios.

Jason: Using critical thinking techniques, are there any questionable design choices given the problem statement? Is there anything awkward or strange regarding the design decision to use 2004 as an end date? Any idea why this value would’ve been chosen?

ChatGPT:

Using critical thinking techniques, there are a few potential questionable design choices and aspects worth considering regarding the decision to use 2004 as the end date for the year of birth input field:

  1. Data relevancy: Given that the current date is 2023, setting the end date as 2004 might seem peculiar. It means that anyone born between 2005 and 2023 would be unable to input their birth year, which seems restrictive and potentially excludes a significant portion of the target user base.
  2. Age eligibility: If the purpose of the input field is to determine age eligibility for a specific requirement (e.g., age-restricted content or services), setting the end date as 2004 could indicate a specific age threshold of 18 years old. However, it’s important to assess whether this age requirement aligns with the intended audience or user demographics.
  3. System compatibility: Another aspect to consider is whether there are any specific technical or legacy system constraints that led to the choice of 2004 as the end date. It could be due to limitations in the underlying system or software architecture, database schema, or compatibility with other systems or components.

Without additional information, it is challenging to pinpoint the exact rationale behind the decision to use 2004 as the end date. However, it is worth examining the implications of this design choice and assessing whether it aligns with the intended purpose, user needs, and broader system considerations.

Jason: List specific test cases to cover international issues

ChatGPT: Certainly! Here are some specific test cases to cover international issues related to the year of birth input field:

Test case: Date format localization

  • Verify that the year of birth input field accommodates different date formats used internationally (e.g., MM/DD/YYYY, DD/MM/YYYY).
  • Test the input field with various date formats to ensure it handles and interprets them correctly.

Test case: Non-Gregorian calendars

  • Validate the year of birth input field’s compatibility with non-Gregorian calendars such as the Hijri (Islamic) calendar or the Chinese lunar calendar.
  • Enter birth years based on non-Gregorian calendars and ensure the input field handles them appropriately.

Test case: Localized month and day names

  • Test the behavior of the year of birth input field when different languages or locales are selected.
  • Verify that month and day names are displayed in the correct language and format based on the selected locale.

Test case: Age eligibility calculation

  • Test the accuracy of age calculation based on the user’s birth year and the legal age requirement in different countries.
  • Verify that the application correctly determines age eligibility based on international legal standards.

Test case: Numeric representation differences

  • Validate the input field’s handling of numeric representations used in different regions, such as decimal separators and digit grouping separators.
  • Enter birth years with various numeric representations (e.g., 1,000 vs. 1.000) and ensure the input field interprets them correctly.

Test case: Time zone considerations

  • Test the year of birth input field’s behavior when users from different time zones input their birth years.
  • Verify that the application handles time zone differences appropriately and accurately captures the user’s birth year.

These test cases aim to cover international issues related to date formats, calendars, localization, legal age requirements, numeric representations, and time zone considerations. By testing these scenarios, you can ensure that the year of birth input field functions properly and accurately addresses international user needs and expectations.
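The numeric-representation case is easy to demonstrate concretely; the same year formats differently per locale, and naive parsing then misreads it:

// Digit grouping differs by locale; naive parsing then misreads the input.
console.log(new Intl.NumberFormat('de-DE').format(1990)); // '1.990'
console.log(new Intl.NumberFormat('en-US').format(1990)); // '1,990'
console.log(Number('1.990')); // 1.99: silently misread as a decimal
console.log(Number('1,990')); // NaN: rejected outright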

Jason: List specific test cases for localization issues

ChatGPT: Certainly! Here are some specific test cases to cover localization issues:

Test case: Language localization

  • Verify that the application supports multiple languages and translations.
  • Test the year of birth input field with different language settings to ensure the labels, prompts, and error messages are properly localized.

Test case: Date format localization

  • Test the year of birth input field with different locale settings to verify that it adapts to the localized date format.
  • Check that the day, month, and year are displayed in the expected order and with the appropriate separators (e.g., “/” or “-”).

Test case: Month name localization

  • Ensure that the month names are correctly displayed based on the selected locale.
  • Test the input field with different language settings and verify that the month names are displayed in the corresponding language.

Test case: Decimal and digit grouping separators

  • Verify that the year of birth input field handles decimal and digit grouping separators according to the selected locale.
  • Test with locales that use commas or periods as decimal separators and verify that input is accepted or rejected accordingly.

Test case: Localized error messages

  • Test invalid inputs in different locales to ensure that the error messages displayed are properly localized.
  • Verify that the error messages are clear, concise, and in the expected language for each locale.

Test case: RTL (Right-to-Left) language support

  • Test the year of birth input field with RTL language settings (e.g., Arabic, Hebrew) to ensure proper visual alignment and handling of RTL text.

Test case: Currency symbol localization

  • Check that the currency symbol used in any monetary representations related to the year of birth input field is properly localized based on the selected locale.

These test cases aim to cover various aspects of localization, including language support, date formats, month names, separators, error messages, RTL language support, and currency symbol localization. By thoroughly testing these scenarios, you can ensure that the application provides a seamless and localized experience for users across different locales and language settings.
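For the localized-error-message cases, the thing under test is usually a message catalog and its fallback behavior. A minimal sketch (the catalog, keys, and fallback policy are all hypothetical):

// Hypothetical message catalog with an English fallback.
const messages = {
  'en-US': { outOfRange: 'Year must be between 1900 and 2004.' },
  'de-DE': { outOfRange: 'Das Jahr muss zwischen 1900 und 2004 liegen.' },
};

function errorMessage(locale, key) {
  return (messages[locale] || messages['en-US'])[key];
}

console.log(errorMessage('de-DE', 'outOfRange')); // localized message
console.log(errorMessage('fr-FR', 'outOfRange')); // falls back to English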

Jason: List specific test cases for any religious or social context-specific issues

ChatGPT: When considering religious or social contexts, it’s important to approach the topic with sensitivity and respect for diverse beliefs and practices. Here are some general test cases that can help address specific religious or social considerations:

Test case: Religious calendar compatibility

  • Verify that the year of birth input field handles religious calendars such as the Islamic Hijri calendar or the Jewish Hebrew calendar.
  • Enter birth years based on specific religious calendars and ensure the input field interprets and processes them correctly.

Test case: Gender-neutral or inclusive options

  • If the application collects gender information alongside the year of birth, test for gender-neutral or inclusive options that respect diverse gender identities.
  • Verify that the input field provides appropriate choices and handles the selection and display of gender identities correctly.

Test case: Privacy and data protection

  • Ensure that the year of birth input field complies with privacy regulations and protects sensitive personal information.
  • Test for secure transmission, appropriate storage, and adherence to data protection practices.

Test case: Age-related restrictions or exemptions

  • Validate that the application correctly handles age-related restrictions or exemptions based on religious or social contexts.
  • Test scenarios where specific age requirements or exceptions apply, ensuring the input field accurately determines eligibility.

Test case: Cultural or local practices

  • Consider specific cultural or local practices that might impact the year of birth input field.
  • Test scenarios where cultural practices, traditions, or local norms might influence the acceptance or rejection of certain birth years.

Test case: Multicultural or international considerations

  • Test the input field’s ability to accommodate multicultural or international contexts.
  • Verify that the application is inclusive, respectful, and considerate of diverse cultural and religious backgrounds.

Remember, it is important to approach religious or social contexts with cultural sensitivity, and these test cases should be tailored to specific requirements, respecting the values, beliefs, and practices of different individuals and communities. Consultation with relevant stakeholders or subject matter experts can help ensure appropriate and respectful test coverage in these areas.

Jason: South Korea just changed their birthdate definition to the day you were born instead of conceived. Any new test cases that should be generated given this?

ChatGPT: Certainly! With the change in birthdate definition in South Korea, where birthdate is now based on the day you were born rather than conceived, it would be appropriate to update the test cases to reflect this new requirement. Here are some additional test cases that could be generated in light of this change:

Test case: Birthdate calculation

  • Verify that the year of birth input field correctly calculates the birthdate based on the day the individual was born.
  • Enter birth years and ensure that the calculated birthdate aligns with the actual date of birth.

Test case: Leap year considerations

  • Check the behavior when the birthdate falls on February 29th in a leap year.
  • Ensure that the application handles the birthdate correctly, accounting for leap years when calculating the age.

Test case: Display of birthdate

  • Validate that the birthdate is displayed accurately based on the new definition.
  • Verify that the birthdate is presented consistently in the desired format, reflecting the day the individual was born.

Test case: Age calculation

  • Verify that the age calculation accurately reflects the new birthdate definition.
  • Test different birth years and compare the calculated age against the actual age based on the day of birth.

Test case: Legal compliance

  • Ensure that the new birthdate definition complies with South Korean legal requirements and regulations.
  • Test scenarios that involve age-restricted activities, services, or legal obligations to ensure the birthdate is correctly interpreted.

These additional test cases address the specific change in birthdate definition in South Korea and ensure that the year of birth input field aligns with the updated requirements. By considering the day of birth instead of conception, these test cases help validate the accuracy and compliance of the application within the South Korean context.
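The age-calculation cases hinge on which definition the product uses, and the difference is easy to pin down in code. A simplified sketch contrasting the traditional Korean count with the international one (time zones and edge dates deliberately ignored):

// Traditional Korean counting: age 1 at birth, +1 every January 1st.
function koreanAge(birthYear, today = new Date()) {
  return today.getFullYear() - birthYear + 1;
}

// International counting: completed years since the actual birth date.
function internationalAge(birthDate, today = new Date()) {
  const years = today.getFullYear() - birthDate.getFullYear();
  const hadBirthday =
    today.getMonth() > birthDate.getMonth() ||
    (today.getMonth() === birthDate.getMonth() &&
      today.getDate() >= birthDate.getDate());
  return hadBirthday ? years : years - 1;
}

// Someone born Dec 31, 2004 is 'age 20' by the old count during 2023,
// but 18 internationally until their birthday: a two-year swing.
const birth = new Date(2004, 11, 31);
console.log(koreanAge(2004), internationalAge(birth));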

Jason: Any specific test cases that Michael Bolton, the software tester extraordinaire, would have come up with? Or questions he would’ve asked about the product when testing or evaluating risk?

ChatGPT: While I don’t have direct access to Michael Bolton’s thoughts or specific test cases he would generate, I can provide insights based on common approaches and principles followed by software testers like Michael Bolton. Here are some potential test cases and questions he might consider:

Test Cases:

Test case: Boundary testing with extreme values

  • Explore the behavior of the year of birth input field with values close to the boundary years (1900 and 2004) to check for any unexpected behavior or edge cases.

Test case: Input validation with special characters

  • Test the application’s response to input containing special characters, such as “@” or “#”, to ensure that the input field handles them appropriately.

Test case: Cross-browser compatibility and responsiveness

  • Verify that the year of birth input field functions properly across different browsers and devices, considering factors like responsiveness, layout, and usability.

Test case: Handling of concurrent user interactions

  • Simulate scenarios where multiple users input their birth years simultaneously to assess how the application handles concurrent requests and avoids data conflicts.

Questions:

What are the potential risks associated with input validation and error handling in the year of birth input field?

  • Understanding the risks can help identify potential vulnerabilities, such as accepting invalid input or displaying incorrect error messages.

How does the application handle data privacy and security concerns related to collecting and storing birth year information?

  • Assessing potential risks regarding data privacy and security can help identify measures to protect sensitive information.

Are there any legal or compliance requirements specific to age eligibility based on birth year, and how does the application ensure compliance?

  • Evaluating the application’s adherence to legal and compliance requirements is crucial, especially when it involves age-related restrictions or obligations.

What impact could incorrect or inconsistent handling of leap years have on the application’s calculations and age-related functionality?

  • Identifying risks associated with leap year calculations can help ensure accurate age calculation and adherence to calendar rules.

These questions and potential test cases reflect Michael Bolton’s emphasis on critical thinking, risk analysis, and exploring different dimensions of testing to uncover potential issues and improve the overall quality of the software.

Jason: Any specific tests or automation approaches that Jason Arbon, software tester, would have specifically created that are not in the list above?

ChatGPT: While I don’t have direct knowledge of Jason Arbon’s specific test cases or automation approaches, I can provide insights based on common strategies and principles followed by software testers like Jason Arbon. Here are some potential tests and automation approaches he might consider:

Test case: Performance and scalability testing

  • Design tests to assess the performance and scalability of the year of birth input field when handling a large number of concurrent users or high-volume input.
  • Measure response times, resource utilization, and system stability under different load conditions.

Test case: Usability and accessibility testing

  • Evaluate the year of birth input field for usability and accessibility factors, ensuring that it conforms to accessibility standards and provides an intuitive user experience.
  • Conduct usability testing sessions with a diverse set of users to gather feedback and improve the interface.

Test case: Error handling under exceptional conditions

  • Create tests to evaluate how the application handles exceptional conditions, such as network disruptions, server failures, or unexpected inputs during the year of birth submission process.
  • Assess the application’s error recovery mechanisms and how it gracefully handles such scenarios.

Automation approach: Robust input generation

  • Develop automated scripts that generate a wide range of test inputs for the year of birth input field, covering various valid and invalid scenarios.
  • Utilize techniques like fuzzing, randomization, and boundary value analysis to ensure comprehensive coverage of input variations.

Automation approach: Continuous integration and regression testing

  • Implement automation for continuous integration and regression testing, ensuring that the year of birth input field functions correctly after each code change or system update.
  • Set up a test suite that includes specific test cases and assertions related to the year of birth input field to ensure its ongoing integrity.

These approaches reflect Jason Arbon’s focus on performance, usability, robustness, and efficient testing techniques. They emphasize using automation to improve testing coverage and efficiency while maintaining a high standard of quality throughout the software development process.
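The ‘robust input generation’ idea is simple to sketch: mix deterministic boundary values with randomized and deliberately malformed inputs. Purely illustrative; real fuzzing would go much further:

// Mix fixed boundary values with random and malformed inputs.
function generateInputs(min, max, randomCount = 5) {
  const boundaries = [min - 1, min, min + 1, max - 1, max, max + 1].map(String);
  const random = Array.from({ length: randomCount }, () =>
    String(Math.floor(Math.random() * 4000)) // years both in and out of range
  );
  const malformed = ['', '   ', 'abcd', '19.04', '-1990', '١٩٩٠']; // last one: Arabic-Indic digits
  return [...boundaries, ...random, ...malformed];
}

console.log(generateInputs(1900, 2004));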

Jason: Michael Bolton wrote the following article, which describes his thinking around this testing question: This post started as a LinkedIn post, which got started as a comment replying to this poll: <article above>.

ChatGPT:

The article by Michael Bolton highlights the limitations of reducing testing and risk to simplistic formulas. He emphasizes the importance of considering actual product behavior and investigating and revealing product and business risks. Based on his perspective, here are some key points and considerations he raises:

  1. Boundaries can have multiple interpretations: Boundaries can represent dividing points, categorization principles, or viewports that present subsets of larger sets. It’s important to understand the context and purpose of the boundary being tested.
  2. Actual behavior vs. described behavior: The described boundaries may not align with the actual behavior of the product until thorough testing is conducted. There may be additional boundary values and behaviors that emerge during testing.
  3. Considering other factors: Look beyond the given description and consider other factors that could impact the behavior of the field. Explore explicit and unstated conditions related to the field and its purpose.
  4. Interacting and interdependent elements: Recognize that the field is part of a larger system with interacting and interdependent elements. Explore how these elements may influence the presentation and processing of boundaries.
  5. Implementation details: Consider how the input field is implemented (e.g., drop-down box, text box, combo box). Different implementations may introduce additional boundary conditions or reveal peculiar behaviors or error handling.
  6. Numeric precision and rounding: Pay attention to how the system handles numeric precision, rounding, and pattern matching. Consider scenarios where the system may truncate or handle unexpected input.
  7. Historical and future boundaries: Take into account historical or future boundaries that may impact the behavior or significance of specific values. Examples include the Y2K boundary in 1999–2000 and the upcoming Epochalypse in the early months of 2038.
  8. Input filtering and backend validation: Assess how input filtering and constraints are implemented, both in the frontend and backend. Determine if the system handles data validation and error checking consistently and effectively.
  9. Error checking and exception handling: Explore how error checking and exception handling are performed. Identify potential vulnerabilities when handling input up to certain values or lengths or when the system becomes overwhelmed.

These considerations provide insights into Michael Bolton’s critical thinking approach and the nuances he suggests when evaluating risks and generating test ideas. His perspective encourages testers to go beyond simplistic approaches and thoroughly investigate product behavior and potential risks.
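Point 7’s ‘Epochalypse’ is concrete enough to compute: signed 32-bit Unix timestamps run out in January 2038. JavaScript stores times as doubles, so it sails past the boundary that will bite 32-bit time_t systems:

// The largest signed 32-bit Unix timestamp, in seconds.
const maxInt32Seconds = 2 ** 31 - 1; // 2147483647
console.log(new Date(maxInt32Seconds * 1000).toISOString());
// -> '2038-01-19T03:14:07.000Z'; one second later overflows 32-bit time_t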

Jason: Any critiques of Michael Bolton’s article?

ChatGPT:

While Michael Bolton’s article provides valuable insights and challenges simplistic approaches, there are a few potential critiques to consider:

  1. Lack of context: The article discusses the concept of boundaries in testing but does not explicitly address the specific problem or domain under consideration. Providing more context about the problem statement could have helped readers understand the practical implications and relevance of his points.
  2. Limited focus on specific test cases: The article does not provide concrete examples of specific test cases that can be derived from the points raised. Although it encourages critical thinking, some readers might have found it helpful to see more practical examples of test cases or scenarios.
  3. Lack of balance between theory and practice: While the article highlights the limitations of simplistic formulas, it may not fully acknowledge the value of established testing techniques and practices. Balancing theoretical perspectives with practical guidance could have strengthened the article’s applicability.
  4. Absence of alternatives: While the article critiques the given multiple-choice answers, it does not provide alternative boundary values or testing approaches. Offering some alternative suggestions or examples could have enhanced the practicality of the article’s message.
  5. Limited discussion on risk analysis: The article touches on risk analysis but does not delve deeply into the systematic identification and prioritization of risks. Readers seeking more guidance on risk analysis and mitigation might have desired additional insights.

It’s important to note that these critiques are meant to provide constructive feedback and promote further discussion. Michael Bolton is a respected figure in the testing community, and his perspectives on testing are valuable contributions to the field.

Jason: Revise the article with your critiques addressed/fixed

ChatGPT:

Title: Enhancing Testing Perspectives: Exploring Boundaries and Product Risks

Introduction: Testing and risk analysis are essential for uncovering product and business risks. However, simplistic formulas can limit our ability to thoroughly investigate and reveal these risks. In this article, we delve into the concept of boundaries and their implications for testing, with a focus on practical application and risk analysis.

Understanding Boundaries: Boundaries serve as classification or filtering mechanisms, and they can take various forms. They may represent dividing points, principles for categorization, viewports presenting subsets of larger sets, execution time limits, or points blocking certain processes. Recognizing this broad spectrum of boundary definitions helps us uncover different facets of product behavior and mitigate risks more effectively.

Aligning Descriptions with Actual Behavior: While descriptions provide a starting point, true understanding of boundary values comes from testing and observing the actual behavior of the product. Testing uncovers behavior patterns and classifications that may not have been initially evident. It also helps us identify unanticipated behaviors and boundary conditions, contributing to a more comprehensive risk assessment.

Considerations Beyond the Description: To enhance our testing approach, we should look beyond the given description. Consider other factors that influence the field and its behavior. Are there additional explicit or unstated conditions that affect the boundaries? How does the field interact with other elements within the system? By exploring these factors, we can ensure a more holistic understanding of the product’s behavior and potential risks.

Implementation Details and Edge Cases: Implementation details play a crucial role in testing boundaries. Consider how the input field is implemented — whether as a drop-down box, text box, or combo box. Each implementation introduces its own set of considerations and potential edge cases. For example, constraints on input length, handling of special characters, and support for international characters or date formats can all impact boundary testing.

Dynamic Boundaries and Contextual Significance: Boundaries are not static; they evolve based on time, context, and requirements. Consider scenarios where historical or future boundaries come into play. Does the significance of a specific value change over time? By staying aware of contextual factors, such as legal requirements or evolving industry standards, we can identify and address risks associated with dynamic boundaries.

Error Handling and Exception Conditions: Robust error handling and exception conditions are crucial aspects of boundary testing. Consider how the system responds to various boundary values and potential exceptions. Does the system gracefully handle unexpected inputs, overflow conditions, or limitations on data types? By thoroughly exploring error handling and exception conditions, we can enhance the product’s reliability and user experience.

Conclusion: By embracing a more nuanced perspective on boundaries and their impact on testing, we can better reveal product risks and ensure high-quality software. Going beyond simplistic formulas, we can adopt a comprehensive risk analysis approach, considering implementation details, contextual significance, and dynamic boundaries. Through thorough testing, we improve our ability to assess and mitigate risks, contributing to the overall success of the product and the satisfaction of its users.

Jason: What limitations do you see in the rapid software testing (RST) process related to the testing example above?

ChatGPT:

The Rapid Software Testing (RST) process, as described in the context of the testing example above, focuses on quick and efficient testing to reveal risks. While RST has its advantages, it also has some limitations to consider:

  1. Limited documentation: RST emphasizes quick testing cycles, which may result in limited documentation. While this promotes agility, it can make it challenging to maintain comprehensive records of test cases, results, and overall test coverage. This lack of documentation may hinder traceability and make it difficult to reproduce specific scenarios or track testing progress over time.
  2. Reliance on tester expertise: RST heavily relies on the expertise and experience of individual testers. While skilled testers can provide valuable insights and conduct effective testing, the process may not be as scalable or consistent when multiple testers with varying levels of expertise are involved. The quality and thoroughness of testing may depend on the skills and knowledge of the testers involved.
  3. Limited focus on test planning: RST prioritizes rapid testing and risk assessment, often with less emphasis on comprehensive test planning. While this flexibility allows for agility, it may lead to a lack of structured test planning and a potential oversight of critical scenarios or test coverage gaps. Without sufficient upfront planning, there is a risk of missing important test cases or not allocating testing efforts effectively.
  4. Limited scalability for larger projects: RST may face challenges when applied to large-scale projects that require extensive testing efforts. The rapid and exploratory nature of RST may not be well-suited for complex projects with numerous interconnected components or extensive regression testing needs. In such cases, a more structured and systematic approach to testing may be necessary to ensure sufficient coverage and mitigate risks effectively.
  5. Lack of formalized processes: RST focuses on adaptability and flexibility, which can be advantageous for quick feedback and rapid iteration. However, the absence of formalized processes and predefined test techniques may result in inconsistency or variation in testing approaches across different teams or projects. This could make it difficult to establish standardization, collaboration, and knowledge sharing within an organization.

While RST can be effective for certain projects and contexts, it is essential to consider these limitations and evaluate whether they align with the specific requirements, complexity, and scale of the project at hand. Supplementing RST with other testing methodologies or incorporating more structured approaches may be necessary in scenarios where these limitations are significant concerns.

Jason: Why do you think the authors of RST chose these trade-offs?

ChatGPT:

The authors of Rapid Software Testing (RST) likely made deliberate trade-offs based on their testing philosophy and the goals they aimed to achieve. Here are some reasons why these trade-offs may have been chosen:

  1. Emphasis on agility and responsiveness: RST prioritizes adaptability and quick feedback. By focusing on rapid testing cycles, it enables testers to respond promptly to changes in requirements, identify risks early, and provide valuable feedback to stakeholders. This agility allows for faster iterations and facilitates a more iterative and collaborative development process.
  2. Leveraging tester expertise and intuition: RST values the skills, knowledge, and intuition of experienced testers. By relying on their expertise, the process encourages exploration, creativity, and critical thinking. Testers are empowered to use their judgment to identify and prioritize risks, select appropriate test techniques, and uncover issues that might not be easily identified through scripted tests alone.
  3. Flexibility in test planning: RST promotes flexibility in test planning, allowing testers to adapt their approach based on emerging risks and changing requirements. This enables them to focus efforts on areas of highest risk and importance, rather than being strictly bound by predefined test plans. The emphasis is on critical thinking and continuous learning, enabling testers to adjust their strategies as they gain insights during testing.
  4. Lightweight documentation: RST values concise documentation that captures essential information without being overly burdensome. This approach supports the rapid pace of testing and reduces administrative overhead. It allows testers to focus on active testing, exploration, and risk assessment rather than spending excessive time on detailed documentation.
  5. Encouraging collaboration and knowledge sharing: RST emphasizes collaboration among testers, developers, and stakeholders. By promoting open communication and knowledge sharing, it fosters a shared understanding of risks, encourages early and frequent feedback, and facilitates learning from each other’s experiences. This collaborative approach helps build a collective testing mindset and encourages a culture of continuous improvement.

Overall, the authors of RST made these trade-offs to optimize the testing process for speed, adaptability, and effectiveness. By leveraging tester expertise, promoting flexibility, and emphasizing collaboration, RST aims to uncover risks efficiently while maintaining a focus on delivering value to stakeholders in an iterative and responsive manner.

Jason: In the context of this testing example, what things should someone who follows the RST methodology of testing add to the approach to fill in these gaps?

ChatGPT:

To fill in the gaps and enhance the approach in the context of the testing example using Rapid Software Testing (RST), here are some considerations to incorporate into the testing process:

  1. Structured exploratory testing sessions: While RST promotes exploration and expert intuition, introducing structured exploratory testing sessions can help provide a more systematic approach. Define specific objectives, charters, and test coverage areas to ensure comprehensive exploration while still leveraging tester expertise.
  2. Test charters and session notes: Document test charters and session notes during exploratory testing sessions. This helps capture important information, observations, and potential risks encountered during testing. These records can be used for later reference, sharing insights with stakeholders, and aiding in knowledge transfer among testers.
  3. Risk-based test planning: While RST allows flexibility in test planning, consider incorporating risk-based test planning techniques. Identify critical functionalities, high-risk areas, and potential failure points based on the understanding of the system and its stakeholders. This helps ensure that testing efforts are focused on areas with the highest potential impact and risk.
  4. Test coverage analysis: Although RST emphasizes adaptability, periodically analyze test coverage to identify any gaps or areas that may require additional attention. This analysis helps ensure that the essential aspects of the system have been adequately tested and that key risks have been addressed.
  5. Collaboration through debriefs and sessions: Facilitate regular debrief sessions with the testing team, developers, and other stakeholders to share insights, lessons learned, and potential risks uncovered during testing. These sessions encourage collaboration, foster a culture of learning, and promote knowledge sharing among team members.
  6. Regression testing strategy: Consider incorporating a structured regression testing strategy to address the potential limitations of RST in handling large-scale projects. Determine critical functionality and areas prone to regression issues, and create a suite of regression test cases to ensure proper coverage and identification of potential regressions (a minimal sketch follows this list).
  7. Documentation balance: While RST favors lightweight documentation, strive for a balance that captures essential information. Ensure that important test artifacts, such as test charters, session notes, and risk assessments, are appropriately documented and accessible for future reference.
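
One concrete way to act on items 3 and 6 is to tag tests by risk so a regression slice can be carved out of a larger suite. The pytest sketch below is hypothetical; the marker names, tests, and `validate_range` helper are illustrative, not part of RST itself:

```python
import pytest

# Hypothetical risk tags for carving a regression slice out of a larger suite.
# The markers would need registering in pytest.ini:
#   [pytest]
#   markers =
#       risk_high: covers high-impact functionality prone to regression
#       risk_low: cosmetic or low-impact checks

def validate_range(value):
    """Hypothetical stand-in for the system's range check (assumed bounds 1-100)."""
    return 1 <= value <= 100

@pytest.mark.risk_high
def test_boundary_values_still_enforced():
    assert validate_range(100) and not validate_range(101)

@pytest.mark.risk_low
def test_help_text_mentions_the_range():
    assert "1-100" in "Enter a value in the range 1-100."

# Run only the high-risk regression slice with:  pytest -m risk_high
```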

Jason: Leveraging the techniques of James Whittaker's 'Exploratory Software Testing' book, what would you suggest would be good 'tours' and hints for exploratory testing this boundary test example?

ChatGPT:

Leveraging the techniques from James Whittaker's book "Exploratory Software Testing," here are some suggested "tours" and hints for conducting exploratory testing on the boundary test example:

Boundary Focus Tour:

  • Start by focusing on the boundary values themselves and explore how the system handles inputs at the exact boundaries.
  • Test the behavior of the text box when the values of x and y are set precisely to the minimum and maximum boundaries.
  • Observe how the system responds and validate if it behaves as expected in terms of acceptance or rejection of the inputs.
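
To make this tour concrete, here is a minimal pytest sketch. The `accept(x, y)` validator and the inclusive bounds of 1 and 100 are assumptions standing in for the real system under test:

```python
import pytest

X_MIN, X_MAX = 1, 100  # assumed inclusive bounds; substitute the real spec
Y_MIN, Y_MAX = 1, 100

def accept(x, y):
    """Hypothetical stand-in for the text box's validation logic."""
    return X_MIN <= x <= X_MAX and Y_MIN <= y <= Y_MAX

@pytest.mark.parametrize("x, y, expected", [
    (X_MIN, Y_MIN, True),       # both exactly at the minimum
    (X_MAX, Y_MAX, True),       # both exactly at the maximum
    (X_MIN - 1, Y_MIN, False),  # one step below the x minimum
    (X_MAX + 1, Y_MAX, False),  # one step above the x maximum
    (X_MIN, Y_MAX + 1, False),  # y just past its boundary
])
def test_exact_boundaries(x, y, expected):
    assert accept(x, y) == expected
```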

Equivalence Tour:

  • Partition the valid inputs into equivalence classes based on their characteristics or expected behavior.
  • Select representative inputs from each equivalence class to test the system’s response.
  • For example, choose inputs near the boundaries, as well as some typical values within the valid range, and observe how the system handles them.
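
A sketch of the same idea in code, reusing the assumed 1-100 bounds; the class names and representative values are illustrative:

```python
import pytest

def accept(x, y):
    """Hypothetical validator; assumed inclusive bounds of 1-100 for both fields."""
    return 1 <= x <= 100 and 1 <= y <= 100

# One or two representatives per equivalence class of x (y held at a typical value).
CLASSES = {
    "below_range":   ([-5, 0], False),
    "at_lower_edge": ([1, 2], True),
    "typical":       ([50], True),
    "at_upper_edge": ([99, 100], True),
    "above_range":   ([101, 1000], False),
}

@pytest.mark.parametrize("label", CLASSES)
def test_equivalence_classes(label):
    values, expected = CLASSES[label]
    for x in values:
        assert accept(x, 50) == expected, f"class {label!r}, x={x}"
```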

Error Handling Tour:

  • Focus on error conditions and inputs that fall outside the valid boundaries.
  • Test how the system handles invalid inputs, such as values below the minimum, above the maximum, or non-numeric inputs.
  • Explore any error messages or notifications provided by the system and ensure they are appropriate and informative.
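
In code, this tour targets the parsing step rather than the range check. The `parse_and_validate` function below is a hypothetical stand-in; note the Unicode-digit case, which Python's own `int()` quietly accepts:

```python
import pytest

def parse_and_validate(text):
    """Hypothetical stand-in: the UI hands us raw text, not numbers."""
    try:
        value = int(text)
    except ValueError:
        raise ValueError(f"{text!r} is not a number")
    if not 1 <= value <= 100:  # assumed bounds
        raise ValueError(f"{value} is outside the range 1-100")
    return value

@pytest.mark.parametrize("bad_input", ["abc", "", "  ", "12.5", "1e3", "0x10", "1,000"])
def test_rejects_non_numeric_input(bad_input):
    with pytest.raises(ValueError):
        parse_and_validate(bad_input)

def test_unicode_digits_parse_surprisingly():
    # int() accepts non-ASCII decimal digits such as Arabic-Indic '٣' (three).
    # Whether the product should is exactly the kind of question this tour raises.
    assert parse_and_validate("٣") == 3
```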

Stress Testing Tour:

  • Push the boundaries of the system by subjecting it to stress conditions.
  • Test the behavior when large numbers or long strings are entered in the text box.
  • Observe how the system handles such inputs in terms of performance, response time, and error handling.
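
A sketch with a crude timing budget; the stand-in validator, input sizes, and the two-second budget are all assumptions to tune against the real system:

```python
import time
import pytest

def parse_and_validate(text):
    """Hypothetical stand-in validator with assumed bounds of 1-100."""
    value = int(text)  # may itself raise on absurdly long strings
    if not 1 <= value <= 100:
        raise ValueError(f"{value} is outside the range 1-100")
    return value

@pytest.mark.parametrize("n_digits", [10, 1_000, 100_000])
def test_oversized_input_rejected_promptly(n_digits):
    huge = "9" * n_digits
    start = time.perf_counter()
    with pytest.raises(ValueError):  # rejection is fine; hanging or crashing is not
        parse_and_validate(huge)
    elapsed = time.perf_counter() - start
    assert elapsed < 2.0, f"{n_digits}-digit input took {elapsed:.2f}s to reject"
```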

Combination Tour:

  • Explore combinations of boundary values and other factors that may influence the system’s behavior.
  • Test different combinations of x and y values, taking into account other variables or conditions that might impact the outcome.
  • For example, test how the system handles the interaction between x and y when they approach the boundaries together or when one is at its boundary while the other is not.
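
A sketch that crosses the edge values of both fields, again using the assumed validator and bounds:

```python
import itertools
import pytest

X_MIN, X_MAX, Y_MIN, Y_MAX = 1, 100, 1, 100  # assumed inclusive bounds

def accept(x, y):
    """Hypothetical stand-in for the real validator."""
    return X_MIN <= x <= X_MAX and Y_MIN <= y <= Y_MAX

X_EDGES = [X_MIN - 1, X_MIN, X_MAX, X_MAX + 1]
Y_EDGES = [Y_MIN - 1, Y_MIN, Y_MAX, Y_MAX + 1]

# Every pairing of x and y edge values: 16 cases covering the corners
# where a boundary bug in one field can mask or trigger a bug in the other.
@pytest.mark.parametrize("x, y", itertools.product(X_EDGES, Y_EDGES))
def test_boundary_pairs(x, y):
    expected = (X_MIN <= x <= X_MAX) and (Y_MIN <= y <= Y_MAX)
    assert accept(x, y) == expected
```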

Hints for Exploratory Testing:

  • Document your observations, actions taken, and any anomalies or unexpected behaviors encountered during testing.
  • Utilize exploratory testing charters to define your testing goals and provide a sense of direction during exploration.
  • Follow your intuition and curiosity while testing, allowing for serendipitous discoveries and unexpected paths.
  • Use mind mapping or brainstorming techniques to generate and organize test ideas.
  • Collaborate with other testers, developers, or stakeholders to gain different perspectives and insights.
  • Prioritize risks and focus your exploratory testing efforts on areas that have the highest potential impact or uncertainty.

By applying these tours and hints from “Exploratory Software Testing,” you can conduct more thorough and systematic exploration of the boundary test example, uncovering potential issues and gaining valuable insights into the system’s behavior.

__________________

If software testers avoided vanity and emotionally charged, opinionated chats with AI (and people), and focused instead on technical, productive chats with some humility, they would realize that AI is getting better every day. Of course, great testers shouldn't blindly trust any information: not from AI, not from Google results or ads, not from co-workers, not even their own intuitions. It is wise to check, re-check, test, and re-test. AI has been on an exponential curve of improvement lately. If you are a tester who was dissatisfied with GPT-3, do what any good tester would: upgrade to the latest software and test it again. Or don't, and go gentle into that good night.

https://www.linkedin.com/posts/james-bach-6188a811_after-playing-with-chatgpt-for-a-while-and-activity-7007295915816361984-aq8o/?originalSubdomain=zw

Ever witnessed a person contradict themselves? Have humans you've worked with ever accidentally made up 'facts'? Do you know people who blog repetitive fluff, or who are bad at mathematics? LOL. And yet you still collaborated with them, or attended their lectures?

For what it's worth, Beyond Meat is up 8% in trading.

More from the post above:

  • ChatGPT seems a bit smarter lately. Maybe some regression testing of this tool is warranted?
  • AI still has trouble recognizing true hostility from humans on professional topics.

It is worth noting that I had this conversation with ChatGPT while casually sipping a Hazelnut Latte in the parking lot of a Starbucks — a lot faster, cheaper, and even friendlier than attending a seminar, reading a book, or even reading this post. You can access testing experts' knowledge, and even their gaps, simply by asking. You can get custom questions answered, specialized to your specific testing problem. You can even ask 'why' without starting a pseudo-Socratic discussion.

AI is getting smarter, and faster, than the humans it emulates.

In summary, testers can get far more value than they realize from ChatGPT and other LLMs if they use their testing instincts to leverage AI — digging for more information, questioning output, and drawing on external data and context. A little humility and true tester curiosity will make you a better tester and keep you one step ahead of the competition — and the machines.

-Jason Arbon, Test Nerd
