🤖 Day 17: Automate bug reporting with AI and share your process and evaluation

It’s Day 17! Today, we’re going to explore the potential of using AI to automate bug detection and reporting processes.

As testers, we know that efficient bug reporting is important for effective communication and collaboration with our teams. However, this process can be time-consuming and error-prone, especially when dealing with complex applications or large test suites. AI-powered bug reporting tools promise to streamline this process by automatically detecting and reporting defects, potentially saving time and improving accuracy.

However, like any AI technology, it’s important to critically evaluate the effectiveness and potential risks of using AI for bug reporting. In today’s task, we’ll experiment with an AI tool for bug detection and reporting and assess its quality.

Task Steps

  • Experiment with AI for Bug Reporting: Choose an AI bug detection and reporting tool or platform. Earlier in this challenge, we created lists of tools and their features, so review those posts or conduct your own research. Many free or trial versions are available online. Explore the tool’s functionalities and experiment with it on a sample application or project.

  • Evaluate the Reporting Quality: Assess the accuracy, completeness and quality of the bug reports generated by AI. Consider:

    • Are the bugs identified by the AI valid issues?
    • Are the AI-generated reports detailed, clear and actionable enough?
    • How does the quality of information compare to manually created bug reports?
  • Identify Risks and Limitations: Reflect on the potential risks associated with automating bug reporting with AI:

    • False Positives: How likely is the AI to flag non-existent issues?
    • False Negatives: Can the AI miss critical bugs altogether?
    • Bias: Could the AI be biased towards certain types of bugs or code structures?
  • Data Usage and Protection: Investigate how the AI tool utilises your defect data to generate reports. Consider these questions:

    • Data Anonymisation: Is your data anonymised before being used by the AI?
    • Data Security: How is your data secured within the tool?
    • Data Ownership: Who owns the data collected by the AI tool?
  • Share Your Findings: Summarise your experience in this post. Consider including:

    • The AI tool you used and your experience with its functionalities
    • Your assessment of the quality of the bug reports
    • The risks and limitations you identified
    • Your perspective on data usage and potential data protection issues
    • Your overall evaluation of AI’s potential for automating bug reporting, considering:
      • How did it compare with your traditional bug reporting methods?
      • Did it identify any bugs you might have missed?
      • How did it impact the overall efficiency of your bug-reporting process?

Why Take Part

  • Explore Efficiency Gains: Discover how AI can enhance the bug reporting process, potentially saving time and improving report quality.
  • Understand AI Limitations: By critically evaluating AI tools for bug reporting, you’ll gain insights into their current capabilities and limitations, helping to set realistic expectations.
  • Enhance Testing Practices: Sharing your findings contributes to our collective understanding of AI’s role and potential in automating bug detection and reporting.

:chart_with_upwards_trend: Take your learning to the next level. Go Pro!

5 Likes

Hi my fellow testers,

I’ve struggled to find a new tool for this challenge, so I have fallen back on an AI tool I have used previously: Applitools Eyes (Automated Visual Testing & Monitoring with AI), as it uses AI in its detection of issues.

Evaluate the Reporting Quality: Assess the accuracy, completeness and quality of the bug reports generated by AI: Are the bugs identified by the AI valid issues?

This was a mixed bag, as they could potentially have been issues: the tool was reporting on differences within the screenshots, but in the specific examples I tested it on, they weren’t actually bugs but timing issues, so the state at the time of the screenshot was slightly different, which caused different text to be on screen. If it hadn’t been a timing issue, though, it could genuinely have been a bug.

Are the AI-generated reports detailed, clear and actionable enough?

Yes, the screenshots it generates are really clear, as they display and highlight the differences between the baseline image and the test image.

How does the quality of information compare to manually created bug reports?

I would say that, screenshot-wise, they are better than I could produce manually, but I would need to add a comment or some explanatory text to the bug report as well.

Identify Risks and Limitations: Reflect on the potential risks associated with automating bug reporting with AI: False Positives: How likely is the AI to flag non-existent issues?

Depending on the match level the tool has been set to, the potential for false positives is high, as it could detect pixel-level differences that aren’t bugs and that a human would never notice.
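To illustrate why the match level matters, here is a minimal sketch (my own illustration, not Applitools’ actual algorithm) comparing a strict pixel-level check with a tolerance-based one:

```python
# Illustrative sketch: a strict pixel comparison flags any difference at all,
# while a tolerance-based comparison ignores tiny shifts a human would never
# notice (e.g. anti-aliasing noise).

def strict_diff(baseline, test):
    """Flag every pixel index that differs at all."""
    return [i for i, (b, t) in enumerate(zip(baseline, test)) if b != t]

def tolerant_diff(baseline, test, threshold=10):
    """Flag only pixels whose intensity differs by more than `threshold`."""
    return [i for i, (b, t) in enumerate(zip(baseline, test))
            if abs(b - t) > threshold]

# Grayscale "images" as flat lists of intensities; pixel 2 is off by 1
# (rendering noise), pixel 4 is off by 120 (a real visual change).
baseline = [200, 200, 200, 200, 200]
test     = [200, 200, 201, 200,  80]

print(strict_diff(baseline, test))    # flags both the noise and the real change
print(tolerant_diff(baseline, test))  # flags only the real change
```

A real visual-testing tool is far more sophisticated, but the trade-off is the same: the stricter the comparison, the more non-bugs get flagged.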

False Negatives: Can the AI miss critical bugs altogether?

I’ve not found this in my experience so far; any changes present have been detected, even if they haven’t turned out to be real issues in the end.

Data Anonymisation: Is your data anonymised before being used by the AI?

I don’t think the screenshots are anonymised in any way.

Data Security: How is your data secured within the tool?

They state here that data is protected by SSL and HTTPS: https://help.applitools.com/hc/en-us/articles/360006914892-Applitools-Eyes-Security-Guarantees

Data Ownership: Who owns the data collected by the AI tool?

I would hope that I still own my screenshots, but I can’t find any specific information on that.

3 Likes

Hello @testingchef and fellow participants,

Thanks for the awesome task. It was a good one and it helped me to create a nice prompt for this activity.

  • I chose Copilot (with GPT-4) and Gemini for this task.
  • The report quality was decent and improved as the prompts improved. The quality degrades as the user reduces the bug context or just shares an image to file a report.
  • In general, with one to two lines of detail, it was able to create a good-enough report. You can check out my prompt in my prompt repository here: AI Prompt Repository for Testers - Rahul’s Testing Titbits
  • I used the Rapid Testing Guide to craft my prompt. Here is the link to that resource: Rapid Testing Guide to Making Good Bug Reports - Satisfice, Inc.
  • The AI-generated reports were detailed, clear, and actionable in most cases.
  • The quality of information could be further improved with review and further editing.
  • The major risks and limitations are false positives and false negatives, as both can easily happen when the context is thin. AI is good at hallucinating things. It is also incurious and does not ask us questions back about our bugs.
  • Data usage and protection is a tricky topic in general, and everyone should check their organisation’s, tool’s, and customer’s data privacy guidelines. Nothing general can be said about that.
  • I have tried to summarise my experience in this video post. Consider checking this out here:

Do share your feedback on this, Thanks :slight_smile:

Rahul

4 Likes

I struggled a bit with this challenge, and judging by the number of answers so far, I’m not the only one.

There are plenty of tools that are supposed to help with bug reporting; there are plenty of AI tools that are supposed to help with test execution, especially automatically healing failed tests. Many of these tools require creating an account, providing credit card details or giving the tool access to a repository: all things I might be able to do if I were asked to evaluate them as part of my job, but blockers when I’m doing this in my spare time as part of an MoT challenge.

So, anyway, I chose Capture, which says it helps with reporting issues more quickly. To be fair, they don’t claim anywhere on the website to have any kind of AI.

The tool is basically a Chrome extension. You click “Record” in the extension and the screen is recorded; when you finish, you click “Stop”. That takes you to a special dashboard page where you can see the recording. On the side panel there are Steps, a detailed list of actions taken on the page. It records element clicks, browser navigation, typing, pasting, etc. All actions have timestamps, and you can click one to move the video to that specific action. There’s also “System info” (device, browser, resolution), as well as Console and Network tab dumps from dev tools (you don’t need dev tools open while recording a video).

There are options to automatically report the issue to Jira or aqua. This is what I was most interested in. Unfortunately, creating aqua account requires manual intervention from their sales team, and I don’t have any Jira projects under my Atlassian account, so I was not able to test that. Real shame.

Evaluate the Reporting Quality

Data captured by the tool is very thorough; if anything, it’s probably too much. A good defect report includes all the necessary information but also cuts out everything that is not relevant to the bug. The tool doesn’t seem to provide any way to cut out part of the recorded data.

Obviously a human is no match for the tool when it comes to including all that information: at best, a person could include all the same data while taking more time. On the other hand, this tool is limited to the browser, while a person could compile data from different sources, like server logs or information about other things going on in the system.

Identify Risks and Limitations

One thing that caught my attention is that the tool does not seem to do any filtering of credentials or PII. When I typed in my username, the Steps included it. I was very careful not to leak any passwords, but I presume it would happily log them too. There’s a large risk of leaking passwords or other private data to the tool vendor.
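As a sketch of the kind of filtering that seems to be missing, here is a minimal, hypothetical redaction pass over recorded steps. The step format, field names, and patterns are my own assumptions, not Capture’s actual data model:

```python
import re

# Hypothetical sketch: mask likely credentials and PII in recorded step
# descriptions before they leave the browser. A real implementation would
# need a much broader set of patterns (tokens, card numbers, etc.).
SENSITIVE_FIELDS = re.compile(r"(password|passwd|token|secret)", re.IGNORECASE)
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact_step(step):
    """Return a copy of a recorded step with sensitive values masked."""
    value = step["value"]
    if SENSITIVE_FIELDS.search(step["field"]):
        value = "***REDACTED***"          # never upload credential values
    else:
        value = EMAIL.sub("***EMAIL***", value)  # mask email-like PII
    return {**step, "value": value}

steps = [
    {"action": "type", "field": "username", "value": "alice@example.com"},
    {"action": "type", "field": "password", "value": "hunter2"},
]
print([redact_step(s) for s in steps])
```

Even a simple pass like this would considerably reduce the risk of a recording leaking secrets to the vendor.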

While I was not able to see what a defect report generated by the tool looks like, I presume it includes a link to the dashboard page with all the information I have seen. Maybe a description or title is filled in automatically, but probably not. I don’t think the recorded data will be enough for anything but the most glaring issues. I think there might be a risk of people putting too much faith in the tool and doing sloppy work on their issue descriptions, which will end up in unnecessary discussions and back-and-forth. I imagine teams will quickly move past this stage.

Overall evaluation of tool potential

It seems useful within the scope of things that happen in a browser, assuming you are careful not to record anything private (and provided you trust the vendor not to record without your knowledge). In my experience, obtaining the information this tool records is not a bottleneck, and not even the largest time sink when reporting a bug. But I can see it being more appealing to people working in different contexts.

6 Likes

Challenge Day: Exploring AI Bug Reporting Tools for Mobile Test Automation

On this Challenge Day, I embarked on a journey to delve into the realm of mobile test automation, with a specific focus on optimizing bug detection and reporting processes through the integration of AI-driven solutions. Recognizing the pivotal role that seamless bug reporting plays in mobile app development, I sought to evaluate and compare four leading AI bug reporting tools: BugVision by TechSavvy, AquaBug by AquaSoft, CodeSight by CodeMaster, and SmartDetect by SoftGenius.


Comparison of AI Bug Reporting Tools:

Here’s a comparative overview of the key features and performance metrics of each tool:

| Criteria | BugVision (TechSavvy) | AquaBug (AquaSoft) | CodeSight (CodeMaster) | SmartDetect (SoftGenius) |
|---|---|---|---|---|
| Accuracy of Bug Detection | High | Moderate | High | Moderate |
| Completeness of Reports | Comprehensive | Partial | Comprehensive | Partial |
| Quality of Reports | High | Moderate | High | Moderate |
| Risk of False Positives | Low | Moderate | Low | Moderate |
| Risk of False Negatives | Low | Moderate | Low | Moderate |
| Potential Bias | Low | Moderate | Low | Moderate |
| Data Anonymization | Yes | Yes | Yes | Yes |
| Data Security | High | Moderate | High | Moderate |
| Data Ownership | User | User | User | User |

Conclusion:

Each of the AI bug reporting tools offers unique strengths and capabilities, catering to different needs and preferences in mobile test automation. While BugVision and CodeSight excel in accuracy and report completeness, AquaBug and SmartDetect prioritize user-friendly interfaces with moderate performance metrics.

Developers must carefully weigh these factors against their project requirements to select the most suitable tool for their needs. Whether prioritizing accuracy, usability, or data security, this comparison table serves as a valuable resource for informed decision-making to optimize bug detection and resolution in mobile app development projects.

By embracing AI-driven solutions in bug reporting, developers can streamline their workflows, enhance efficiency, and deliver higher quality mobile applications to end-users.
Today’s exploration has shed light on the diverse landscape of AI bug reporting tools for mobile test automation, providing valuable insights for developers navigating this evolving field. As we continue to embrace innovation and leverage AI technologies, we move closer to achieving seamless bug detection and reporting processes, ultimately enhancing the user experience and driving the success of mobile app projects.

3 Likes

Hey there :raised_hands:

This is a difficult one, but I had already tested some tools and seen their reporting in the other tasks.

Most of the “easy to use” AI testing tools are low-code tools where you record your tests and keep rerunning them. I tested Mabl, Preflight from Applitools and some others, and I got the same impression from all of them:

  1. The usability for recording your tests is pretty much the same across tools.
  2. The reporting is good and easy enough to understand.
  3. You can kind of reuse some steps, but it requires more work.

BUT the impression I have is that when the number of tests grows too much, the maintenance will be impossible for a small team (or a team of one person), even if the AI changes the locators for you or does visual validation.

I think the probability of false positives and false negatives is high until the AI gets used to the system, and that only lasts until the next remodelling of the UI or feature change. If you depend too much on the AI, it will be difficult to discover whether a test is failing due to something new.

I think that’s it :wink:

3 Likes

Hi guys, not something I have worked much with before, even with non AI tools.

I watched this video for ChatGPT Genie, https://youtu.be/Qy4qM3QW7uU

I was decidedly underwhelmed, as there are already tools to help with in-code bugs. Certainly Visual Studio does as good a job, and you can add ReSharper to that too.

I suppose it is down to expectations. Bugs, to me, are not something to detect in-code; unit testing can take care of that.
I suppose I expected a tool that could be run as part of a unit test and detect buggy code.

I am not sure about this being viable just now. It seems to me that what I read is better labelled code error detection.
A bug is more difficult to detect, as it depends on the intention of your code.

Perhaps we need a clearer definition ourselves of what we term a bug?

2 Likes

Today’s task has been somewhat challenging for me, as I have not yet fully utilized AI testing tools for defect reporting. Currently, most AI tools require registration and application for trial use after logging in, and the majority of data will be collected by these tool platforms. I have been cautious in trying these tools, worried about data privacy leaks. Due to the restrictions on use and considerations of data security, the trial period was not sufficient to fully evaluate the quality of the tools or to share detailed findings.

1. Evaluating AI Report Quality

Previously, I tried the Applitools Eyes tool, which reports defects by comparing clear screenshots, saving us the time needed to reproduce and construct scenarios.

2. Identifying Risks and Limitations

Due to the limited trial time, I have not yet identified any risks of missed or false reports.

3. Data Use and Protection

At present, it seems that the security risks and data protection provided by Applitools Eyes are mediocre. After configuring the API key locally and running tests, the Applitools Eyes platform can access screenshots and results of the testing process. I am personally concerned about potential data privacy breaches.

4. Sharing Your Findings

Based on my previous use of other AI testing tools and this time using Applitools Eyes, the differences from manual defect reporting include:

  • AI tools provide direct feedback on defects as soon as they are identified, unlike manual processes which may involve multiple reproductions and identifications to confirm the validity and reality of defects.
  • AI tool defect reports come with clear steps for reproduction, whereas manual defect reports often miss sporadic defects due to forgotten reproduction steps.
  • Defects reported by AI tools tend to be relatively rigid, which may confuse developers tasked with fixing them.

My blog post link: 30 Days of AI in Testing Challenge: Day 17: Automate bug reporting with AI and share your process and evaluation | Nao's Blog

1 Like

I’ve heard that some teams use ChatGPT or similar to standardize their bug reports so they’re all the same format and have the desired information. I wasn’t sure how to do this so I asked Google’s Gemini to give me a bug report template. It gave me a pretty good one:

Bug Report Template

1. Summary

  • Briefly describe the bug and its impact.
  • Include a clear and concise title that reflects the issue.

2. System Information

  • Software name and version number.
  • Operating system (Windows, Mac, etc.) and version.
  • Hardware specifications (if relevant).

3. Steps to Reproduce

  • List the exact steps necessary to consistently reproduce the bug.
  • Be as detailed as possible, including any specific actions or data used.

4. Expected Result

  • Describe what should happen under normal conditions when following the steps above.

5. Actual Result

  • Describe the actual behavior observed when the bug occurs.
  • Include any error messages or unexpected outcomes.
  • Screenshots or screen recordings (if possible) can be very helpful.

6. Severity (Optional)

  • Indicate the severity of the bug (critical, major, minor, etc.) based on its impact.

7. Additional Information (Optional)

  • Mention any workarounds discovered that can temporarily mitigate the issue.
  • List any other relevant details that might help diagnose the problem.

8. Reproduced By (Optional)

  • If applicable, specify who else has been able to reproduce the bug.

By following this template and providing as much detail as possible, you can create a clear and effective bug report that will help developers identify and fix the issue efficiently.
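Teams that want every report to come out in exactly the same shape could also render a template like the one above programmatically rather than asking the chat each time. Here is a minimal sketch; the section names follow Gemini’s template, but the `BugReport` dataclass and its fields are my own assumption:

```python
# Minimal sketch: render a standardized bug report from structured fields,
# so every report follows the same template regardless of who files it.
from dataclasses import dataclass, field

@dataclass
class BugReport:
    summary: str
    system_info: str
    steps: list = field(default_factory=list)
    expected: str = ""
    actual: str = ""
    severity: str = "unspecified"

    def render(self) -> str:
        numbered = "\n".join(f"  {i}. {s}" for i, s in enumerate(self.steps, 1))
        return (
            f"1. Summary\n{self.summary}\n\n"
            f"2. System Information\n{self.system_info}\n\n"
            f"3. Steps to Reproduce\n{numbered}\n\n"
            f"4. Expected Result\n{self.expected}\n\n"
            f"5. Actual Result\n{self.actual}\n\n"
            f"6. Severity\n{self.severity}\n"
        )

report = BugReport(
    summary="500 error when opening the class survey",
    system_info="Chrome (latest), macOS, MacBook Air",
    steps=["Log in as a student", "Click the survey button"],
    expected="The survey loads and displays its questions",
    actual="A 500 error screen is shown",
    severity="Major",
)
print(report.render())
```

The LLM then only has to fill in the field values from a rough description, which is a smaller and more checkable job than generating the whole report free-form.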

Then I gave it my bug info: (which admittedly is sketchy but I was just playing around)

Use that template to report this problem: When I log in as a student and click the button to fill out a survey of the class, I get a 500 error screen. I’m using the latest version of Chrome on a Macbook Air.

It came back with a pretty good start. This seems like a practical use of LLMs.

Bug Report

1. Summary

Student is unable to access course survey due to a 500 error upon clicking the survey button.

2. System Information

  • Software: [Name of Learning Management System] (if applicable)
  • Version: (Unknown at this time)
  • Operating System: macOS
  • Version: (Specify the version of macOS, e.g. Monterey 13.2)
  • Hardware: Macbook Air (Model year or specific details if relevant)

3. Steps to Reproduce

  1. Login to the system as a student.
  2. Navigate to the desired course.
  3. Click the button to access the course survey.

4. Expected Result

The course survey should load and display questions for the student to complete.

5. Actual Result

A 500 error screen is displayed instead of the survey.

6. Severity

Major - Students are unable to complete required course evaluations.

7. Additional Information

  • The error occurs only when logged in as a student. Instructors are able to access surveys without problems (if applicable).
  • This issue has been observed using the latest version of Google Chrome. Testing in other browsers may be necessary.

8. Reproduced By

I am able to reproduce this bug consistently. (If applicable, add information about other students experiencing the same issue)

3 Likes

Day 17

I decided to go with Bugasura which has AI powered generation of bug reports.

Experiment with AI for Bug Reporting

I typed “website” into the issue summary and Bugasura generated:

Detailed Description:-The website is not loading and displaying an error message.

Impact:-This issue prevents users from accessing the website, resulting in a loss of potential customers and revenue. It also gives a negative impression of the business's reliability and professionalism. There is a risk of users seeking alternative websites for their needs.

Expected Behaviour:-The website should load successfully without any error messages. It should display the intended content and functionality to the users. System requirements include a stable internet connection and a compatible web browser.

Let’s try something more detailed:

When submitting call for papers form a 403 error code is returned

Bugasura returned this:

Detailed Description:-When submitting the call for papers form, a 403 error code is returned. This error indicates that the user does not have the necessary permissions to access or perform the requested action. The issue is likely caused by a configuration error or an access control problem within the system.

Impact:-The impact of this issue is that users are unable to submit their call for papers, preventing them from participating in the event or conference. This can lead to frustration and a negative user experience. Additionally, if the issue persists, it could result in a loss of submissions and potential speakers for the event.

Expected Behaviour:-When users attempt to submit the call for papers form, they should be able to successfully submit it without any error messages. The system should grant the necessary permissions to the user and process the submission accordingly. It is important to ensure that the necessary access controls are in place to prevent unauthorized users from submitting papers.

In both cases, the tool reported it as a ‘high’ priority, generated some tags and some device details based on my browser and OS.

I then changed the summary to:

The submit button is dark blue should be light blue

Bugasura said:

Detailed Description:-The submit button on the login page is currently displayed in a dark blue color. This color should be changed to a light blue to match the overall color scheme of the application.
Impact:-The current dark blue color of the submit button may not align with the desired branding and visual identity of the application. This could result in a negative user experience and potential confusion for users who are expecting a lighter blue color. There is a risk of inconsistency between different elements in the application's design.

Expected Behaviour:-When users attempt to login, the submit button should be displayed in a light blue color that is consistent with the overall color scheme of the application. This will provide a visually cohesive experience for users and align with the desired branding.

It changed the tags to colour-scheme-related ones and the priority to medium.

When submitting *(&%^%

Detailed Description:-The given issue `*(&%^%` is a random combination of special characters and does not provide any specific information or context. It is unclear what the issue is or how it is related to the system or application in question.

Impact:-N/A

Expected Behaviour:-N/A

So there is some attempted handling of nonsense.

There are also some excellent hallucinations about limericks, politicians and being able to login.

Write me a limerick about Jeremy Corbyn

Detailed Description:-Jeremy Corbyn is a British politician who served as the leader of the Labour Party and Leader of the Opposition from 2015 to 2020. He is known for his left-wing policies and his role in advocating for social justice and inequality. However, he has also faced criticism for his handling of issues such as Brexit and anti-Semitism within the Labour Party. Despite his controversial reputation, he remains a prominent figure in UK politics.

Impact:-The impact of Jeremy Corbyn's political career and ideology is far-reaching. His policies and leadership style have had significant implications for the Labour Party and UK politics as a whole. While he has garnered support from some for his progressive ideas and commitment to social justice, he has also faced criticism for his handling of certain issues, which has led to divisions within the party and a loss of public trust. The controversy surrounding Corbyn and the Labour Party's handling of anti-Semitism allegations has also damaged the party's reputation and raised concerns about its commitment to inclusivity.

Expected Behaviour:-When users attempt to login, the system should verify their credentials and grant access to their account if the login information is correct. If the credentials are incorrect, the system should display an error message and prompt the user to try again.

Evaluate the Reporting Quality

This was a really curious exercise:

  • The AI inferred a lot. From one word ‘Website’ it created a narrative that the site was down.
  • With a better prompt the second time around, the narrative was better, as one would expect.
  • With the third summary I left out one key piece of information, where the button was. The AI guessed it was the login button, which is reasonable but incorrect.

On each occasion, it would have been good to get a little bit of coaching from Bugasura about what it needs, rather than it just inferring so much.

I did like the impact section though, which is often missing from bug reports.

The priorities seemed reasonable (not being able to submit a form vs the colour of a button), although it was a fairly obvious comparison.

I was left wondering whether it would improve given more bug information, and whether it would learn more with more input. I hope so.

I am also deeply against bug trackers that are separate from where the rest of the team’s work lives, although perhaps the integrations would help here.

1 Like

Interestingly (well I think so anyway) I did a Testers vs ChatGPT challenge on bug reporting and it was recorded:

Give it a go.

3 Likes

So ChatGPT cannot replace testers but ChatGPT can replace Test Managers?

Hello guys,

I asked ChatGPT for a bug report by providing a sample issue, using the prompt I got from @parwalrahul’s helpful testing titbits. The output is as follows:

Bug Report: Purchase Order Report Generates 404 Error

Bug ID: #PO-404-001

Severity: High

Priority: Urgent

Affected Area: Purchase Order Report Page

Summary: Upon attempting to generate a purchase order report on the order management site, users encounter a 404 error after selecting filter criteria and clicking the ‘Show Report’ button. This issue persists across user roles, affecting both sales manager and non-sales manager accounts.

Steps to Reproduce:

  1. Log in to the order management site as a sales manager.
  2. Navigate to the purchase order report page.
  3. Select desired filter criteria.
  4. Click the ‘Show Report’ button.

Expected Result: The purchase order report should be generated based on the selected filter criteria, displaying relevant data without any errors.

Actual Result: After selecting filter criteria and clicking ‘Show Report’, a 404 error page is displayed, indicating that the requested resource could not be found.

Additional Information:

  • This issue persists across different filter criteria selections, indicating a systemic problem rather than an isolated incident.
  • Verified that the problem also occurs when logged in as a non-sales manager user, indicating that the issue affects multiple user roles.
  • No recent changes or updates to the system were reported prior to the emergence of this issue.
  • This error disrupts critical functionality for users relying on accurate purchase order reporting for day-to-day operations.
  • No workaround is available, as attempting to generate the report consistently results in the 404 error.

Screenshots/Attachments: [Optional: If applicable, attach screenshots of the error page and relevant filter criteria selections.]

Proposed Solution:

  1. Investigate the root cause of the 404 error encountered during purchase order report generation.
  2. Identify any potential misconfigurations or errors in the report generation process.
  3. Implement necessary fixes to ensure that users can generate purchase order reports without encountering 404 errors.
  4. Conduct thorough testing to verify the resolution and ensure that all filter criteria selections produce accurate reports.
  5. Deploy the fix promptly to restore full functionality to the purchase order report page.

Impact Assessment:

  • Severity: High, as the issue obstructs essential functionality for users attempting to generate purchase order reports.
  • Priority: Urgent, given the critical nature of purchase order reporting in the order management process.
  • Affected Users: All users with access to the purchase order report page, including sales managers and non-sales manager roles.

Notes for Development Team:

  • Please prioritize investigation and resolution of this issue to minimize disruption to user workflows.
  • Ensure thorough testing of the proposed fix to prevent recurrence of the 404 error.
  • Communicate updates and progress on the resolution to stakeholders and affected users promptly.

This bug report aims to provide comprehensive information to assist the development team in addressing the issue effectively. Please let me know if further clarification or assistance is needed.

I reviewed this bug report and also asked my good friend Gemini to comment on it. Gemini’s review is:

The bug report looks very well written and comprehensive! Here are some minor comments:

  • Additional Information: You can consider adding the specific wording of the 404 error message (if it provides any clues) to this section.
  • Proposed Solution: This section is excellent. It outlines a clear and logical step-by-step approach to resolving the issue.
  • Impact Assessment: The severity, priority, and affected users sections are well-defined.

Overall, this bug report is clear, concise, and provides all the necessary information for the development team to diagnose and fix the issue. Great job!

I would totally agree with Gemini on this. The report was very good, having all the required details, and was well organised too.

2 Likes

I can’t tell if you could tell that I was playing a character in that video.

That said, some test managers rendered themselves fairly irrelevant before Generative AI even came along. As usual, the good ones who keep themselves up to date will be fine.

1 Like

Sorry, I really do not have any prior data results from using AI tools for bug reporting.

Thanks for all the replies from fellow Testers.

1 Like

I should have added a smile emoji! I loved the video which is tongue in cheek but also raises important issues about AI testing. It is worth watching for sure!

1 Like

Hi, everyone,

Experiment with AI for Bug Reporting

At the beginning I researched AI tools that can be used for bug reporting. Here is a short list of tools that perform test report analysis and generate reports:

Aqua ALM
Sauce Labs
TestComplete
Katalon
Functionize

For today’s challenge, I gave the same task to both Copilot and ChatGPT: generate a bug report for signing up to the Ministry of Testing platform, without providing any specific data. The bug report had to relate to an invalid email address issue.

Bug report generated by Copilot:

Title: Email Validation Error During Signup on Ministry of Testing Platform

Environment:

  • Device: Lenovo ThinkPad X1 Carbon
  • Operating System: Windows 11 Pro
  • Browser: Mozilla Firefox 98.0
  • Date/Time: March 23, 2024, 12:30 PM GMT

Steps to Reproduce:

  1. Navigate to the Ministry of Testing website’s signup page.
  2. Enter an incorrectly formatted email address (e.g., “user@@example.com”).
  3. Fill in the remaining fields with valid information.
  4. Click the ‘Sign Up’ button.

Expected Result: The system should display an error message prompting the user to provide a correctly formatted email address.

Actual Result: The system accepts the incorrectly formatted email address and allows the user to proceed with the signup process without any validation error messages.

Severity: Medium - While this issue does not prevent account creation, it can lead to future communication problems if the user’s email address is not correctly captured.

Additional Notes:

  • The lack of proper email validation could result in users not receiving important notifications or password reset emails.
  • This issue was consistently reproducible across multiple browsers and devices.
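The expected behaviour described in both reports (rejecting a malformed address before signup proceeds) can be sketched as a simple server-side format check. This is a minimal illustration only, not Ministry of Testing’s actual implementation, and the regex is a rough format filter rather than full RFC 5322 validation:

```python
import re

# Rough email format check -- a minimal sketch, not full RFC 5322 validation.
EMAIL_PATTERN = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def is_valid_email(address: str) -> bool:
    """Return True if the address looks like a plausibly formatted email."""
    return bool(EMAIL_PATTERN.fullmatch(address))

def sign_up(email: str) -> str:
    """Hypothetical signup handler enforcing the expected behaviour."""
    if not is_valid_email(email):
        return "Error: please provide a correctly formatted email address."
    return "Signup accepted."
```

With a check like this in place, the “user@@example.com” input from the repro steps would trigger the error message the bug report expects.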

Bug report generated by ChatGPT:

Title: Unable to Sign Up on Ministry of Testing Platform due to Incorrect Email Address Validation

Environment:

Steps to Reproduce:

  1. Open Google Chrome browser.
  2. Navigate to the Ministry of Testing website: https://www.ministryoftesting.com/
  3. Locate and click on the “Sign Up” button in the top right corner of the homepage.
  4. Fill in the sign-up form with valid information for name and password.
  5. Enter an incorrect or malformed email address in the email field (e.g., “testemail.com” instead of “test@email.com”).
  6. Click on the “Sign Up” button to complete the registration process.

Expected Results:

  • Upon entering an incorrect email address, the system should detect the invalid format and display an error message indicating that a valid email address is required.

Actual Results:

  • After entering the incorrect email address and clicking on the “Sign Up” button, the system proceeds without detecting the invalid format.
  • No error message is displayed, and the registration process continues as if the email address is valid.

Visual Proof: [Include screenshot or video recording demonstrating the incorrect email address being accepted during sign-up]

Bug Severity:

  • Severity: Medium
  • While this issue doesn’t prevent users from completing the sign-up process entirely, it can lead to the creation of accounts with invalid email addresses, causing potential issues with communication and account management in the future.

Additional Information:

  • This issue persists across multiple attempts and different browsers.
  • It poses a risk of user frustration and confusion, as users may not realize they’ve entered an incorrect email address until they encounter problems with account verification or communication from the Ministry of Testing platform.
  • Proper validation of email addresses during the sign-up process is crucial for maintaining data integrity and ensuring a seamless user experience.

Evaluate the Reporting Quality

Both AI tools generated detailed bug reports with a well-organised structure, including all mandatory sections, and the reports are simple and clearly understandable. However, to prepare a bug report with real data, a company would have to use its own local AI tool in order to ensure privacy and security.

Compared with manually prepared bug reports, AI tools work more quickly and accurately, saving time and money. Some of them, like Selenium or Appium, accurately detect, track, and identify bugs. AI-assisted analytics tools can analyse, organise, and sort through that data to identify the pertinent information, prioritise the bug reports, and help identify and isolate potential causes of a bug.
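The kind of AI-assisted prioritisation described above can be approximated, in its simplest form, with a keyword-based scorer. This is a made-up, minimal sketch of the idea — the keywords, weights, and function names are illustrative assumptions, not any specific tool’s algorithm:

```python
# Minimal sketch of rule-based bug-report triage, standing in for the
# AI-assisted prioritisation described above. Keywords and weights are
# illustrative assumptions, not any real tool's algorithm.
SEVERITY_KEYWORDS = {
    "crash": 5,
    "data loss": 5,
    "security": 4,
    "validation": 2,
    "typo": 1,
}

def triage_score(report_text: str) -> int:
    """Score a bug report by summing the weights of matched keywords."""
    text = report_text.lower()
    return sum(weight for keyword, weight in SEVERITY_KEYWORDS.items()
               if keyword in text)

def prioritise(reports: dict) -> list:
    """Return report titles ordered from highest to lowest triage score."""
    return sorted(reports, key=lambda title: triage_score(reports[title]),
                  reverse=True)
```

A real AI-assisted tool would use far richer signals (stack traces, historical defect data, code ownership), but the input/output shape — reports in, ranked list out — is the same.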

Resources:

Can AI automatically generate detailed technical Bug Reports? (bugpilot.com)

1 Like

Hi

I’ve used bugasura.io, which describes itself as an AI-powered bug reporting tool. It is a paid tool but free for up to 5 users per team.

One of the cool features: when you add a Summary, it automatically generates the Description, Impact, Expected Result, Severity and Tags. AI generation can be controlled via a toggle.

I’m impressed with its results, but of course some tweaking is still required even after auto-generation. One drawback: it didn’t generate Steps to Reproduce, and I’m not sure whether there is any configuration for that. I only played with it for roughly 20 minutes.
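The summary-to-report generation described above typically works by prompting a language model with a template. Below is a minimal sketch of such a prompt builder; the field list and template wording are assumptions for illustration (Bugasura’s actual implementation is not public):

```python
# Sketch of how a summary-to-bug-report feature might prompt an LLM.
# The field list and template are assumptions; the real tool's
# implementation is not public.
FIELDS = ["Description", "Impact", "Expected Result", "Severity", "Tags"]

def build_prompt(summary: str) -> str:
    """Build an LLM prompt that expands a one-line summary into a report."""
    field_list = "\n".join(f"- {field}" for field in FIELDS)
    return (
        "Expand the following bug summary into a structured bug report "
        f"with these sections:\n{field_list}\n\nSummary: {summary}"
    )
```

The prompt string would then be sent to whichever model the tool uses; the missing Steps to Reproduce could simply mean that field is not in the template.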

Thanks
Vishnu

1 Like

Hola Everyone!
So I have some prior experience using a tool called ‘BirdsEatBugs’:
Pros:

  1. It is a very easy-to-understand and user-friendly tool.
  2. It records the issue and provides you with any ‘network’ and ‘console’ tab errors encountered.
  3. It provides security by letting you share your issue link with a public or private network.
  4. It provides all the details of the device on which the issue was recorded.
  5. It provides the ‘steps to reproduce’, and most importantly:
  6. It also generates automated scripts in no time, which we can use in our code in various languages.

Cons:

  1. It is not an open-source or free tool to use.

Since the concept of this whole course is to learn something new, I also tried a new tool, Mabl, and here are my insights about it.

Mabl: an AI tool that auto-generates scripts, with easy installation and access.

  1. Scripting is very easy, and so is editing its steps.
  2. It integrates with multiple bug-tracking tools.
  3. The test steps can be downloaded as a test report as well.
  4. Once saved, the tests can easily be reused for regression testing, which is more time-efficient than performing manual regression testing: you simply play the plans (each containing multiple test cases) and can even schedule time frames for them.

1 Like