With genAI helping in every field, are there also libraries that can help with generating test scenarios?
I have a requirement document that contains text and some flowcharts. Can we feed this to some genAI that takes context from the requirement and generates test cases for all possible scenario types (happy path, anomaly, boundary, security, performance, etc.) with detailed steps?
A few tools like spaCy and TextBlob exist, but they only give high-level test scenarios.
Also, is there any tool that can take a PDF as input and generate test scenarios from it?
Do you have any suggestions/tools that could be explored?
You are expecting magic. AI is not magic. I once had a manager who asked if AI could resolve all the bugs and tech debt. This is on the same path, though not as far into fantasy.
Could it be possible in the future? Sure. I am not sure why anyone would make the effort, though. Test identification is just one part of it - test implementation is the other, which will still require people. (Otherwise, you are asking if AI can write your software for you, which is science fiction.)
Ideally test identification should be a team ensemble activity, not something you can throw primitive AI tools at.
I think you should try a few tools and see what happens.
You should think about why this is appealing to you.
Complete (exhaustive) testing is generally impossible given the combinatorial problems inherent in paths. So unless you are constraining the problem set, GenAI won’t be able to do this for you.
Even where it can, the AI-generated responses will be wild-ass guesses. How will that help you? How will you spot the problems?
Why not look at or build a tool that will smartly do this / help you?
You’ll probably need multiple AI agents for this, one each specializing in, and trained on, performance, security, manual testing, etc.
You can do this with simple LLM systems such as ChatGPT, but it all depends on the training data you feed it. There isn’t going to be one specific AI system that will bluntly give you all the scenarios you want.
Part 2 is prompt engineering: you’ll need to be very specific in your request and not just write “give me all possible scenarios”, since that will only get you a few samples (even on paid versions). That’s why you only get “high-level test scenarios”.
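To illustrate the prompt-engineering point: the sketch below shows the difference between a vague one-liner and a prompt that forces coverage per scenario category. The category list and the prompt template are illustrative assumptions, not taken from any particular tool; the resulting string would then be sent to whichever LLM you use.

```python
# Illustrative sketch: a specific, structured prompt instead of
# "give me all possible scenarios". Categories and wording are
# assumptions for demonstration, not from any specific tool.

CATEGORIES = ["happy path", "anomaly", "boundary", "security", "performance"]

def build_test_case_prompt(requirement: str, categories=CATEGORIES,
                           per_category: int = 5) -> str:
    """Build a prompt that demands a fixed number of test cases
    per category, each with a defined structure."""
    lines = [
        "You are a senior test analyst.",
        "Requirement:",
        requirement,
        "",
        f"For EACH category below, write exactly {per_category} test cases.",
        "Each test case needs: title, preconditions, numbered steps, expected result.",
        "Categories:",
    ]
    lines += [f"- {c}" for c in categories]
    return "\n".join(lines)

prompt = build_test_case_prompt("A customer can book a car only if one is available.")
print(prompt)
```

Asking per category, with an explicit count and structure, is what keeps the model from stopping after a handful of samples.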
The combination of proper prompting & training your Agent will do the trick.
A tool isn’t going to solve your problem, they’ll all do the same because most of them are just wrappers around GPT.
This may not be exactly what you want, but AIO Tests (available as a Jira app) has a generative-AI feature that does something similar: it takes a work item, with however much information you like, and generates tests based on it.
I was playing around with this and with a fairly simple user story around sorting / filtering a table, it came up with fifteen different detailed test cases for me, all of which seemed valid.
I’m not sure if it would be the fastest method for achieving your goals, unless you are happy with the created tests being stored in AIO as test cases. It would also be limiting unless you paid, because you get a set amount of API usage of their AI on the free trial.
I don’t know of such tools on the market, as the main point here is the end result and its quality. The current GenAI toolset won’t produce good results, as it’s just simple generation based on limited context. To get really good results, the tool should build a vector database into which you can load all your tests, all your source code and, of course, the requirements. Then it will probably produce good results. We’re experimenting with this in testomat.io, and once we have good results I will show them here.
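The vector-database idea above boils down to retrieval: embed every chunk of tests/code/requirements, then fetch the chunks most similar to the topic at hand before asking the LLM to generate tests. The toy sketch below shows the mechanism with plain word-count vectors and cosine similarity instead of learned embeddings; it is not testomat.io’s implementation, and all names and sample data are made up.

```python
# Toy sketch of vector-database retrieval: rank requirement chunks by
# cosine similarity to a query. Real systems use learned embeddings;
# word-count vectors are used here only to show the mechanism.
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Crude 'embedding': a bag-of-words count vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_chunks(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks most similar to the query."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

chunks = [
    "Booking fails when no car is available for the chosen dates.",
    "The login page locks an account after five failed attempts.",
    "Rental prices include a discount for bookings of 7 days or more.",
]
print(top_chunks("car availability booking", chunks, k=1))
```

The retrieved chunks would then go into the LLM prompt as grounding context, which is what lets generation work from your actual requirements rather than from a generic guess.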
We tried it with standard requirements, for example testing border values in a simple field with a limited set of values (and at one point we added a value table, which the AI didn’t recognize at all).
It might very well be, that there are AI test generators that work perfectly well but I haven’t encountered them yet.
I did more research and found that test cases can be generated automatically, but their quality differs depending on the input. For someone without knowledge of the domain who needs to get working, this is a good starting point; someone who already knows the domain can also generate tests and validate whether the automatic generation suggests any additional test cases.
Sometimes requirement documents also contain flowcharts. To generate test cases from a flowchart, convert it into DOT format (for example using Graphviz):
digraph CarRental {
    node [shape=rectangle];

    Start [label="Start"];
    Choice [label="Input Customer Choice"];
    CheckAvailability [label="Check Availability"];
    Available [label="Car Available?"];
    DisplayInfo [label="Display Information"];
    BookCar [label="Book Car"];
    End [label="End"];
    NotAvailable [label="Display Not Available Message"];

    Start -> Choice;
    Choice -> CheckAvailability;
    CheckAvailability -> Available;
    Available -> DisplayInfo [label="Yes"];
    Available -> NotAvailable [label="No"];
    DisplayInfo -> BookCar;
    BookCar -> End;
    NotAvailable -> End;
}
Explanation of the DOT Code
Nodes: Each step in the flowchart is represented as a node. For instance, “Start,” “Input Customer Choice,” “Check Availability,” etc.
Edges: The arrows represent transitions between steps, such as Start -> Choice;.
Conditions: The “Available” node has two branches, one for “Yes” (car is available) and one for “No” (car is not available).
+--------------------+
| Start |
+--------------------+
|
v
+--------------------+
|Input Customer |
|Choice |
+--------------------+
|
v
+--------------------+
|Check Availability |
+--------------------+
|
v
+--------------------+
|Car Available? |
+--------------------+
/ \
/ \
v v
+-----------------+ +-----------------------+
|Display | |Display Not Available |
|Information | |Message |
+-----------------+ +-----------------------+
| |
v v
+-----------------+ +--------------------+
|Book Car | | End |
+-----------------+ +--------------------+
|
v
+--------------------+
| End |
+--------------------+
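Once the flowchart is in this shape, the test scenarios can be enumerated programmatically: each Start-to-End path through the graph is one candidate scenario (basic path coverage). The sketch below hard-codes the edge list to mirror the DOT graph above; parsing real DOT files would need a library such as pydot.

```python
# Sketch: derive test scenarios from the flowchart by enumerating every
# Start -> End path. The edge list mirrors the CarRental DOT graph above.

EDGES = {
    "Start": ["Choice"],
    "Choice": ["CheckAvailability"],
    "CheckAvailability": ["Available"],
    "Available": ["DisplayInfo", "NotAvailable"],  # "Yes" / "No" branches
    "DisplayInfo": ["BookCar"],
    "BookCar": ["End"],
    "NotAvailable": ["End"],
}

def all_paths(node: str = "Start", path=None) -> list[list[str]]:
    """Depth-first enumeration of all acyclic paths from node to End."""
    path = (path or []) + [node]
    if node == "End":
        return [path]
    paths = []
    for nxt in EDGES.get(node, []):
        if nxt not in path:  # guard against cycles
            paths += all_paths(nxt, path)
    return paths

for i, p in enumerate(all_paths(), 1):
    print(f"Scenario {i}: " + " -> ".join(p))
```

For this graph that yields two scenarios, one per branch of the availability decision; a larger flowchart with more decision nodes would yield one scenario per path, which is exactly the combinatorial growth mentioned earlier in the thread.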
If you’re using requirements documents, make sure they go into the level of detail that you require for testing. Otherwise you’ll have an AI just hallucinating what the details look like, and your test scenarios will be of no use. Perhaps split them out into deeper feature-level requirements if you have those.
Another suggestion: if you’re using Jira, there are good add-ons that take the full story as context and generate the test cases for it. Assuming your stories have a sufficient level of detail, the result will be way better than from a high-level document. Here’s an example of such an add-on: https://marketplace.atlassian.com/apps/1235008/ai-test-case-generator