To what extent do you feel current AI software testing tools meet your project's needs?

Are AI tools meeting your needs? Why, or why not?

  • Not at all
  • Partially
  • Mostly
  • Completely

Two main things I would need help with in my current project:

  • understanding and learning the backend functions, their limitations, workarounds, and bugs;
  • data manipulation, configuration, and customization of the product based on data.
    The product is private and highly sensitive; nothing can be shared with an AI.

In regard to LLMs:
I wonder what a tester would need a muse / word-prediction program for. :thinking:
Either for inspiration, or for creating common texts in a specific mood?

What would a tester do with generated images or sound? :thinking:

Other than that: what @ipstefan wrote.

I’ve not used it extensively for this, but one area that came up for discussion recently was who the models were learning from.

Even for something basic, like looking for test ideas: will it respond with the average, generic mainstream views because they are in abundance online, or will it be able to pick out the thought leaders in the field, those who consider things a bit more deeply or have found flaws in the mainstream ideas, and utilise their ideas?

In asking this, it is interesting that Ministry of Testing does come up as one of its named sources of information. That is a good sign, as there are lots of people there I’d put in that thought-leadership category.

When you ask about the thought-leader sources themselves, though, it gets a bit more vague.

I’m still a bit wary. LinkedIn, for example, has AI-generated questions on things like testing; a lot of them are nonsense questions to me, and a lot of the human responses are shallow, even absurd at times. It has got to the extent that those with the good ideas won’t contribute to nonsense questions.

Is it then learning only from the shallow responses?

Who controls the learning models, and how much manipulation could they wield?

Now, if you ask an AI how it mitigates these risks, you may get something like the following (a rough sketch of the first point follows after the list). So awareness of the risk is there, which is a good thing, but how well it is implemented remains an open question for me. That leaves me on the “Partially” vote for now.

  • Data Cleaning and Filtering: Techniques are used to identify and remove irrelevant or low-quality data before training.
  • Data Diversity: The training data is curated to be diverse and representative of real-world language use.
  • Evaluation and Monitoring: The model’s performance is constantly evaluated to detect and address biases or factual errors.
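
To make the first of those points concrete, here is a minimal sketch of what a pre-training data-cleaning pass could look like. This is purely illustrative Python under my own assumptions: the clean_corpus function, its word-count threshold, and the duplicate check are invented for this example, not how any actual model provider filters its training data.

```python
import re

def clean_corpus(texts: list[str], min_words: int = 20) -> list[str]:
    """Drop fragments and exact duplicates from candidate training texts.

    Hypothetical heuristics for illustration only; real pipelines use far
    more elaborate quality, toxicity, and deduplication filters.
    """
    seen: set[str] = set()
    kept: list[str] = []
    for text in texts:
        # Normalise whitespace and case so near-identical copies compare equal.
        normalized = re.sub(r"\s+", " ", text).strip().lower()
        if len(normalized.split()) < min_words:
            continue  # too short to carry much signal
        if normalized in seen:
            continue  # exact duplicate of something already kept
        seen.add(normalized)
        kept.append(text)
    return kept

if __name__ == "__main__":
    sample = [
        "click here to win",  # dropped: too short
        "Boundary value analysis focuses on the edges of input ranges, "
        "where experience suggests defects tend to cluster, so testers "
        "probe values just inside and outside each boundary.",
    ]
    print(clean_corpus(sample, min_words=10))  # only the longer text survives
```

The sketch also illustrates the shallow-response worry above: filters like these catch noise and duplicates, but nothing in them distinguishes a thought leader’s insight from an abundant mainstream take.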