What are we not talking about enough?

What does fundamental good testing look like? If we had a list of topics to cover, what would it include?

1 Like

Are there any good resources for learning security testing, and how can we leverage AI for better security testing? I'm also curious about the trend of hiring people who specialise in security and performance testing. I don't see recruiters hiring seasoned testers who have less experience in performance and security testing, so what could enable them to get past that barrier?

1 Like

I’ve written a post about it a while back, though not about using AI in it; I believe that part is a matter of self-discovery. Once you dig into a given topic, you’ll see for yourself where AI is useful and where it isn’t.

Personally, I’ve been using AI to do OSINT better, and sometimes I still use ChatGPT as a rubber duck.

1 Like

Thank you so much! That is really informative.

1 Like

I agree with Caleb on that point: we need to talk about software testing fundamentals.

We have over three to four decades in this profession, and it doesn’t seem to evolve. If anything, it’s getting worse as time passes, with fewer leading professionals trying to pave a good way forward.

How about some of these topics to start some ideas going:
Ethics, sociology or testing as a social science, psychology (e.g. the Johari window), metacognition, biases, reasoning, meaning, semantics, ontology, fallacies, principles of engagement where there’s disagreement, the scientific model, confirmation vs refutation, creativity, lateral thinking, critical thinking, uncertainty and complexity
Or:
The role of a tester, the meaning of the testing profession, the basics of testing, non-conformity of a tester or test approach, failed testing projects, mathematical testing, problem-solving
Or:
curiosity, distractibility, disobedience, silliness, accountability, honesty, integrity.
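
To make one of those fundamentals, "confirmation vs refutation", concrete: a confirmatory test checks a handpicked example we expect to pass, while a refutation-oriented test actively hunts for a counterexample. A minimal Python sketch (the function names and the deliberately buggy `dedup_sort` are my own illustration, not anything from the thread):

```python
import random

def dedup_sort(xs):
    # Deliberately buggy "sort": silently drops duplicates
    return sorted(set(xs))

# Confirmation: one handpicked example that happens to pass,
# giving false confidence in the implementation.
assert dedup_sort([3, 1, 2]) == [1, 2, 3]

# Refutation: generate many random inputs and try to falsify the
# claim that dedup_sort behaves like a real sort.
def find_counterexample(trials=1000):
    random.seed(0)  # reproducible search
    for _ in range(trials):
        xs = [random.randint(0, 5) for _ in range(random.randint(0, 8))]
        if dedup_sort(xs) != sorted(xs):
            return xs  # falsified: duplicates were lost
    return None

counterexample = find_counterexample()
```

The confirmatory assertion passes, yet the random search quickly turns up a list with duplicates that exposes the bug; that asymmetry is the whole point of testing to refute rather than to confirm.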

And it does require having real, hard conversations and debates: putting things under scrutiny, being uncomfortable, being wrong a lot, and learning.

4 Likes

Butter is great for discussion sessions and for capturing what is shared, not just through transcripts and AI summaries; it could be designed to capture information through polls too.

I wonder if this is something we can explore, perhaps covering one keyword/term per session and then publishing something at the end of it.

We have a software testing glossary at MoT, but I want to completely revamp it. Maybe we could do that through these sessions?