I'll take a stab at a couple:
@lillaqa
I'm inclined to say "none," but I'll follow that up with a better answer. While I don't think that writing test cases, per se, adds much value, I strongly believe that we as testers can add value at every stage of the software development process.
For example, in requirements discussions, we can add value by looking for holes, bad assumptions, ambiguities, unexpected interactions between requirements, and asking many other questions both to enhance our own understanding of what's being planned, and to help shake out problems early, before they make it into the actual product.
If there are periods in between projects, or the development team is doing work on things like research or basic feasibility, we can sharpen our skills by doing our own testing research, trying out new ideas, identifying lessons to learn from previous projects, patterns of defects, etc. We might even explore the team's early software prototypes and give feedback on things like testability.
If we're focused on finding problems rather than generating test artifacts, I think it becomes much easier to find ways to contribute to the success of a project, regardless of what stage of the project we find ourselves in.
@msj
I would say no. To use a similar analogy, would we want non-coding developers to design the product code? You really want the people skilled in that area to do both the design and implementation, because if you just transliterate how someone would interact with the software into an "automated" test, you're probably going to miss a lot of what you could gain from a computer-driven test (e.g. randomization, permutations, speed, reliability, maintainability, and other such considerations). In terms of documenting various ideas or scenarios to test, which the test team might decide to test in different ways, including using code, I think there are some pretty lightweight options that are more effective than traditional test cases. Mind maps and test charters are some good examples.
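To illustrate what "randomization" and "permutations" can buy you over a hand-transliterated scenario, here's a minimal sketch in Python. The `dedupe` function and the invariants checked are hypothetical stand-ins, not from the original discussion; the point is that a computer-driven test can sweep input orderings and random inputs at a scale no manual scenario list would attempt.

```python
import itertools
import random

def dedupe(items):
    # Hypothetical function under test: remove duplicates,
    # preserving first-seen order.
    seen = set()
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

def test_dedupe_permutations():
    # Exhaustively check every ordering of a small input:
    # tedious to write by hand, trivial for a computer.
    for perm in itertools.permutations([1, 2, 2, 3]):
        result = dedupe(perm)
        assert len(result) == len(set(result))   # no duplicates remain
        assert set(result) == set(perm)          # nothing lost

def test_dedupe_randomized():
    # Randomized inputs cover cases a hand-written scenario
    # would never enumerate; a fixed seed keeps failures reproducible.
    rng = random.Random(42)
    for _ in range(1000):
        items = [rng.randint(0, 9) for _ in range(rng.randint(0, 20))]
        result = dedupe(items)
        assert len(result) == len(set(result))
        assert set(result) == set(items)

test_dedupe_permutations()
test_dedupe_randomized()
```

A tester who can't write this kind of loop would likely encode one or two concrete examples instead, which is exactly the gap the analogy above is pointing at.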