I’m QA for an app that’s expanding quickly, and few parts of it are staying stable. I’m the only QA for multiple devs who are all working on separate parts of the app. I have a list of test scenarios that’s about 70 long and still growing. Currently, when I write test cases it feels like busy work that will need to change soon anyway…
But I want a medium for discussing what needs to be automated, and the like, with the devs and BA.
Any advice from more experienced QAs?
Thanks for sharing. One thing that comes to mind is the power of collaborating on writing & reviewing acceptance criteria together with people who wear different “hats”. A way to start a good conversation about what’s important and what might not be so important – as a way to feed into what scenarios you (and others) write and implement.
@callum shares an excellent article on this topic, if it’s not on your radar.
If you’re the main one consuming the test cases, I’d look at figuring out the lightest format you can use for writing them. i.e. if they’re not meant to be a script for anyone else to follow, then maybe mind maps or high-level outlines are sufficient for showing the general shape of the tests you’re executing and the behaviors you’re verifying.
To help explain my context a bit, I have not written down a test case in a repository of any sort in over 10 years unless I was already writing automation scripts directly.
I generally work at the test idea level, or what some people call charters. These can take the discussion up a layer, but they too often quickly become redundant.
Still, that level may be worth working at. Some people find a mind map covering features and risks a good discussion tool, and you can often quickly flag areas that make sense to automate as part of those discussions. I noted ernie mentioned similar points here.
Why are you writing test cases? Once you’ve answered that, decide whether there are alternatives that don’t quickly become redundant.
Sometimes test cases are the right thing. For some complex banking apps, for example, I’ve needed complex spreadsheets just to get some sort of oracle that a 3-year performance calculation on a bond was roughly correct, but those are rare cases.
Your test scenario level might also suffice, without needing to go into detail.
Definitely like this approach of not “capturing” test cases. The time wasted updating them is a huge process friction in itself. It also means, if the app is changing rapidly, that maybe the business is shifting its goals rapidly, and by writing down test cases you find yourself focusing more on the shift and less on the final goal.
I’m a firm believer in knowing which app “behaviors” should never change, and which ones might change often. The latter will fall into a bucket of things like “business logic” etc., while things that won’t change much will be around areas like security (both kinds), and general acceptance criteria. I tend to only write down detailed test cases for things that are unlikely to change. Eventually a pattern will emerge for what counts as things that don’t or should not change often.
Hi)) I’m pretty new here and to QA in general. But I’ve noticed that even though I want to keep doing manual testing, I make mistakes in tests that would basically be better off automated. I talked to my team lead and developers and they said, ‘Yeah, man. Automation will save you time.’
So we’ve been partially automating our testing for almost 3 months, and to tell the truth, it is a time saver. I also re-use test cases, so my list doesn’t grow exponentially (I found this option in the tool we use, aqua cloud ALM). Right now the developers are considering hooking up Jira and Jenkins, so that first of all we can collect the results from all the integration and other test tools in one place.
I think it would be great to discuss the possibility of automation with your team first. A tool that allows re-using test cases and scenarios, plus a couple of integrations, would also be pretty useful for automation.
I’ve noticed that since we did this, I make fewer mistakes, I can see which tests are already automated and which ones I can automate later, I do my work faster, and the developers can also see the results.
You could use something like a feature map: rows for all the different areas, a column for test automation, and a column of notes on how to test. It would be quicker than step-by-step test cases.
Scroll down to the bottom of the blog for the example table of a feature map: Stop writing step by step test cases! | by Melissa Fisher | Medium
People who have not read ^^ this blog ^^ by Melissa need to do so. #JustSaying
While I encourage my team to write test cases (more as proof of testing than anything), I’m happy if we get the “happy path” covered, as that will drive automation, plus any high-risk scenarios. Ideally I’d like to see consideration of edge cases (in particular those that could cause a major issue). We document in the story for some projects, in Word docs for others, and use Zephyr for others. It’s whatever fits best, tbh.
The article was very useful.
You can automate the major functionalities and use a data-driven approach. With a data-driven approach, you take the input data from a table, so if the devs change the functionality you just update the data in the table and your code automatically picks up the new values.
For a data-driven approach you can use many tools, e.g. Mabl, TestProject, Cypress, Selenium, etc.
Of all of these, I think TestGear is the best. It’s a low-code platform, so you can automate with basic coding knowledge, and it supports the data-driven approach well. I’ve used TestGear myself and found it really useful.
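To make the data-driven idea concrete, here’s a minimal sketch in Python. Everything in it is made up for illustration: `discount_price` stands in for whatever app logic you’re testing, and the inline CSV string stands in for an external table or spreadsheet. The point is that when the devs change the business rules, you edit the data table, not the test code.

```python
import csv
import io

# Stand-in for real app logic under test (hypothetical function).
def discount_price(price, percent):
    return round(price * (1 - percent / 100), 2)

# The "table": in practice this would be an external CSV file or
# a sheet maintained alongside the test suite.
CSV_DATA = """price,percent,expected
100,10,90.0
250,20,200.0
99.99,0,99.99
"""

def load_cases():
    # Parse each row of the table into (input, input, expected) tuples.
    return [
        (float(r["price"]), float(r["percent"]), float(r["expected"]))
        for r in csv.DictReader(io.StringIO(CSV_DATA))
    ]

def test_discounts():
    # One generic test drives every row of the table.
    for price, percent, expected in load_cases():
        assert discount_price(price, percent) == expected

test_discounts()
print("all cases passed")
```

With a runner like pytest you’d typically feed `load_cases()` into `@pytest.mark.parametrize` instead of the loop, so each row reports as its own test, but the shape of the approach is the same.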