Over my testing career I've moved through different roles/team structures.
Initially I was in a 100% manual tester role in a team with about 5x other manual testers and 1x automation tester.
Then, in that same team, I migrated to 20% automation / 80% manual. The remaining team members stayed 100% manual or 100% automation.
I then moved to a company that had no internal manual testers; they had a company contracted to do it whenever there was a release. There was also no automation testing of any kind.
At this company I was hired to be an automation tester but "help as needed for manual testing", which translated to spending six months fighting to get the time to even set up the most basic automation suite between chaotic release schedules.
Eventually it got to the point where all my time was 100% on automation, and a second internal tester would focus 90% on the manual testing and help with automation the other 10% (this in addition to the external manual testers being used as needed).
I'm now in a company where I was hired as a 100% automation tester with no expectation of manual testing. They have internal Subject Matter Experts (SMEs) who do the manual testing of new features prior to automation, plus the smoke tests that haven't been automated for business reasons. The SMEs sit in a team within the QA department, separate to the Technology department; however, 1x SME is assigned to our product team and they cycle them every 3 months.
In every single one of these different roles/teams there has been a mix of automation testing, exploratory testing, manual test cases, and sanity/smoke/deployment checklists.
However, the manual test cases have had different levels of detail between the teams, based on the needs and the people using them.
In the first company we had a lot of manual testers who were long-term staff (10+ years), so they knew the business well. The test cases helped to plan expected testing workload. They also allowed us to have them reviewed by another manual tester prior to execution, which helped catch edge cases or prevent testing things that didn't need to be tested.
These test cases didn't have detailed test steps. They were a headline that summarised the scenario. They contained any information a tester might need to know about preconditions, plus links to requirements/business rules/process documents that could be referenced if someone didn't know the feature/functionality.
These test cases were also used as the guide for the automation testers to know what scenarios might need to be automated. Often the automation tester was new to the business, so this meant they weren't starting from scratch deciding what scenarios needed to be covered. We would go through our manual scenarios with them and together decide which ones were worth the ROI to automate and which were only candidates to be automated later.
Meanwhile, at the second company, because we were working with an external contracting agency for our manual testers and had very tight deadlines, our manual test cases were very different.
They were more detailed, step-by-step instructions, with a lot more information included. We often had to include screenshots of the things testers had to interact with in each test step, along with the expected results.
This was made worse by the fact it was a medtech company that dealt with receiving CT scans, generating a 3D model of the heart's vessels, and then reporting a lot of information across a lot of screens. It was a diagnostic tool whose users would be radiologists and cardiologists. So it was also complex trying to explain to an external tester with no background that they had to find and scroll down the Left Main artery in one of the 10x views until they reached a slice of the scan with a section of stenosis and plaque, then check that about 20 fields display or don't display certain things.
My current company does have manual test cases, but they are used more for auditing purposes by the QA team, to validate that something has been tested. I've looked at them occasionally when searching for info on historic features I'm trying to automate, and often they don't include any real information. This does seem to depend on who creates them.
Often it will be something like…
Test Case name = User login
Test step = Tests that a user can log in
Expected results = user logged in
If you get a good one, they might have a test step with a list of different user types/scenarios.
And this will be the 1x test case for the entire login/logout/auth feature.
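To illustrate the gap, here is a hedged sketch of how that vague "User login" test case could be expanded into the explicit scenario list an automation tester actually needs. The `login()` stub and the credentials are hypothetical stand-ins for the real application; the point is the scenario table, not the implementation.

```python
# Hypothetical stand-in for the real app's authentication.
VALID_USERS = {"admin": "s3cret", "viewer": "pa55word"}

def login(username, password):
    """Return True if the credentials match a known user (stub)."""
    return VALID_USERS.get(username) == password

# The single vague test case ("user logged in") hides all of these
# distinct scenarios: (description, username, password, expected).
scenarios = [
    ("valid admin login",  "admin",  "s3cret",    True),
    ("valid viewer login", "viewer", "pa55word",  True),
    ("wrong password",     "admin",  "wrong",     False),
    ("unknown user",       "ghost",  "s3cret",    False),
    ("empty credentials",  "",       "",          False),
]

for name, user, pwd, expected in scenarios:
    assert login(user, pwd) == expected, name
print("all login scenarios passed")
```

Written this way, each row doubles as both a reviewable manual scenario and a candidate for automation, which is closer to the headline-style cases from the first company than to a one-line "user logged in".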