Panel Discussion: The future of test cases

Our first TestBash Home panel is hosted by the fantastic @tristan.lombard, who will lead a discussion on test cases :star_struck:

Tristan will be joined by @kimberley and @bedfordwest to discuss the future of test cases and answer your questions.

If we don't get to your questions on the night, we'll add them to the thread below for our panellists to answer later. If you'd like to continue the conversation from the webinar, this thread is an excellent place to do that :grin: Share resources and follow-up success stories from your learnings here!


Questions we didn't get to:

  1. Lilla Kovacs - Which is the best stage of the software development process to start writing test cases?
  2. Michael Sturt-Joy - Do test cases not offer a key entry point for non-coding testers to define what tests should be planned for automation, UI, or unit?
  3. Lilla Kovacs - How do you deal with management that mostly just cares about attaching a report of the regression test and doesn't care about improving the process?
  4. Kevin Perren - What resources do you recommend to learn more about a "test-case-less" software development lifecycle? Books? Videos? Talks?
  5. Becca Batchelor - Do you see a new type of test case coming up that's not Gherkin or "steps" based, maybe a hybrid or something totally different?
  6. Zoltan Ertekes - What are the best tools to manage test cases besides Gherkin?
  7. Zoltan Ertekes - Is it good practice to store test cases in the code?
  8. J H - What role does exploratory testing play in your testing? Do you find yourself leaning on it more as you step away from test cases?
  9. Julia Shonka - How would you know when testing is "DONE" in cases where the risk is extremely high? Like nuclear power, hospitals, the FAA, NASA?
  10. Paul Naranja - What tangible comparisons can you use to convince management to look at more meaningful metrics besides metrics based on test cases?

Resources


I'll take a stab at a couple:

@lillaqa

I'm inclined to say "none" but I'll follow that up with a better answer :slight_smile:. While I don't think that writing test cases, per se, adds much value, I strongly believe that we as testers can add value at every stage of the software development process.

For example, in requirements discussions, we can add value by looking for holes, bad assumptions, ambiguities, unexpected interactions between requirements, and asking many other questions both to enhance our own understanding of what's being planned, and to help shake out problems early, before they make it into the actual product.

If there are periods in between projects, or the development team is doing work on things like research or basic feasibility, we can sharpen our skills by doing our own testing research, trying out new ideas, identifying lessons to learn from previous projects, patterns of defects, etc. We might even explore the team's early software prototypes and give feedback on things like testability.

If we're focused on finding problems rather than generating test artifacts, I think it becomes much easier to find ways to contribute to the success of a project, regardless of what stage of the project we find ourselves in.

@msj

I would say no. To use a similar analogy, would we want non-coding developers to design the product code? You really want the people skilled in that area to do both the design and implementation, because if you just transliterate how someone would interact with the software into an "automated" test, you're probably going to miss a lot of what you could gain from a computer-driven test (e.g. randomization, permutations, speed, reliability, maintainability, and other such considerations). In terms of documenting various ideas or scenarios to test, which the test team might decide to test in different ways, including using code, I think there are some pretty lightweight options that are more effective than traditional test cases. Mind maps and test charters are some good examples.
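To make that concrete, here's a minimal sketch of the kind of thing a computer-driven test can do that a transliterated manual script can't: cheaply exercise many randomized permutations of the same input. The `sort_names` function is a hypothetical stand-in, not anything from the panel.

```python
import random
import unittest

def sort_names(names):
    # Hypothetical production function under test.
    return sorted(names, key=str.lower)

class SortNamesTest(unittest.TestCase):
    def test_random_permutations(self):
        # A manual script would check one hand-picked list; a computer-driven
        # test can cheaply check many random permutations of the same data.
        base = ["Ada", "grace", "Margaret", "hedy"]
        expected = sort_names(base)
        for seed in range(100):
            rng = random.Random(seed)   # seeded, so any failure reproduces
            shuffled = base[:]
            rng.shuffle(shuffled)
            self.assertEqual(sort_names(shuffled), expected)

if __name__ == "__main__":
    unittest.main()
```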


@lillaqa

I would suggest doing the minimum necessary to create the report management is expecting, but to then do some additional testing using your own ideas for improvement. You can then present it as "here's what we found doing our traditional 'regression' testing, but we tried using these other techniques to focus on risk, and here's what we learned, problems we found, etc." I would basically look for ways to demonstrate that other approaches are more effective and potentially less effort. If you can demonstrate that there's a better way and that you're adding value, I think it's more likely management will come around than if it's just an abstract idea to change things.


@kperren

In addition to the ones mentioned, here are a few more:

  1. @lillaqa - It depends. Unfortunately that is a word I use frequently around questions that might be looking for a prescriptive answer. The reason I say this is:
  • what kind of product are we testing - highly regulated / loss of life is a risk / or it's an app for buying black tee-shirts. Different approaches for different products.
    To generalise, I would recommend starting early, including business & tech input, then potentially fleshing them out more when you get a better handle on the requirements.
  2. @msj - Non-coding testers, I'm thinking, would be technical system experts, maybe doing UI/UX/DB/API testing - everything but actually writing code. Maybe they review unit tests, but I don't think I've ever had input into the tests developers write. Other testers may have a different experience. UI testing is usually linked directly to a design or mockup (if you have access to a designer, or you do the mockup yourself), but picks for automation are definitely something the team might want to discuss, fleshing out the requirements together.

  3. @lillaqa - Business, aka management, generally cares about $$ & customer satisfaction, plus sometimes the product itself. If you are able to identify gaps in the process and equate them to $$, they will listen. Miro has a template called "Gap Analysis" - Current State / Future State / Gaps / Remedies. If you can equate the gaps to $$ spent and the remedies to $$ saved, I guarantee you will have a listening ear. If you do this, let me know how you got along.

  4. @kperren - Good question! There are a few people out there talking about exploratory testing rather than a scenario-based test case situation; I first heard Maaret Pyhäjärvi talk about this. She's very passionate about the approach. Honestly, the reason I feel so strongly about it comes from the last few years of working a number of different gigs and experiencing different problems with different POs/teams/approaches. Hope this helps - maybe one day I will write a book on the "hard learning curve of testing when you know nothing". That was me years ago.

  5. @beccabatchelor - Yes I do… I see us, aka the testing community, maturing to a more flex-up/flex-down approach, changing our ways to suit the kind of product/requirements at hand. One size does not fit all, and we need to learn to be principled testers instead of prescriptive ones (as long as our product doesn't legally require us to be prescriptive). Does this make sense? I'm going to start writing some articles around this approach, which I believe will take testing to a new level, including widening our talent into other fields like BA (which I currently do - BA/QA). Watch my Twitter space @C2KimN.

  6. @zoltan.ertekes - The best tool is the one that suits your needs now and that you can use successfully to scale up. I'm currently investigating a few tools for a POC, and on our hit list now are the following:
    Zephyr / TestPad / maybe TestRail. But the tool needs to be fit for purpose, something along these lines. It's a start anyway.
  • Organise large test case libraries across multiple platforms / products
  • All test cases are reusable across all projects
  • Ability to create a test cycle that matches with the development sprint
  • Track testing via dashboards
  • Test cycle reporting (E2E & Cross Project)
  • Version control
  • API Integration
  • CI - Automation Regression
  7. @zoltan.ertekes - Yes, I love that idea - the code as the source of truth kind of thing - BUT not everyone can read it. I'm wondering how we could use this for UAT? Good question, I might need to ponder this further (a quick sketch of what it could look like is below). Stay tuned :slight_smile:
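As a rough sketch of what test cases living in the code can look like while still giving non-coders something readable, here's a hypothetical parametrized pytest example; the discount function and the cases themselves are made up for illustration:

```python
import pytest

# Hypothetical system under test.
def apply_discount(total, code):
    return round(total * 0.9, 2) if code == "SAVE10" else total

# The test cases live in the repo as data: values plus a plain-English id.
CASES = [
    pytest.param(100.0, "SAVE10", 90.0,
                 id="a valid SAVE10 code takes 10 percent off"),
    pytest.param(100.0, "BOGUS", 100.0,
                 id="an unknown code leaves the total unchanged"),
]

@pytest.mark.parametrize("total, code, expected", CASES)
def test_apply_discount(total, code, expected):
    assert apply_discount(total, code) == expected

# `pytest --collect-only -q` prints the human-readable ids, which can be
# handed to UAT folks who don't read the code itself.
```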

  8. @jh - Yes and no. I have always done exploratory testing, even before I knew what it was called, as it was a way for me to understand the system interactions. I tend to do targeted exploratory testing against the business requirements, but this has its downside in that someone cannot walk in tomorrow and take over my testing. I'm required to become a specialist in the systems / products / business logic/rules and the business's future roadmap (so we always stay aligned). This is not an easy role, and each new gig has an extremely steep learning curve (not for the faint-hearted :grimacing:)
    But it definitely makes life more interesting, and it's kind of weird - you start becoming one with the systems. It's a very zen moment when you're testing.
    The "no" part of the answer comes back to running UAT, where the SMEs are more comfortable with a designed test case approach, which I don't mind either. In this way they are able to tick the boxes for them, and we (aka the dev team) can tick the boxes in a format that suits us.

  9. @dfsqe - High risk = different approach. I would always have an agreed DoD owned by the dev team & the business. Within that DoD would be the minimum the business would accept, e.g. no severity 1, 2, or 3 defects, while severity 4 & 5 defects are all backlogged with a prioritisation already attached, or maybe assigned to a specific future sprint. At this level I would have test scenarios that link to each AC and can be mapped back (a sketch of that mapping is below), and I would be using some sort of test management tool. Automation would potentially have its own team, but they would need to work very closely with the dev team - ideally have an automation picks session with everyone for future work.
    I think I wrote previously that the approach the team chooses towards testing needs to be suitable for the product you're developing. I wouldn't write test scenarios, pick tools, etc. the same way for, say, an app where you can purchase dog toys. Everything is relative to the particular industry your team is developing in.
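For the scenario-to-AC mapping mentioned above, here's one hedged sketch of how tests might be tagged so a traceability report can be generated; the marker name, test names, and AC IDs are all hypothetical:

```python
import pytest

# Hypothetical traceability scheme: tag each test with the acceptance
# criteria (ACs) it covers, so scenarios can be mapped back to the DoD.
covers = pytest.mark.covers  # custom marker; register it in pytest.ini

@covers("AC-12")
def test_severity_1_defect_blocks_the_release():
    assert True  # placeholder body

@covers("AC-12", "AC-15")
def test_severity_4_defect_is_backlogged_with_a_priority():
    assert True  # placeholder body

# `pytest --collect-only` plus a small conftest.py hook reading the marker
# args can then emit a scenario-to-AC matrix for the test management tool.
```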

Last and final question:
10. @Paul Naranja - My question back is: what problem are you trying to solve? Do you need more testers, better tools, or maybe an IM/scrum master? Each one will have a different approach.
Say it's tools, because we have literally nothing and I write things in Excel, but we have Atlassian products like JIRA & Confluence. I have heard a CEO say "well, you have those tools, that should be enough, because they cost us XYZ". He/she/they made an excellent point: it does cost money, and that's where we start. Try something like creating a "cost analysis report". What is that? It's the process of reporting several cost elements, for example: current tools cost $500.00; team cost to implement testing with Excel, say 2 testers @ $250 per day (think contract billing rates, even though we don't actually get paid this). Now note how much time you spend testing & documenting things in Excel (is it easy, can everyone access it, can everyone work on it at the same time, have any mistakes happened due to the tool's limits?) and do this over say 2-3 sprints. Gather your data. Then have a look online and see if there are any free tools that might be fit for purpose, or download one with a trial period.
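To make the arithmetic concrete, here's a minimal sketch of that cost comparison. The tool cost and day rate come from the example above; the effort figures and candidate-tool cost are placeholders you'd replace with what you actually measure over the 2-3 sprints:

```python
# Hypothetical cost comparison for the "cost analysis report".
CURRENT_TOOL_COST = 500.00   # existing tooling spend (from the example above)
TESTER_DAY_RATE = 250.00     # per tester, contract-style billing
TESTERS = 2

days_in_excel = 6.0          # measured: testing & documenting via Excel (placeholder)
days_with_new_tool = 2.0     # estimated from the trial period (placeholder)
new_tool_cost = 300.00       # trial/licence quote (placeholder)

current_state = CURRENT_TOOL_COST + TESTERS * TESTER_DAY_RATE * days_in_excel
future_state = new_tool_cost + TESTERS * TESTER_DAY_RATE * days_with_new_tool

print(f"Current state: ${current_state:,.2f}")
print(f"Future state:  ${future_state:,.2f}")
print(f"Gap (saving):  ${current_state - future_state:,.2f}")
```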
Here's the kicker - how much do you want this tool? Because for the first one you might need to do the work outside of normal hours. I've done a lot of this in my time too, and I'm thinking I'm not alone in this. You have a vision and you want to get it done. By the way, this looks fantastic on a resume.
Gather all your data, then present it to your audience. Make sure you have everything prior to the meeting, and send out a PDF or something to your audience so they're in the right receptive headspace to hear your ideas.
I hope this helps, and if you kick this off, please let me know the outcome. @C2KimN

Well, I hope I have answered everyone's questions satisfactorily. If anyone has others, please feel free to reach out on Twitter or LinkedIn (if it's LinkedIn, just FYI me, because I don't always accept connects).

Ciao and happy testing

  1. Is non-scripting just a thing limited to testing actual hardware? I mean, in an environment where you might release every 2 weeks, even the manual steps to do that release effectively become scripted at some level. I was watching the Griffin Jones video and the whole time thinking about the "moonwalking bear". Sleepwalking testing. Are we saying that scripts make some people sleepwalk? And are we saying that scripts allow us to run tests using people who are not experts at the product domain/art, and who are thus more likely not to notice the moonwalking bear either?

When someone says "test case", my mind is really saying "Can I rather automate this?" And also "What are the known constraints, versus our agreed business constraints?"

Why I ask is because, as a coder turned tester, I have had to learn a lot of test technique. But like every coder who is not an expert at every component and domain, sometimes you just have to expand your team's skills by having them do unfamiliar tasks - tasks where you need to give them more context than usual, specifically when it comes to implicit things about the intent of a test. If we are not explicit, we run the danger that the tester just says "I know how to do this, it's just a few lines on the console here", and then continues. Maybe it's natural to them to always run all commands with sudo, when that's not what we expect - and thus they break things, because a step they took in a console that was logical to them might not have matched the intent. Are we saying the level of detail needs to be aimed at a defined audience, rather than saying "don't script"? I'm also aware that coders will move a button or a step often, and that the environment we work in forces us to change. Accessibility improvements, 3rd-party integration changes that impact us, and operating system changes all mean that written scripts are a waste. Time we spend documenting tests is always time we steal from actually executing tests, especially time lost at a point in the project that matters!
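One hedged way to handle the sudo example specifically is to make the intent executable rather than documented: have the test assert its own assumption up front. This is just a sketch for POSIX systems, and the scenario is hypothetical:

```python
import os
import unittest

class InstallStepTest(unittest.TestCase):
    def test_install_runs_without_elevated_privileges(self):
        # Make the implicit intent explicit: this scenario is about an
        # unprivileged user, so fail fast if someone runs it via sudo/root.
        self.assertNotEqual(
            os.geteuid(), 0,
            "Run this test as a normal user, not via sudo",
        )
        # ... the actual install steps under test would follow here ...

if __name__ == "__main__":
    unittest.main()
```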

  2. For me, the script is most helpful when any user task can possibly be completed in more than one way, and we don't script at a level of detail that will tell me which route is valuable. Then there's the cost of executing all 3 different ways (or, in my case, often 3 or 4 different environments) versus having something that tells me which route we used in the last release, so that I remember to use a different route for the next release in order to save time. Am I being too detailed?
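That bookkeeping can be tiny. Here's a hedged sketch of deterministic route rotation across releases; the route names are placeholders:

```python
# Hypothetical route rotation: with N equivalent ways to complete a task,
# rotate through them across releases instead of re-running every route.
ROUTES = ["menu bar", "keyboard shortcut", "context menu"]  # placeholders

def route_for_release(release_number):
    # Deterministic: release 7 with 3 routes -> index 1 -> keyboard shortcut.
    return ROUTES[release_number % len(ROUTES)]

for release in range(6, 10):
    print(f"release {release}: use the {route_for_release(release)}")
```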