I’m working with the continuous testing model that Dan Ashby has popularised.
Within a sprint (and DevOps) model, there are obviously lots of activities that testing can get involved with, including (but not limited to):
- Carryover from the previous sprint (technical debt, for example)
- Estimating new items
- Investigating/testing user stories and acceptance criteria as part of planning, Three Amigos style
- If working with ATDD/BDD, setting up scenarios in parallel with development to achieve in-sprint automation.
- Reviewing any documentation/designs for inconsistencies with user stories and acceptance criteria (and anything else that stands out).
- Adding to test plan/strategy documents (I’m thinking of the HTSM-style mind maps here)
- Identifying sessions to perform via Session-Based Test Management.
- Identifying new automation that needs to be done (beyond the acceptance test level).
- Actually automating new scenarios.
- Performing exploratory sessions.
- Investigating the output of existing automation as code is developed and deployed through CI/CD.
- Raising/retesting bugs.
- Monitoring items already in production, alerts, etc.
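To make the ATDD/BDD bullet concrete, here’s a minimal sketch of an acceptance criterion captured as an executable scenario alongside the feature code being developed in-sprint. Everything here (the function name, the discount rule, the scenario wording) is invented purely for illustration:

```python
def apply_discount(total, code):
    """Feature under development this sprint: apply a promo code to an order total."""
    if code == "SPRING10":
        return round(total * 0.9, 2)
    return total

# Scenario: Given a valid code, when it is applied, then 10% is deducted.
def test_valid_discount_code_reduces_total():
    assert apply_discount(100.0, "SPRING10") == 90.0

# Scenario: Given an unknown code, when it is applied, then the total is unchanged.
def test_unknown_code_leaves_total_unchanged():
    assert apply_discount(100.0, "BOGUS") == 100.0

test_valid_discount_code_reduces_total()
test_unknown_code_leaves_total_unchanged()
```

The point is that the scenarios can be written (and agreed in a Three Amigos session) before or alongside the implementation, so the automation lands in the same sprint as the feature.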
A sprint may be 2–4 weeks (though I’m seeing more teams tending towards two nowadays). How do you split your time across the above activities? How do you decide how long to spend automating versus performing exploratory sessions?
In the continuous testing model, every role gets involved with testing, but there may still be only one “tester” on the team, so I’m curious how that tester spends their time in order to actually test new features and changes whilst continuing to develop and expand the existing automation.
I look forward to your thoughts!