Curious: Is there a better way to capture steps during manual testing?

Hi everyone,

I’m trying to understand something about exploratory testing workflows, and I would appreciate some perspective from people with more QA experience than me.

Lately, I’ve been noticing how much time gets spent capturing steps, screenshots, network info, etc. during manual testing. I started wondering:

How do experienced testers handle this efficiently?
Do you rely on tools, shortcuts, browser extensions, or just good habits?

This led me down a rabbit hole where I experimented with a small script that observes interactions during a session (clicks, navigation, delays, etc.). It’s very rough, and I’m not even sure if I’m approaching the problem the right way.
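To make the idea concrete, here's a minimal sketch of the kind of session log I mean, in plain Python with no real input hooks wired up (the class and field names are my own invention, not from any existing tool):

```python
import time
from dataclasses import dataclass, field

@dataclass
class SessionEvent:
    kind: str         # e.g. "click", "navigate"
    target: str       # element selector or URL the action touched
    timestamp: float  # seconds since session start

@dataclass
class SessionLog:
    started_at: float = field(default_factory=time.monotonic)
    events: list = field(default_factory=list)

    def record(self, kind: str, target: str) -> None:
        """Append one observed interaction with a session-relative timestamp."""
        self.events.append(
            SessionEvent(kind, target, time.monotonic() - self.started_at)
        )

    def delays(self) -> list:
        """Gaps between consecutive events; long pauses often mark
        the moments where a tester stopped to investigate something."""
        ts = [e.timestamp for e in self.events]
        return [b - a for a, b in zip(ts, ts[1:])]

    def summary(self) -> dict:
        """Count of events per kind, e.g. {"click": 5, "navigate": 2}."""
        counts = {}
        for e in self.events:
            counts[e.kind] = counts.get(e.kind, 0) + 1
        return counts
```

In a real version, `record` would be called from whatever hook observes the browser or UI; here it's just a data structure you could replay after a session to answer "what did I actually do, and in what order?"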

Before I go any further with it, I wanted to ask the community:

  • Is “documenting as you test” actually a major pain point, or just part of the job?

  • Are there tools you already rely on that solve this well?

  • What should or shouldn’t be captured during exploratory testing?

  • Are there privacy or accuracy concerns I might not be considering?

  • If you’ve ever tried tools that auto-record or observe sessions, what worked or didn’t?

I genuinely don’t know if this is a meaningful problem or a misunderstanding on my part, and I’d love to learn from people with real QA experience.

Thanks in advance to anyone willing to share how they think about this.


What is the intent behind recording your steps?

I don’t record every step along the way, but it is important to note what tests/experiments I’ve performed, so that I can discuss if I feel like I’m missing something, or, when a bug escapes, see exactly what I did. Quite often I’ll include screenshots just to “prove” that I’ve seen the expected behaviour, and I’ve found it useful to note down the test data used, such as the name of a user account I created, and any interesting observations (especially weird log messages).

There are definitely times when I’ll either be writing down my steps or recording myself: for example, when getting a reproduction of a bug, or when it is REALLY important that I know what I did and when. However, these detailed notes / longer recordings aren’t something I intend for others to read. They’re my own notes / recordings, to help myself. If I want to share something, I’ll usually re-write or re-record to get something nice and focused.

It is worth calling out that to me there is a difference between the notes that I may take along the way and what goes in my test report.


Thanks! Spot on, except on special tests where the bug is very subtle and not easily seen.
I have a tool that does something like that, so if I miss anything, I can always go back and check.

I usually just run a screen recording with the snipping tool (in Windows) and talk through what I’m doing, what I clicked, what I’m trying to achieve, and any observations along the way. I’ll pause the recording to jot down rough notes about what I’m doing, but only enough to avoid doing the same things twice. If something odd happens, I can replay the video to see exactly what occurred, which makes it much easier to replicate issues. If needed, I can also share the video with a developer.

I rarely keep the recordings because of storage limitations; the main purpose is to avoid those moments where you think you saw the software behave a certain way but can’t quite remember the steps.

I’m just starting my “learning how to do testing properly” journey, so there’s probably a better way, but I’ve found that recording my sessions has been really helpful. Perhaps it’s something everyone is already doing.


Your workflow (screen recording + talking through + rough notes + replaying when needed) is actually something I’ve heard from a few other testers too, especially early in their careers.

Hey, I echo @oxygenaddict’s question: what are you trying to achieve by recording all the steps?

To me, documenting exploratory testing shouldn’t mean creating a script of every single thing you did for someone to repeat exactly. But it is important for them to know what you tested, and how. A subtle but important distinction.

  • Is “documenting as you test” actually a major pain point, or just part of the job?

    • Learning what to document in your context is something that comes with practice and experience; when you get it right, the benefits should outweigh what is hopefully a low pain
  • Are there tools you already rely on that solve this well?

    • Rather than recording steps, I take note of scenarios / cases / journeys / flows; I often include screenshots and, when it’s hard to describe, short videos using the Awesome Screenshot browser extension
  • What should or shouldn’t be captured during exploratory testing?

    • Should: Information that would be useful to your intended audience, be that developers, product owners, stakeholders, auditors, or even your future self; think about why someone would read your test documentation, and what would help them - you can always ask them directly too, you don’t have to guess
    • Shouldn’t: ET documentation definitely shouldn’t turn into a glorified test script writing session, nor should it be so detailed and verbose that it’s hard to digest / extract value from
  • Are there privacy or accuracy concerns I might not be considering?

    • I like that you’re asking this kind of question
    • I think any privacy concerns would come down to how, and to who, you’re making your documentation accessible, and what sensitive / production data you may be including within
    • Not sure what you mean by accuracy concerns; documentation should always be accurate, but that doesn’t necessarily mean covering every detail / step
  • If you’ve ever tried tools that auto-record or observe sessions, what worked or didn’t?

    • I’ve never had the need to record an entire ET session, outside of training purposes; even then, it would be done live with someone / people, and recorded more for reference; I wouldn’t record a “normal” ET session, but I might make short videos to show issues, as mentioned above

You might want to check out some blogs I’ve written on (documenting) ET: Exploratory Testing Archives - Cassandra HL

In particular, How to Document Testing with SBTM: Testing IRL Part 2 | Cassandra HL and SBTM in Practice with PQIP | Cassandra HL


I never had many issues with documenting as I test. Working on software with a lack of documentation, it wasn’t that clear what should happen before I started anyway. Usually you get a feeling over time for what you need to record for yourself, so you remember what you did and can answer questions about coverage.

Several times I did regret not recording, though, as I couldn’t recreate weird behaviour afterwards. Without the recording I couldn’t fully retrace my steps and had no proof.


Thanks MoTs,

Seems we all go with what works for us. I am a big fan of proper organization. I prefer to have my tools in one place where I can launch everything at a go, i.e. something that can do the screen capture, create narrative steps, and generate a clean bug report that I can send to Jira once verified. Considering the time pressure from the product team, not to mention the work volume, having one tool that does all of that would save me time. And if I am facing this situation, I imagine tens of others are facing the same.

A user on X once mentioned that having to write down steps, monitor logs and network activity, and do the tedious work of QA pisses him off. He is a founder, and was looking for a tool he could speak into that would do the magic: generate a rich report for his QA sessions that he can send to Linear at a click. Granted, QA might not be his daily work.

Personally, I am experimenting with a script that takes my work further and saves me some time. I’m thinking of combining a screen recorder, a screenshot capture, and a session recorder that records only my actions in real time. I wrote some Python + Rust scripts, ran them, and they do exactly what I wanted (I don’t think anyone will penalize me for that). The goal is to deliver quality much faster without losing context.

I found some interesting metrics while running the script:

I summarized those into a dashboard and use it to monitor my sessions.
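Roughly, the kind of summary numbers I mean can be sketched like this in plain Python (a simplified stand-in, not my actual script; the function and field names here are illustrative only):

```python
import statistics

def session_metrics(timestamps: list) -> dict:
    """Summarize a test session from a sorted list of event timestamps
    (in seconds). Returns the kind of numbers I'd put on a dashboard:
    how long the session ran, how active it was, and the longest pause."""
    if len(timestamps) < 2:
        return {"duration_s": 0.0, "events": len(timestamps),
                "events_per_min": 0.0, "longest_gap_s": 0.0,
                "median_gap_s": 0.0}
    # Gaps between consecutive actions; a long gap usually means
    # I stopped to read logs or investigate something odd.
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    duration = timestamps[-1] - timestamps[0]
    return {
        "duration_s": duration,
        "events": len(timestamps),
        "events_per_min": len(timestamps) / (duration / 60),
        "longest_gap_s": max(gaps),
        "median_gap_s": statistics.median(gaps),
    }
```

So a session with events at 0, 10, 30, and 90 seconds would report a 90-second duration with a longest gap of 60 seconds, which is exactly the kind of spot I’d go back and review in the screen recording.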

It gives me better insight into things other QAs might not look at, and I learn more of the technical details. The workplace is highly competitive now. For those who do mostly UI testing, it can get boring, like that guy said.

Are there any legal or ethical issues with using the script for my work? Please do let me know.

I come at exploratory testing without following anyone else’s guidance on what good exploratory testing is. However, when it comes to documentation (I know, I’ve mentioned this before and I’m going to mention it again :laughing:), I’ve found nothing more powerful than using a PQIP sheet. The document is not about what you did; it’s all about what you find.
Ironically, we had an hour-long, time-boxed PQIP session yesterday on a feature and recorded our findings. After the session finished, we went through each category:

  • Problems - most likely to be bugs. Try to reproduce them and, if you can, document your bugs well. If you can’t, don’t… how likely are they to reoccur if you can’t find them? But make a note that it’s an area you may need to explore again
  • Questions - do you need to document them or just ask them? Do you need to document the answers, if so where? In additional tickets, test cases etc.?
  • Ideas - the majority of what we find here is ideas on how the product could be better, but not exclusively. We hand those over to the product manager to make the call on whether to add them to the roadmap. We’ll also have testing ideas, maybe areas we think would be ideal to automate, so documenting a test case would be useful to make it clear what we need to automate.
  • Praise - well, that doesn’t necessarily need documenting, but it does need communicating. If there are good practices going on, shout about them and keep moving forward with them. If there are great features, shout about them so the whole team can get behind the roadmap.

So in summary, we don’t get preoccupied with documenting what we did, but rather what we found. Then we take what we found and document what we need to document.

Thank you Gary. Those are gems!
