Testing and notes/Evidence etc

Oh, now we’ve deviated into “to file or not to file” defects, and that is a hill I will plant myself upon and take on all comers unto the bloody bitter end. :smiley: The last thing I ever want any of my QA to do, especially people new to the discipline, is to second-guess whether or not a defect is “worth” reporting. For one, inexperienced testers don’t have the very thing, the experience, to make that call. Secondly, few things will get me banging my shoe on the desk like hearing “Oh yeah… I saw that defect a week ago” while I or someone else is creating that very defect report. Thirdly, I have often had a minor defect lead straight into a much larger issue hiding behind it.

There isn’t a hard and fast rule that applies universally. As noted, it depends on the state of development. We can also find ourselves facing a “target-rich environment” with multiple bugs in our sights at once. When that happens, one might privately make a note to bookmark what seem to be the lower pri/sev items and dive into the defects that look uglier; I can always come back to that malformed JPEG logo later, right? And by all means engage in “fencing” with the developer. No, not with swords, but rather fencing off, through discussion with the developer, the areas where testing is open season. Developers also love to tell QA about areas where they have concerns… listen to them!

1 Like

I love this :slight_smile: <3

1 Like

Evidence? Well, it depends on speed. I’m looking at a UI rewrite coming up in the next quarter. So I’m not going to raise defects about things like non-working scrollbars and controls that are just plain not supposed to be shown to users that way, because in a few months these will all need testing again. But it makes sense to take notes now for two reasons:

  1. to compare it against the new UI, which will have issues of its own, and to verify why some controls got moved in the new UI
  2. to prove that the new UI either looks better or flows better

To prove the latter I’ll need decent evidence of how the screens follow in sequence, something like a video. Just yesterday I got asked for a screenshot of something. These come in handy, but capturing them in a bug tracker does not feel like “evidence”, because testers crop and mutilate the screenshots (rightly so) when adding them to a bug. So I’m keen to drop screenshots and videos onto a fileshare and somehow structure some metadata around these snippets. Any good ways to do that with images and without huge amounts of text?
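One rough way I’m thinking of structuring that metadata is a sidecar JSON file next to each screenshot or video, written by a small script. This is just a sketch; the paths and field names are made up:

```python
# Sketch only: keep a small sidecar .json next to each screenshot/video on the fileshare.
# All paths and field names below are illustrative, not an agreed convention.
import json
import sys
from datetime import datetime, timezone
from pathlib import Path

def write_sidecar(media_path: str, feature: str, note: str) -> Path:
    media = Path(media_path)
    # e.g. old-ui/scrollbar.png -> old-ui/scrollbar.png.meta.json
    sidecar = media.parent / (media.name + ".meta.json")
    metadata = {
        "file": media.name,
        "captured": datetime.now(timezone.utc).isoformat(),
        "feature": feature,  # e.g. "old UI - settings screen"
        "note": note,        # one line on what the evidence actually shows
    }
    sidecar.write_text(json.dumps(metadata, indent=2))
    return sidecar

if __name__ == "__main__":
    # e.g. python tag_evidence.py old-ui/scrollbar.png "settings screen" "scrollbar does not scroll"
    print(write_sidecar(sys.argv[1], sys.argv[2], sys.argv[3]))
```

Finding something later then becomes a matter of searching the little .json files rather than trawling through the images themselves.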

1 Like

We also have frequency listed in the bug ticket. Something as simple as “once in a while” vs “1 out of 4 times” vs “100% of the time” helps a lot with determining priorities.

Obviously, end users might hit that “once in a while” bug more often than internal users, but that’s a different topic.

1 Like

Traditionally, in my “upbringing”, we used Priority and Severity. The combination creates a grid, so something could have a low priority but a high severity because, even though it is a bad, bad outcome, it only happens very rarely and in specific, unusual circumstances (just to describe an extreme). In another post someone described a similar grid that escapes my memory at the moment (because Pri/Sev is so deeply etched in my hardened lump of a QA heart). All are perfectly legit and useful. I’m just illustrating similar methods.
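To make that grid concrete, here’s a tiny illustrative mapping of severity and frequency to a suggested priority (purely an example; the labels and buckets are invented, not a standard):

```python
# Purely illustrative: one way severity and frequency might combine into a suggested
# priority during triage. The labels and buckets are invented, not a standard.
PRIORITY_GRID = {
    ("high", "always"):      "P1",
    ("high", "sometimes"):   "P2",
    ("high", "rare"):        "P3",  # bad, bad outcome, but only in unusual circumstances
    ("medium", "always"):    "P2",
    ("medium", "sometimes"): "P3",
    ("medium", "rare"):      "P4",
    ("low", "always"):       "P3",
    ("low", "sometimes"):    "P4",
    ("low", "rare"):         "P4",
}

# High severity does not automatically mean top priority:
print(PRIORITY_GRID[("high", "rare")])  # -> P3
```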

Pri/Sev can also change during triage. Product might determine that something should have a different urgency. Or Dev might uncover a hidden severity during the discussion.

I love hearing how other teams do things like categorize and sort defects.

Does anyone use Azure DevOps for their exploratory testing notes?

2 Likes

I have. But I used the manual test authoring form.

1 Like

We use ADO as our ALM, but to be honest I’ve not really made full use of the test management side (for a few reasons).
I was having a look at the ET (exploratory testing) tool using the “Test and Feedback” extension, and at first look it seems quite promising.
However, a big flaw in this, unless I’m blind and just can’t see it, is that the notes etc. are lost when you end the session (surely this can’t be true!?).

1 Like

You’re trying to gauge how many people are required to do this in their place of work (quoting monsieurfrench, post 9).

In my experience, people / committees making processes which require screenshots / videos sometimes imagine this need. If you have the chance to look at the decision to include the need, you might dig into what the recordings might be used for, who might be using them, and whether the expected benefit is worth the practical cost.

In one medical device org, Audit (when asked) were clear that they only required such records when seeking to know that a known problem had been fixed, and the fix checked. The process owners wanted most checks like that to be automated – and with that clarity, the long-term need to make and store detailed + searchable notes basically went away.

The testers, however, used more detailed (and more temporary) records to illustrate what they’d found when sharing within the team. The benefit they saw was that spreading that out shared skills and brought greater expertise to bear on that path through the system. I imagine that it also made the team more resilient to departures. Looking at the rest of this thread, it’s worth noting that their target didn’t have much of a screen-based UI, and that their exploration was typically around changing setup and simulated environmental inputs, and measuring outcomes and some internals.

In a regulator, I saw testing notes (made in Word / Notepad / markdown / knotted string) attached to whatever represented the act of doing work. The org used Jira and ADO and wiki and OneNote and auditable doc storage – and I saw notes kept in all those places (relying on fragile links). Each approach suited (and was made by) its small, typically isolated group of testers. When (rarely) people outside these teams asked for older or more-detailed records, those outsiders wanted, in effect, magic recall. From an organisational point of view, those notes were unfindable, unsearchable and unknown.

If / when I teach this stuff, I ask people to think of purpose by framing for their audience (us / people who know us / people who don’t know us) and timescales (right now / at a foreseeable juncture / later than we imagine). And, in terms of what to record, there’s the last few paragraphs of What to Record, which were written in a fever dream half a life ago and so demonstrate that one’s notes, written for you for right now, may still be useful to someone you’ve not met, who lives in some unimaginable future.

5 Likes

I don’t have access to it anymore (previous job, and I’m in between jobs). ADO test management is… obscure. The documentation is poor and I never did go looking for tutorials. But manual tests are objects in ADO just like user stories. I did make use of the test plans, which allowed for querying for tests to include in a given test activity. Those executing tests were encouraged to update them and to create defects from those tests, which would create related (linked) objects. Maybe that will help you explore a solution?

1 Like

I’ve used the test cases etc. a few times and whilst they are OK, there are aspects I don’t like, the main ones being:

  • Test cases are linked to the user stories, but the status isn’t displayed (because it seems the test run is a separate entity, i.e. an instance of the test case with an execution status). So if you want to see the tests from a user story, it’s not really pretty - you just have to paste in links to the test plans etc. instead of it linking through automatically in a more intuitive way.
  • I can’t see a way of searching for, or filtering on, for example, Failed or Blocked tests (and there’s no drill-down from the charts into these either).

Ah, I can’t do anything with the first item without accessing ADO and playing with it. I bet there is a non-intuitive way to assign tests to stories, but it probably requires creating a task type and making a story dependent on those task types assigned to it, i.e. you probably need the cooperation of the ADO admin…

The second is also not intuitive, but it is done via the query feature for all ADO objects. Create a query using the stacked SQL-like query form and you can do things like “Where Work Item Type = Test Case” etc. I would run these sorts of queries by sprint and embed the query in wiki articles for a given sprint, so there was an updated view of the status of cases in a test plan.
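For anyone who would rather script it, I believe the same sort of query can be run against the ADO REST API using WIQL. A rough sketch (the organization, project and PAT here are placeholders, and the fields may need adjusting to your process):

```python
# Sketch: the same sort of query run against the Azure DevOps REST API using WIQL.
# Organization, project and PAT are placeholders; adjust fields to your own process.
import requests

ORG = "https://dev.azure.com/your-org"  # placeholder
PROJECT = "YourProject"                 # placeholder
PAT = "your-personal-access-token"      # placeholder

wiql = {
    "query": (
        "SELECT [System.Id], [System.Title] "
        "FROM WorkItems "
        "WHERE [System.WorkItemType] = 'Test Case' "
        "AND [System.State] <> 'Closed' "
        "ORDER BY [System.ChangedDate] DESC"
    )
}

resp = requests.post(
    f"{ORG}/{PROJECT}/_apis/wit/wiql?api-version=7.0",
    json=wiql,
    auth=("", PAT),  # PAT goes in the password slot of basic auth
)
resp.raise_for_status()

# A flat WIQL query returns work item ids and URLs; fetch details separately if needed.
for item in resp.json()["workItems"]:
    print(item["id"], item["url"])
```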

EDIT: oh, duh. There is another way to view test results. If you organize by Test Plan, then you get three tabs. The execution tab (I think) can be configured by column and then sorted by those columns. Status is one of those columns, and I think it’s there by default.

1 Like

Yeah, that’s the way I use it, but it’s not easy to jump from a story to the test plan (I need to try this out and see if there are better ways of working). Possibly by creating the test plan as a relationship to the feature or user story etc. So my “requirement”, for example, is to be able to [easily] jump directly from the user story to the test suite for that story.

1 Like

Great reply James, thanks for taking the time out to do this.

2 Likes

I usually ask an admin to update the Status field on the test case object to include Passed and Failed. Otherwise, we cannot see if a user story’s linked items are OK.
I manually sync from the test plan execution field when required. It’s not pretty, but it works. It’s been like that in ADO for years, and that’s with the Test Pro license.
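If anyone wanted to script that manual sync, a rough sketch of setting a field on the test case work item via the REST API might look like this (the field name “Custom.TestStatus” is invented; use whatever your admin added, and the org/project/PAT are placeholders):

```python
# Sketch: pushing a Passed/Failed value onto a test case work item via the ADO REST API.
# "Custom.TestStatus" is an invented field name; org, project and PAT are placeholders.
import requests

ORG = "https://dev.azure.com/your-org"  # placeholder
PROJECT = "YourProject"                 # placeholder
PAT = "your-personal-access-token"      # placeholder

def set_test_case_status(work_item_id: int, outcome: str) -> None:
    patch = [{
        "op": "add",                          # "add" also overwrites an existing value
        "path": "/fields/Custom.TestStatus",  # whatever field your admin created
        "value": outcome,                     # e.g. "Passed" or "Failed"
    }]
    resp = requests.patch(
        f"{ORG}/{PROJECT}/_apis/wit/workitems/{work_item_id}?api-version=7.0",
        json=patch,
        headers={"Content-Type": "application/json-patch+json"},
        auth=("", PAT),
    )
    resp.raise_for_status()

set_test_case_status(12345, "Passed")  # 12345 is a made-up work item id
```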

3 Likes

That sounds like a perfect idea, and I’d never thought of that. Thanks.

2 Likes

This is the issue we’re facing: the tool we’re using won’t allow us to record evidence in a way that’s helpful for the test case. I think it’s mainly a result of using Gherkin for manual tests with examples. We were using the Xray app, but it’s so slow for screenshots that it became painful.

1 Like

Give YATTIE a try! We haven’t had any complaints about slowness around screenshots. It’s an open-source tool, so any feedback on what would make it work better for you is always appreciated.

If you do have any problems, we are a very active community and usually address GitHub issues within 24-48 hours of them being raised.

2 Likes