The ‘praise’ category is a nice touch! BTW I’m experimenting with rapidreportermac.app to keep track of ‘things’ I notice during testing. Especially for things that catch my attention while I’m actually targeting something else, so I can come back to them later. (But it does not have the collaboration options Google Docs has, and is a bit ‘crude’.)
We’ve been using Rapid Reporter for recording our sessions. I like that it floats, takes screenshots, and produces CSV output.
Fiddler and browser dev tools are excellent for diagnosing issues. I also like to have Task Manager open. It helped me spot a memory leak recently.
@jpeers note that in Exploratory Test Note Taking - Rapid Reporter (Mac) @del.dewar1 asks for feedback on Rapid Reporter.
Three things I couldn’t live without:
What hasn’t been mentioned yet is a tool for service virtualization/stubbing/mocking.
If you are not doing service virtualization yet, here is a quick introduction to the subject:
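To make the idea concrete, here is a minimal sketch of service stubbing using only Python’s standard library. The endpoint, fields, and values are illustrative, and real service-virtualization tools (WireMock, Hoverfly, etc.) do far more — this just shows the core trick: point the system under test at a fake dependency that returns canned responses.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubHandler(BaseHTTPRequestHandler):
    """Stand-in for a real downstream service: always returns a canned JSON body."""

    def do_GET(self):
        body = json.dumps({"status": "ok", "balance": 42}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the console quiet during tests

# Port 0 asks the OS for any free port; run the stub on a background thread.
server = HTTPServer(("127.0.0.1", 0), StubHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The code under test talks to the stub instead of the real service.
url = f"http://127.0.0.1:{server.server_port}/account"
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

print(data["balance"])  # -> 42
server.shutdown()
```

In practice you would configure the system under test with the stub’s base URL (via config or environment variable) so the swap needs no code changes.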
I’m a test automation engineer.
My secret weapon of choice is AutoITScript (aka AutoIT3, AU3 or just AutoIt)
Its language is simple, like VBScript.
It has literally thousands of demo scripts in its Help folder.
Scripts can be compiled, and compiled scripts run on systems without an install.
I use it for many purposes - smooth mouse moves for demos, small interfaces, self-closing popup windows.
But mainly for killing my locked-up browser that has locked up my primary testing tool - like separating two meshed gears where one has jammed.
I’ve even used it to enhance error reporting.
I have a demo where I write some notoriously horrible recursive code that consumes most of the system memory and halts execution with a dialog box. My AutoIt script is launched at the start to catch this window - open its details - copy the content to a file - and click the OK button - which allows my primary test tool to break the recursive loop.
And that dialog’s error description is the only place the tool reports the line number where the error occurred.
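A rough AutoIt sketch of that kind of watcher. The window title and control IDs here are purely illustrative - use AutoIt’s Window Info tool to find the real ones for your dialog:

```autoit
; Poll for the error dialog; when it appears, save its text to a log
; and press OK so the primary test tool can carry on.
; "Error", "Static1" and "Button1" are placeholders - check them with
; the AutoIt Window Info tool first.
While 1
    If WinExists("Error") Then
        WinActivate("Error")
        $sDetails = ControlGetText("Error", "", "Static1")
        FileWriteLine(@ScriptDir & "\error-log.txt", $sDetails)
        ControlClick("Error", "", "Button1") ; click OK to dismiss
    EndIf
    Sleep(250)
WEnd
```

Compiled to an .exe, a watcher like this can sit alongside the main test run without needing AutoIt installed on the target machine.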
I’m a little late to the party, but recently I made something to help with test case creation and to record what I was doing whilst exploratory testing. It’s very rough and ready (it was knocked up over a weekend) but it has lots of potential. So far I’ve found it helps me write the majority of my test steps and create detailed bug reports.
I did a short write-up, created some quick videos, and shared a link to download it for free if you would like to take a look: http://vivrichards.co.uk/manual%20testing/making-test-script-creation-a-breeze
I would really appreciate any feedback, and if you find a bug please let me know.
For me it’s:
- Screen recorder because I regularly forget what I just did in the excitement of finding a bug
- Post-its - when I see something I need to come back to urgently, or a gap where a story may need to be written around a feature
- Notepad - here is where I roughly jot down what journey I took, e.g. I created a profile with user X on browser version Y
- TestBuddy now thanks to @simon_tomes as we’re a remote team and I don’t always have a notepad to hand. It’s also easier to read than my handwriting
- I start with a plan of what I’d like to achieve in the session. If I find possible deviations then I’ll explore them but I’ll have my rough plan on a post-it or whatever to get me back on track.
- Dev tools - each browser has its own strengths. Lately I love the network throttler and the performance analyser in Chrome.
- I also use Katrina Clokie’s user personas plus a few of my own. It helps me to see our product the way our target market sees it. I’ve found some fun bugs this way.
On the face of it, after the first 20 minutes I found this quite useful, and I hope you take it further (e.g. Chrome support).
Where I have suggestions on the basic functionality, I’ve found that I can work around my expectations easily, at least with the program I tested it on. It could be a challenge where the selection is no longer available once the choice is made, but for now, thanks for putting this out there.
When I have a dropdown or an open text field, my input isn’t detected until I click back into the field/dropdown.
When I select a dropdown control, the default value of the dropdown is captured; I select the choice I want, then select the control again, and at that point the desired value is captured. It’s the same with text fields. It’s still faster to remove the initial selection than to type it all out.
Using IE9 (company default).
Thanks for the feedback. The click into the input/dropdown and then click again to grab the value was intentional (to suit my scripting needs). Perhaps in the future this can be made configurable, so that a single click shows the selected/input value and the name of the field. The issue, I think, is the way I’ve implemented the solution, which causes many challenges - it’s a C# Windows Forms application with two web browser controls, and I hook into the click events.
It’s really great to see that it has offered you some value and that it has some potential - I plan to share the code on GitHub shortly and do a lot more work on it over the coming months.
For those interested - I’ve released the code for StepsRecorder for free. It’s very rough and ready, but hey, it’s free! If you improve it, let me know; I would love contributors to the project. https://github.com/vivrichards600/StepsRecorder
Awesome list! Thanks!
Can you explain the purpose of “My Best Friend Evah” ? Pretty please?
It means something I use all the time and would be lost without.
We tend to shy away from tools that auto sync to the cloud for any testing documentation. The primary reason is that we use testing documentation as a launching point for future testing. We tend to have a lot of internals documented for re-use (undocumented APIs, SQL queries / jobs) in testing and this is definitely content I don’t want floating around wherever people feel like it’s most convenient to them.
It also doesn’t easily support version control in a coherent manner. All of our testing documentation lives as part of a git repo and is considered active. We keep it high level - think signposts pointing at places to investigate - but it is in constant flux as we add and remove relevant information. Given that multiple people frequently work on the same content, we need a good method of maintaining a history of revisions.
Ah ok. Now I sound dumb
Late to the party on this one, but:
I previously used the Exploratory Testing module in Microsoft Test Manager. I found it had huge potential but also had some critical flaws that meant it eventually fell by the wayside.
Microsoft have since released the ‘Test and Feedback’ extension which is much more lightweight but looks promising, especially if you use TFS/VS Team services.
I test embedded software so for me the solution is easy, put the software onto the hardware and press random buttons and move random switches to see what happens.
The second way is to move a switch or button in the way a bored operator would, and not in the way it is meant to be moved, to ensure that the software doesn’t fall over.
I suppose that there is an element of testing by monkey as discussed at the #testbashbelfast by Jeremias Rößler.
Doing this type of testing can be the highlight of my week as I am generally in a room on my own and don’t get disturbed.
I mostly test web-based apps at the moment so I use Gojko’s Bug Magnet for data insertion, Rapid Reporter mostly for SBT, and recently I started using Microsoft’s Test & Feedback plugin but the report formatting is a little messy.
I’d second the use of XMind; it’s a cracking app, especially as it’s free.
Ditto on Fiddler - when testing web services or sites, Fiddler has helped me catch so many things I wouldn’t have seen otherwise.