Exploring New Tools: What Have You Learned and What Do You Recommend?

Are you regularly exploring and experimenting with new tools? I’d love to hear about your experiences and the insights you’ve gained. This discussion is all about sharing our journeys in discovering and evaluating new automation tools.

How to contribute

  1. Initial Tool Selection: In your most recent research, what tools caught your interest initially and why?
  2. In-Depth Evaluation: When experimenting, which tools did you decide to explore more thoroughly?
  3. Evaluation Process: How did you go about evaluating these tools? Share any specific methods or exercises you used.
  4. Learnings and Recommendations: What did you learn from this process? Any standout tools or features? Share your recommendations based on your evaluations.

Why contribute

  • Your experiences can provide valuable insights for others who are seeking new tools or approaches to evaluating tools.
  • Sharing your experience helps in honing your ability to document and articulate what you’ve learned.

I look forward to hearing about your newly discovered tools and how you went about evaluating them! :robot:

6 Likes

I learn a lot: at work I constantly read books, research papers, blog posts, and documentation. So I have always needed a place to store that information, and above all a place where I can read and reflect on my notes.

My journey of searching for a note-taking tool:

  1. Notion - the best option, with lots of functionality, but entirely online, not so good on mobile devices, and many templates are paid
  2. Evernote - not bad, but too few features are free
  3. Workflowy - a great tool for making lists and simple presentations, but not for long texts
  4. Google Keep - good for quick notes, but nothing more
  5. xTiles - good, but takes time to get used to
  6. Physical notebooks - too much time spent on writing, I needed a place to store the notebooks :), and searching for text is a nightmare

Additionally, I make heavy use of task management tools, plan meetings with a calendar, etc. At one point I tried MS To Do, Todoist, and Google Tasks, but I stuck with TickTick for a few years.

But then I discovered Obsidian. I now use Obsidian as much as Visual Studio Code (and even more).

Why it works for me

  • Basic functionality is free; sync between multiple devices is paid.
    • I use Obsidian on desktop (Ubuntu, Windows), Android, and iPad. It just works.
    • But you can handle sync yourself, using cloud storage or even Git
    • The paid sync adds end-to-end encryption for your notes
  • A great tool for note-taking - you can tag notes, link them to each other, and search them easily
  • Notes are in Markdown format
  • Export notes to PDF
  • Canvas plugin is a good alternative to mind-mapping
  • The Excalidraw plugin lets you create diagrams (even by hand, as freehand drawing)
  • Extensible with a lot of community plugins
    • Daily notes for planning and daily reflection. Same for weekly notes.
    • I stopped using TickTick for task management - and I am successfully doing it now in Obsidian.
      • You can see tasks on a calendar, add reminders, and create recurring ones

There are also plugins for Pomodoro timers, word and character counts, etc.

The only hard truth about Obsidian is that you need time and effort to learn it and get used to it.
Of course, you can always find alternatives.

P.S. If you want to take notes better, you can explore the PARA method or Zettelkasten. Obsidian was, in a nutshell, built for Zettelkasten.

9 Likes

Lesson 1.5.1 Activity: Write up Learnings and Suggestions

I suppose my classic story was selecting an API testing tool at my last job. Postman was assumed to be the easy choice, but I wanted to widen the search, as you can become too content with your tool choices. Tools can also lead to staleness in what you test, as they push you down familiar routes constrained by their functionality.

Our needs were:

  1. Be able to store requests in source control and change them like any other file.
  2. Have a user interface, as the requests were mainly for exploratory testing.
  3. Developers and testers had to want to use it. We had quite a few NodeJS developers too.
  4. Some money was available, but would have been a pain to get hold of.
  5. Decent documentation would be nice but not essential as it was a highly technical audience.

Initial tools selected

Three is always a nice number:

  1. Postman - old reliable, most people have used it. I have even heard myself say 'a good Postman collection is the way to a developer's heart' (figuratively speaking).
  2. Bruno - the new kid on the block, open source, free, no massive JSON blobs in source control. The cool new thing.
  3. Insomnia - challenger to Postman, lighter weight than the other two but not free.

Tools that were evaluated in more detail

We didn’t go forward with Insomnia:

  1. Source control - still a big blob of a file
  2. UI - fast and lightweight interface with code generation in NodeJS
  3. Developers couldn’t see the point in choosing Insomnia over Postman, too similar, especially when paid for.
  4. Money - billed per user and quite expensive
  5. Docs - just enough to get going, although the general community usage of Postman gives much more depth.

Therefore, we went with Postman and Bruno to the next stage.

The exercise used to evaluate tools

Create a POST request with authentication and assertions against one of our APIs in a test environment, check it into source control, and create a pull request for review.
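To make the exercise concrete, here is a minimal sketch of it in plain Python, showing what any of the tools had to cover: build an authenticated POST request and assert on it. The endpoint path (`/login`), token, and payload are hypothetical placeholders, not the team's real API, and the request is deliberately built but never sent.

```python
import json
import urllib.request


def build_login_request(base_url: str, token: str, payload: dict) -> urllib.request.Request:
    """Build (but don't send) an authenticated POST request.

    The /login endpoint and bearer-token scheme are illustrative only.
    """
    return urllib.request.Request(
        url=f"{base_url}/login",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )


req = build_login_request("https://test.example.com", "dummy-token", {"user": "qa"})

# The assertions the exercise asked for: right verb, auth header present,
# and a JSON body that round-trips cleanly.
assert req.get_method() == "POST"
assert req.get_header("Authorization") == "Bearer dummy-token"
assert json.loads(req.data) == {"user": "qa"}
```

In Postman the equivalent assertions live in a collection's test script; in Bruno they sit in the request's `assert` section, with each request stored as its own file.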

For both Postman and Bruno, creating the requests and environment was easy and familiar, as it's pretty much the same. A slight edge to Postman on assertions, though, with snippets provided.

Bruno really shone in terms of source control, though, with individual files for environments and requests. Postman was also expensive, and we wouldn't host it, so Bruno was the winner.

What was learnt, and recommendations after evaluating each tool

  • Don’t just stick with what you know, genuinely evaluate.
  • Tools have many similar functions and often copy each other.
  • Evaluate tools for the whole cross-functional team; it's more complex, but it results in tools everyone will want to use.
4 Likes

I’m currently going through the Foundation Certificate in Test Automation, and wrote a little summary of my evaluation for Salesforce UI automation approaches: Salesforce UI Automation: POM vs UTAM vs TestZeus Hercules | Cassandra HL

4 Likes

Initial Tool Selection: In your most recent research, what tools caught your interest initially and why?

  • TestCompass by @silviocacace . I stumbled across it a while ago, and it looked interesting for creating models for testing and using those for discussion as well as test design.

Evaluation Process: How did you go about evaluating these tools? Share any specific methods or exercises you used.

  • I waited until I had a topic to test where I expected TestCompass to be helpful, and then applied for the trial. Then I used the tool to support my testing of the topic.

Learnings and Recommendations: What did you learn from this process? Any standout tools or features? Share your recommendations based on your evaluations.

  • It is great for improving my understanding of the topic under test, by forcing my vaguer thoughts into an actual artifact. This made me aware of things I had overlooked or that were ambiguous. For the same reason, sharing and discussing it with others enhanced my testing as well.
  • Its capability of creating test cases from the model was not for me, as I work in a more explorative style. But it might be useful in other contexts.
  • As I rarely have topics where I could use the tool, I find a monthly subscription too much for me. I will try this approach with free diagram tools and see how that turns out. I suspect I will have to be more disciplined, as TestCompass enforced certain things that were helpful.
1 Like

Hi @sebastian_solidwork,

Thank you for your detailed and honest feedback! It's great to hear that TestCompass has helped you structure your thoughts and uncover ambiguities in your testing. That's exactly what we aim for: making test design not just more efficient, but also improving critical thinking and exploration of the requirements and product.

I understand that the automatic test case generation didn’t fully align with your exploratory testing style. However, we’ll soon be introducing AAMR (AI-Assisted Model Reflection), which is specifically designed to enhance critical thinking and refine test models.

Additionally, graph reversal in TestCompass can help define test scenarios based on risk, allowing you to explore paths that might otherwise be overlooked. This can be a powerful way to balance structured and exploratory testing.

Regarding pricing: I’ve kept TestCompass as affordable as possible compared to similar MBT tools. That said, I’m always open to discounts or flexible options if that helps. Feel free to reach out anytime if you’d like to use TestCompass again in the future!

Thanks again for your feedback!

Silvio Cacace

2 Likes