How to find a mentor!

Hi all!

Relatively new here and not sure if I’ve created this in the right place so apologies if not!

My name is Sarah (my husband and I share an account at the moment whilst I try to persuade my work to buy a membership!) and I’ve got about 3-4 years of testing experience under my belt. Most of that time I’ve been the sole QA in various companies, so a lot has fallen to me, such as implementing automation frameworks and the like! I enjoy what I do but feel I lack the benefit of another person at a similar level who I can talk to and sound things out with, to see if my thinking is on track!

So I’m on the hunt for a mentor, specifically someone who has lots of experience in implementing automation - the good, the bad and the ugly! The main thing I would like to discuss is how other people decide what is of value to automate. I’m a big advocate for less is more, and we utilise something that Angie Jones demoed in a talk to assist us, but it would be interesting to see how others make these decisions and the thinking behind them.

Would love to hear from anyone/everyone with more experience than me and open to connecting with people for mentorship stuff if it’s on offer!

Thanks for reading :grin:

3 Likes

Well Hi Sarah (& Darren) and welcome to MoT! :slight_smile:

Always happy to have a chat (written or live talk) about anything you like. :slight_smile:
Hit me up with a DM

Easy! ROI. Return On Investment.

You are not going to automate a workflow which takes 20 days to create and 5 hours to run because 1 end-user out of a million runs that workflow. You want to validate your business flows first before automating. Which flows are the most important? Which are actually used by your end-users? Then you can cross-check at which level you’ll automate them. The closer to the code, the better. Also pick the right framework for it; don’t pick what someone advises or what you already know. Pick the right tool for your project :wink:
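As a very rough sketch of the ROI side (every number and name here is made up, so treat it as an illustration rather than a formula), it can be as simple as comparing what the automation costs you against what the manual runs would cost over the same period:

```python
# Back-of-the-envelope ROI check for automating one flow.
# All figures are hypothetical placeholders - plug in your own estimates.

def automation_roi(build_hours: float, maintain_hours_per_month: float,
                   manual_hours_per_run: float, runs_per_month: float,
                   horizon_months: int = 12) -> float:
    """Return estimated hours saved (positive) or lost (negative) over the horizon."""
    cost = build_hours + maintain_hours_per_month * horizon_months
    saved = manual_hours_per_run * runs_per_month * horizon_months
    return saved - cost

# A flow run manually every sprint vs. a flow run once a quarter:
print(automation_roi(build_hours=16, maintain_hours_per_month=1,
                     manual_hours_per_run=2, runs_per_month=4))    # clearly worth it
print(automation_roi(build_hours=160, maintain_hours_per_month=4,
                     manual_hours_per_run=5, runs_per_month=0.3))  # probably not
```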

Sometimes it’s not automating the test itself that is going to get you ROI but automating the creation of your test data. It’s different for every project & context.
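For example (the endpoint and payload below are hypothetical, just to show the shape of the idea), seeding test data through an API instead of clicking it into existence through the UI:

```python
# Hypothetical test-data seeding helper: create the data once via an API
# so UI tests can start from a known state instead of building it by hand.
import requests

BASE_URL = "https://staging.example.com/api"  # placeholder environment

def create_test_customer(name: str, plan: str = "trial") -> str:
    """Create a customer record and return its id (fields are illustrative)."""
    resp = requests.post(f"{BASE_URL}/customers",
                         json={"name": name, "plan": plan},
                         timeout=10)
    resp.raise_for_status()
    return resp.json()["id"]

# A UI test can then log straight in as this customer rather than
# spending half the test creating one through the screens.
customer_id = create_test_customer("checkout-flow-customer")
```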

1 Like

That’s great, thanks!

I think the thing I struggle with in my job is that we are often making things from scratch that don’t have a user base yet. And I worry we’re over-automating because it’s not always clear what is going to be used the most! We have requirements, but they essentially give equal weight to everything, so I feel like we end up automating every screen we make!

We want the benefit of automation so that, as we’re developing these things, we have a level of confidence we’re not regressing, but I find that hard to balance with not over-automating!

Will definitely drop you a DM though as you sound exactly like the kind of person I need :grin:

2 Likes

I feel ya, I’ve seen organizations automate the same test on UI, API and Unit level :monkey:

I don’t think your problem here is ‘over-automating’ or automating at all. I think your requirements and business analysis are lacking key elements. If I understand it right, your team is making applications/features without knowing how the business/end users are going to use them?
So who comes up with the ideas for these features? Can he/she shed some light on what the purpose of each feature is? Maybe then you can identify some key flows to automate before other flows.

If it’s needed, make sure that he/she prioritizes the list. You can give everything the same weight for sure, but you can also make a prioritized list yourself and ask for validation.

What kind of automation are you currently doing on your project?

We do get requirements that we work from but sometimes those are from people who sit in the middle and are not necessarily the end user! It’s complicated :rofl:

Because we’re currently building web-based products, we as QA have been responsible for implementing UI-based automation tests. For a while it was like we were unit testing the front end, and I have fought really hard for us to make smarter choices!

We have since implemented something that allows us to put features through a matrix where we as a team score things based on the impact to the customer if it broke, the likelihood we’d fix it, and the cost of writing a UI test for it. This has helped A LOT to whittle things down. In addition, we were pretty light on the API side of things, so we’re going to be trying to do more of those in the first instance and fewer UI-based ones.
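In case a concrete example helps anyone else, the matrix boils down to something like this (the criteria, scores and thresholds below are simplified placeholders rather than our exact setup):

```python
# Stripped-down scoring matrix: each candidate test is scored 1-5 on a few
# criteria and the total decides whether it earns a UI test, API-level
# coverage, or nothing. Criteria and thresholds are illustrative only.

CANDIDATES = {
    "checkout flow":  {"customer_impact": 5, "fix_likelihood": 5, "ui_test_cost": 2},
    "profile avatar": {"customer_impact": 1, "fix_likelihood": 2, "ui_test_cost": 4},
}

def score(c: dict) -> int:
    # High customer impact and high likelihood-to-fix push the score up;
    # an expensive UI test pushes it down.
    return c["customer_impact"] + c["fix_likelihood"] - c["ui_test_cost"]

for name, criteria in CANDIDATES.items():
    s = score(criteria)
    decision = "UI test" if s >= 6 else "API/unit coverage only" if s >= 2 else "skip"
    print(f"{name}: score {s} -> {decision}")
```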

1 Like

:rofl::rofl::rofl: been there! :smiley:

1 Like