Where are testing roles shifting to?

(Rosie) #1

I’ve heard of some companies getting rid of ‘testing roles’ entirely. Does anyone have stories of how they end up getting on?

I’m also seeing the topic crop up more.

There’s a meetup on the topic, for example: https://www.meetup.com/Ministry-of-Testing-Copenhagen/events/236352211/

Also, Fiona Charles covered the topic at ETC - slides here - http://europeantestingconference.eu/slides17/FionaCharles.pdf

Where does testing fit into your organisation? And/or where is it heading?

(Jesper) #2

These are trends in roles, and they have been happening for a while, based on what I see from TestBashes, conferences, blog posts and Twitter. My mission is to identify what I see … and present it to the stakeholders. :wink: When we label things, we can talk about them. We are mostly shift-lefting and shift-coaching…

Here are some additional resources:

Patrick Prill @TestPappy wrote this piece:

After TestBash Manchester there was an Open Space on October 22nd. The topic I chose to put on the schedule was “Shift X – Going up- or downstream?” My motivation for choosing that topic was that in two talks during the conference we heard – again – about testers on the move.

(Jesper) #3

I wrote four pieces on my blog about the directions I see (so far):

  • Shift-Left
  • Shift-Coach
  • Shift-Right
  • Shift-Deliver

This DevOps & Ministry of Testing infographic is probably also relevant:

(Anders) #4

Great topic, Rosie :slight_smile: Testing is certainly changing, but we could also say it might be growing. My personal interest is more in the “reinvigoration” of testing and testers. To me, it’s important to focus on risk, go beyond the purely functional testing perspectives, and find the courage to enter domains where “wild animals lurk”!

I last spoke about testing in “Black Swan Domains”, i.e. contexts where risks are really difficult to qualify through testing, during “Reinventing Testers Week” at Anna Royzman’s conference in New York back in September. I have blogged about it, and about what Anna and I are working on, here:

(Dan) #5

I think companies that think “testing is just checking requirements” are getting rid of those “testers” that are just doing that checking, as they are moving towards automating all those checks…

But as we know, testing is more than checking. It’s actually more focussed on investigation, and I think companies are starting to realize this but are struggling to hire people with the investigative skills, which is driving up the salary range for good testers.
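A quick sketch of the distinction Dan is drawing (the function and numbers here are made up for illustration): a “check” is a mechanical, automatable assertion, which is exactly the part companies are automating.

```python
# Hypothetical example: "checks" are mechanical, automatable assertions
# that can run unattended on every build.
def apply_discount(price, percent):
    """Return price reduced by percent, floored at zero."""
    return max(price * (1 - percent / 100), 0.0)

def run_checks():
    # Binary pass/fail - cheap to automate, no human judgement needed.
    assert apply_discount(100.0, 50) == 50.0
    assert apply_discount(100.0, 150) == 0.0  # discount can't go below zero
    return "all checks passed"

print(run_checks())
```

Investigation is everything the checks can’t ask on their own: should a negative percent even be allowed? What does a 150% discount mean for the business? That questioning is the part that’s hard to hire for.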

I think our craft has some “growing up” still to do, in terms of having everyone on the same page with investigative testing, but I think it’s getting better and more exciting!

In my company, when I’m hiring testers (regardless of where in the world we’re hiring), I look for these investigative skills, but I also look for modern skills like agile and technical skills, and for a good attitude to learning and leading too. And as it’s a big company that I work for, I hope other companies are watching what we are doing and how we are hiring, and look to imitate us a bit in terms of hiring good investigative testers too.

(Jesper) #6

by Katrina. Btw: I’m used to “test manager” without formal staff reports - there’s all kinds of roles out there. #itdepends

(Guna) #7

I’m seeing the shift towards the full stack tester - as in someone good at switching context and able to dive from one mode into another: e.g. switching from exploratory, to automation, to security, to support, and back to automation.

The argument being: “manual testers are expensive because some things they do can be done in a less expensive way.”

(Paul) #8

It’s an interesting one; my gut reaction is that I think it depends on where an organisation is at (and ultimately how testing is “sold” in to the company).

I’ve found that sometimes the hard part is selling testing, and the benefits thereof, to the people that matter - and getting them to shift their thinking from “we know we need testing but we’re not sure why” to “we totally understand the benefits of testing (and all the flavours it comes in)”.

The problem with the former is, if people aren’t sure why they are paying for folk to do testing, it’s a short hop from that to “we can save money here, these people are just clicking buttons”. Once critical mass is achieved with the latter, it’s likely you’ll end up disappearing down the various testing rabbit holes (automation, performance, security, etc) but that can be managed as long as there’s the right level of buy-in (and understanding).

I also feel it’s necessary to make sure folk understand the importance of the learning aspects of testing, by being able to highlight and recognise successes in this area; a testers growth in to a level of domain expertise that rivals that of BAs/Product folk shouldn’t be underestimated due to the longer term benefits this can bring. It’s at this point at which we can start addressing the broader “quality assurance” that testers can help bring to a team.

(Anders) #9

In the contexts I’ve worked in for the past 10 or more years, there has been a lot of need for domain knowledge in testing, and domain knowledge is something testers often provide better than people who focus on automation. So working together to make the most of both really makes a better “stack”, I think, than just requiring one person to be “the full stack” (I can’t help thinking about the TCP/IP stack of Unix - or even a stack of beer :slight_smile: )

Also, one thing I think we often forget is that domain knowledge often concerns not only how things should work, but also business risks. The automation guy will typically only be able to script tests of functionality, but what about the things that go beyond the purely functional?

(Aine) #10

I was seeing more of that in my last role too. We were expected to do more of it, across multiple platforms, as well as digging into the backend. I quite enjoyed it, though, as it added more variety to my day.

(Guna) #11

I will have to agree about the neglected focus on domain testing and gaining domain knowledge.

When it is being discussed what kind of tester we need, there are two aspects: well, devs can also spend some time testing functionally - and - what stops you from adding an end user, if they are reasonably accessible?

It’s not per se an attack on the tester’s role; rather, it’s that there are things we as testers do that we have not made visible.

The argument that, no matter how skilled, it is expensive when done by hand will not go away.

We are not being told to drop exploration and/or “manual efforts”.

Those will still be there, yet there is a strong push of “we see value in you being able to do these 5-7 extra things and evolve” rather than “it is cheaper to replace you with”

It is fun to become more diverse, as it helps with identifying problems.
What is alarming is that with this, the nagging voices of “I should spend less time on this”, “I keep lagging behind” and “how far am I from good enough” keep adding pressure, which is not helpful when one tries to build a better relationship with failure (: I hope I will manage to learn to embrace them and make them a great reason to evolve, but for now it’s like being on the Titanic.

Change is hard; it is necessary, but very, very hard.

(Satyajit) #12

I like how you classified them and derived the definitions from present day world and conferences.

(Paul) #13

Talking about removing the tester job title/role: I actually took issue with something Rex Black (former head of the ISTQB) said this weekend on Twitter about SDETs. He suggested that testers become SDETs (which he considered a different job role entirely) as soon as they learn programming and automation, which I don’t think is right at all; they are just different tools to aid testers in doing their work.

(Thomas) #14

This article touches on the same topic in an interesting way and the difference between testing and QA.


(Ivor) #15

I had a chat with a chap who used to work for me.

He was looking for a new contract, but the latest CV filters contained the keyword “Automation” and his CV wasn’t getting through.

“What should I be doing?”, he says.
“Hmm…”, says I.

pause… I could have said many things, lie, learn, read, try it out, but… time on…

“Why not have a chat with the relevant people? Why not make a statement in your CV/initial contact about where you place automation in your skills toolbox? Why not write the skills profile in one page that describes you with a one-liner or two from different organisations you have worked with about the value add?”

Were all the things I thought about saying, but didn’t.

“Listen”, I said, “Check out the MOT Dojo, Udemy, MountainGoat, Satisfice, Test Obsessed and take some advice from these sources. Learn some Javascript (seems to be hottest thing here in Dublin these days), look into SoapUI, LoadUI as low level simple testing tools. Fill your toolbox with direction, even if you haven’t much experience.”

“Then”, I said, “Check out the likes of TeamCity, Octopus, Jenkins etc. to see what’s happening in the build, deploy, test, rinse and repeat area”

“After that”, I continued, “Look into Selenium, Fiddler, Python, Perl etc. and how they are applied in this area of Test Automation”.

“Oh”, he said… Pause…

“Jaysus Ivor, I’m just looking to get a contract testing, I’m no developer”, he said.

“I know! But if you’re looking to do that you need to look for the right role and not beat yourself up about Automation”, says I.

The truth is, testing is evolving. Traditional test roles, where test cases are written against large requirements sets and encased in reviews, approvals and sign-offs, are disappearing. Testing adds value up and down the chain of an application’s/product’s development, whether in an Agile, Kanban, Waterfall or any other methodology we can throw out there.

Except, now Business/System/Product Analysts are building testing into what they do. Now, good developers are building good quality tests into what they do. ETL developers are working hard at verifying the data pipelines they are laying down. We’ve brought the user closer to us for testing purposes in UAT. We’ve cut our products up into self-managed, deliverable chunks and it seems to work.

As testers we now have a role to play by asking the hard questions (usually at the wrong time :slight_smile:), by designing tests to be executed on a regular basis that are more about our insight than making sure the software works, by automating the tests that will be executed again and again, but most of all by thinking outside of the normal box, connecting thoughts, design and more with people, and making things work.

As I see it, after going round the houses, the testing roles are evolving and have become more technical. The traditional testing roles have been dissipated and incorporated into other functions. The sapient tester is identified as a “Test Analyst”, while the more technical tester is a “Software Developer in Test”. Specialist roles of “Performance Testers” are more like DevOps and provide detailed insight into what happens to the product under duress. “Release Engineers” make sure the automated tests are run and the process is executed faithfully to deliver a working product. The “Test Manager” is more of a supporting role, as cross-functional teams form and disband to meet the needs of the organisation. This particular role is most at risk as testing becomes more embedded, with the political clout once associated with it being taken up by “Product Owners/Managers”.

I do realise that I posit a view that is relatively narrow in its scope. I don’t include Governmental organisations or Banking/Financial institutions. These have typically been well behind the curve in adopting new and ultimately better ways of doing business internally.

Maybe this post should have been in the “Rants” section :wink:

Anyways, tis my €0.02c.

Thanks, Ivor

(Jesper) #16

Nice rant @ivor! It ties into what I see and have outlined above. Great stories :slight_smile:

This reminds me of one trend I do see: The subject matter expert in the business section is doing all the testing - because they can, have the tools (SAP SolMan etc) and the best knowledge. If the tester is not the best SME - then the tester is out of the game. And UAT is not even dead in the corporate landscape.

(Sam) #17

I think there are some very large, successful companies out there that have a reputation for not hiring testers cough Facebook cough. However, every company has a different risk profile. Facebook has aggressive dogfooding tactics (they release their software to their staff first for some time to screen for usability issues), and Google employs similar tactics. Spotify, Facebook and Google all have amazing experimentation frameworks baked into their products, with teams of researchers/data scientists helping to validate ideas with data. When you have these teams of people trying to improve the quality of their products, and the frameworks in place help mitigate/reduce most of the risk with a product, where can a tester add value?
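For a rough idea of what those experimentation frameworks do at their core, here is a hedged, stdlib-only sketch (the cohort numbers are invented): compare a conversion metric between a control and a variant with a two-proportion z-test and flag whether the difference looks real.

```python
# Hedged sketch of the core of an experimentation ("A/B") framework:
# a two-proportion z-test comparing conversion rates, stdlib only.
from math import sqrt, erf

def z_test_proportions(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)               # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # normal survival x2
    return z, p_value

# Made-up numbers: control (dogfooding) cohort vs. new-feature cohort.
z, p = z_test_proportions(conv_a=200, n_a=4000, conv_b=260, n_b=4000)
print(f"z={z:.2f}, p-value={p:.4f}")
```

Even with a framework like this in place, someone still has to ask whether the metric is the right one and what risks the data can’t see - which is one answer to the question of where a tester adds value.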

One role that Facebook has is called a Trust Engineer (they seem to be responsible for engineering feelings of trust with users); there is an interesting Radiolab episode on it here: http://www.radiolab.org/story/trust-engineers/

(Christina) #18

With the increase of dogfooding, testers are probably more important, but they should be involved at the start of the cycle rather than the end. If a tester can look at a work item with the devs during an agile planning phase, they can highlight the potential problem areas up front. Most dev departments suffer from having the same type of error reported over and over again - in one case I’ve seen, almost every new piece of work produced the same basic fault despite the testers talking the devs through it every time. If testers can be involved in the devs’ planning stage, then simply saying "I think there’s a risk of … " in this piece of work may well prevent that problem being built in. Far better to train the dev to write software that passes testing than to pick the problems up during test and spend time re-working.

(Steven) #19

It’s simple. Many companies (and indeed many recruitment consultants) don’t have a clear understanding of what or who they are looking for.

They trot out the same old guff about setting up automation frameworks, etc. blah blah, when in fact they likely don’t even know what this looks like or what the benefits (if any) are going to be. In the next breath they’ll say you must have an ISEB this or an ISTQB that. I wonder sometimes how much copying and pasting goes on when preparing role vacancies. Many advertised roles seem to blindly demand everything - a bit like using a blunderbuss.

I recall a telephone interview once with a large satellite broadcasting provider, and it was clear there were major misunderstandings about what they were looking for, with the matter compounded by further confusion surrounding role titles, e.g. failing to differentiate between a test engineer and a developer in test. I can talk intelligently about test automation strategy, but I am no developer; I wouldn’t necessarily be able to write the actual ‘code’ behind the tests unassisted. Moreover, I wonder how many companies realise test automation is not in fact the same as software testing. Scary.

I guess it’s all a bit of a mess and will continue to be a mess until something is done to better align our often misunderstood industry.

(Jesper) #20

Here’s an additional shift… The tester becomes the Product Owner or even more knowledgeable than the PO or any other Subject matter expert or business analyst.

I see plenty of places where UAT is not only not dead but is done by various BAs and other SMEs - as they have the required domain knowledge. … and that’s OK. What we can add is making it smarter and more exploratory.

I wrote more about it here: https://jlottosen.wordpress.com/2017/05/30/the-domain-expert-is-the-tester/ :wink: