Help to guide a developer to understand unit testing and other forms of testing

I need help. I have been moved to a new project with a new developer who is very 'gung ho' and controlling. He has recently decided that automation testing is the future and everything should be automated. He is so engaged in this idea that he has 'written' a test plan, constructed the test scenarios and 'decided' that the testers on the project should use a record and playback tool.

My problem or request is this... automation is a new skill for both testers on the project (of which I am one), and neither of us wants to use record and playback (it doesn't really help us learn), so we are slowly (by his standards) starting our framework while also still running manual tests. His is also not the only project we are on. I need him to take a step back and let us decide the work and the automation scripts, so how do I temper his control while also keeping him engaged?

3 Likes

You broadly have two options:

  1. Reduce this to an either/or situation where ownership and accountability for quality remains strictly with either the developer in question or the two testers. Your engineering leadership will need to make the call, and the next few releases will be used to judge the quality of that decision. This is not the best way to resolve the situation, as it creates adversarial competition.
  2. Communicate to build trust with the developer. Demonstrate openness to understanding the developer's point of view. Take ownership of the record/replay vision and help them see pitfalls that they haven't thought of yet (the first of these is sketched in code just after this list). For example: how would you replay non-idempotent code paths that involve modifying the state of the system under test? How would you deal with non-deterministic code paths? How would you ensure no sensitive data gets accidentally recorded? Read up on the state of the art on record/replay with an open mind and try to find common ground with the developer.
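
To make the first pitfall concrete, here is a minimal sketch (all names are hypothetical, not from any particular tool) of why blindly replaying a recorded, state-changing flow fails on the second run, while a hand-written test controls its own state:

```python
# Hypothetical sketch - FakeApi stands in for the system under test.
import uuid

class FakeApi:
    def __init__(self):
        self.users = set()

    def create_user(self, email):
        """Non-idempotent: running it twice with the same input fails."""
        if email in self.users:
            raise ValueError(f"user {email} already exists")
        self.users.add(email)

api = FakeApi()

# A record/playback tool captures the literal input it saw once:
recorded_email = "alice@example.com"
api.create_user(recorded_email)    # first replay: passes
# api.create_user(recorded_email)  # second replay: fails - the state changed

# A hand-written test controls its own state instead:
def test_create_user():
    unique_email = f"user-{uuid.uuid4()}@example.com"  # fresh data every run
    api.create_user(unique_email)
    assert unique_email in api.users
```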




1 Like

It will ultimately boil down to what kind of person you're dealing with, but assuming you're dealing with someone totally rational, my piece of advice is to stay objective - if you just point out all the problems you see with this new approach, it will immediately seem like you're simply resisting the change.

I think it'd make sense to point out the potential problems with the new strategy, such as the lack of opportunity to learn that you mention. Or the flakiness of such tests - depending on how the record and playback is actually done, but assuming it's the inputs that you capture and then play back, all it takes is for a response time to change for the test to fail. Or the inability to automate everything - after all, you can only automate what you know, and some things cannot be automated at all. Or the maintenance of the tools.
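
On the flakiness point, a recorded script typically bakes in the timing it happened to observe at recording time, while a hand-written check can poll instead. A rough illustration (the function names are made up for the example):

```python
import time

# What a recording effectively encodes: "2 seconds was enough on the day
# we recorded it". The test breaks the day the response takes 2.5 seconds.
def recorded_style_check(fetch_result):
    time.sleep(2)
    return fetch_result() == "done"

# A hand-written alternative: poll until the condition holds or a timeout
# expires, so normal timing variation doesn't fail the test.
def wait_for(condition, timeout=10.0, interval=0.25):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

def robust_check(fetch_result):
    return wait_for(lambda: fetch_result() == "done")
```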

But then also look for opportunities with this new strategy - how can the automation aid you and your fellow tester? What is this approach really good for? Can it catch any issues that were historically missed, and are these issues of high severity? Can this type of automation reduce your workload and allow you to focus on other aspects of the software? Can you develop the automation further by creating more tests or improving the tool?

Overall it sounds like a greater issue to tackle as an entire team. If possible, I'd involve your lead/producer/PO so that you can sit down with the developer and agree on a testing strategy - who's owning it, who's making the decisions, how regression is done and so on.

2 Likes

Hi @nquinn. I would strongly advise against option 1 from @khanduri - as he has said, it will lead to an adversarial relationship, and that is never productive in the long run.

As to the second point, I am in broad agreement. If you work with the developer and contribute improvements, it will feel like a win-win situation. Work with them to formally identify the risks of following the dev's strategy and provide mitigations and/or alternatives.

As testers it will presumably come back on you if poor-quality code is released, so you would simply be doing your job.

Good luck!

1 Like

I think the key here - as previous replies have said - is to question, but question with positive reinforcement. Be proactive about these new plans, but also weigh in with your observations of the strategy's negative aspects. There are plenty of drawbacks both to record and playback tools and to not giving testers the freedom to learn. Insert these drawbacks into conversations about how to move forward. This might help the team as a whole realise that moving forward with one individual's plan might not be the best idea, and this in turn may cool the personality of said individual.

I had a similar situation on a project a while back. The dev in question was a great person but was WAY TOO enthusiastic about automating all the things, and about using Katalon record and playback in particular. I helped him present his plan to the team and inserted my concerns about it in a diplomatic manner, focusing on the restrictive nature of the tool (in comparison to more programmatic testing tools) and also suggesting how I could perhaps help with much better alternatives, given the breathing room to work on a demo. I think managing critical and social distance helped bring our dev on side, and he became willing to try more things, which was all that was really needed.

1 Like

Have you tried shouting at him?

haha no... that would not be the best approach...

Nichole, is there potential for using the record and playback model while tinkering under the covers? In one environment I used SmartBear's TestComplete, which was a pretty decent record/playback program, and we would then go in and modify the recorded script with variations, triggers, timing, sequences, etc. So we had the ability to make fast progress in automating while also learning by reviewing and modifying the scripts, to the point of creating our own. We engaged the developer within that context: here's what we've done, we want to do this, help us understand the hook we need to use here. It was a great cooperative venture for us.
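
For what it's worth, here is a toy example of that workflow in pytest terms: treat the recorded steps as a function, then pull the hard-coded values out into parameters so one recording becomes a family of tests. submit_order is a hypothetical stand-in for whatever the tool recorded:

```python
import pytest

def submit_order(quantity, coupon):
    """Hypothetical stand-in for the recorded steps (fill form, click submit)."""
    if quantity <= 0:
        raise ValueError("quantity must be positive")
    discount = 0.10 if coupon == "SAVE10" else 0.0
    return round(quantity * 9.99 * (1 - discount), 2)

# The raw recording is equivalent to this single hard-coded case:
def test_recorded_case():
    assert submit_order(1, "") == 9.99

# Hand-editing the script lets the same steps cover many variations:
@pytest.mark.parametrize("quantity,coupon,expected", [
    (1, "", 9.99),
    (2, "", 19.98),
    (1, "SAVE10", 8.99),
])
def test_order_variations(quantity, coupon, expected):
    assert submit_order(quantity, coupon) == expected
```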

2 Likes

If your product is at all public-facing, you need to make a case for a round of exploratory testing. The automated tests will (we should expect) demonstrate that when you click on button A, B happens; but that's not the be-all and end-all of testing, and you can never guarantee that users will click on button A and wait for outcome B every time. Automating all the things misses out the one factor that can't be automated - the user.

Should the PO, management and the dev ignore or dismiss this, your only option will be to file the conversation away for future reference against the day when everything falls over badly because of a bug that arose out of a failure to consider user interactions that automated tests could never detect. "I told you so" isn't an ideal thing to say - or to have to say - and hopefully the real-world damage won't be too significant, but sometimes people need a failure to shake them out of their complacency.

2 Likes

Yours is quite a tricky situation. I personally would try to understand why he thinks a record and playback solution is the correct way.

I also think having good arguments for a self-written framework can help him understand what the advantages are. There could be a bunch of different reasons; I would not emphasize the learning part too much, as it doesn't help him. Try to use arguments like the ones below (there's a small code sketch after the list):

  • It is easier to extend and maintain.
  • It is (often) easier to scale.
  • If it is written in the same language, it can be better understood by the developers.
  • It can live in the same repository (which makes CI/CD easier).
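
As a minimal sketch of the "same language, same repository" argument (file names and functions are invented for the example): if the product code lives in, say, app/pricing.py, a plain pytest test can sit beside it in tests/test_pricing.py and run on every commit:

```python
# Hypothetical layout: apply_discount lives in the product code
# (e.g. app/pricing.py); the tests below sit in the same repository
# (e.g. tests/test_pricing.py) and run with plain `pytest` in CI.
import pytest

def apply_discount(price, percent):
    """Product code written by the developers, readable by the testers."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_happy_path():
    assert apply_discount(100.0, 10) == 90.0

def test_apply_discount_rejects_bad_percent():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```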

As other people wrote, automating everything is also a very dangerous path to go down. It often leads to a massive block of automated UI tests which test through all the layers, and they will take an eternity to run - which could be another argument on your side for choosing more carefully what to automate. :slight_smile:
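
To illustrate what "testing through all the layers" costs, compare checking a rule directly with pushing every case through the UI. A contrived example (the rule and all names are invented):

```python
# Contrived example: the same rule checked at two different layers.
def is_valid_postcode(value):
    """The business rule itself - cheap to test in isolation."""
    return value.isalnum() and 5 <= len(value) <= 8

# Fast: dozens of cases like these run in well under a second.
def test_postcode_rules():
    assert is_valid_postcode("SW1A1AA")
    assert not is_valid_postcode("!!")
    assert not is_valid_postcode("TOOLONGPOSTCODE")

# Slow: the same rule exercised end-to-end through the browser.
# Keep a few of these to check the wiring; pushing every case through
# the UI is what produces the suite that takes an eternity to run.
# def test_postcode_via_browser():
#     open the signup page, fill the form, submit, assert the error banner
```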

1 Like

Oh, this is an interesting suggestion, as he's not really letting go...

I'm a bit confused by this, honestly, on a couple of levels.

First, many of the record-and-playback tools I've seen are little more than snake oil, and a recipe for shallow testing and a maintenance nightmare. They tend to attract managers who have purchase authority but little knowledge of good testing, so I'm surprised to hear a developer has fallen for the marketing hype. I believe @katepaulk wrote a very good post on the limitations of such tools that might be helpful - I think it may have been on sqa.stackexchange.com, but I'm not 100% sure.

Second, I'm assuming that this developer is not your manager? If that's the case, I wouldn't let them push you around on what tools or approaches you use for your work, and perhaps your manager can be an ally in defending your autonomy if he won't back down from trying to control that. I suspect he wouldn't take kindly to you telling him what tools or languages he should use for development :wink: and maybe that's even a helpful example to bring up if he continues to insist on your using his favorite tech.

Finally, as more general advice, I would recommend prioritizing learning to advocate for the value of good testing over learning the tool the developer is trying to dictate. Michael Bolton's blog (developsense.com) and his conference presentations available online (just make sure to include "testing" in your search to avoid just getting the singer!) are some great resources for that. I imagine that if you put together your own test plan, demonstrate how it provides deeper testing than the developer's shallow record-and-playback approach for problems that threaten the value and on-time delivery of the product, and are able to articulate the depth and skill required for good testing to the developer and to management, he would be hard pressed to keep trying to control how the testing is done.

As one final thought, if he's dead-set on record-and-playback, perhaps you can make a wager with him that you'll record the scripts and he can do the work to fix them if (by "if" I mean "when") they break :grin:.

1 Like

I don't remember which post it was - and since I've written over 600 answers for sqa.stackexchange.com, it might be a bit time-consuming to go through them all to find it.

The really short version is that record-playback produces a lot of duplication and tends to be very dependent on the full layout of the application. Because of this, it doesn't take much to cause the automation to fail - and the only way to fix it is to redo the recording. A change to a common process (like the login flow) can cause every test recording to fail and need replacing.

It should be obvious this isn't a good thing.
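
The usual hand-written fix for that duplication is to give each common flow a single home (a page object or shared helper), so a login change means one edit rather than re-recording every script. A minimal sketch, with a fake driver standing in for whatever automation library is actually in use:

```python
# Sketch: centralise the common flow so one UI change means one edit.
# FakeDriver and the locators are invented for the example; a real suite
# would pass in Selenium, Playwright, or whatever driver it uses.

class FakeDriver:
    """Stand-in for a real automation driver."""
    def type(self, locator, text):
        print(f"type {text!r} into {locator}")

    def click(self, locator):
        print(f"click {locator}")

class LoginPage:
    """The single place that knows how the login screen works."""
    USER_FIELD = "#username"   # if the login UI changes, update these once
    PASS_FIELD = "#password"
    SUBMIT_BTN = "#sign-in"

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.type(self.USER_FIELD, user)
        self.driver.type(self.PASS_FIELD, password)
        self.driver.click(self.SUBMIT_BTN)

# Every test calls the shared helper instead of carrying its own
# recorded copy of the login steps:
LoginPage(FakeDriver()).login("tester", "secret")
```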

1 Like

Ah, found it! Looks like it was actually a question, but I think it's a fairly canonical source.

1 Like

Oh, wow. That was one of my first questions there.

I still miss that test framework: it was complex to learn because there were a LOT of helper methods, and it wasn't always easy to find them when you needed them. On the flip side, once I was familiar with the framework, adding new tests to it was often a matter of a few lines of data and then an update to the baselines. At worst, it was a new data class, a new handler routine, a new navigation routine, a new case in the master switch statement of the driver routine, and the appropriate lines of data and baseline updates.
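
For anyone who hasn't seen that style: this is only a guess at the shape being described, where test steps are data rows, a driver routine dispatches each row to a handler (a dict here standing in for the master switch statement), and adding a test is mostly adding data. All names are illustrative:

```python
# Guessed sketch of a data-driven driver routine (all names invented).

def navigate_to(page):
    print(f"navigating to {page}")

def fill_form(page, **fields):
    print(f"filling {fields} on {page}")

HANDLERS = {
    "navigate": lambda row: navigate_to(row["page"]),
    "fill":     lambda row: fill_form(row["page"], **row["fields"]),
}

def run(rows):
    """Driver routine: dispatch each data row to its handler."""
    for row in rows:
        action = row["action"]
        if action not in HANDLERS:
            raise KeyError(f"no handler for action {action!r}")
        HANDLERS[action](row)

# A "new test" is often just new data:
run([
    {"action": "navigate", "page": "login"},
    {"action": "fill", "page": "login", "fields": {"user": "alice"}},
])
```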

I long for something that mature.

1 Like

Thank you for this... it's a great read.

1 Like