Reviewing how a developer performed after a project has passed QA

Hi,

I wonder if anyone can help me. We have had a fair number of devs firing poor code over the fence to QA. This mostly comes down to the fact that their end-of-year performance reviews are based on shipped projects/features/fixes, so it's very obviously quantity over quality from the developers.

Now some are better than others and prefer to get good code out the door, but some do not. As a result I'm being tasked with creating a review that the tester fills out once a project has passed QA. The idea is to produce a score for each project, from which we can pull an average for each developer to get an idea of what they are doing wrong.

This needs to be incredibly simple and quick to fill out. Currently I have the following questions:

  1. Did you have to contact the dev for more info before you could start? For example: no documentation, no branch name, feature not loading…

  2. Were there obvious bugs spotted that should have been seen by dev before handing over to QA?

  3. Were things sent back for retesting, only to turn out not actually fixed, or only partially fixed?

  4. How many full QA rounds did it take to pass? 1-3 / 4-6 / 7+

  5. Complexity: 0 - 5 (5 being most complex)

What are your thoughts on these questions? I need to find a way to score the developers, and also to use complexity to adjust the scores slightly depending on how complex the project was.

I tried giving each question the same number of points and then assigning a percentage to the complexity (-10%, -20%, etc.) and taking that off the score, but I have not been happy with the results.
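For what it's worth, the scheme you describe (equal points per question, then a percentage deducted for complexity) can be sketched like this. Everything here is an illustrative assumption - the 25-point weight, the 10%-per-complexity-point deduction, and the mapping of answers to booleans are made up for the sketch, not a recommendation:

```python
# Sketch of the scoring scheme described above: each yes/no question is
# worth the same number of points (lost if the problem occurred), then a
# percentage is deducted per complexity point. The weights (25 points,
# 10% per complexity point) are illustrative assumptions.

def project_score(problems, complexity):
    """problems: list of bools, True if that problem occurred
    (questions 1-3 plus the QA-rounds question mapped to a bool).
    complexity: 0-5, with 5 the most complex."""
    points_per_question = 25
    base = sum(points_per_question for occurred in problems if not occurred)
    deduction = 0.10 * complexity  # -10%, -20%, ... as described
    return base * (1 - deduction)

# A clean, simple project scores full marks.
print(project_score([False, False, False, False], 0))  # 100.0
# One problem on a complexity-2 project.
print(project_score([False, True, False, False], 2))
```

One thing this makes visible: deducting a percentage for complexity punishes complex projects harder, whereas you may actually want the opposite (complexity *softening* the penalty for problems). That choice of sign may be why the results felt off.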

Any help would be hugely appreciated. Does anyone else on here have to review projects/developers?

Ant

1 Like

Hello @antread and Welcome!

The relationship between people on project teams has multiple facets. In my opinion, project role and rapport are among the more important. I believe this has become more important in recent years as companies adopt and practice more agile methods of product development. Regardless of methodology, a project team is a set of people working to deliver good products.

The set of questions above seem narrowly focused on the evaluation of a product. This may not provide a complete picture of any one person on that team. While I believe you have experienced or perceived relaxed attitudes towards quality, quality is rarely held solely in the hands of those who produce the product.

I have seen requirements that are unclear. How might that impact a performance review? I have seen defects logged that were inaccurate or incorrect. How might that impact a performance review?
If the product producers were assigned a similar task of reviewing the testers, might there be bias present when they are asked for documentation too often, feel humiliated by persistent bugs, or are assessed on perceptions of complexity?

I believe your experience could be perceived as a single data point on which to build relationships, commitments, and better products. I encourage you to work with managers (a test manager, a development manager, and others) to explore the behaviors you perceive. Additionally, ask for feedback from them.

In my experience, these kinds of results sometimes occur when a project team has not collaborated on what to build, agreed on a delivery pace, or established some rapport. Yes, the experience is frustrating. Step back, learn, share, and grow.

Your thoughts?

Joe

2 Likes

I find it troubling to think that the answer to a poorly designed review process is more reviews.

I find it more troubling that the reviews are peer reviews. Even setting aside the emotional content, the reviews can, in my opinion, be seen as an attack on a programmer's abilities and style. Even taken at face value, I don't see the benefit of such a review. There's nothing there that will motivate a sloppy programmer to pay more attention to detail.

You can couple that opinion with the fact that your current questions may all be answered by a halfway decent issue tracking system and good revision numbering. This is something that can come from management.

I'd like to give advice about how to address the main problem in a more productive way, but I don't know your team or how they work, so any advice I give will be wrong. I would encourage your team to start thinking more like a team rather than individuals working for that good performance review… but how to get there, quite frankly, is beyond me.

2 Likes

It's hard to comment confidently without knowing your team. With regards to question 1, that doesn't sound like the team is functioning as such, and it sounds like the QAs are isolated from the devs - you mention "firing poor code over the fence". If you're in the kick-off meetings, you should be able to identify what you need at the beginning, and if you're working closely with the team, it should be an ongoing conversation. Question 2 should be a private conversation with the dev and, where possible, you should treat it as a coaching opportunity.

Sounds like your company could do with addressing how their QAs and devs work together first - it feels a bit antagonistic, and a bit 'old fashioned'. I wouldn't like anything I write to affect a dev's career, unless it's in a positive way. Maybe ask the company to treat themselves to a few copies of https://www.amazon.co.uk/Agile-Testing-Practical-Addison-Wesley-Signature-ebook/dp/B001QL5N4K/ref=sr_1_2?adgrpid=67945719317&gclid=CjwKCAiAxMLvBRBNEiwAKhr-nP2H8DrfAu3_dlwR2rxcYVlmBk0i_VXz0SYGB3zPs1x_Kc8Io1I8qhoCMcIQAvD_BwE&hvadid=338670459313&hvdev=c&hvlocphy=1006601&hvnetw=g&hvpos=1t1&hvqmt=e&hvrand=12844518583384373344&hvtargid=aud-615115051421%3Akwd-308686526521&hydadcr=24399_1748859&keywords=janet+gregory&qid=1576054385&sr=8-2

2 Likes

There is so much about your question that sets all sorts of alarm bells ringing with me. Christian (above) put it most kindly when he said your company sounds "old fashioned". The profession has spent a long time moving away from the concept of QA - and we'll temporarily ignore the huge debates we've had over what we mean by "QA" - as "gatekeepers" of quality, and you're describing a situation where you aren't only gatekeepers of quality, but you're part of the performance management system for devs as well!

I've been doing testing for something like 25 years and have worked in a number of different roles and sectors, but the structure and situation you describe is something I've never encountered anywhere, and I've only heard about similar situations in management case studies. Perhaps the best advice I can give is, once your immediate concerns are resolved, to spend some time exploring the resources in this forum and in the blog feeds that the MoT site gives you access to. After you've spent some time doing that, you perhaps ought to think about pointing out particular examples of best practice to some of the decision makers in your organisation/company, with a view to looking seriously at your working arrangements and thinking about how they could be improved. Because from what you say, it sounds to me as though improvement is a) possible, and b) very necessary.

3 Likes

Thank you all for your feedback. You have basically hit the nail on the head: the company and process at the moment are hugely flawed. Reading your replies has made it clear why I had doubts about what I was tasked to do in the first place and started to seek guidance.

I'll give some background to the situation. Our company is based in the UK, and the core of the dev and IT team were all based in the same office and had been working together for over 8 years. Myself and the senior dev team have been together for over 10. We have a very close relationship and a refined process, in which none of what I mentioned above occurred. The devs were very aware of what was needed on handover, and we had regular meetings and chats to fill in any gaps or bounce things back and forth.

8 months ago our company was purchased by an overseas competitor. They unfortunately made a large chunk of our UK-based team redundant… then some left later due to all the changes. The parent company has everyone, bar sales and consultants, based in the same office. What's frustrating is that although they purchased us, they are very much behind us in terms of process and IT maturity. They also don't like to pay for things if they can help it, meaning helpful tools are difficult to get sign-off for. For instance, we have gone from using Jira to manage all our workflow, bugs, features, etc. to using an internal tool of theirs which is basically a glorified time/task tracker plus spreadsheets. Very frustrating and tricky to manage. I'm pushing for change, but it's tricky to convince an entire company who have been doing things this way for 10+ years.

The reason I was asked to put together some sort of review is that, being honest… some of their developers are not the greatest, and they have tried various ways to get them to conform and raise their standards without luck. This is not helped by the fact that they seem to hire cheaply, usually from Asia, meaning there can be large language barriers, which doesn't help when requesting information. Most of their developers have no previous experience but are expected to hit the ground running and are thrown in at the deep end, so management is partly to blame as well. From a QA point of view, we quite often waste huge amounts of time running into the problems laid out in the initial questions above. None of this is helped by the fact that we are 5 hours ahead of them, so if we hit issues in the morning we have to wait several hours for them to come online.

This has been happening for several months, and we have tried to speak to the developers in question, but again they have one eye on their performance reviews, so the mentality is still to get as much out as quickly as possible. I raised this with the CTO, and that's when the request for some sort of project review came about, to work alongside their performance reviews. Basically, the CTO can then look at what and how much they released, but also how smoothly it went and what could be done better. I assume the plan would be for these reviews to show whether devs repeatedly make the same mistakes… whether that's never giving enough documentation or not checking their own work before handing it over.

I 100% agree that the negative spin this could take if used wrongly makes it a bad choice, and I have mentioned to my CTO that I think we should change this. I suppose we could use it internally, for QA only, to give an indication of who needs coaching and where?

Let me know your thoughts. Really good to put this on paper and see what the opinion is on this kind of thing.

Sorry, I'm going off on a tangent rather than answering your question.

Your employers have employed you, and are paying you, to help them identify where quality can be improved (or to spot where it's absent). This is because they don't have the time, the experience, or the knowledge to do this themselves. If you have a way of improving the way your teams work, they should at least listen to you, the expert, even if they then reject your ideas.

3 Likes

antread, now a lot of things fall into place.

Any sort of remote-ownership situation has the potential for this sort of problem; I've seen similar when the owners were only 120 miles away. OTOH, I've also seen situations where remote dev teams can work effectively with the 'home' team, but in that case the remote team was set up from the start to work with international clients and put a lot of effort into communications; so much so that we were able to work one of the purest forms of full Agile I've experienced, with our local and remote teams almost fully integrated!

It sounds as though the owners are pretty hands-off. I also take away the implication that the owners and the remote team are in similar parts of the world, and that might have even had a bearing on the choice of remote team. The way you speak of your CTO suggests that they are the only line of communication up to the ownership level; and even then, they may have only limited influence on decisions taken remotely, if at all.

Ultimately, the way of working that they have imposed is going to impact the bottom line in some way or another. I can only suggest that you keep your eyes open for ways to improve the overall situation and develop a sense for the balance of costs versus opportunities so that you can suggest improvements that save, or better make, money. Alternatively, it may be possible to identify and suggest minor changes that will incrementally bring about positive change.

If the owners really cannot see that even a cost-neutral change may generate benefits in terms of getting a better product to market more quickly, or cutting down on rework (and rework is rework, whether it's performed incrementally or in one big chunk as part of a QA "cycle"; it still costs money), then there is little hope for them.

1 Like

It's actually the other way around: one of the directors is sooo hands-on it can be difficult. He knows just enough to form strong opinions, and the rest he googles. It's incredibly difficult to get him to change his mind on anything unless he's absolutely proven wrong. The CTO is good, but backs down instantly if there is any question from the higher-ups, which makes it incredibly frustrating and feels like your corner is not always being fought.

Micromanagement is off the charts, sadly, and there are meetings for the sake of meetings. That is slowly getting better as everyone has pushed back, so at least we can get change if enough of us shout loudly. We also have a new product manager who is making changes and is not afraid to raise concerns when needed, which is good. Hopefully we can start to put forward process changes.

When I spoke to the director about my concerns with this being too negative, he responded that being a little negative is okay. They want to push the developers to advance and improve: make sure it's not too negative, but allow some competition and the ability to shed light on issues.

So I'm kind of back to square one. They still want this type of review, even if it's just short-term until the devs change and stop making the same mistakes. So I need to revisit the questions and scoring and see if I can make something work. :tired_face:

1 Like

Suggesting improvements sounds like a good way forward, possibly for all concerned. Just flagging up less competent colleagues (wherever they are located) only replaces one problem with another. If you can offer an inexpensive solution, then there's the possibility of getting wins all round.

If your 'hands-on' director is fond of googling and using online sources, perhaps influencing their searches or even suggesting helpful sites might enable you to steer them onside. It doesn't matter if they see something you suggest and then run with it as if it were their own idea from something "they discovered", if it gets the right results. Online training tools might be an acceptable way to achieve this; Pluralsight could be cost-effective.

2 Likes

Forgive me all for chiming in, but I'm confused about something. Maybe it's due to my ignorance, in which case I beg your patience. If developers are writing code which is failing abysmally time and time again, why is it bad to be 'negative' with them? Why is quantifying a systemic problem with hard data "replacing one problem with another"? Isn't that data part of the testing narrative, which it's our job to tell? Think about it from a testing perspective: if projects I tested consistently had major bugs in them after I was done, would no one want to tell me because they were afraid of being negative towards me?

Maybe you could reframe your original criteria in such a way as to remove any subjectivity: i.e. find a consistent way to determine whether new code objectively fails to satisfy a specific requirement communicated via your project management/issue tracking system.

This is a very good question.
Personally, I choose to be more positive for a number of reasons.

  • Because I knew a junior developer 10 years ago who was consistently bad. Today, he's one of the better programmers I know. If we had sacked him rather than supported him, the world would have lost a good one.
  • Because positive reinforcement is more effective than negative reinforcement ("You screwed up" is less effective than "How can we improve?").
  • Because adding more reviews to the process can be a sign of a toxic culture (reference: My opinion)
  • Because pointing out the symptom does not necessarily show the cause. Pointing out who is messing up is showing the symptom, while the cause may be management forcing the mistakes.

That being said, there are definitely times and places when negative feedback seems like the best option (again, my opinion). In some cases, there may be no other realistic option.

3 Likes

Hi antread,

chris_dabnor offers some good advice here, and I'd like to second it.

In my view, the scoring task in your question is a symptom of the problem, and chris's "tangent" describes the main problem: the teams are not working effectively. Constructive suggestions for improvement should be welcome in a healthy environment, and there may be benefits if you can get your managers to consider team structure and management.

Example 1: Tuckman's stages of group development (forming, storming, norming, and performing) were first described in 1965, and they still explain what happens when working groups are reorganized. Tuckman's stages of group development - Wikipedia

Example 2: "Self-managed teams" offer a way to align decision making with the groups doing the affected work. Pushing decisions down the hierarchy (where appropriate and with oversight) can lead to quicker, better decisions. This is especially true when management is not present on site, or when they are new to the technology or processes in use. Can you get the director to google "self-managed teams"? Can your product manager advocate for such process changes?

My perspective comes from a variety of development roles: as a contributor (requirements, testing, coding), a manager, and a member of a "self-managed team".

Good luck,

George

1 Like

For metrics, keep it simple:
time spent by tester (T) divided by bugs found (B), then multiplied by bugs found times two.
E.g. 10 mins (T) / 2 bugs (B) = 5; 5 x (2 x 2) = 20 mins.
Fewer bugs will always mean less time.

Make it about the test team's loss of productivity, not about the developers.

However, I think everyone is saying the same thing.
a) This won't solve the problem.
b) It won't make the developers better at cutting code.

My suggestion would be to see if the developers have a quality champion (QC), or to suggest that one would help; if you are leading the test team, then it is you and the QC who should have regular weekly/fortnightly catch-ups.

If they do not have a QC, then find out who the devs' team leader/boss is - they should be a QC.
Set up a meeting with you, your boss and them. Talk about saving their devs' time, not the test team's time, and show the devs' boss the benefits to them of improving quality, such as how many bugs were reported and the estimated time their team spent fixing bugs rather than cutting new code.

1 Like

I've recently changed the way my teams work to try to improve the quality of the work being passed to the testers, after the fault/failure numbers and released bugs started to creep up. We're now tasking up any faults/failures so that I can count the number of times work items cycle between dev and test. I've also set some team objectives based on the number and severity of bugs found post-release, the percentage of work items that cycle between dev and test more than once, etc., so that there's an incentive for dev and test to work closely together - objectives influence salary increments and bonuses. As manager of dev teams that contain both devs and testers, I'm also spot-checking 'completed' items to ensure test quality is up to standard (some days are depressing) - my background is as a dev who transferred to test, so I've been on both sides of the fence.
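For anyone who wants to derive the same numbers from their own tracker, here is a minimal sketch of counting dev/test cycles from a work item's status history. The status names and the list-of-strings data shape are hypothetical assumptions for illustration, not any particular tool's export format:

```python
# Sketch: count how often a work item bounced from test back to dev,
# given an ordered status history. Status names ("dev", "test") and the
# list-of-strings format are hypothetical; adapt to whatever your
# tracker actually exports.

def dev_test_cycles(history):
    """Number of test -> dev bounces in one work item's status history."""
    return sum(1 for prev, curr in zip(history, history[1:])
               if prev == "test" and curr == "dev")

def percent_bounced(histories):
    """Percentage of work items that went back to dev at least once."""
    bounced = sum(1 for h in histories if dev_test_cycles(h) >= 1)
    return 100.0 * bounced / len(histories)

items = [
    ["dev", "test", "done"],                 # passed first time
    ["dev", "test", "dev", "test", "done"],  # one bounce back to dev
]
print(dev_test_cycles(items[1]))  # 1
print(percent_bounced(items))     # 50.0
```

The per-item count feeds the "cycles between dev and test" objective, and the percentage gives the "more than once" team-level figure.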

1 Like