I wonder if anyone can help me. We have had a fair number of devs firing poor code over the fence to QA. This mostly comes down to the fact that their end-of-year performance reviews are based on shipped projects/features/fixes, so it's very obvious that it's quantity over quality from the developers.
Now some are better than others and prefer to get good code out the door, but some do not. As a result I'm being tasked with creating a review that the tester fills out once a project has passed QA. The idea is to create a score for each project, and from that we can pull an average for each developer to get an idea of what they are doing wrong.
This needs to be incredibly simple and quick to fill out. Currently I have the following questions:
Did you have to contact the dev for more info before you could start? E.g. no documentation, missing branch name, feature not loading…
Were there obvious bugs spotted that should have been seen by dev before handing over to QA?
Were things sent back for retest only to find they were not actually fixed, or only partially fixed?
How many full QA rounds did it take to pass? 1-3 / 4-6 / 7+
Complexity: 0 - 5 (5 being most complex)
What are your thoughts on these questions? I need to find a way to score the developers, and also to use complexity to adjust the scores slightly depending on how complex the project was.
I tried giving each question the same points, assigning a percentage to the complexity (-10%, -20% etc.) and taking that off the score, but I have not been happy with the results.
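For what it's worth, here is a minimal sketch in Python of one way to combine those questions with a complexity adjustment. Everything in it is an assumption for illustration only: the equal 25-point weighting, the 10%-per-level complexity discount, and the choice to apply the discount to the points *lost* rather than to the score itself (so complex projects are forgiven some of their misses rather than penalised further).

```python
# Hypothetical scoring sketch. Each of the four questions carries equal
# weight; complexity "gives back" a share of the points lost, so complex
# projects are penalised less for the same misses. All weights are
# illustrative assumptions, not a recommendation.

def project_score(needed_contact, obvious_bugs, bad_retests,
                  rounds_bucket, complexity):
    """Return a 0-100 score; higher means a smoother handover.

    rounds_bucket: 0 for 1-3 QA rounds, 1 for 4-6, 2 for 7+.
    complexity:    0-5, with 5 the most complex.
    """
    score = 0.0
    score += 25.0 if not needed_contact else 0.0
    score += 25.0 if not obvious_bugs else 0.0
    score += 25.0 if not bad_retests else 0.0
    score += 25.0 * (1.0 - rounds_bucket / 2.0)  # 25 / 12.5 / 0 points

    lost = 100.0 - score
    score += lost * 0.10 * min(complexity, 5)    # complexity discount
    return round(score, 1)

# One obvious-bug miss on a low-complexity project:
print(project_score(False, True, False, 0, 1))  # 77.5
```

Tuning the discount (or replacing the linear percentage with per-level bands) is where I suspect the unhappiness with the earlier results came from, so it is worth experimenting with those numbers against a few past projects before rolling anything out.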
Any help would be hugely appreciated. Does anyone else on here have to review projects/developers?
The relationship between people on project teams has multiple facets. In my opinion, project role and rapport are among the more important. I believe this has become more important in recent years as companies adopt and practice more agile methods of product development. Regardless of methodology, a project team is a set of people working to deliver good products.
The set of questions above seems narrowly focused on evaluating a product. That may not provide a complete picture of any one person on the team. While I believe you have experienced or perceived relaxed attitudes towards quality, quality is rarely held solely in the hands of those who produce the product.
I have seen requirements that are unclear. How might that impact a performance review? I have seen defects logged that were inaccurate or incorrect. How might that impact a performance review?
If the product producers were assigned a similar task for product testers, might bias creep in when they are asked for documentation too often, feel humiliated by persistent bugs, or disagree with assessed perceptions of complexity?
I believe your experience could be perceived as a single data point on which to build relationships, commitments, and better products. I encourage you to work with managers (a test manager, a development manager, and others) to explore the behaviors you perceive. Additionally, ask for feedback from them.
In my experience, these kinds of results sometimes occur when a project team has not collaborated on what to build, agreed on a delivery pace, or established some rapport. Yes, the experience is frustrating. Step back, learn, share, and grow.
I find it troubling to think that the answer to a poorly designed review process is more reviews.
I find it more troubling that the reviews are peer reviews. Even taking out the emotional content, the reviews can, in my opinion, be seen as an attack on a programmer's abilities and style. Even taken at face value, I don't see the benefit of such a review. There's nothing there that will motivate a sloppy programmer to pay more attention to detail.
You can couple that opinion with the fact that your current questions may all be answered by a halfway decent issue tracking system and good revision numbering. This is something that can come from management.
I'd like to give advice about how to address the main problem in a more productive way, but I don't know your team or how they work, so any advice I give will be wrong. I would encourage your team to start thinking more like a team rather than individuals working for that good performance review… but how to get there, quite frankly, is beyond me.
It's hard to comment confidently without knowing your team. With regards to question 1, that doesn't sound like the team is functioning as such, and it sounds like the QAs are isolated from the devs - you mention "firing poor code over the fence". If you're in the kick-off meetings, you should be able to identify what you need at the beginning, and if you're working closely with the team, it should be an ongoing conversation. Question 2 should be a private conversation with the dev, and, where possible, you should be looking at it as a coaching opportunity.
There is so much about your question that sets all sorts of alarm bells ringing with me. Christian (above) put it most kindly when he said your company sounds "old fashioned". The profession has spent a long time moving away from the concept of QA - and we'll temporarily ignore the huge debates we've had over what we mean by "QA" - as "gatekeepers" of quality, and you're describing a situation where you aren't only gatekeepers of quality, but you're part of the performance management system for devs as well!
I've been doing testing for something like 25 years and have worked in a number of different roles and sectors, but the structure and situation you describe is something I've never encountered anywhere else; I've only heard about similar situations in management case studies. Perhaps the best advice I can give is, once your immediate concerns are resolved, to spend some time exploring the resources in this forum and in the blog feeds that the MoT site gives you access to. After you've done that, you perhaps ought to think about pointing out particular examples of best practice to some of the decision makers in your organisation/company, with a view to looking seriously at your working arrangements and thinking about how they could be improved. Because from what you say, it sounds to me as though improvement is a) possible, and b) very necessary.
Thank you all for your feedback. You have basically hit the nail on the head: the company and process at the moment are hugely flawed. Reading your replies has made it clear why I had doubts about what I was tasked to do in the first place and why I started to seek guidance.
I'll give some background to the situation. Our company is based in the UK, and the core of the dev and IT team were all based in the same office and had been working together for over 8 years; the senior dev team and I have been together for over 10. We have a very close relationship and a refined process in which none of what I mentioned above occurred. The devs were very aware of what was needed on handover, and we had regular meetings and chats to fill in any gaps or bounce things back and forth.
8 months ago our company was purchased by an overseas competitor. They unfortunately made a large chunk of our UK-based team redundant… then some left later due to all the changes. The parent company has everyone, bar sales and consultants, based in the same office. What's frustrating is that, although they purchased us, they are very much behind us in terms of process and IT maturity. They also don't like to pay for things if they can help it, meaning helpful tools are difficult to get sign-off for. For instance, we have gone from using Jira to manage all our workflow, bugs, features etc. to using an internal tool of theirs which is basically a glorified time/task tracker plus spreadsheets. Very frustrating and tricky to manage. I'm pushing for change, but it's tricky to convince an entire company who have been doing things this way for 10+ years.
The reason I was asked to put together some sort of review is that, being honest… some of their developers are not the greatest, and they have tried various ways to get them to conform and raise their levels without luck. This is unfortunately not helped by the fact that they seem to hire cheaply, usually from Asia, meaning there can be large language barriers, which doesn't help when requesting information. Most of their developers have no previous experience but are expected to hit the ground running and are thrown in at the deep end, so management is partly to blame as well. From a QA point of view, we quite often waste huge amounts of time running into the problems laid out in my initial questions above. None of this helps when we are 5 hours ahead of them, so if we hit issues in the morning we have to wait several hours for them to come online.
This has been happening for several months, and we have tried to speak to the developers in question, but again they have one eye on their performance reviews, so the mentality is still to get as much out as quickly as possible. I raised this with the CTO, and that's when the request for some sort of project review came about - to work alongside their performance reviews, so the CTO can see not just what and how much they released, but also how smoothly it went and what could be done better. I assume the plan is for these reviews to show whether devs repeatedly make the same mistakes… whether that's never giving enough documentation or not looking at their own work before handing it over.
I 100% agree that the negative spin this could take if used wrongly makes it a bad choice, and I have mentioned to my CTO that I think we should change this. I suppose we could use it internally, for QA only, to give an indication of who needs coaching and where?
Let me know your thoughts. Really good to put this on paper and see what the opinion is on this kind of thing.
Sorry, I'm going off on a tangent rather than answering your question.
Your employers have employed, and are paying, you to help them identify where quality can be improved upon (or spot where it's absent). This is because they don't have the time, or the experience, or the knowledge to do this themselves. If you have a way of improving the way your teams work, they should at least listen to you, the expert, even if they then reject your ideas.
Any sort of remote ownership situation has the potential for this sort of problem; I've seen similar when the owners were only 120 miles away. OTOH, I've also seen situations where remote dev teams can work effectively with the "home" team, but in that case the remote team was set up from the start to work with international clients and put a lot of effort into communications; so much so that we were able to work one of the purest forms of full Agile I've experienced, with our local and remote teams almost fully integrated!
It sounds as though the owners are pretty hands-off. I also take away the implication that the owners and the remote team are in similar parts of the world, and that might have even had a bearing on the choice of remote team. The way you speak of your CTO suggests that they are the only line of communication up to the ownership level; and even then, they may have only limited influence on decisions taken remotely, if at all.
Ultimately, the way of working that they have imposed is going to impact the bottom line in some way or another. I can only suggest that you keep your eyes open for ways to improve the overall situation and develop a sense for the balance of costs versus opportunities so that you can suggest improvements that save, or better make, money. Alternatively, it may be possible to identify and suggest minor changes that will incrementally bring about positive change.
If the owners really cannot see that even a cost-neutral change may generate benefits in terms of getting a better product to market more quickly, or cutting down on rework (and rework is rework, whether it's performed incrementally or in one big chunk as part of a QA "cycle"; it still costs money), then there is little hope for them.
It's actually the other way around: one of the directors is sooo hands-on it can be difficult. He knows just enough to form strong opinions, and the rest he googles. It's incredibly difficult to get him to change his mind on anything unless he is absolutely proven wrong. The CTO is good but backs down instantly if there is any question from the higher-ups, which is incredibly frustrating and means it feels like your corner is not always being fought.
Micromanagement is off the charts sadly, and there are meetings for the sake of meetings. That is slowly getting better as everyone has pushed back, so at least we can get change if enough people shout loudly. We also have a new product manager who is making changes and is not afraid to raise concerns when needed, which is good. Hopefully we can start to put forward process changes.
When I spoke to the director about my concerns that this is too negative, he responded that being a little negative is okay. They want to push the developers to advance and improve: make sure it's not too negative, but allow some competition and the ability to shed light on issues.
So I'm kind of back to square one. They still want this type of review, even if it's just short-term until the devs change and stop making the same mistakes. So I need to review the questions and scoring and see if I can make something work.
Suggesting improvements sounds like a good way forward, possibly for all concerned. Just flagging up less competent colleagues (wherever they are located) only replaces one problem with another. If you can offer an inexpensive solution, then thereās the possibility of getting wins all round.
If your "hands-on" director is fond of googling and using online sources, perhaps influencing their searches or even suggesting helpful sites might enable you to steer them on-side. It doesn't matter if they see something you suggest and then run with it as if it was their own idea from something "they discovered", if it gets the right results. Online training tools might be an acceptable way to achieve this; Pluralsight could be cost-effective.
Forgive me all for chiming in, but I'm confused about something. Maybe it's due to my ignorance, in which case I beg your patience. If developers are writing code which is failing abysmally time and time again, why is it bad to be "negative" to them? Why is quantifying a systemic problem with hard data "replacing one problem with another"? Isn't that data part of the testing narrative, which it's our job to tell? Think about it from a testing perspective. If projects I tested consistently had major bugs in them after I was done, would no-one want to tell me because they were afraid to be negative to me?
Maybe you could reframe your original criteria in such a way as to remove any subjectivity: i.e. finding a consistent way to determine if new code objectively fails to satisfy a specific requirement communicated via your project management/issue tracking system.
This is a very good question.
Personally, I choose to be more positive for a number of reasons.
Because I knew a junior developer 10 years ago who was consistently bad. Today, he's one of the better programmers I know. If we had sacked him rather than supported him, the world would have lost a good one.
Because positive reinforcement is more effective than negative reinforcement ("You screwed up" is less effective than "How can we improve?").
Because adding more reviews to the process can be a sign of a toxic culture (reference: My opinion)
Because pointing out the symptom does not necessarily show the cause. Pointing out who is messing up is showing the symptom, while the cause may be management forcing the mistakes.
That being said, there are definitely times and places when negative feedback seems like the best option (again, my opinion). In some cases, there may be no other realistic option.
chris_dabnor offers some good advice here, and I'd like to second it.
In my view, the scoring task in your question is a symptom of the problem, and chris's "tangent" describes the main problem: the teams are not working effectively. Constructive suggestions for improvement should be welcome in a healthy environment, and there may be benefits if you can get your managers to consider team structure and management.
Example 1: Tuckman's stages of group development (forming, storming, norming, and performing) were first described in 1965, and they still explain what happens when working groups are reorganized. Tuckman's stages of group development - Wikipedia
Example 2: "Self-managed teams" offer a way to align decision-making with the groups doing the affected work. Pushing decisions down the hierarchy (where appropriate and with oversight) can lead to quicker, better decisions. This is especially true when management is not present on site, or when they are new to the technology or processes in use. Can you get the director to google "self-managed teams"? Can your product manager advocate for such process changes?
My perspective comes from a variety of development roles: as a contributor (requirements, testing, coding), a manager, and a member of a "self-managed team".
For metrics, keep it simple: time spent by the tester (T), divided by bugs found (B), then multiplied by twice the bugs found.
E.g. 10 mins (T) / 2 bugs (B) = 5; 5 × (2 × 2) = 20 mins.
Fewer bugs will always equal less time. Make it about the test team's loss of productivity, not about the developers.
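As a literal transcription of that arithmetic (the zero-bug guard is my own assumption, since the original divides by bugs found):

```python
# Hypothetical transcription of the metric above: time per bug,
# then multiplied by twice the number of bugs found.

def qa_cost_minutes(tester_minutes, bugs_found):
    if bugs_found == 0:
        return 0.0  # assumption: no bugs means no rework cost to charge
    per_bug = tester_minutes / bugs_found
    return per_bug * (bugs_found * 2)

print(qa_cost_minutes(10, 2))  # 20.0, matching the worked example
```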
However, I think everyone is saying the same thing.
a) This won't solve the problem.
b) It won't make the developers better at cutting code.
My suggestion would be to see if the developers have a quality champion (QC), or to suggest that one would help; if you are leading the test team, then it is you and the QC who should have regular weekly/fortnightly catch-ups.
If they do not have a QC, then find out who the devs' team leader/boss is - they should be a QC.
Set up a meeting with you, your boss and them. Talk about saving their devs' time, not the test team's time, and show the devs' boss the benefits to them of improving quality, such as how many bugs were reported and the estimated time their team spent fixing bugs rather than cutting new code.
I've recently changed the way my teams work to try to improve the quality of the work being passed to the testers, after the fault/failure numbers and released bugs started to creep up. We're now tasking up any faults/failures so that I can count the number of times work items cycle between dev and test. I've also set some team objectives based on the number and severity of bugs found post-release, the percentage of work items that cycle between dev and test more than once, etc., so that there's an incentive for dev and test to work closely together - objectives influence salary increments and bonuses. As manager of dev teams that contain both devs and testers, I'm also spot-checking "completed" items to ensure test quality is up to standard (some days are depressing). My background is a dev who transferred to test, so I've been on both sides of the fence.
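For anyone wanting to automate that cycle count, here is a rough sketch. The status names and the shape of the history are my own assumptions; any real tracker's changelog will differ, so treat this as an outline rather than a drop-in.

```python
# Hypothetical sketch: count how often a work item bounces back from
# test to development, given its status history in chronological order.

def dev_test_cycles(status_history):
    """Count test -> dev transitions (rejections back to development)."""
    return sum(
        1
        for prev, curr in zip(status_history, status_history[1:])
        if prev == "in_test" and curr == "in_dev"
    )

history = ["in_dev", "in_test", "in_dev", "in_test", "done"]
print(dev_test_cycles(history))  # 1 bounce back to dev
```

Aggregating that count per work item (and flagging anything over one cycle) gives exactly the "percentage of work items that cycle between dev and test more than once" objective without anyone filling in a form.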