What are your go-to resources for "What to look for in a code review?"
I'm thinking of a range of audiences, from people who've never contributed to a code review before to people who have but maybe feel like they're missing something.
I'm not in a position where I do code reviews currently, but in the past I always looked at the code review comments for any item I was going to test. I checked not only that what had been flagged up had actually been fixed, but also what kinds of problems were being found. This informed my testing, gave context to changes I might not otherwise have realised were happening, and sometimes revealed gaps in the PBI/story/item that needed more attention.
I had to fight hard to get involved in code reviews. The initial reaction of the developers in the team was: "Why would you, as a tester, be part of 'our' code review?" Glad I persisted. As @meowy24 said, it helps to inform your testing, especially when it comes to exploratory testing. I also managed to nip a few bugs in the bud. Some of the things I look out for, but not limited to:
One of the main things I very specifically do NOT look for is code syntax. That should be taken care of by code linters. It is a waste of a reviewer's time to go through the code checking whether it is written in the right syntax; a program is far quicker and more accurate at that. If your organisation is not using such tooling, please push and advocate for it. It's a quick win with a massive impact.
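As a small illustration of how cheaply a machine can do this work, here is a sketch using Python's standard-library `ast` module as a stand-in for a linter's syntax pass (the snippets and the `syntax_ok` helper are invented for the example; a real linter does much more):

```python
import ast

def syntax_ok(source: str) -> bool:
    """Return True if the source parses as valid Python syntax."""
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False

# Hypothetical snippets a human reviewer should never have to check by eye:
good = "def add(a, b):\n    return a + b\n"
bad = "def add(a, b)\n    return a + b\n"  # missing colon

print(syntax_ok(good))  # True
print(syntax_ok(bad))   # False
```

Running this kind of check in CI before a review even opens means the humans only ever see code that at least parses.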
As a closing remark for everyone involved in code reviews: please keep them respectful, which also makes them more useful.
I do not currently understand most of the code my team is writing. I'm OK with that in the short term; I'm doing a short course (it's an 8-hour course, which I have taken 8 weeks to not complete). I became a scripter and stopped coding a long time back, so most of the newer languages are hard for me to read.
But one thing I do know is code smell. Code smell is where bugs lie in the code but the devs don't want to fix them, and it looks very similar in all languages. As the non-coder in my team, I politely ask "what does that code do?" rather than saying "that code looks ugly". If you spotted the right fragment of code, most of the time the developer will reply: "that code is ugly". And there you have it. How to spot code smell? You can google that; it's all the things that @pmichielsen pointed out above.
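To make "politely ask what that code does" concrete, here is an invented example of the sort of fragment that tends to trigger the question, next to a version that no longer needs defending (both functions and the rate table are made up for illustration; they compute the same discounts):

```python
# A fragment that often earns the question "what does that code do?":
def discount_smelly(price, kind):
    if kind == "gold":
        if price > 100:
            return price - price * 0.2
        else:
            return price - price * 0.1
    else:
        if kind == "silver":
            if price > 100:
                return price - price * 0.1
            else:
                return price - price * 0.05
        else:
            return price

# The same behaviour, stated as data instead of nesting:
RATES = {("gold", True): 0.2, ("gold", False): 0.1,
         ("silver", True): 0.1, ("silver", False): 0.05}

def discount_clear(price, kind):
    rate = RATES.get((kind, price > 100), 0.0)
    return price * (1 - rate)
```

Nothing about the smelly version is wrong; it just invites the "what does that do?" conversation, which is exactly the opening a non-coding reviewer needs.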
Show interest, but you don't need to participate directly, and always eavesdrop on your team so you can pick up clues about functionality that may contain bugs. This is much easier if your desk is right amongst them (harder in lockdown, I know, but still possible to do online).
Here is an excellent resource on code reviews by Angie Jones. It is NOT about low-level coding details like "use descriptive names for your variables". It's mainly about the people, or social, aspects of code reviews. But those aspects are very important and apply to anyone who does code reviews (even developers, even highly experienced developers at famous companies). I am sure there are plenty of resources on the technical aspects of code reviews, but I doubt those can be covered in one or two short articles.
Wrote this back in 2014 and mostly it's still relevant: http://simon-prior.uk/2014/06/27/overcoming-the-resistance-qa-involvement-in-peer-reviews/
For me it was always about understanding the code enough to know what's going on, so it could influence my testing and give me ideas of what to try.
The output of static analysis and coverage tools, showing where unit tests covered the code and any identified issues, was also good to have at hand when reviewing, as it gave you talking points with the developer.
I'm a programmer, so I possibly have a different perspective on this to testers, although I agree with everything that people have already said.
For me, the most important thing is attitude or respect (as others have said). Without trust and openness, the review is much less effective and you're almost wasting everyone's time. It's an instance where the Golden Rule applies: do to others as you'd like them to do to you. I use a football (soccer) analogy for a decent tackle: go for the ball, not the player. The code / design / test / etc. is under review, not the author.
I agree with others that it's worth remembering a review is part of a bigger picture that also includes compilers, linters, and tests, so focus on the areas where reviews can do things those tools can't. A big part of this is spotting things that are missing, or spotting duplicate code.
I also point out code that's hard to understand (which is related to, but not the same as, code complexity and code structure). If it's hard to understand, it will be hard to debug, hard to change in the future, and hard to know it's correct (which isn't necessarily the same as passing its tests). I realise that not everyone in a review might feel confident enough to say this, e.g. a relatively junior tester, but it might be possible to phrase it as: "Please can you explain this bit of the code to me?" If the programmer has trouble explaining it, that's a warning sign. I know that sometimes understandability needs to be sacrificed for other things, e.g. speed, but that should happen only when it's needed and not as a default (no premature optimisation).
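A toy illustration of "hard to understand" versus "easy to explain" (both versions are invented and do the same thing, which is the point: only one of them survives the "please explain this bit" question):

```python
# Hard to explain in a review:
def f(xs):
    return [x for x in xs
            if not sum(x % i == 0 for i in range(2, x)) and x > 1]

# The same intent, named so the reviewer's question answers itself:
def is_prime(n: int) -> bool:
    """True if n is a prime number (trial division, fine for small n)."""
    if n < 2:
        return False
    return all(n % divisor != 0 for divisor in range(2, n))

def keep_primes(numbers):
    return [n for n in numbers if is_prime(n)]

print(keep_primes([1, 2, 3, 4, 5, 9, 11]))  # [2, 3, 5, 11]
```

Both pass the same tests; only the second one lets a future reader know it's correct without re-deriving it.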
I do point out typos, not because I'm picky, but because our tools are generally stupid. For example, if your system deals with shopping carts and there's a method called AddToCrt rather than AddToCart, then when someone searches for cart-related things later (because, e.g., they're changing how carts work), a search for "cart" won't find AddToCrt, even a case-insensitive one.
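A quick sketch of why that typo bites. The file names and contents below are invented; the search mimics a case-insensitive grep across a codebase:

```python
# Hypothetical file contents; the AddToCrt typo is deliberate.
codebase = {
    "cart.py": "def AddToCart(item): ...",
    "legacy.py": "def AddToCrt(item): ...",  # typo: 'Crt' instead of 'Cart'
}

def files_mentioning(term: str) -> list:
    """Case-insensitive substring search, like `grep -il`."""
    return [name for name, text in codebase.items()
            if term.lower() in text.lower()]

print(files_mentioning("cart"))  # ['cart.py'] - legacy.py is invisible
```

The person changing cart behaviour never learns `legacy.py` exists, which is why the typo is worth a review comment even though the code works.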
I am not a coder and I can barely read any code that's not SQL, but I look at code often. I look for developer comments: large sections of code commented out, TODOs, and other interesting remarks, or just the way some comments are worded, which can show lack of interest, lack of effort, or plain old laziness.
Here's an example of two things I found in code for a web app that's currently live and anyone can access. This was during a practical training class.
Things I tend to do:
Sometimes, for dev code reviewers, some standpoints I like to look at are:
I've not done too many reviews over the past couple of years, but I tend to start with the changeset text.
I'd then consider the unit tests. Do they have decent tests? Are there any gaps jumping out at me? If there are, I'll try to find the code of interest first. Sometimes I've been pulled in to review code I just don't understand. This is where testing, the changeset description, and comments come in.
One key thing that I believe people can overlook is logging and testability. If an edge case is caught and handled, will we know that was the case? Often there are nasty race conditions, and if we don't log that they were handled, how will we test that the handling works?
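A minimal sketch of that point, using Python's standard `logging` module (the `parse_quantity` function and its fallback rule are invented for the example):

```python
import logging

logger = logging.getLogger("orders")

def parse_quantity(raw: str) -> int:
    """Parse a quantity field, falling back to 1 on bad input."""
    try:
        return int(raw)
    except ValueError:
        # Without this line, nobody can tell the fallback ever fired.
        logger.warning("Bad quantity %r, defaulting to 1", raw)
        return 1

print(parse_quantity("3"))     # 3
print(parse_quantity("oops"))  # 1, plus a warning in the logs
```

With the log line, a tester can check the logs to confirm the edge case was actually exercised rather than silently swallowed; without it, the handled path is invisible.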
My best advice for anyone doing code reviews is to ask questions. It is better to ask a dumb question than make a dumb assumption. Also look at the testing notes and unit test coverage. Challenge it if you think it is lacking.
One other thing I like to do, if I'm familiar with the code base, is to look at the task before the code, think of what I'd do, then view the code. If it's different, look at why.
Wow Richard, those 2 rules actually blow away a lot of code review process "fluffiness" and get to the meat of why we code review.
You reminded me that reviewing each change is a bogus concept that doesn't really reduce bug injection rates if we do it as a "merge exercise" only. Even done as one big-bang pass per module, without clear goals, raised visibility, and accompanying testing work it falls short. Code review works best when it is not an isolated activity.
Code/line/decision/… coverage is bullsh*t.
This is why mutation testing exists: an automated review of your unit tests (it covers a lot of the basics needed for very good unit tests). It still requires manual effort to review the code, of course, but it's such a great help.
But yea totally agree!
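The mutation-testing idea can be sketched by hand: change one operator and see whether the unit test notices. Real tools such as mutmut (Python) or PIT (Java) generate and run such mutants automatically; the function and test below are invented for illustration:

```python
def apply_discount(price, rate):
    return price * (1 - rate)

def test_apply_discount():
    assert apply_discount(100, 0.25) == 75

# A hand-made "mutant": the '-' operator flipped to '+'.
def apply_discount_mutant(price, rate):
    return price * (1 + rate)

def mutant_killed() -> bool:
    """Run the test's assertion against the mutant.

    A strong test suite fails (kills) the mutant; a weak one lets it live.
    """
    try:
        assert apply_discount_mutant(100, 0.25) == 75
        return False  # mutant survived: the tests are too weak
    except AssertionError:
        return True   # mutant killed: the tests noticed the change

test_apply_discount()   # original code passes its test
print(mutant_killed())  # True
```

If the test had only asserted `apply_discount(100, 0) == 100`, the mutant would survive, which is exactly the kind of coverage gap that raw line-coverage numbers hide.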
I think one important thing when doing a code review is having a written guide that points out the parts that should definitely be covered. Depending on your circumstances, different points will have high value for your team. For instance, you could have sensitive customer data that you absolutely don't want leaked. Moreover, if everyone does the code review freestyle, some important points might be forgotten.
That said, here is what I would like to have in my code review:
Excellent points @samuelm. I would hope that the written guide for reviewing was a more general guide, e.g. also used by developers. I've worked on systems that dealt with sensitive data as you mention, and leaking it was, as you said, Concern Number 1. When I worked on billing systems there didn't need to be such stress on sensitive data, but correctness of calculations was Concern Number 1 (followed by speed and scalability).
There needs to be a shared understanding of the context - what's important to the customer / user, and what's important internally. This can be used during code review, but also during programming, testing, etc.
First of all, thank you all for posting about this topic, because today it motivated me to start reorganising our review process. So thanks a lot! :heart_eyes:
Anyway, when I was doing my research today about this topic, I stumbled across another interesting thing to consider about reviews:
Be a human:
Also give compliments in a code review where appropriate. For you it might be a small thing, but for the other person it might make their day.
Really loved that, so I thought I'd share it here!