We’re a year into introducing code reviews, and I’ve noticed a steady evolution over that time.
- At first we used no tools: we printed out the code ahead of time, then walked through it with an overhead projector in a formal meeting.
- Then we introduced tools, first Jupiter and then Crucible. We’re still using Crucible.
- We also discovered that static analyzers like Checkstyle helped avoid a lot of more basic issues with the codebases before the review.
- Initially we hoped to do at least one review on a project every week or two.
- Now we aim for one review per Sprint on a project. It goes in as a general task each Sprint. Of course this means you might only review a few classes every month.
- We’ve gradually reduced the number of classes reviewed from 5+ to 2-3 per review.
- As we’ve adjusted to Crucible and its lightweight online style, we’ve dropped almost all the formal meetings unless we need to meet to discuss a design issue that crops up.
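For the static-analysis step mentioned above, even a tiny Checkstyle configuration catches a lot of the basic issues before a review starts. A minimal sketch (the module names are standard Checkstyle checks; which ones you enable is a team decision):

```xml
<?xml version="1.0"?>
<!DOCTYPE module PUBLIC
    "-//Checkstyle//DTD Checkstyle Configuration 1.3//EN"
    "https://checkstyle.org/dtds/configuration_1_3.dtd">
<!-- Checker is the root module; file-level checks go here. -->
<module name="Checker">
  <!-- Reject literal tab characters in source files. -->
  <module name="FileTabCharacter"/>
  <!-- TreeWalker hosts checks that run against the parsed Java AST. -->
  <module name="TreeWalker">
    <!-- Flag imports that are never referenced. -->
    <module name="UnusedImports"/>
    <!-- Flag empty catch blocks, if bodies, etc. -->
    <module name="EmptyBlock"/>
  </module>
</module>
```

Running something like this before the review means the discussion can stay on design and logic rather than formatting nits.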
I feel like we still have a ways to go, but we’re getting a lot of mentoring out of the effort, and I think it helps everyone to know their code will be read by a few other developers at some point; they’re more likely to keep things clean along the way.
Buddy-style review might be the next experiment, or something akin to Kevin Klinemeier’s idea of sitting down with the code under review and making any changes that come up right there in the review session. I really like the idea of completing the review and being done/done with the agreed-to fixes.
In our experience, code reviews have been a gradual experiment, evolving toward a balance between lightweight approaches and thorough reviews. I don’t think we’re there yet, but we’re getting closer.