Code metrics tend to draw strong opinions from developers on both sides of the fence.
Generally, I’m on the side of monitoring metrics, but not breaking the build because of them. I want to know trends, not encourage people to game the system. In other words, I don’t want to “teach to the test”; I want the test to be an accurate representation of what we’re dealing with.
Monitoring Approach
For example, imagine we’ve turned on the monitoring of tags in the codebase. This will count the number of comments that start with `//TODO`, `//FIXME`, and other common tags.
In a world where we only count these, we’ll see very clearly:

- how many tags we have
- how they accumulate over time
- which classes are the worst offenders
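To make the monitoring concrete, here is a minimal sketch of such a counter, assuming a Java-style codebase scanned line by line; the tag list, the `*.java` glob, and the `src` directory are illustrative, not prescriptive.

```python
import re
from collections import Counter
from pathlib import Path

# Illustrative tag list; extend it to match your team's conventions.
TAGS = ("TODO", "FIXME", "HACK", "XXX")
TAG_PATTERN = re.compile(r"//\s*(" + "|".join(TAGS) + r")\b")

def count_tags(root: str) -> Counter:
    """Count tag comments per file under `root` (assumed *.java sources)."""
    counts: Counter = Counter()
    for path in Path(root).rglob("*.java"):
        for line in path.read_text(errors="ignore").splitlines():
            if TAG_PATTERN.search(line):
                counts[str(path)] += 1
    return counts

if __name__ == "__main__":
    by_file = count_tags("src")
    print(f"total tags: {sum(by_file.values())}")
    for name, n in by_file.most_common(5):  # the worst offenders
        print(f"{n:4d}  {name}")
```

Run on a schedule and logged, this gives the trend over time; tracked per file, it points at the worst offenders, all without gating anyone’s commit.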
With this information, we can adjust our approach to conquering technical debt and strike our team’s ideal balance of fixing and creating.
Build-Breaking Approach
If we instead attempt to improve the code by capping the number of `//TODO` tags in our system, our team risks ending up with all sorts of undesirable consequences (a minimal sketch of such a gate follows the list):
- new, valid `//TODO` comments not being added
- `//TODO`s deleted without being fixed
- `//TODO`s turned into much-less-detectable regular comments
- clever wordplay in the form of `//TUDONT` tags introduced
- changesets munged by checking in completely unrelated `//TODO` fixes
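For contrast, a build-breaking gate is usually just the same count compared against a cap, with a nonzero exit status to fail the CI job. This is a rough sketch under that assumption; the cap of 50 and the `tag_counter` module name (the counting sketch above saved to a file) are hypothetical.

```python
import sys

from tag_counter import count_tags  # hypothetical module: the counting sketch above

MAX_TODOS = 50  # illustrative cap; exceeding it fails the build

def check_cap(root: str = "src") -> int:
    total = sum(count_tags(root).values())
    if total > MAX_TODOS:
        print(f"FAIL: {total} tags exceed the cap of {MAX_TODOS}")
        return 1  # nonzero exit status breaks the CI job
    print(f"OK: {total} tags within the cap of {MAX_TODOS}")
    return 0

if __name__ == "__main__":
    sys.exit(check_cap())
```

Notice that every workaround in the list above is a way to make that total go down without the debt actually shrinking: the gate measures the count, not the code.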
Each project, team, and metric category can have special circumstances, but given my experience with the negatives of breaking the build on `//TODO`s, I’ll strongly question the necessity of that approach in the future.