Saturday, November 22, 2014

Code Coverage Theater

From the title of this entry, you might think this is going to be a screed raining down hard on the code coverage parade. Instead, it is some tempering, pragmatic advice drawn from the experiences of a mature agile team.

If there is any bit of heresy here, this is it - code coverage has no intrinsic value. Its value comes only from showing where there is untested code. All of the value in this aspect of development lies in the quality of the unit tests. Maybe leaving code untested is a deliberate calculation, but maybe it is a mistake. As developers, we are pulled in multiple directions all the time. It can take some effort to get back into context, and your uncovered code can help you recover it.

The single best way to use code coverage is to break the build. As Agilists, we are all practicing continuous builds, right? I mean a new build goes off with nearly every code commit, right? That frequency is exactly what you should use for running code coverage checks. When the build breaks due to a fall in coverage, you get the advantage of causality: the latest check-in probably broke the build. (Your thoughtful coworkers never check code into a broken build.) It is very important that all developers be able to run, locally on their own machines, the same coverage checks that run on the continuous build server. That way a build broken by a coverage check should never be a surprise to anyone.

So if we want minimum coverage limits that must be met or the build breaks, what is an appropriate limit? Unfortunately, some misguided managers mandate that 100% line coverage be maintained. In my opinion, this is a very poor way to spend your limited resources.

First, not all code coverage metrics are created equal. A 100% line coverage limit does not address the differences between line and branch coverage. Package and class coverage tell you only very coarse stories about the state of your coverage, but line and particularly branch coverage carry much richer information. You might have 100% line coverage and still have very poor branch coverage. Branch coverage can help you figure out whether you are covering all the independent paths through the code. Do you even pay attention to cyclomatic complexity?
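To make the line-versus-branch distinction concrete, here is a small hypothetical sketch (the class and test are made up for illustration): a single test executes every line of the method, so line coverage reports 100%, yet the false branch of the condition is never taken - a gap that only branch coverage will reveal.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class DiscountsTest {

    // Every line of price() is executed by the one test below, so line
    // coverage reports 100%. The "if" has two branches, though, and the
    // false branch (a non-loyal customer) is never exercised.
    static double price(double base, boolean loyal) {
        double price = base;
        if (loyal) price = base * 0.9; // one line, two branches
        return price;
    }

    @Test
    public void loyalCustomerGetsDiscount() {
        assertEquals(90.0, price(100.0, true), 0.0001);
    }
}
```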

Second, code coverage tools typically run off the Java bytecode, not the actual source itself. This means that code the compiler generates for you may not map to any sensible line in your source code. (Ever wonder why the first curly brace after your class name is shown as uncovered?) You can find discussions of this issue on sites like stackoverflow.com. The practical significance is that you think you have all the code covered, but the coverage tool shows you odd "unreachable" corners in your source that have no coverage.
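As a hypothetical illustration of how this looks in practice (the class below is made up, but the effect is the classic one): the compiler emits bytecode that nothing ever executes, and the coverage tool maps it back to source lines that then show up as uncovered.

```java
// A typical utility class. The private constructor exists only to prevent
// instantiation, but the compiler still emits bytecode for it. Because no
// test ever calls it, a bytecode-based coverage tool maps that bytecode back
// to the constructor's source lines and reports them as uncovered - even
// though there is nothing meaningful there to test.
public final class StringUtil {

    private StringUtil() {
        // never called; appears as an uncovered "line" in the report
    }

    public static boolean isBlank(String s) {
        return s == null || s.trim().isEmpty();
    }
}
```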

Finally, there will be some flows through the code, particularly obscure error paths, that may be hard to test. Do you really want to spend 50% more effort to get that last 10% of coverage? Is this a good use of your money?
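For a concrete (and hypothetical) example of the kind of error path I mean: the catch block below is effectively unreachable on any standard JVM, yet it still counts against line and branch coverage unless you contrive a way to force it.

```java
import java.io.UnsupportedEncodingException;

public class Encodings {

    // UTF-8 is guaranteed to be available on every conforming JVM, so this
    // catch block can essentially never execute. Covering it would take
    // contortions (mocking, bytecode tricks) whose cost far exceeds any
    // defect-finding value of the resulting test.
    public static byte[] toUtf8(String s) {
        try {
            return s.getBytes("UTF-8");
        } catch (UnsupportedEncodingException e) {
            throw new IllegalStateException("UTF-8 not supported?!", e);
        }
    }
}
```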

There is no real best practice on code coverage limits. On my team, we use about an 80% limit for line and branch coverage. This mostly keeps new code from being checked in without coverage. And it leaves enough slack for the developers to exercise their judgement on whether it is worth chasing those extra few lines of coverage. In a future posting, I'll see if I can find statistics that show the cost of getting that higher coverage and the chances that the tests for the extra coverage actually find or prevent defects.

I work in a Java shop, and we use JaCoCo as our coverage tool. It is also used in Sonar. We used to use Cobertura, but I believe at the time the new Java 7 bytecodes confused it. Coverage tools like JaCoCo do a pretty good job generating coverage reports that show you coverage summaries and let you drill into your packages, classes, and methods to see coverage for individual code lines.
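If you want a feel for how a coverage check might run identically on a developer machine and on the build server, here is a minimal sketch using JaCoCo's org.jacoco.core analysis API. It assumes the test run wrote target/jacoco.exec and that compiled classes live in target/classes (both paths are assumptions about your build layout), and it uses the roughly 80% limit discussed above. In a real project you would normally let your build tool's JaCoCo integration enforce this instead.

```java
import java.io.File;
import java.io.IOException;

import org.jacoco.core.analysis.Analyzer;
import org.jacoco.core.analysis.CoverageBuilder;
import org.jacoco.core.analysis.IBundleCoverage;
import org.jacoco.core.tools.ExecFileLoader;

public class CoverageGate {

    private static final double MINIMUM = 0.80; // the ~80% limit discussed above

    public static void main(String[] args) throws IOException {
        // Load the execution data produced by the instrumented test run.
        ExecFileLoader loader = new ExecFileLoader();
        loader.load(new File("target/jacoco.exec"));

        // Analyze the compiled classes against that execution data.
        CoverageBuilder builder = new CoverageBuilder();
        Analyzer analyzer = new Analyzer(loader.getExecutionDataStore(), builder);
        analyzer.analyzeAll(new File("target/classes"));

        IBundleCoverage bundle = builder.getBundle("project");
        double lineRatio = bundle.getLineCounter().getCoveredRatio();
        double branchRatio = bundle.getBranchCounter().getCoveredRatio();

        System.out.printf("line %.1f%%, branch %.1f%%%n",
                lineRatio * 100, branchRatio * 100);

        // Fail (non-zero exit) when either ratio drops below the limit,
        // which is what breaks the continuous build.
        if (lineRatio < MINIMUM || branchRatio < MINIMUM) {
            System.exit(1);
        }
    }
}
```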

While serviceable, these tools have a serious deficiency - the legacy code problem. You know that problem - the one where you start a code base completely ignoring unit tests and code coverage. Then somewhere along the line, one or more of you get a pang of guilt, see the light, and adopt modern software best practices. But now you have a problem. Nobody, and I mean nobody, is going to pay you to go back and write tests just for coverage on that legacy code. One way to handle this is to simply take a punch in the nose and watch your code coverage average be dragged down by a bunch of zeros. I'll cover how we handled this problem in a later post.

Hopefully this has given you some things to consider in your own projects.