A quick experience report (REX) on 100% code coverage


TL;DR: maintaining 100% coverage brings many benefits; you should try it.

A few years ago I blogged about aiming for 100% code coverage for your tests. This post made some noise, and the feedback was essentially negative. I was even called a troll a few times…

Being stubborn and dedicated, I understood I needed to put my money where my mouth was and start practicing what I preached. I did, and this post is about what I learned by reaching 100% code coverage with my tests.

The playground

Nowadays, reaching a high level of coverage is a primary objective of mine, both in my 9-to-5 job and in my late-night OSS coding.

While I can’t share the code for my professional activity, you can browse at will two OSS projects: NFluent and RAFTing.

While RAFTing (a C# implementation of the Raft consensus algorithm) is stalled at this date, NFluent is actively maintained.

NFluent is an assertion library that helps you write pertinent, easy-to-read tests that raise helpful error messages. If you are not using an assertion library, I urge you to dig into the subject and pick one. I tend to think NFluent is far superior, and I actively contribute to that effect, but use whatever you like.
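To give a feel for the style, here is what a couple of NFluent checks look like (the checked values are made up for illustration):

```csharp
using NFluent;

public class SampleChecks
{
    public void BasicChecks()
    {
        // Checks read like sentences; on failure, the library raises
        // a detailed message describing both the checked value and
        // the expectation.
        Check.That(2 + 2).IsEqualTo(4);
        Check.That("Gotham City").StartsWith("Gotham");
    }
}
```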


A developer needs a reliable testing platform. Let me rephrase: a developer requires a testing platform that is above any suspicion. Imagine the consequences of tests not failing when they should, leading to the release of severely flawed products. Or the confusion and doubt that would arise if assertions failed when everything was OK.

NFluent has to be bug-free, and is therefore the perfect candidate to explore the benefits of having 100% code coverage.

Code coverage

A quick reminder about code coverage: it describes the part of the code that was executed while running the (automated) tests. When a test fails, it means that there is a bug somewhere in the covered code. Conversely, the tests cannot give you any feedback about the parts of the code that were not executed.

Pretty basic, pretty boring.

Things get a bit more interesting when we dig a bit more: how do you measure ‘coverage’?

Here is some code:

void foo(int y)
{
    var x = (y > 0) ? y * 4 : 10 + y;
    if (x > 0)
    {
        if (y > 5)
        {
            Console.WriteLine("deeply nested"); // body added so the snippet compiles
        }
    }
    Console.WriteLine(x); // idem
}

The usual metrics are:

  • Line coverage: the tool captures which lines of code have been executed. You get an overall metric by computing the percentage of executed lines out of executable lines. If we check the coverage of foo(0), 4 lines out of 5 are executed, so we have 80% coverage
  • Branch (or condition) coverage: as the name implies, it focuses on code branches. For example, an if statement always generates two branches: one for when the condition is true and one for when it is false, even if the code has no else clause. The foo method actually has six branches: two ifs and a ternary operator. Therefore, foo(0) has a branch coverage of 50%
  • Path coverage: focuses on how many of the possible execution paths are exercised. The foo method possesses 8 possible paths; foo(0) executes only one, leading to a meager 12.5% coverage.

A brief look at the previous percentages gives you a hint of how much effort is needed to reach 100% in each case.
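As a sketch of the effort involved, here is a self-contained version of foo (the class, Main, and WriteLine calls are mine, added so it compiles and has observable bodies) together with a minimal input set reaching 100% branch coverage. Note that making the outer if false requires y <= -10, since on the ternary's false arm x = 10 + y:

```csharp
using System;

class CoverageDemo
{
    // Same structure as the foo method above.
    static void Foo(int y)
    {
        var x = (y > 0) ? y * 4 : 10 + y;
        if (x > 0)
        {
            if (y > 5)
            {
                Console.WriteLine("inner branch");
            }
        }
        Console.WriteLine(x);
    }

    static void Main()
    {
        Foo(6);    // ternary true, outer if true, inner if true
        Foo(0);    // ternary false, outer if true, inner if false
        Foo(-10);  // ternary false, outer if false (x = 10 - 10 = 0)
    }
}
```

Three inputs cover all six branches, while full path coverage would still require more work.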


For my OSS work, I recently started using OpenCover to capture line and branch coverage data, as well as ReportGenerator to get a synthetic report. I also use the indispensable NCrunch, which runs unit tests continuously and flags covered lines right in the IDE.

And recently, I stumbled upon codecov.io and decided to push NFluent coverage data there, so you can go and see for yourself.

Benefits I identified during my journey to 100% coverage:

  1. Cleaner code, assuming you have good testing hygiene. This one was a bit of a happy surprise; it also led me to reconsider what an adequate testing strategy is. This topic deserves a dedicated paragraph.
  2. Maximum trust in the code base: most, if not all, regressions introduced by changes will be detected. Conversely, if no test turns red after a likely-breaking change, something is fishy (in your test code).
  3. Being able to reproduce almost any bug with tests, with the notable exception of concurrency-related issues. I am able to reproduce any user-reported issue within minutes, if not seconds! Any remaining bugs will likely relate to edge-case input values (note that property-based testing could help there).

How does striving for full coverage lead to cleaner code?

Because it forces you to question the design that led to non-covered code!

That is exactly what I did while completing the coverage of NFluent: I carefully considered and reviewed every method that was not completely covered.

Every time, I reached one of the following conclusions:

  1. Test(s) must be amended
  2. Code needs refactoring
  3. Production code must be removed
  4. Production code is buggy
  5. A test is missing

Those are listed in decreasing order of frequency, meaning I rarely had to add a test case. Let's go into detail:

  1. Amending tests: most of the time this meant adjusting the values used in a test, such as making sure a string was longer than some threshold, or avoiding reusing the same value. This obviously relates to having paths in the code that depend on the values used. For example, here there was a lack of tests regarding trailing whitespace:
    There, I had to remove an internal test that hampered flexibility: https://github.com/tpierrain/NFluent/commit/578cbb7632300056f3c46209b03cc700a21d8c05#diff-2f7a424fc4caf0527fb2d9563b4ac36d
    And here, the null case was not covered: https://github.com/tpierrain/NFluent/commit/9b03f6572ef6a0b63d1cacd113b1e3573482e3b6#diff-37fd491f59874e466e6a2297ad0f1760
  2. Code needs refactoring: this mostly relates to imperfect algorithms or unneeded checks. It can be as simple as spurious null checks, or code trying to cater for a situation that just cannot happen!
    For example, this code contained a ternary operator that was no longer useful.
  3. (Production) code must be removed: code that tries to handle out-of-scope use cases. Here I dropped several methods that were no longer relevant:
    And here, I dropped support for an option that was never used: https://github.com/tpierrain/NFluent/commit/75c0a0e8006c39824a956dd56449a19a2c8ab7ee.
  4. (Production) code is buggy: these situations were identified thanks to the review the coverage effort forced on me. Typically, I realized tests were missing and the associated production code was buggy or missing as well. For example, here there was a tricky issue where the code failed to use the 'not equal' operator (!=) when it had to:
  5. A test is missing: obviously close to "tests must be amended", but I refer to situations where a full test case is missing. For example, here a test case was simply missing: https://github.com/tpierrain/NFluent/commit/084021ce505abd34bd4e757dc36544a3202af915#diff-25da551f30cf2f5d6888a76366961ee5
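To make point 1 concrete, here is a made-up example in NFluent's own style (the values and checks are mine, not taken from the linked commits; FluentCheckException is the exception type the library raised in the versions I used): when the expected and actual values are identical, the mismatch-reporting path of the checker is never exercised, so amending the values restores coverage of the error path.

```csharp
using NFluent;

public class AmendedTests
{
    public void StringCheckCoversBothPaths()
    {
        // Passing check: exercises only the "happy" path of the checker.
        Check.That("foo").IsEqualTo("foo");

        // Deliberately failing check, wrapped with Check.ThatCode so the
        // test itself passes: a trailing whitespace makes the values
        // differ, which drives the error-reporting path as well.
        Check.ThatCode(() => Check.That("foo ").IsEqualTo("foo"))
             .Throws<FluentCheckException>();
    }
}
```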

In retrospect, I am happy I made the extra effort to reach 100% branch coverage. As I expected before starting the journey, I fixed several bugs and improved the design. As an extra bonus, I learned a couple of .Net-related things along the way.

I definitely plan to keep the cursor at 100%.

Should you do it?

YES! As with any practice, you need first-hand experience to discover what it will bring you. You can try it on a moderately sized project; libraries are perfect for that.

As a team, you should identify a component to use as a testing ground and push it to 100% coverage. Then discuss the outcome after a few weeks or months. My bet is that you are going to make interesting discoveries…

Please share your feedback here, in the comment section.

As usual, I am pretty sure this post will trigger another flame war: guys, I suggest you save your ammo for later, as I plan to write a second post on this…

Thank you for your interest in this post!

