What is code coverage?

In computer science, test coverage is a measure used to describe the degree to which the source code of a program is executed when a particular test suite runs. A program with high test coverage, measured as a percentage, has had more of its source code executed during testing, which suggests it has a lower chance of containing undetected software bugs compared to a program with low test coverage.[1][2] Many different metrics can be used to calculate test coverage; some of the most basic are the percentage of program subroutines and the percentage of program statements called during execution of the test suite.

Source: Wikipedia

Should I care about code coverage?

Code coverage is a metric and as such it can be used or abused.

I’ve worked on projects where we’ve had 100% code coverage and I’ve worked on projects where we’ve had close to 0% code coverage. I prefer working on a codebase with higher code coverage; it gives you more confidence that you won’t break everything with a “simple” change. Of course, the code coverage percentage and the confidence, though correlated, are not linearly related! Having 100% code coverage doesn’t mean you can’t break something; it just gives you higher confidence that you won’t.

Having 100% code coverage

As mentioned, this is just a metric, and by itself (without end-to-end tests) it won’t save you from production failures. Okay, but should I strive for 100% code coverage then? The answer is: it depends. ;)

It depends on your team, it depends on the product you’re building, it depends on when you want to ship it, etc. Not all code coverage is the same: 50% coverage spread over non-critical features is less valuable than 50% that covers all your critical sections.

Legacy projects

When working on legacy projects with low code coverage, your goal should be to increase the coverage (or at least not to decrease it!). It’s not realistic to go from 20% to 100% overnight. Reaching 100% on a relatively big project is a big effort and would definitely require a change in the project’s priorities.

I have 2 simple rules which I try to follow when working on such projects:

  1. Add a test for any newly written code.
  2. Add a test when you change “old” code.

Don’t be mistaken: simple doesn’t mean easy. Not having tests probably means adding them isn’t straightforward, because if it were that easy, somebody else would probably have done it already. It’s important to take the time and add such tests if you consider that part of the code critical; if not, you can probably skip it, but be ready to go back and fix it later. :)

New projects

Everybody loves new projects. Starting a new project fills you with enthusiasm, and we all think “I am going to do it right this time!”. More often than not, we make the same mistakes over and over again. It’s important to lay the right foundations (and restrictions) from the beginning. Having strict rules is great when you don’t have to worry about 5 years’ worth of code!

You can configure the linter exactly how you want, and you won’t have to rewrite the history of practically every file under source control (or resort to ugly git hacks to work around it).

You can have a hard rule for the code coverage, and while you’re at it, just set it at 100%! Why 100% and not 95%? But what about 90%, or 75%? You get the point: having a choice will always start discussions. It’s a black/white rule. Setting the bar at 100% doesn’t mean you need to cover literally everything; it means that by default you should cover your newly written code, and if a new piece of code can’t be instrumented, that is the exception, not the rule! Note also that 100% code coverage doesn’t mean you’ve tested all the cases; it means the code in all branches has been executed, but you may not have actually asserted anything, and everything could be wrapped in a try/catch block. This will fool the tooling, but that’s not what we want, right? :)
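To make that trap concrete, here is a minimal shell sketch (`run_feature` is a hypothetical stand-in for the code under test): a “test” that executes the code but asserts nothing reports green even when the code is broken.

```shell
# A "test" that merely executes code: coverage tools see the lines run,
# but nothing is verified.
run_feature() { return 1; }  # hypothetical code under test; it is broken
run_feature || true          # executed (covered!), failure silently swallowed
echo "test passed"           # reported green despite the bug
```

A real assertion on `run_feature`’s result would have caught the failure instead of hiding it.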

Having black/white rules is very useful for linting as well: just fail the build if you have 1 or more warnings or errors, and there are no discussions there! Sure, have a discussion about what constitutes an error for your project, but don’t start a flame war over how many errors are OK to have.

The whole point of the tooling is to minimize trivial comments in PRs and let the tools do the heavy lifting. This way you enable the developers to focus on the really important parts of the code, not on technical details like why something is not properly indented or why there are tabs instead of spaces.

Unit tests and .NET Core

Running tests with the dotnet CLI is really simple, but there is a minor difference from using the vstest console runner: you need to pass a project file, not an assembly! This means you can’t run tests from multiple assemblies with a single command. That’s a shame… Good thing we have MSBuild built into dotnet, so we can hack our way around it! If you missed the previous post, go check it out.

Running the tests

All you need to do is the following in the directory of the test project:

dotnet test

or if you are in the root solution dir

dotnet test <project_path>
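Because `dotnet test` takes a single project, running every test project from the solution root means one invocation per project. Here is a dry-run sketch of that loop; the demo folder layout and project names are made up, and `echo` stands in for the real `dotnet test` call:

```shell
# Set up an illustrative layout with two test projects.
mkdir -p demo/Lib.Tests demo/Api.Tests
touch demo/Lib.Tests/Lib.Tests.csproj demo/Api.Tests/Api.Tests.csproj

# One invocation per matching project; swap `echo` for the real command.
find demo -name '*Tests.csproj' | sort | while read -r proj; do
  echo "dotnet test $proj"
done
```

The MSBuild script at the end of the post does the same thing with item batching instead of a shell loop.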

Collecting code coverage

It’s as simple as adding --collect:"Code Coverage" to the dotnet test command!

Unfortunately, there are a couple of gotchas you need to consider.

Running on non-Windows OS

You need to run on Windows, or you will get the following error:

Data collector 'Code Coverage' message: No code coverage data available. Code coverage is currently supported only on Windows..

That’s a downer, but it should be fixed sometime in the future. Check this GitHub issue if you want to track the progress of this.

In the meantime, you can use coverlet.collector. You add it as a package reference to the test project and change the collector type in the command from Code Coverage to XPlat Code Coverage.

Here is the whole command, for example:

dotnet test Lib.Tests/Lib.Tests.csproj  --collect:"XPlat Code Coverage"

* Not sure if the coverlet results file is supported by Azure DevOps or whether you need to convert it first.

If you are building on Windows, just use the first command, as it can generate the TRX files and the code coverage files, which can be seamlessly published to Azure DevOps. There are probably plugins (maybe called tasks?) that can parse the Cobertura format, but I’m not sure. Why am I mentioning Azure DevOps? Well, it’s the CI provider I use, so I solved for my own problem. :)

Updating the Microsoft.NET.Test.Sdk package

Make sure you are running the latest version of this package, or you are likely to run into issues. If you don’t have the package explicitly listed, add the latest version of it!

Running tests as part of CI

And so we come to the final part. We’re going to leverage the knowledge from the previous post and this one to build a working MSBuild script that executes the tests from all of our test projects, generates code coverage, and errors out if there are any failing tests. I’ll just add the code, as I think it’s pretty self-explanatory.

<ItemGroup>
  <DotNetCoreTestProjects Include="$(SolutionDirectory)\**\*tests.csproj" />
  <DotNetCoreTestArgs Include="--verbosity:normal" /> <!-- Default verbosity is "minimal" -->
  <DotNetCoreTestArgs Include="--no-build"/>          <!-- If you've already built the solution you can safely add this -->
  <DotNetCoreTestArgs Include="--logger:trx"/>        <!-- Add the TRX logger so we can publish the test results -->
  <DotNetCoreTestArgs Include="--results-directory:&quot;$(TestResultsOutputDir)&quot;"/>
  <DotNetCoreTestArgs Include="--collect:&quot;Code Coverage&quot;"/>

  <UnitTestExecutionExitCodes />
</ItemGroup>

<Exec Command="dotnet test %(DotNetCoreTestProjects.FullPath) @(DotNetCoreTestArgs, ' ')"
      ContinueOnError="true">
   <Output TaskParameter="ExitCode" ItemName="UnitTestExecutionExitCodes"/>
</Exec>

<Error Text="Unit test execution failed!" Condition=" '%(UnitTestExecutionExitCodes.Identity)' != '0' "/>

The snippet above will execute the dotnet test command with all the defined arguments for each test project matching the **\*tests.csproj search pattern. There’s no need to update the script when you add a new test project whose name ends with Tests.csproj. That’s pretty sweet; we want a script that we write once and never touch again. If you need to touch it every time you add a new project, chances are you’ll forget to do it and your tests won’t get executed! Having conventions for naming the test projects is useful in itself, but not having to touch the script is a stronger reason.

I want to draw your attention to ContinueOnError="true"; that’s pretty important. As mentioned before, we want to execute all tests and THEN fail (if there are failing ones), so that you can see all failing tests on the first run! Otherwise, if you have failing tests in multiple projects, you’d get the errors one by one on each successive run, and we don’t want that.
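The ContinueOnError/Error pair boils down to: keep going, remember any failure, fail once at the end. A minimal shell sketch of the same pattern, where `true` and `false` stand in for per-project `dotnet test` invocations:

```shell
fail=0
# `true` and `false` stand in for `dotnet test <project>` calls.
for result in true false true; do
  "$result" || fail=1  # remember the failure, but keep running the rest
done
echo "overall exit code: $fail"  # non-zero if any project had failures
```

Every “project” runs regardless of earlier failures, and only the final status reflects whether any of them failed, which is exactly what the Error task’s condition checks against the collected exit codes.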

Closing thoughts

I’ve always been a proponent of keeping my build scripts in source control and not in the CI tool, be it TFS, Azure DevOps, Jenkins, TeamCity, or whatever. Yes, the tools have history as well (they didn’t before), but you are coupling branches and builds, which is bad. Any change you need to make would require you to sync things in 2 places: the source code and the CI tool! Just move everything to a single place, source control. You can then have per-branch changes that won’t affect every other branch there is.

Of course, you can go with Azure Pipelines (YAML), for example; it’s per-branch and stored in source control as well, but you can’t really run the script locally. There are probably tools that will do that for you, but it’s another thing you need to think about and verify is really supported. MSBuild gives you the flexibility to experiment with the scripts locally (an almost instantaneous feedback loop), with no need to wait for build agents, source retrieval, etc. You can easily comment out unneeded steps to speed up the feedback loop even more! And most importantly, you get environment-agnostic scripts: you can execute them locally or somewhere else and they should behave the same, as MSBuild will be the same in all environments. Yes, MSBuild is not easy or fun to write, but it’s definitely better than YAML. YAML is not made to be written by humans, but to be generated automatically (prove me wrong).

I’ve been using this script for 3+ weeks now and I haven’t touched it! 3 weeks may not sound like much, but new test projects were added, old ones were moved, and everything still works as it did in the beginning, so I’d say it works well enough for my case! ;)