Nullable reference types in C# 8.0

Nullable reference types are a new feature in C# 8.0. They allow you to spot places where you're unintentionally dereferencing a null value (or failing to check it first). You may have seen similar analysis performed before C# 8.0 by ReSharper's Value and Nullability Analysis checks.

Unchecked dereferences are a common source of bugs, and can cause NullReferenceExceptions and application crashes. In C# 8.0, a reference type ending with a "?" (such as string?) is nullable, and the compiler warns you when you dereference it without first checking for null; conversely, any reference type without a trailing "?" is non-nullable, and the compiler warns you when you assign null to it. Within a method body, the compiler uses flow analysis to track whether a value can still be null at the point where it is used. In this article, I will explain how you can use nullable reference types to make your code less prone to NullReferenceExceptions, and to make it more consumable by other APIs.
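
Here is a minimal sketch of how this looks in practice (the Greeter class and its members are hypothetical, used only for illustration):

#nullable enable

public class Greeter
{
  // Non-nullable: the compiler warns if this is ever assigned null.
  public string Name { get; set; } = "world";

  // Nullable: the compiler expects a null check before any dereference.
  public string? Nickname { get; set; }

  public int NicknameLength()
  {
    // return Nickname.Length; // warning: possible dereference of null
    if (Nickname != null)
    {
      return Nickname.Length; // no warning: flow analysis knows it isn't null here
    }
    return 0;
  }
}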

Null attributes

There are also a few attributes you can use to describe the null-related behaviour of arguments and return values. These attributes extend the nullable type annotations and allow the compiler to make more judgements:

  • AllowNull, the argument may be null, even if the type doesn't allow it. For example, a property setter can accept null even though the property's type is a non-nullable string (with the nullable context enabled in C# 8.0, a plain string is non-nullable); the setter substitutes a default value when given null, so the getter never returns null. This is a precondition.
  • DisallowNull, the argument must not be null, even if the type allows it. This is a precondition.
  • MaybeNull, the output might be null, so callers have to check it before dereferencing. This is a postcondition.
  • NotNull, the value isn't null when the call returns, even if its type allows it to be null; it applies to return values and to ref/out or by-reference arguments. This is a postcondition.
  • NotNullWhen, a postcondition that asserts the argument isn't null depending on the boolean return value of the method. For example, say my method is bool MethodA([NotNullWhen(true)] out string? outVal) and it returns true: then outVal isn't null. If it returns false, outVal could be null.
  • MaybeNullWhen, which "signifies that a parameter could be null even if the type disallows it, conditional on the bool returned value of the method." This means that if I annotate an out parameter with [MaybeNullWhen(false)], then that output could be null when the method returns false. This is a postcondition.
  • NotNullIfNotNull, which "signifies that any output value is non-null conditional on the nullability of a given parameter whose name is specified". In other words, if the named parameter is non-null, the return value is non-null as well; if the parameter is null, the return value may be null. This is a postcondition. Several of these annotations are shown in the sketch after this list.
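
To make these concrete, here is a short sketch using a few of them (the Profile class and its members are hypothetical; the attributes live in System.Diagnostics.CodeAnalysis):

using System.Diagnostics.CodeAnalysis;

public class Profile
{
  private string _screenName = "anonymous";

  // AllowNull: callers may assign null, but the getter never returns it,
  // because the setter substitutes a default value.
  [AllowNull]
  public string ScreenName
  {
    get => _screenName;
    set => _screenName = value ?? "anonymous";
  }

  // NotNullWhen(true): when this returns true, greeting is not null, so
  // callers inside an if (TryGetGreeting(...)) block need no null check.
  public static bool TryGetGreeting(string? name, [NotNullWhen(true)] out string? greeting)
  {
    greeting = name == null ? null : "Hello, " + name + "!";
    return greeting != null;
  }

  // NotNullIfNotNull: the return value is non-null whenever "text" is non-null.
  [return: NotNullIfNotNull("text")]
  public static string? Trimmed(string? text) => text?.Trim();
}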

There are a few other attributes in the same family, such as DoesNotReturn and DoesNotReturnIf, which describe control flow rather than nullability: they tell the compiler that a method, or a particular boolean argument value, means the call never returns normally.

Why are these checks important?

These checks help ensure the safety of the code you write, and also let other consumers of your library know where null checks are required (and where they can be omitted). While it is possible to null-check every call that is null-ambiguous, doing so is error-prone because:

  • Too many null checks clutter the code, and time is wasted writing error handlers to safely stop program execution at each one.
  • You might forget to write a null check, and because there are null checks everywhere else, it is hard to spot which method is missing one.

Here is an excellent example from Microsoft’s docs:

string? userInput = GetUserInput();
if (!string.IsNullOrEmpty(userInput))
{
  int messageLength = userInput.Length; // no null check needed.
}
// null check needed on userInput here.

In this case, Microsoft annotated the string.IsNullOrEmpty method with [NotNullWhen(false)], which means that when the method returns false, no null check is needed. The annotation can be read as "it's not null when the return value is false". These higher-level logical statements help the compiler make inferences about the code. While it sounds like something the compiler could deduce on its own, inferring such contracts without annotations is actually a very complex research topic.

Microsoft Pex, a white-box test generation tool for .NET, symbolically analyses paths through your program to discover edge cases and missing conditionals that can cause NullReferenceExceptions (and more). While it is extremely interesting, it's a bit outside the scope of this post.

How do I use them?

If you are upgrading a legacy project, Microsoft recommends that you don't turn nullable reference types on for everything at once, because the resulting flood of warnings could be overwhelming. This is especially true if your team treats warnings as errors (a compiler option), as development would have to stop for several days while the null warnings were fixed. That isn't a great strategy, so incrementally enabling null checks helps prevent an explosion of warnings that could end up ignored rather than addressed.
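
One way to do this, sketched below, is the #nullable directive, which opts a single file (or a region of a file) into the nullable context while the rest of the project stays unchanged; project-wide adoption is done with the <Nullable>enable</Nullable> property in the .csproj instead. The CustomerLookup class here is hypothetical:

// Opt just this file into the nullable context.
#nullable enable

public class CustomerLookup
{
  // null now explicitly means "not found"; callers are warned if they
  // dereference the result without checking it.
  public string? FindEmail(string customerId)
  {
    return null; // stub: a real implementation would query a data store
  }
}

// Code below this point returns to the project's default nullable context.
#nullable restore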

There are a few ways to prioritize where the first annotations go. One is to start with the very small, straightforward methods: if a method is easy to reason about, adding the null annotations is easier, and if that small method is called throughout the code base, annotating it helps the compiler infer which null checks are and are not needed in the larger methods that use it. While there are many possible ways to prioritize, this approach is helpful if you are new to null annotations.
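
As a sketch of that idea, a tiny, widely reused helper like the hypothetical IsBlank below is a good first candidate, because a single annotation on it pays off at every call site:

using System.Diagnostics.CodeAnalysis;

public static class StringGuards
{
  // NotNullWhen(false): when IsBlank returns false, value is not null.
  public static bool IsBlank([NotNullWhen(false)] string? value)
  {
    return string.IsNullOrWhiteSpace(value);
  }
}

With that one annotation, code such as if (!StringGuards.IsBlank(input)) { var length = input.Length; } compiles without a null warning inside the branch.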

Conclusion

Nullable reference types can help make your code more maintainable and its bugs easier to spot, since nulls can cause unexpected problems and application crashes. While nullable reference types aren't a panacea (it is still possible to ignore the warnings), they give the compiler extra information it can use to deduce and surface logic errors. Gradually enabling them finds potential errors without overwhelming developers with warnings; if there are too many warnings, they tend to be disregarded, causing more problems down the line.

  • Alex Yorke

Continuous Integration: Balancing Value and Effort

Continuous integration can be a tough sell to managers. It's hard to justify the extra time and resources needed to build automated tests that mimic what developers are already doing by hand. This advocacy is especially difficult early in development, when CI failures are common and the pipeline needs a lot of work. Why would any manager want a tool that creates more problems and interferes with the development cycle? Because a robust continuous integration pipeline is vital during development: it protects against deploying broken code and surfaces issues so that bugs are removed before production. Since Orbital Bus is an internal project, we decided to use it as an opportunity to build the kind of CI pipeline we always wanted to have on client sites.

Early on we looked at the possibility of automated provisioning of multiple machines for integration tests. We evaluated a variety of tools, including Vagrant, SaltStack, Chef, and Puppet. What we found is that this automation was not worth the time investment. This post is supposed to be about the value of investing in a CI pipeline, so why are we talking about work we abandoned? To demonstrate that the value of a CI pipeline has to be proportionate to the time cost of maintaining it. When it came to automated provisioning, we realized we would spend more time maintaining that portion of the pipeline than reaping its benefits, so we stood up the VMs manually and replaced provisioning with a stage that cleans the machines between runs.

As development progressed, we added to our pipeline, making sure that the time investment for each step was proportionate to the benefits we were receiving. Gradually we added the build process, unit tests, and automated end-to-end integration tests. As we continued to experiment, we began using the GitLab CI runners to enhance our testing. We also discovered that GitLab could integrate with Jenkins, and brought our pipelines together to create an integrated dashboard on GitLab. As we neared the public release, we added a whole new stage for GitLab Pages to deploy our documentation.

A shot of our Jenkins continuous integration pipeline builds.

As the saying goes, Rome was not built in a day. Neither was our continuous integration. We added to it gradually, and as we did we had to overcome a number of obstacles. Our greatest problem has been false negatives: spurious failures quickly negate the benefits of continuous integration, because the team stops respecting the errors the system throws. At one point, our disregard for failures on the CI pipeline prevented us from noticing a significant compatibility error in our code. Each failure was an opportunity to understand how our code ran on multiple platforms and to explore the delta between development and production environments, and it ultimately made our solution more robust. From a productivity standpoint the failures were costly, but the value of hardening our solution greatly outweighed the time they consumed.

A capture of one of our GitLab continuous integration pipelines.

You would be mistaken if you thought we'd stopped working on our pipeline. We plan to continue growing our CI, expanding our integration tests to include performance benchmarks and to cover the multiple projects that have grown out of Orbital Bus development. These additional stages and tests will be developed alongside our new features, so that they integrate organically. As our solution matures, so will our continuous integration, which means we can continue to depend on it for increased returns in our development cycle.

  • Joseph Pound