null ⊨ null (nulls: what do they mean to you?)

The title is a play on semantic consequence and touches on some of the incidental complexities of code. Given the definition of semantic consequence, we are asserting that a null is a semantic consequence of a null if no interpretation of null makes null false. Confusing? Let me explain, here's a simple example:

 public Type method(OtherType arg) throws Exception;

What does this method do? Possible guesses are:

  • Given an OtherType argument, the method can return a Type.
  • In some scenario, instead of returning, the method throws an Exception.

Since we are talking about nulls, we should consider other scenarios:

  • Given a null argument, the method can return null.
  • Given an OtherType argument, the method can return null.
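Put together, the signature admits four outcomes, and every caller ends up having to account for all of them. A minimal sketch of that burden (the class bodies and the `call` helper below are hypothetical stand-ins, not code from the article):

```java
public class NullOutcomes {
    static class Type {}
    static class OtherType {}

    // Hypothetical implementation exercising two of the four outcomes.
    static Type method(OtherType arg) throws Exception {
        if (arg == null) return null;   // null in, null out
        return new Type();              // non-null in, Type out
        // (it could just as well return null here, or throw)
    }

    // What a defensive caller is forced to write once all four
    // outcomes are possible: a try/catch plus a null check.
    static String call(OtherType arg) {
        try {
            Type result = method(arg);
            return (result == null) ? "got null" : "got a Type";
        } catch (Exception e) {
            return "got an Exception";
        }
    }

    public static void main(String[] args) {
        System.out.println(call(null));            // prints "got null"
        System.out.println(call(new OtherType())); // prints "got a Type"
    }
}
```

Note that none of the branching in `call` has anything to do with the caller's actual business logic; it exists only because the signature leaves the null question open.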

Now the method can behave in four different ways. I don't know a developer who likes handling nulls; the reason is that adding an unrelated if to the code is unpleasant. But most fail to correct the behavior as soon as possible. Here's an example:

  public Type method(OtherType arg) throws Exception {

    if (arg == null) return null;

    // ... awesome things happen if you provide a non-null argument.
  }

Not only does this code contain that unrelated if, it also forwards the same scenario up to the caller. Eventually some layer has to handle it, and that's when the problems start to get worse. Check this code:

  public Type method(OtherType arg) throws Exception {
    DependencyKiller dk = new DependencyKiller();
    ResultingType result = dk.notSureHowWasImplemented(arg);

    if (result != null) {
      return somethingInteresting(result);
    }
    return null;
  }

This reflects common situations in big code bases where nulls flow freely. The method made a decision based on the fact that it could get a null, but what does that mean? (Note: I'm using meaningless names on purpose.) We can guess that somethingInteresting will happen only if notSureHowWasImplemented returned a non-null result. Multiply these assumptions by a factor of ten and the feeling of not understanding what we are talking about becomes predominant.

Going back to our semantic consequence: the meaning of a null holds true if and only if no interpretation of its meaning turns our consequence into false. Put more plainly, every usage of null has to carry the same meaning to the developers in order for this code to make sense. Is that possible? Unlikely.

Each handling/forwarding of a null changes its meaning. In the code above, returning null means that something interesting did not happen; add one more layer and the meaning could change again. Most of the time this leads to a situation where the developer finds himself/herself facing an unexpected NPE (NullPointerException), nulls are not allowed anymore, and the handling now means throwing an Exception or returning an empty object.

Nulls are part of the language, so what options do we have? Actually, the last developer, who handled the null and stopped the chain, did the right thing. A caller that expects a List can get an empty List, which is harmless in most cases. There's also the Null Object Pattern.
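A minimal sketch of stopping the chain with an empty List (the legacy method below is a hypothetical stand-in for some null-returning layer we don't control):

```java
import java.util.Collections;
import java.util.List;

public class StopTheChain {
    // Hypothetical legacy call that may hand us null.
    static List<String> legacyFindNames() {
        return null;
    }

    // The wrapping layer corrects it immediately: callers always
    // get a List, possibly empty, never null.
    static List<String> findNames() {
        List<String> names = legacyFindNames();
        return (names == null) ? Collections.<String>emptyList() : names;
    }

    public static void main(String[] args) {
        // No null check needed; iterating an empty list is harmless.
        for (String name : findNames()) {
            System.out.println(name);
        }
    }
}
```

The design choice here is that the null is translated at the first layer that knows what it means ("no names found"), instead of being forwarded with an ever-shifting meaning.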

In the end, what matters is avoiding the use of a wildcard inside your code; such things are dangerous and often result in more complex, hard-to-maintain, problematic code. The borders of your application (the DB or another external entry point) should handle the null early and turn it into a type: maybe an UnknownType without behavior, but a type anyway. After all, these are Special Cases.
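A sketch of that Special Case idea at the border, using hypothetical Customer/UnknownCustomer names (UnknownCustomer plays the same role as the UnknownType above):

```java
public class SpecialCaseDemo {
    interface Customer {
        String name();
        boolean isUnknown();
    }

    static class RealCustomer implements Customer {
        private final String name;
        RealCustomer(String name) { this.name = name; }
        public String name() { return name; }
        public boolean isUnknown() { return false; }
    }

    // The Special Case: a perfectly valid Customer that simply
    // carries no behavior. No caller ever sees a null.
    static class UnknownCustomer implements Customer {
        public String name() { return "unknown"; }
        public boolean isUnknown() { return true; }
    }

    // The border: wherever a lookup would produce null, produce
    // the Special Case instead, so null never enters the application.
    static Customer fromDatabase(String row) {
        return (row == null) ? new UnknownCustomer() : new RealCustomer(row);
    }

    public static void main(String[] args) {
        Customer c = fromDatabase(null);
        System.out.println(c.name()); // safe call, no NPE possible
    }
}
```

Inner layers can still ask isUnknown() when the distinction matters, but they are never forced to write `if (customer == null)` again.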

Published on Aug 02, 2011