My recent reading of The Philosophy of Computer Science, some listening to Grady Booch's On Architecture, and finally Pat Kua's We do "TDD." Oh really? have brought me to the point where I consider not doing TDD to be not only a failure to follow software development best practices, but not developing software at all: just random (Brownian motion) execution on a computer.

Let's think about unit tests. Unit tests not only guide the implementation; they also specify the inputs and outputs of a class's methods, so the whole set of tests for a class specifies the class itself. If we treat the unit with a little more rigor, meaning complex business rules are split into small steps with clear collaborations, can the tests become clear enough to read like a very detailed specification?
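As a sketch of what tests-as-specification could look like, here is the birthday-discount rule mentioned later in this post, specified by its tests. The class and function names are hypothetical, invented for illustration:

```python
class BirthdayDiscount:
    """10% discount on a product if it is the client's birthday."""
    RATE = 0.10

    def apply(self, price, is_birthday):
        if price < 0:
            raise ValueError("price must be non-negative")
        return price * (1 - self.RATE) if is_birthday else price


# Each test names one business rule; read together, they form
# a detailed specification of the class.
def test_discount_applies_on_birthday():
    assert BirthdayDiscount().apply(100.0, is_birthday=True) == 90.0

def test_no_discount_on_other_days():
    assert BirthdayDiscount().apply(100.0, is_birthday=False) == 100.0

def test_negative_price_is_rejected():
    try:
        BirthdayDiscount().apply(-1.0, is_birthday=True)
        assert False, "expected ValueError"
    except ValueError:
        pass
```

Notice that someone could reimplement the class from the test names and assertions alone, which is exactly the point.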

Taking this idea further, using unit tests with this specification meaning, what we now think of as implementation becomes too low level. Our tests are defining behavior, dependencies (mocked or stubbed), and the data structures of input and output; how far are these from the "real" implementation? Could the real implementation now be something auto-generated? If so, is not using tests to guide your software the equivalent of writing bytecode directly?
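A small sketch of how a test pins down not just outputs but also a unit's dependencies, using `unittest.mock` from the standard library. The `OrderService` and `notifier` names are hypothetical, invented for illustration:

```python
from unittest.mock import Mock

class OrderService:
    def __init__(self, notifier):
        self.notifier = notifier

    def place(self, order_id):
        # The test below fixes both the return value and this call.
        self.notifier.send(f"order {order_id} placed")
        return order_id

def test_placing_an_order_notifies():
    notifier = Mock()  # stand-in for the real dependency
    service = OrderService(notifier)
    assert service.place(42) == 42
    # The specification covers *how* the unit collaborates,
    # not only what it returns.
    notifier.send.assert_called_once_with("order 42 placed")
```

With behavior, collaborations, and data shapes all written down like this, very little is left for the "real" implementation to decide.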

Actually, if such a compiler were created, turning our unit tests into implementations, we would be back where we started in the mechanics of software development, only one abstraction level higher. So my feeling is that we should keep the two, implementation and its tests, at the same level, and not replace one with an automated solution. This may well happen in the future, but the mechanics would be conserved: code, a specification with strict rules and grammar, turned into an executable.

The end of the cycle may be the time when no more software development is necessary, when all the pieces are already coded and software adapts itself to business requirements. It might even take the form of physical chips with preloaded software that can be assembled, so that only a very high-level description is needed to make it work, for example: "10% discount on products if it's the client's birthday." I can't think about this without feeling that, once again, we'd be back at the same point.

When I read Pat's post and recall the phrase "We only test the happy path," I think: what about the rest? How is the rest of the implementation even there if it was never specified? In the absence of tests, we might as well assume the untested implementation is free to do anything. Maybe a division by zero actually increments a variable instead of raising an error, who knows? That's how it sounds to me: any path other than the so-called "happy" one is a guess. And it gets more worrying: any refactoring or modification that doesn't change the inputs and outputs of the happy path is a mutation on the "by chance" paths, turning the whole thing into a big pile subject to natural selection, much as single nucleotide mutations are for DNA.
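A minimal sketch of the point above, with a hypothetical `average` function invented for illustration: the edge case exists in the code, but nothing specifies it, so it is "by chance":

```python
def average(total, count):
    if count == 0:
        return 0  # arbitrary choice: no test specifies this branch
    return total / count

# Only the happy path is tested; the count == 0 branch is a guess.
def test_average_happy_path():
    assert average(10, 2) == 5
```

A later "harmless" refactor could make the zero branch raise, return `None`, or return `total`, and this suite would pass unchanged: the mutation happens silently on the untested path.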

Unit test coverage could then be read as a "by chance" metric. Coverage of 30% means 30% is implemented and 70% is by chance. It's as if you ran a compiler and it compiled only 30% of your instructions correctly, shuffling and inserting the other 70% at random: you get 30% certainty and a 70% bag of surprises.

One might argue that all code was written by developers, so even if untested it must carry some logical reasoning and flow. While that can be true in some cases, in others I have watched developers literally guessing at code, writing and changing it until it gave some expected result. I have seen code programmed by chance more often than I would like.

I feel more and more that not doing TDD can only be explained by a developer believing he knows the domain and the language so well that he can write down a solution without any flaw. That attitude is doomed to fail, since humans make plenty of reasoning errors and are subject to cognitive biases. History shows that even the mathematician Euler made mistakes. So if a technique that helps you avoid mistakes, and better, alerts you when mistakes are made by others, is within your reach and you ignore it on the grounds that you can write without mistakes, you had better be better than Euler. And stakeholders who pay for software with 30% coverage are likely buying 70% problems.