You start on a new project with a team that has been working on the codebase for some time. You get credentials for the repository, check out the code, and two minutes later your inner voice goes "Oh my god, what happened here?" That can be the unsparing solo-dev bias talking.
Judging code can be fair when you have context, but doing it without context mixes Fundamental Attribution Error with Naive Realism. The idea that just reading code, without any other information, is sufficient for assessing its "goodness" is as problematic as concluding that a book is bad after reading one random page. [^1]
The bias is to evaluate the code against what you alone would write under no constraints. This mistake sets an unrealistically high bar for the evaluation. A more balanced frame of mind is: what code could this group produce under these particular constraints?
Figuring out what those "particular constraints" actually are is the key. Code is seldom written by one person with all the knowledge and all the time needed to solve the problem. The environment in which most software is produced is much more complex. Before visualizing a "better version," start by asking some questions:
- How many people changed this code? Although most teams try to reach a consensus on style, language use, and so on, this is hard. Sometimes code lives through generations of programmers and leadership. What you are looking at is more a patchwork of opinions and styles than an elegant solution to a problem.
- Did I consider environmental constraints? What degree of freedom does the team have? Is this a safe environment for learning? If not, expect plenty of guard clauses and duplication.[^2]
- Is this a rewrite? Rewrites tend to copy the structure of the old system, for several reasons.
- What’s the Conway’s Law effect? The communication structure between team members and between teams influences the code structure.
- How was the problem framed to the team, and how did that framing change over time? Problems change, we all know that. The question is how and when the team discovered each change.
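To make the "guard clauses and duplication" point above concrete, here is a minimal, entirely hypothetical sketch of fear-driven code: in a low-safety environment, a function may re-validate everything and silently fall back to a default so that blame always lands on bad input, never on the author. The function name and rules are invented for illustration.

```python
# Hypothetical example of defensive, fear-driven code.
# Every path that could fail is guarded, and failures are silently
# absorbed -- "making sure we blame bad input" rather than raising.

def parse_discount(raw):
    # Guard clause: nobody trusts the caller to pass a string.
    if raw is None:
        return 0.0
    if not isinstance(raw, str):
        return 0.0
    raw = raw.strip()
    # Guard clause: empty input is quietly turned into "no discount".
    if raw == "":
        return 0.0
    try:
        value = float(raw)
    except ValueError:
        # Swallow the error instead of surfacing it.
        return 0.0
    # Clamp-by-rejection: out-of-range input also becomes the default.
    if value < 0 or value > 100:
        return 0.0
    return value
```

None of these checks is wrong in isolation; the smell is that the same checks get duplicated at every call site because no one feels safe relying on anyone else's validation.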
I bet you can think of many more questions, and that is the idea. We should hold back our first impulse to judge the code and the team at a glance. By asking these questions, your assessment will use a more realistic bar, and you can start searching for solutions (to the real problems) under the same constraints the team has.
PS: The day I decided to publish this post, I received an email from Pragmatic Programmers announcing a new book, Software Design X-Rays: Fix Technical Debt with Behavioral Code Analysis, and I already got my copy.
[^1]: Granted, one might conclude that a book is bad after reading one page. But for that conclusion, some other information has to be available. Maybe you're an expert on the subject (previous experience with the same type of problem), or a professional writer (so familiar with the language that you know several ways to express the same idea), and so on. The point is that these conditions are necessary, and awareness that they are present is also necessary. To satisfy them, the information, by definition, comes from outside the code itself.
[^2]: Lack of overall safety is compensated for by individuals trying to stay safe. In a hostile environment everyone tries to protect themselves, and in code this shows up as not refactoring; not touching things that are working; making sure bad input gets the blame; and other coded behaviors that reflect fear of failure or of being wrong.