My experience with (un)certainty about estimates in relation to technical debt

Not too long ago, Martin Fowler pointed out a nice blog post by Jay Fields. In it, Jay Fields refers to a talk he had about accidental complexity and essential complexity and their impact on your estimates. He found that not all developers take accidental complexity into account and therefore come up with lower estimates.

I found this a very interesting thought. It got me thinking about how I estimate and how far off I usually am. I found that, especially with larger solutions, I’m underestimating most of the time. Even with more complex things, and after adding some ‘unforeseen complexity percentage’, I’m still underestimating most of the time. However, I’ve also had better experiences on other projects. Especially on the latest project I’m working on, the estimates for fixes and rework are not as far off. How is this possible?

I find myself labeling this phenomenon as a “lack of overview”. If you read the definition of Accidental Complexity, it is described as “…accidental complexity is caused by the approach chosen to solve the problem.” I believe this ‘approach chosen to solve the problem’ is the design of the code. This is different from Essential Complexity, which I believe is much like Cyclomatic Complexity.
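As a side note on Cyclomatic Complexity: it simply counts the independent paths through a piece of code, regardless of how the surrounding design looks. A minimal, hypothetical Java sketch (the class and numbers are made up purely for illustration):

```java
// Hypothetical example to show what Cyclomatic Complexity measures:
// one count for the method itself plus one for each decision point.
public class ShippingCostCalculator {

    // Cyclomatic complexity = 4: the method body (1)
    // plus three decision points (if, else-if, if).
    public double costFor(double weightKg, boolean express) {
        double cost = 5.0;              // base rate
        if (weightKg > 10) {            // +1
            cost += 10.0;
        } else if (weightKg > 5) {      // +1
            cost += 5.0;
        }
        if (express) {                  // +1
            cost *= 2;
        }
        return cost;
    }
}
```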

I made mistakes with my estimates, even when I knew the code well. Often it was due to a dependency that ‘got in the way’, or worse, a lack of dependencies: all functionality was in one class! Adding similar behaviour required me to duplicate code. I consider this a bad practice, so I had to extract code from the other class. I was untangling the code. Whenever I had to untangle that code (i.e. separate concerns), I had a hard time doing so, because untangling a tangled (tightly coupled) piece of code forced me to untangle other pieces of code as well. I had to stop somewhere. Like someone once said to me: the devil is in the details. (This is one of the reasons I encourage my co-developers to talk to interfaces, and not implementations.)
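A minimal sketch of what I mean by ‘talking to interfaces, not implementations’; the class and method names here are invented for the example:

```java
import java.util.List;

// Before: tightly coupled. The generator creates a concrete MySqlOrderStore
// itself, so any change to how orders are stored drags this class along with it.
class MySqlOrderStore {
    List<String> loadOrders() {
        return List.of("order-1", "order-2"); // stand-in for a real query
    }
}

class TangledReportGenerator {
    private final MySqlOrderStore store = new MySqlOrderStore();

    String generate() {
        return "Orders: " + store.loadOrders().size();
    }
}

// After: talking to an interface. The dependency is passed in, so the
// generator can be changed, tested or reused without untangling the storage code.
interface OrderStore {
    List<String> loadOrders();
}

class ReportGenerator {
    private final OrderStore store;

    ReportGenerator(OrderStore store) {
        this.store = store;
    }

    String generate() {
        return "Orders: " + store.loadOrders().size();
    }
}
```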

But why are estimates off anyway? Is it because of a (lack of) experience with the code? Even with code I’ve worked with for years I still made bad estimates, and I could not find a way to improve them. The newer project was much easier for me to estimate. I already knew why it was going better:

My mental model of the code matched the actual code much better. Was it because I had worked on it recently and knew exactly how it worked in detail? No, not at all! The Technical Debt is much lower on this project. One of the principles that played a huge role was the Single Responsibility Principle (PDF). When I had to make a change, it was often in one place. When I had to add code, I could easily move code out of a class and separate responsibilities. The code was less tangled and less tightly coupled.
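A hypothetical before/after sketch of what the Single Responsibility Principle meant in practice (all names are made up for the example):

```java
// Before: one class with several reasons to change (parsing, validating, saving).
// A change to any of those concerns means touching and re-reading this whole class.
class CustomerImporter {
    void importCustomer(String csvLine) {
        String[] fields = csvLine.split(",");
        if (fields.length < 2 || fields[1].isBlank()) {
            throw new IllegalArgumentException("invalid customer: " + csvLine);
        }
        System.out.println("saving customer " + fields[0]); // stand-in for persistence
    }
}

// After: each class has a single responsibility, so a change is usually local
// to one small class, which is exactly what makes estimating less of a gamble.
record Customer(String id, String name) {}

class CustomerParser {
    Customer parse(String csvLine) {
        String[] fields = csvLine.split(",");
        return new Customer(fields[0], fields.length > 1 ? fields[1] : "");
    }
}

class CustomerValidator {
    void validate(Customer customer) {
        if (customer.name().isBlank()) {
            throw new IllegalArgumentException("invalid customer: " + customer.id());
        }
    }
}

class CustomerRepository {
    void save(Customer customer) {
        System.out.println("saving customer " + customer.id()); // stand-in for persistence
    }
}
```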

This phenomenon of untangling code, separating concerns and having a hard time maintaining code is clearly a sign of paying serious interest on technical debt. And I clearly see that as a result of the ‘chosen approach to solve the problem’.

Therefore I believe technical debt is linked to essential and accidental complexity, and even more (what about readability?). Accidental Complexity is something that is very hard to grasp. I think this ‘uncertainty’ needs to be clarified and added to each initial estimate in order to get a ‘more realistic’ estimate.
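One way to picture that idea (purely a back-of-the-envelope sketch, and the numbers are invented): treat the accidental complexity as an explicit uncertainty factor on top of the raw estimate, and make the factor bigger where the code is more tangled.

```java
// Sketch only: the uncertainty factor is a made-up number you would base on
// how tangled (how much accidental complexity) the affected code is.
class EstimateSketch {
    static double withUncertainty(double rawEstimateHours, double uncertaintyFactor) {
        return rawEstimateHours * (1.0 + uncertaintyFactor);
    }

    public static void main(String[] args) {
        // 8 hours of 'essential' work, in code we judge to be heavily tangled:
        System.out.println(withUncertainty(8.0, 0.75)); // prints 14.0 hours
    }
}
```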

I would recommend estimating while looking at the code itself, rather than relying on just your mental model of the code.
Finally, repaying the interest on technical debt should be prioritized in order to keep the system maintainable and to prevent ending up with an angry customer who gets ever fewer features, each taking ever more time to build.