Regression – Let's stop it!

I hate it.

You change something and you can’t tell if your change broke something in the system.

If you're lucky, you did not break anything. Or nobody noticed it.
Almost as lucky: the potential bugs are caught during the manual testing phase.

But often there is no time to do all the regression testing by hand. It would take days and days, and the change you made looked so insignificant. It should go live. What could possibly go wrong?

Then it happens. You're live, your changes work, but the inevitable strikes: regression!

Of course, this has to be fixed. We can’t let our customers have a product with new features while the features of the previous version(s) are broken.

And so the patching process begins.

I call it patching, because often you are not done with one patch. While you are working hard to get the first patch live, other regression bugs are found that need to be patched ASAP as well. And so you end up with a few patches. You could be done after a few patch releases, but it could easily extend ten-fold.

This process is very stressful for the customer and the development team. While the team is working to get these patches out as soon as possible, the customer is unhappy with his 'broken system'. Even worse, once a few bugs are found, more testing is done on the live system to make sure everything still works, more regression bugs pour in, and the stress keeps adding up. To the development team it begins to look like…

From the customer's point of view, it looks like the team working on the product is not in control, as if the team does not know what they are doing. To them, their product, which seemed rock solid at the start, is degrading into a house of cards.

You can debate high-impact versus low-impact issues and how urgently each needs to be fixed. The customer's perception is likely to be the same regardless.

This is how I see it:

It is us developers who are responsible for letting regression happen.

Not testers.
Not project managers.
Not stakeholders.
Not the customer.

It is us and us alone.

We write the code, we change the code, we are in control of the code (at least we should be!).

Even if you happen to depend on a third-party system, it is your job to keep an eye on that system and to verify that it behaves as you expect. Why? Because your system depends on the behaviour of another system, and trusting that this behaviour does not change is not enough. You have to be *sure*.
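One lightweight way to do that is a so-called learning test: a unit test that pins down the assumption your code makes about the other system, so that a change in that behaviour shows up as a failing test instead of a production incident. Here is a minimal sketch in JUnit; the dependency used (the JDK's date formatting) is just a stand-in for whatever external system you actually rely on, and the class and test names are made up.

import static org.junit.Assert.assertEquals;

import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

import org.junit.Test;

public class ThirdPartyBehaviourTest {

	@Test
	public void dateFormattingBehavesAsWeAssume() {
		// Pin down the behaviour our own code silently relies on.
		SimpleDateFormat format = new SimpleDateFormat("yyyy-MM-dd", Locale.US);
		format.setTimeZone(TimeZone.getTimeZone("UTC"));

		// Epoch zero is 1 January 1970 in UTC; if this ever changes
		// (or we misunderstood it), this test fails before our users notice.
		assertEquals("1970-01-01", format.format(new Date(0L)));
	}
}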

It's all about attitude
Do you always deal with regression bugs after each release?

Stop accepting it, it is not normal.

Rather, start thinking about how you can prevent it. Don't look at how other people could prevent it; think about what you could do right now. There are many ways to reduce the number of regression bugs. For instance: add tests before changing any code. Pin down the existing behaviour with black-box tests. When you refactor, keep running your tests so you know you did not break existing behaviour. Add new tests for the new features you introduce. Create a test suite that you can trust. Write integration tests. Is it hard to write tests? Make it easier. Don't back away from the code; it is your code and you should be in control.
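To make 'pinning down behaviour with black-box tests' a little more concrete, here is a minimal sketch in JUnit. All names are hypothetical; the idea is simply to record what the current code does today, so any unintended change shows up as a red bar before it reaches production.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Hypothetical legacy class whose behaviour we want to pin down before changing it.
class ShippingCostCalculator {
	int costInCents(int items, String country) {
		int base = "NL".equals(country) ? 395 : 1150;
		return base + items * 50;
	}
}

public class ShippingCostCalculatorTest {

	// Black-box tests: we only assert on inputs and outputs, not on how the
	// calculation is implemented. The expected values are simply whatever the
	// current code returns today.
	@Test
	public void pinsCurrentBehaviourForADomesticOrder() {
		assertEquals(495, new ShippingCostCalculator().costInCents(2, "NL"));
	}

	@Test
	public void pinsCurrentBehaviourForAnInternationalOrder() {
		assertEquals(1250, new ShippingCostCalculator().costInCents(2, "US"));
	}
}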

Besides the code, improve your own skills. Start reading about how to deal with legacy code. Attend a legacy code retreat to hone your skills. Practice, practice, practice! Get better.

Reap what you sow in your daily work.

But isn’t the whole team responsible?
Ah, of course! But does that mean that you, as a developer, can now do less? Would it be okay in a team not to test because you have testers? ("it's their job, right?")

In a team we all have our strengths and weaknesses.

We understand code, and we can change code. No other role in your team is more responsible for understanding the code than you are. Being in a team does not make you less responsible.

Again, it is all about attitude. Stand up for your craft, deliver high-quality work and keep the system in check. It should be you who controls the system.

Attitude, again
There are developers out there who really think they know everything about their system. And to be honest, there was a time when I believed I always knew which changes had impact and what I could get away with. And even though I was often right about the impact of changes…

…I was wrong at least as many times as I was right.

But sometimes it is not just being over-confident. Sometimes it is being ignorant, or even arrogant.

Please, don’t be like this guy…

Just because it is hard, doesn’t mean you shouldn’t do it
Regression is a pain. It can be dealt with.

It is not easy.

You will not completely eliminate regression bugs. But with the right mindset, tools and safety net(s), you will greatly reduce their number.

It is necessary. Just do it. For the love of our (your!) craft, do it, for everyone who depends on us:

The customer.
The stakeholders.
The project managers.
Yes, even the testers.

Experiencing a Code Kata – Become a better developer while having fun!

Recently I have been experimenting with a Code Kata, and in this post I’d like to share my experiences with it.

Code Kata?

Code Katas have been around for a while, but they really came to my attention while reading Chapter 6 of the book The Clean Coder by Robert C. Martin. That chapter draws an analogy with music: at work you are a performer, like a musician on stage, and outside work you should be practicing, like a musician does. (Of course you will also learn while at work, but that is not the point.)

But what is a Kata?

I played the piano for around six years (taking lessons) and played much less after that (in fact, I don't really play at all anymore). In those six years I had to practice etudes as well as more famous pieces. I did not understand why I had to practice etudes until much later.

So what does an etude have to do with katas? Let's look at the description of an etude (on Wikipedia):

“an instrumental musical composition, most commonly of considerable difficulty, usually designed to provide practice material for perfecting a particular technical skill”

Without going into the details of a kata itself: it is used to practice and perfect a set of techniques. Repeating and practicing until you can perform a kata (or an etude, if you will) perfectly will help you later, when you need to improvise or apply the techniques in different forms. Many of the techniques in etudes are used in real pieces, and many Code Katas deal with real-world problems.

It's not all about the solution!

So if I practice enough code katas, will I become good at any problem I might face when writing software?

Not quite.

Code Katas are flexible, meaning you can set yourself a goal you want to achieve by doing one. When doing an etude you don't have many options: your main goal is getting better with your fingers at playing a series of notes or transitions. With a Code Kata you could practice your typing, or practice all the shortcuts of your IDE. Or heck, learn a new IDE while doing one. Perhaps you want to learn a new language, or get better at TDD. Or you simply want to get to the solution and find the most efficient way to do so.

At least I found doing a code kata much more fun than doing an etude 🙂

Bowling Game Kata

The Bowling Game Kata challenges you to write a class (Game) that scores a game of bowling. You can roll balls and pass in the number of pins knocked down, and at the end you can call the score() method to get the correct score. It takes all the rules into account: gutter balls, spares, strikes and the tenth frame, where you can have three rolls instead of two. The perfect game scores 300 points, the worst game 0.

My initial thought: how hard can this be? I mean, come on, I've dealt with harder things than a bowling game scoring system. Since I had not done any Code Kata before, I set my goal to find the solution to this kata while doing TDD. I also did not want to look too far ahead in Uncle Bob's (very nice) presentation (which includes the solution). So I stopped reading once the game interface was given (roll() and score()) and went ahead. Again, how hard could it be?
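For anyone who wants to follow along, the first couple of tests might look roughly like this. This is only a sketch in JUnit, assuming nothing more than the Game class with roll() and score() described above; the details differ from my actual attempts.

import static org.junit.Assert.assertEquals;

import org.junit.Before;
import org.junit.Test;

public class GameTest {

	private Game game;

	@Before
	public void setUp() {
		game = new Game();
	}

	// Helper to roll the same number of pins many times in a row.
	private void rollMany(int rolls, int pins) {
		for (int i = 0; i < rolls; i++) {
			game.roll(pins);
		}
	}

	@Test
	public void gutterGameScoresZero() {
		rollMany(20, 0);
		assertEquals(0, game.score());
	}

	@Test
	public void allOnesScoresTwenty() {
		rollMany(20, 1);
		assertEquals(20, game.score());
	}
}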

I have tried this Bowling Game Kata three times, and for each attempt I have written down my experiences. All in all it was a very good experience, and I recommend trying it out yourself. I believe that if you want to get better, you need to practice. And only after trying this multiple times will you know what it is really like.

So instead of just talking the talk, let me walk the walk…

First attempt – Disappointment

Goal: Get it working, while doing TDD.

I set up a simple project and started with the GameTest. The first two tests were easy to do (0 pins, and all ones). But as soon as I got to spares, my first thought was to create a Frame class, because a frame represents a 'turn' in which you can only roll two balls. The Frame class was born, along with its unit test, and it felt good. I even added more 'cheating' detection, so you cannot roll three balls in one frame, or claim you rolled 2 pins and then 9 (making 11 in one frame). I felt great and got unit tests passing like nothing could stop me. Until the 'perfect game' test came around and my model just fell apart.

There was no way I could make it fit without bending my entire model/solution.

So there I was, totally excited and thinking "just one more test and I'm done"… and then I got this.

Eventually I fixed it: I made my Frame class more flexible so I could set the maximum number of rolls, and I added flags. I could then create a TenthFrame class and set its flags so it would score differently. I also created tests for the TenthFrame and even used an abstract test class so I did not have duplicate code. Even so, it felt wrong. I was bending my design just to handle one exception in the rules.

When I got all my unit tests passing, even the perfect game, I just felt a great disappointment. My design sucked. It also took me 4 to 6 hours to get it working. Way too long for a code kata, right?

Lessons learned
– TDD cycles were not strict enough, so…
– TDD cycles were slow; I had to switch between mouse and keyboard to rerun the unit test(s)
– I made design decisions too early, got 'stuck' later and had to bend the design to make everything work
– Finding the solution the first time takes time
– I built much more than I had to (over-engineering?)

Trivia
– Time taken: roughly 6 hours.
– Number of unit tests: 33

Second attempt – No need to bend the universe

Goal: Get it working while doing TDD, taking the lessons learned from the first attempt into account: tighter TDD cycles, faster TDD cycles, etc.

I started this kata late in the evening. I spent a fraction of the time compared to the first attempt: roughly 45 minutes(!). The actual solution was there within the first 15 minutes; the last half hour was mainly refactoring and keeping the bars green. I did not have to bend my design; I could keep everything within the Game class!

A small but practical thing I learned about the IDE I used (Eclipse) is to use a keyboard shortcut for 'rerun last test', which made my TDD cycles shorter.

I also noticed I understood the scoring of a bowling game much better. I don't play the game very often, and when I do, the computer does all the scoring for me. So I guess the first attempt at the kata also took longer because I had to understand the scoring rules.

Design-wise I found that I still use 'frames', but not as a separate class. I do not have any cheating detection (so I can roll 12 pins and it won't complain), and the scoring will simply be wrong in that case. Building these checks in would not be a problem though, because the spare/strike detection is now so easy.
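To give an idea of what 'keeping everything within the Game class' can look like, here is a rough sketch along the lines of the well-known solution to this kata: a flat array of rolls and a score() method that walks through the ten frames. The names and details are illustrative and not necessarily identical to what I ended up with.

public class Game {

	private final int[] rolls = new int[21]; // at most 21 rolls in a game
	private int currentRoll = 0;

	public void roll(int pins) {
		rolls[currentRoll++] = pins;
	}

	public int score() {
		int score = 0;
		int frameIndex = 0;
		for (int frame = 0; frame < 10; frame++) {
			if (isStrike(frameIndex)) {
				// 10 pins plus the next two rolls as a bonus
				score += 10 + rolls[frameIndex + 1] + rolls[frameIndex + 2];
				frameIndex += 1;
			} else if (isSpare(frameIndex)) {
				// 10 pins plus the next roll as a bonus
				score += 10 + rolls[frameIndex + 2];
				frameIndex += 2;
			} else {
				score += rolls[frameIndex] + rolls[frameIndex + 1];
				frameIndex += 2;
			}
		}
		return score;
	}

	private boolean isStrike(int frameIndex) {
		return rolls[frameIndex] == 10;
	}

	private boolean isSpare(int frameIndex) {
		return rolls[frameIndex] + rolls[frameIndex + 1] == 10;
	}
}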

Lessons learned
– Faster TDD cycles by using a shortcut for re-running tests
– TDD cycles were stricter, but could be tightened even more
– Commenting out tests when more than one breaks really helps you focus on getting one thing to work (so leave one test failing), instead of fixing all tests at once
– No cheating detection, no over-engineering
– Relative to the first attempt, I spent more of the time refactoring and less time thinking about / finding the solution
– I could refactor a lot of code and make it much cleaner, and do so safely thanks to the tests

Trivia
– Time taken: roughly 45 minutes (!!)
– Number of unit tests: 7

Third attempt – The only way to go fast is to go well

Goal: Tighten the TDD cycle. Get it done in 30 minutes or less.

This time I started fresh on a Sunday morning. Since I knew the solution and the design choices I had made earlier (I knew what worked and what did not), things went very quickly. I finished within 30 minutes. I had about the same number of unit tests, and the greatest thing was that the last test (the perfect game) worked immediately. I did not have to change anything to make it pass.

Once I had the last test working, I looked over the code a bit and called it done. This time I also wanted to check my solution against the original presentation Uncle Bob made, to see whether I had missed anything. It turned out that some of my tests were faulty:

– If you roll only ones, you can only roll 20 times, not 21 times. (I had a weird if statement to patch this up, but it turned out that this was flawed.)
– My perfect game test rolled 21 strikes, while you can only roll 12 times in that case. When I corrected the test, it still passed.

It struck me that I was approaching this technically (21 rolls is the maximum) rather than from a functional point of view (i.e. 20 rolls is the maximum when you roll only ones).
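For illustration, the corrected tests might read something like this (again just a sketch in JUnit, written against the Game sketch shown earlier), expressing the functional rules rather than the technical maximum of 21 rolls.

import static org.junit.Assert.assertEquals;

import org.junit.Before;
import org.junit.Test;

public class CorrectedGameTest {

	private Game game;

	@Before
	public void setUp() {
		game = new Game();
	}

	private void rollMany(int rolls, int pins) {
		for (int i = 0; i < rolls; i++) {
			game.roll(pins);
		}
	}

	@Test
	public void aGameOfOnlyOnesHasExactlyTwentyRolls() {
		rollMany(20, 1); // 20 rolls, not 21
		assertEquals(20, game.score());
	}

	@Test
	public void aPerfectGameIsTwelveStrikes() {
		rollMany(12, 10); // 12 strikes, not 21
		assertEquals(300, game.score());
	}
}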

I also found that my TDD cycles were still too wide. I could run the cycle for nearly every line of code, but I tend to write two or three lines before re-running my tests. Especially the first tests suffered from this; the later tests went better.

Lessons learned
– TDD cycles can be shortened
– The functional point of view caught errors in my tests
– Shortest kata ever (under 30 minutes)
– The last test worked immediately; the design was good. It got even better after fixing the 'only ones' test, because I could remove the weird if statements.

Trivia
– Time taken: < 30 minutes
– Number of unit tests: 5

Retrospective

So, after three attempts, do these Code Katas work for me? They certainly taught me a few lessons. In short:
– The first time you do a kata, it is slow and you are focused on the solution. This could throw you off, but you have to persist…
– Later attempts go much faster, and your focus shifts to other techniques. Mine shifted mainly to speeding up my TDD cycle.
– I learned a few things that speed up my TDD cycle, which I can apply to real-world work as well. Which is good!
– Over-engineering will bite you, one way or another. Your design will be toast.
– I learned that even in my last attempt I was approaching the solution too technically. As a developer I still did not approach it entirely functionally, which meant that even though I thought I was done, I wasn't. I plan to use acceptance tests (using JBehave) for this; that will be covered in a future blog post.
– Above all, doing a Code Kata is fun!

I would advise other developers to do Code Katas and get better at what they are doing. There are tons of areas where you can improve. In short: yes, I do believe in them, and I think you should give them a go if you haven't already!

Find out more about Code Katas:
http://codekata.pragprog.com/
http://stackoverflow.com/questions/44533/your-favorite-code-kata
http://www.codinghorror.com/blog/2008/06/the-ultimate-code-kata.html

If you really want light-weight warm up exercises, you might want to go to: http://codingbat.com/

My experience with (un)certainty about estimates in relation to technical debt

Not too long ago, Martin Fowler pointed out a nice blog post by Jay Fields. Jay Fields refers to a nice talk he had about accidental complexity and essential complexity and how they impact your estimates. He found that not all developers take the accidental complexity into account and therefore give lower estimates.

I found this a very interesting thought. It got me thinking about how I estimate and how far off I am. I found that, especially with larger solutions, I am underestimating most of the time, even for more complex things and even after adding some 'unforeseen complexity' percentage. However, I have had better experiences on other projects. On the latest project I am working on in particular, the estimates for fixes and rework are not as far off. How is this possible?

I find myself labeling this phenomenon as "lack of overview". If you read the definition of accidental complexity, it is described as "…accidental complexity is caused by the approach chosen to solve the problem." I believe this 'approach chosen to solve the problem' is the design of the code. This is different from essential complexity, which I believe is much like cyclomatic complexity.

I made mistakes with my estimates, even when I knew the code well. Often it was due to a dependency that 'got in the way', or worse, the lack of dependencies: all functionality was in one class! Adding similar behaviour required me to duplicate code. I consider this a bad practice, so I had to extract code from the other class. I was untangling the code. Whenever I had to untangle that code (i.e., separate concerns), I had a hard time doing so, because untangling one tangled (tightly coupled) piece of code forced me to untangle other pieces of code as well. I had to stop somewhere. Like someone once said to me: the devil is in the details. (This is one of the reasons I encourage my co-developers to talk to interfaces, and not implementations.)
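As an aside, here is a tiny sketch (with entirely made-up names) of what 'talking to interfaces, not implementations' means in practice: the calling code depends only on the behaviour it needs, so the concrete class can later be untangled or replaced without dragging all its callers along.

// The caller only knows about this interface.
interface InvoiceRepository {
	void save(Invoice invoice);
}

// One concrete implementation; calling code never references this type directly.
class DatabaseInvoiceRepository implements InvoiceRepository {
	@Override
	public void save(Invoice invoice) {
		// ... persist the invoice to the database ...
	}
}

class Invoice {
	// fields omitted for brevity
}

class InvoiceService {

	private final InvoiceRepository repository;

	// The dependency is handed in as an interface, which keeps the coupling
	// low and makes this class easy to test with a fake repository.
	InvoiceService(InvoiceRepository repository) {
		this.repository = repository;
	}

	void process(Invoice invoice) {
		repository.save(invoice);
	}
}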

But why are estimates off anyway? Is it because of a lack of experience with the code? Even with code I had worked with for years, I still made bad estimates, and I could not find a way to make them better. The newer project was much easier for me to estimate, and I already knew why it was going better:

My mental model of the code matched the actual code much better. Was it because I had worked on it recently and knew exactly how it worked in detail? No, not at all! The technical debt is much lower on this project. One of the principles that played a huge role was the Single Responsibility Principle (PDF). When I had to make a change, it was often in one place. When I had to add code, I could easily move code out of a class and separate responsibilities. The code was less tangled, less tightly coupled.

This phenomenon of untangling code, separating concerns and having a hard time maintaining the code is clearly a sign of repaying serious interest on technical debt. And I clearly see that as a result of the 'approach chosen to solve the problem'.

Therefore I believe technical debt is linked to essential and accidental complexity, and to even more than that (what about readability?). Accidental complexity is very hard to grasp. I think this 'uncertainty' needs to be made explicit and added to each initial estimate in order to get a more realistic one.

I would recommend estimating while looking at the code itself, rather than using just your mental model of it.
Finally, repaying the interest on the technical debt should be prioritized in order to keep the system maintainable and to avoid ending up with an angry customer who gets ever fewer features that take ever more time to build.

An example of refactoring

As promised in my previous post, here is an example of how small refactorings can greatly improve the readability and understandability of code.

I own a little project called Dune II – The Maker, which I started writing a little over ten years ago. In those years I have learned a lot, but I did not have much time back then to apply that new knowledge to the project. You could say the software was rotting. To make it better I need to refactor a lot, and along the way I encounter the best examples of how to improve code without pointing fingers at anyone else :). In any case, I have experienced that you have to make mistakes in order to get better, and I hope you will learn from the mistakes I made.

So here is a little example I have just checked in to the dune2themaker repository. I'll give you the before (revision 411) and after (revision 412). Of course, I took smaller steps to get to the end result. First, the original piece of code:

Revision 411 (before)

void cGame::think_winlose() {
	bool bSucces = false;
	bool bFailed = true;

	// determine if player is still alive
	for (int i = 0; i < MAX_STRUCTURES; i++)
		if (structure[i])
			if (structure[i]->getOwner() == 0) {
				bFailed = false; // no, we are not failing just yet
				break;
			}

	// determine if any unit is found
	if (bFailed) {
		// check if any unit is ours, if not, we have a problem (airborn does not count)
		for (int i = 0; i < MAX_UNITS; i++)
			if (unit[i].isValid())
				if (unit[i].iPlayer == 0) {
					bFailed = false;
					break;
				}
	}

	// win by money quota
	if (iWinQuota > 0) {
		if (player[0].credits >= iWinQuota) {
			// won!
			bSucces = true;
		}
	} else {
		// determine if any player (except sandworm) is dead
		bool bAllDead = true;
		for (int i = 0; i < MAX_STRUCTURES; i++)
			if (structure[i])
				if (structure[i]->getOwner() > 0 && structure[i]->getOwner()
						!= AI_WORM) {
					bAllDead = false;
					break;
				}

		if (bAllDead) {
			// check units now
			for (int i = 0; i < MAX_UNITS; i++)
				if (unit[i].isValid())
					if (unit[i].iPlayer > 0 && unit[i].iPlayer != AI_WORM)
						if (units[unit[i].iType].airborn == false) {
							bAllDead = false;
							break;
						}

		}

		if (bAllDead)
			bSucces = true;

	}

	// On succes...
	if (bSucces) {
		// <snip>

	}

	if (bFailed) {
		// <snip>

	}
}

The intention of the think_winlose() function is to determine whether the player has won or lost, and if so, to transition the game state. Those transitions have been snipped.

So when does a player win or lose? It depends on whether there is a 'win quota'. The win quota is a number; whenever it is above zero, the player has to collect at least that many credits (spice) in order to win. If the win quota is not set, the default win rule applies: destroy everything the enemy has. (Do you notice how much text I need for such a simple rule? Text I could have saved if the code had said this in the first place? At the bottom of this post you can see what I mean :))

Let's take a look at the code and point out what could be done better:

  • There are two booleans, bSucces and bFailed, which is confusing and ambiguous. What is successful? What failed? Why aren't they one boolean?
  • There are comments all over the place, which means we could refactor those pieces of code so the comments are no longer needed. (Comments are clutter here and should be removed.)
  • The code formatting could be better: if statements should open with { and close with }, even when the body is a single line.

And there are more things you will probably spot yourself. I'll point out a few that could be improved; if you just want to see the final result, take a look below.

Let's start with the booleans bSucces and bFailed. Why are there two booleans, and why are they named so vaguely? A little bit of searching in the code reveals that bSucces actually means "mission accomplished" (the player has won), and bFailed means the player has no units and no structures (which implies the player has lost the game). They are not the same boolean, because a player can of course be alive without having won the game yet. So they are not actually the same boolean, but their naming was vague. A simple "rename variable" made things easier to understand!

void cGame::think_winlose() {
	bool bMissionAccomplished = false;
	bool isPlayerAlive= true;

(When posting this I realize the two booleans are named in different styles. Consistency is also important for readability, so either both should start with "is" or both with "b"; I prefer the former.)

Right after the booleans, a few for loops are used just to find out whether the player has anything left alive. A little further down we see the same kind of for loops again, but for the AI. This is duplicate code and should be removed. Extracting the loops into a method that returns a boolean is easy to do:

bool cGame::playerHasAnyStructures(int iPlayerId) {
	for (int i = 0; i < MAX_STRUCTURES; i++) {
		if (structure[i]) {
			if (structure[i]->getOwner() == iPlayerId) {
				return true;
			}
		}
	}
	return false;
}

(Again, while posting this I realize it could be improved a bit more: iPlayerId should be called ownerId (or getOwner should be getPlayerId), so it is obvious we are comparing two of the same kind. As it stands it could confuse us: is an owner the same as a player id? Since I know it is, why isn't it named that way?… :))

Since we extracted these for loops, we can now set the isPlayerAlive boolean immediately, instead of setting a variable inside a loop as in the original example above, reducing 24 lines to one:

bool isPlayerAlive = playerHasAnyStructures(HUMAN) || playerHasAnyGroundUnits(HUMAN);

The final result in revision 412 is shown below. It clearly shows the major improvement in readability and understandability: any other developer who comes across this code can see what it does almost without thinking.

Result revision 412

void cGame::think_winlose() {
	bool bMissionAccomplished = false;
	bool isPlayerAlive = playerHasAnyStructures(HUMAN) || playerHasAnyGroundUnits(HUMAN);

	if (isWinQuotaSet()) {
		bMissionAccomplished = playerHasMetQuota(HUMAN);
	} else {
		bool isAnyAIPlayerAlive = false;
		for (int i = (HUMAN + 1); i < AI_WORM; i++) {
			if (playerHasAnyStructures(i) || playerHasAnyGroundUnits(i)) {
				isAnyAIPlayerAlive = true;
				break;
			}
		}

		bMissionAccomplished = !isAnyAIPlayerAlive;
	}

	if (bMissionAccomplished) {
		// <snip>

	} else if (!isPlayerAlive) {
		// <snip>

	}
}

The tremendous power of tiny refactorings

More and more I am intrigued by the power of small code refactorings. The positive impact they have on the readability, maintainability and understandability of your code is great. They keep code clean(er), and since the changes you make are really small (I'll demonstrate how small), the chance that they break something is small too. With unit tests (you are writing them, right?) making sure you did not break anything, a small refactoring is a low-risk, high-benefit practice.

In my experience, small refactorings are undervalued. In fact, until not too long ago I undervalued them myself. They are dismissed as refactorings that don't help at all, because it is obvious what the code does. The flaw in this rationale, as I see it, is that the intended audience is not only you, but also the other developers you work with. Besides, you know what the code does right now, but would you understand it as quickly if you came back to it after a week? Would another developer understand the code right away?

When working on code, you are constantly 'translating' the code in your mind in order to know what it is doing. This translation leads you to where the bugs are, or to the areas where you need to make changes, et cetera. This process of 'translating' code in your mind comes at a price: literally the energy your brain needs to burn to grasp the meaning of a piece of code, its brainpower. The easier we understand code, the less brainpower we need. And the less energy we burn on understanding what is going on, the more energy we have left to create new things or fix that bug.

I've created a little example. The code below represents an implementation of a mail service. The mail service allows you to send an email using a method that takes four parameters: from, to, the subject and the message. When all parameters are filled, the email needs to be sent. That is the only requirement for now. Of course, later we might want to validate that the given from and to email addresses are valid, but for the sake of the argument, let's keep it simple. The following code is 'mind-boggling', compared to its simple intention:

public class MailServiceImpl implements MailService {

	public void sendMail(String from, String to, String subject, String message) {
		if (from != null && !"".equals(from) &&
			to != null && !"".equals(to) &&
			subject != null && !"".equals(subject) &&
			message != null && !"".equals(message)) {
			// send the email
		}
	}

}

Basically what this says is that no parameter may be null or an empty string. It took four lines just to say that. Even if you recognize the pattern of a 'null or empty check', it costs you time and energy to make that translation. So here is a first suggestion to make it read more easily:

public class MailServiceImpl implements MailService {

	public void sendMail(String from, String to, String subject, String message) {
		if (parametersAreNotNullOrEmpty(from, to, subject, message)) {
			// send the email
		}
	}

	private boolean parametersAreNotNullOrEmpty(String from, String to, String subject,
			String message) {
		return from != null && !"".equals(from) &&
			to != null && !"".equals(to) &&
			subject != null && !"".equals(subject) &&
			message != null && !"".equals(message);
	}

}

When another developer reads the sendMail method, he will now know that when parametersAreNotNullOrEmpty holds, the mail will be sent. It needs no translation; the method name simply says what it does. Simple! By doing this you greatly reduce the brainpower needed to understand what is going on. The refactoring used here is called Extract Method.
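A possible next step (my own sketch, not part of the original change) is to extract an even smaller helper and accept the parameters as varargs, so the repeated null-or-empty pattern disappears entirely and adding another parameter later does not mean copying the check once more. The MailService interface shown here is simply the one implied by the example above.

// The interface implied by the example above.
interface MailService {
	void sendMail(String from, String to, String subject, String message);
}

public class MailServiceImpl implements MailService {

	public void sendMail(String from, String to, String subject, String message) {
		if (parametersAreNotNullOrEmpty(from, to, subject, message)) {
			// send the email
		}
	}

	// Every parameter goes through the same tiny check.
	private boolean parametersAreNotNullOrEmpty(String... parameters) {
		for (String parameter : parameters) {
			if (isNullOrEmpty(parameter)) {
				return false;
			}
		}
		return true;
	}

	private boolean isNullOrEmpty(String value) {
		return value == null || "".equals(value);
	}
}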

Reading code is sometimes easy for your brain to handle; sometimes your brain seems to explode because of the complex statements and the context you need to keep in mind. This is strongly tied to the cyclomatic complexity, the Coupling Between Objects (CBO) and the lack of cohesion in your code. If you use any tools to measure your code, like Sonar for example, look at these metrics to find code that needs attention. But it is even better to refactor while you still have the translation in your head: if you see that things can be written more simply to reduce the needed brainpower, by all means do so. Not giving software the appropriate attention lets your code rot. Small refactorings help you prevent that.

I hope you have seen a bit of the power of small refactorings. I will get back to them in my future posts as I will post more concrete examples and how I would/have dealt with them. To me, small refactorings need to be part of your system and are introduced when you do TDD. All too often when the code works, it is not looked at again. Making these small refactorings can make a big difference and take relatively no time.