I’ve been thinking about test-driven development a lot lately, observing myself veering between TDD virtue and occasional lapses into sin. Here’s my thesis: As a profession, we do a lot more software maintenance than we do greenfield development. And it’s at the maintenance end where TDD really pays off. I’m starting to see lapses from the TDD credo as more and more forgivable the closer you are to the beginning of a project. And conversely, entirely abhorrent while in maintenance mode.

Other Voices · I was deeply impressed by “Uncle” Bob Martin’s RailsConf keynote (see also his follow-up exegesis), which at its core argued for “professionalism”, specifically as expressed in the deep-TDD rules:

  1. Never write code until you have a failing test.

  2. Never write any more code than is necessary to un-fail the test.
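
Here is a minimal sketch of how those two rules play out, using Ruby's Test::Unit; the WordCounter class and its behaviour are invented for illustration, not taken from the keynote:

    require 'test/unit'

    # Rule 1: the test is written first. Until the class further down
    # exists, running this fails immediately (with a NameError).
    class WordCounterTest < Test::Unit::TestCase
      def test_counts_whitespace_separated_words
        assert_equal 3, WordCounter.new.count("one two three")
      end
    end

    # Rule 2: just enough code to un-fail the test. No options, no
    # Unicode heroics, nothing the test doesn't yet demand.
    class WordCounter
      def count(text)
        text.split.size
      end
    end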

Also, it’s worth visiting Kent Beck’s The Open/Closed/Open Principle and especially InfoQ’s Kent Beck Suggests Skipping Testing for Very Short Term Projects, which comes with lots of third-party commentary.

Why We Do It · I think TDD makes these claims:

  1. You build better software initially.

  2. You remove the fear from maintenance in general and refactoring in particular.

Maybe I’m weird, but I think the second is the one that matters. After all, we do way more maintenance than initial development. And in my experience, the first-cut release of any nontrivial software is pretty well crap. But since software is after all soft, you can go back and whack it and whack it and whack it until you’ve whacked it into shape.

But to do that well, you absolutely must have enough test coverage that you just aren’t afraid to rip your code’s guts out, rearrange them, and put them back in a better configuration.

Sin and Penitence · People introducing TDD do this thing where they start from scratch saying “We’re going to write a class to represent X and it’ll need a method to do Y, so let’s write a test for Y”. The problem is, when I’m getting started, I never know what X and Y are. I always end up sketching in a few classes and then tearing them up and re-sketching, and after a few iterations I’m starting to have a feeling for X and Y.

And maybe the sketch-and-tear process would be better and more productive if I had the patience to write the tests for each successive hypothesis about X and Y, but I don’t think I ever will. This is partly because the first few tests for the kind of classes I write tend to be expensive: that’s where you have to face the dependency-injection and big-fat-mock problems. I lack the patience to make that investment if I’m unsure the class is basically pointing in the right direction.
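
To make that up-front cost concrete, here's a hand-rolled sketch of the sort of plumbing involved; Catalog and FakeStore are invented names, standing in for a class whose real collaborator would be something heavy like a database handle:

    require 'test/unit'

    # A fake collaborator, maintained purely for the test's benefit.
    class FakeStore
      def initialize; @rows = {}; end
      def put(key, value); @rows[key] = value; end
      def get(key); @rows[key]; end
    end

    # The class under test has to accept its store as a constructor
    # argument (dependency injection) just so a test can slip in the
    # fake; that wiring is the investment the first few tests demand.
    class Catalog
      def initialize(store); @store = store; end
      def add(id, title); @store.put(id, title); end
      def title_of(id); @store.get(id); end
    end

    class CatalogTest < Test::Unit::TestCase
      def test_round_trip
        catalog = Catalog.new(FakeStore.new)
        catalog.add(42, "Test-Driven Heresy")
        assert_equal "Test-Driven Heresy", catalog.title_of(42)
      end
    end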

But here’s what I do. When I have a few X’s and Y’s that start to feel right, then I go back and fill in tests for all the methods, and I use rcov and friends to help me. And I don’t check things in till that’s done.

I freely admit that this is not really truly TDD. I hope there are no adherents of the TDD church who would consequently argue that it’s not worth doing.

Redemption · On the other hand, once you’re into maintenance mode, there are really no excuses, because you really know what all your X’s and Y’s are: either X is already there and you’re adding Y to meet a well-understood need, or your previous choice of X and Y was wrong and you have a better one in mind. So write the tests first and don’t try to go any further than where they take you. Put another way, the most rewarding place to build your test code is on a foundation of existing tests.
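
Sketched the same way as above (Invoice and total_with_tax are invented for illustration), the maintenance-mode discipline looks like this: X already exists, the failing test for Y comes first, and the code goes no further than the test demands.

    require 'test/unit'

    # X: pretend this class has been in the codebase for years.
    class Invoice
      def initialize(line_items); @line_items = line_items; end
      def total; @line_items.inject(0) { |sum, n| sum + n }; end
    end

    # Y: the new, well-understood need. This test is written first,
    # and fails because total_with_tax doesn't exist yet.
    class InvoiceTest < Test::Unit::TestCase
      def test_total_with_tax_applies_the_rate
        invoice = Invoice.new([100, 250])
        assert_in_delta 385.0, invoice.total_with_tax(0.10), 0.001
      end
    end

    # Then the existing class is reopened with just enough to pass.
    class Invoice
      def total_with_tax(rate)
        total * (1 + rate)
      end
    end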

For me, this is TDD at its most addictive, the engineering part of software engineering; where that “professionalism” thing comes into sharp focus.



Contributions


From: Chris Dent (Jun 24 2009, at 03:45)

I think the most significant win with tests is when you are developing library code that other people will use. Writing the tests points out all the mistakes you might make in signatures, prerequisites, etc. If the tests are too hard to write, then you know your API will be too hard to use, you're doing it completely wrong, and you may as well pause for a rethink.

So in that sense rather than the core code being the scratchpad for exploration, the test files are.


From: Mike Hayes (Jun 24 2009, at 04:37)

While the approach you advocate makes sense, it does require professionalism, not just from the developer but from management too.

My experience with similar approaches is that the tests somehow never get written, or at best are only badly written.

The bones of the code don't lend themselves to unit testing without considered refactoring. Management often doesn't appreciate the need to do this, or developers are more interested in writing new features. So the tests never get written, and as the volume of code rises, so too does the difficulty of writing unit tests.

The likelihood is that the person left to maintain the code isn't the person who wrote it, leaving the maintainer with an unholy mess to untangle. Getting unit tests into such code is a monumental task.

I think this is a big weakness in the TDD philosophy: the failure to address how unit tests can be introduced to an existing non-unit-tested codebase (i.e. how to go from non-TDD to TDD). Some TDDers will say it's impossible, others will squirm uncomfortably, mention Feathers' 'Legacy Code', and suddenly remember something really important they have to go do.

Don't get me wrong, the Feathers book does tackle the problem, but other than Feathers, no one seems to have done so.

I feel the TDD community only wants to focus on greenfield projects and has ignored maintenance/legacy issues, which is strange when, as you say, code spends most of its time in maintenance. So, sadly, I think your "code first, unit test later", while sensible, is risky in the majority of cases.


From: Kieron Wilkinson (Jun 24 2009, at 04:37)

I think that is quite a practical take on the matter. Personally, I tend to find that if I force myself to keep doing the tests from the beginning, I end up with a more testable design - and hence I feel like I am saving time later in maintenance mode. For example, it is not very often that I have to write a mock (though perhaps that depends more on the domain and development style).


From: Mihai (Jun 24 2009, at 04:58)

I agree with you, but still it would be nice to work as a purist and do the whole red/green/refactor shebang.

The thing is that as long as the project is small you really don't see the benefits of TDD. I've done a couple of small projects and never had to go back to them ever again.

But when I'm in a project that is more than three months old, and have to go back and rethink or improve some pain points, having tests really pays off.


From: Bob Aman (Jun 24 2009, at 04:59)

I personally have only one testing rule I hold sacrosanct. Never use mocks unless you are mocking an interface that will almost never change. Every other testing practice I've got tends to vary based on the project.


From: Daniel Steinberg (Jun 24 2009, at 05:27)

People introducing TDD do this thing where they start from scratch saying “We’re going to write a class to do X and it’ll need a method to do Y, so let’s write a test for Y”. The problem is, when I’m getting started, I never know what X and Y are. I always end up sketching in a few classes and then tearing them up and re-sketching, and after a few iterations I’m starting to have a feeling for X and Y.

===

Tim, to me that's the benefit of the second D in TDD. It's similar to what Brad Cox wrote about coding in Objective-C more than twenty years ago. If you need a method that does Y, think about how you are going to want to call that method from the code that will be using it. That helps you get a feeling for what X and Y might look like.

You are writing the client code (in the form of a test), so you are thinking about how the worker code will be used: what is its public interface, and what do you want it to do when it's called?

D


From: Tathagata Chakraborty (Jun 24 2009, at 07:31)

TDD is useful in another situation: in a commercial setting, when detailed specification documents have already been created by, say, a technical expert/architect. In this case you don't have to do a lot of designing while coding, so you can start off with the test cases.


From: Raphael Speyer (Jun 24 2009, at 08:02)

I agree with you that one of the major benefits of having good tests in place, for me, is that I can refactor with greater confidence. I see it like a scaffold around the codebase, which allows me to change the internal workings without it failing in crucial ways. Static typing/analysis can be useful here too.

And of course the benefit of writing the tests *first* is that it helps keep your code focused on exactly what it's meant to do, and no more. To avoid wondering what my X's and Y's ought to be up front, I like to start with end-to-end tests and move inwards from there. This helps to focus the internal design as well.


From: Steve Jorgensen (Jun 24 2009, at 08:17)

Here's my current take on when to start writing unit tests:

1. Most projects need 1 or more code spikes or prototypes. Concepts from these will be used in the production code, but code will usually not be copied/pasted from here. Few if any tests need to be written for spikes/prototypes.

2. When work on production code begins, most of the code should fall into the categories of things that are not to be tested. That is to say, it should be mostly scaffolding, getters/setters, and calls to existing libraries.

3. Initially, follow the "three strikes and then you refactor" rule. Write tests before refactoring, whether the refactoring is to remove duplication or to remove other smells.

Following these rules, I think you'll find that you gradually ramp up into a TDD groove on a project as business logic develops.


From: Ben in Boston (Jun 24 2009, at 08:26)

My problem with TDD in general is not an engineering one. In theory, TDD is a great idea. The problem with TDD can be expressed in one word: money. It increases the amount of code that you have to write (and maintain!) within a project by a linear factor, which I would submit to be at least 1.5, possibly 2. Even if you get a quality increase, in the long term that increase in cost will not, in the eyes of management or stockholders, be practical. The benefits the author mentions in terms of refactoring are more than made up for by the costs of having to update the tests each time code behavior changes.

I am currently working on a project where we have experienced code rot of automated tests. It was before my time here, but it's made me rethink my former position about TDD.


From: John Cowan (Jun 24 2009, at 08:28)

TDD has its limitations, particularly where the intended behavior is inherently ill-specified. TagSoup, for example, guarantees well-formed output no matter how bad its input (modulo character encoding issues). It also guarantees consistency: if the input is the output of a previous TagSoup run using the same version, the new output will be identical to the old. Tests help with these guarantees. However, there are no guarantees about exactly what that output will be.

When I make a change, it hopefully improves the output for a certain class of documents, while quite possibly disimproving it for others. The best I can do is to have a large library of in-the-wild and randomly generated documents and run new versions against them to see what happens. If something changes that I didn't expect, I try to figure out why, and whether it's better. This is not something that an automated testing framework can help me with (other than the shell script that runs the doc library and diffs against previous outputs).
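
A sketch of that kind of harness in Ruby; the corpus/ and baseline/ paths, and the exact tagsoup.jar invocation, are placeholders rather than TagSoup's actual setup:

    # Run the new version over the document library and flag any
    # document whose output differs from the previous version's.
    Dir.glob("corpus/*.html").each do |doc|
      new_output = `java -jar tagsoup.jar #{doc}`
      baseline   = "baseline/#{File.basename(doc)}"
      if !File.exist?(baseline)
        puts "NEW:     #{doc}"
      elsif File.read(baseline) != new_output
        puts "CHANGED: #{doc}"  # a human decides whether it's better
      end
    end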


From: Dan DeLeo (Jun 24 2009, at 09:15)

One approach to the unknown X and Y problem that I've been using recently has been to pretend that class X has been written already, and then write code that uses this pretend X object/API. I usually write this directly in the file that will become my unit test. Since X doesn't exist, I'm allowed to call whatever methods I want and pretend it all works. Once I'm satisfied with how it all looks, I cut and paste everything into a bunch of failing tests.
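
For illustration, a sketch of that move, with an invented Parser playing the part of the not-yet-written class X:

    require 'test/unit'

    # First, pretend Parser already exists and write the client code
    # you wish you could write:
    #
    #   record = Parser.new.parse("name=Tim; city=Vancouver")
    #   record["name"]   # => "Tim"
    #
    # Then, once the shape feels right, cut and paste the sketch into
    # failing tests:
    class ParserTest < Test::Unit::TestCase
      def test_parses_semicolon_separated_pairs
        record = Parser.new.parse("name=Tim; city=Vancouver")
        assert_equal "Tim",       record["name"]
        assert_equal "Vancouver", record["city"]
      end
    end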

This works great for me because I get really bored adding tests to code that already runs. The downside is that as I learn more about what I'm trying to do, I have to refactor a bunch and re-evaluate assumptions I made about how class X would be used.

I just started doing this a few months ago, so I can't really say if it's better than the alternatives, but it fits well with my motivational hang-ups.


From: Nick (Jun 24 2009, at 10:20)

Tim, it is sad to see you fall into the seductive TDD trap. This is really a subtle way for religious zealots to trap the unsuspecting, much like the "wedge strategy" used by intelligent design proponents.

There is nothing wrong with building tests after you have built your product. Indeed, that goes a long way towards taking software development from a form of artisanal craftsmanship to a real engineering profession. But using tests to drive development cripples innovation, dramatically slows development, and really destroys the momentum of every software team I've ever been on. Not to mention the fact that everyone who thought they liked it actually hates it in practice on real projects. TDD sucks away your soul and desire. Don't fall for the trap. Watch out for religionists.

--Nick


From: Paul W. Homer (Jun 24 2009, at 11:23)

I thought the whole point of TDD was to use the "tests" to shape the creation of the code. It always seemed to me to be a codified form of reverse engineering, or at least a way to force programmers into looking at their code from two separate angles at the same time.

If you're just adding tests at the end, then it's normal unit-testing, isn't it?

By now I've written enough code that I don't need to go to any extra effort to help me structure it. I do that very well internally, in my head, long before I even know the entire program. I do realize that this type of exercise might help younger coders get better structure; they do often rush in too quickly and focus more on the instructions than the abstractions. That can make for a messy implementation.

The thing that really bothers me about TDD is that it is hard enough to maintain hundreds of thousands of lines of code, and keep that in sync with all of the secondary resources like schemas, config files, scripts and documentation. Do we really need to double or triple up the size of our code just to unit-test simple objects? Twice as much code is at least twice as much work if not a lot more.

Paul.


From: Max (Jun 24 2009, at 11:34)

@Nick: Wow, man. I mean, just "Wow," is it, really. Wow.


From: Marcel Popescu (Jun 24 2009, at 11:43)

What you describe is definitely not bad, it's just not TDD. TDD is test-driven *design*, and that's why it's important to start with tests and keep writing the tests first. What you are talking about is more like plain unit testing - which is great, just a somewhat different thing.


From: Mark Levison (Jun 24 2009, at 11:56)

Tim - I'm the author of the InfoQ piece, and I think you're missing an important qualification that Kent made. He said he didn't write tests in cases where it would have taken him several hours to get a working test for a small piece of code. In Kent's case he's trying to build Eclipse plugins, and from experience I can tell you that is a right PITA.

In addition, I'm surprised at your statement that the second idea, "removing fear of maintenance", is the more important one. To me that is a side effect. I get reduced maintenance costs because my code is cleaner. I'm able to work faster because there is less code, it's cleaner, and I'm not afraid of changes.

I'm concerned that your comments will give more people license not to TDD on their projects.


From: Mark Levison (Jun 24 2009, at 12:11)

Mihai - even on the small projects, didn't you find yourself going faster (or at least not slowing down) as the first few weeks passed and you had to refactor?

Daniel Steinberg ++

Ben in Boston - I find I write less code overall when I TDD. The tests force me to slim down the production code, so even with the tests my volume of code is smaller. In addition it helps me go faster after the first few weeks. (see my comments to Mihai).

John Cowan - TagSoup is an edge case, and even there you can do some TDD. A lot of what you will do in that case is deterministic, just not the outputs of some methods.

Nick apparently you and I live on different planets and in your belief system I no longer have a soul.


From: Paul Houle (Jun 24 2009, at 12:38)

Like anything else, I think there's a cost-benefit calculation to be made about testing and the methods used to do it.

In some applications, objects are self-contained, activities are sequential, and algorithms are tricky. Automated unit testing is cheap and beneficial. You may have already spent the cost that it takes to break the system into testable units, and know how to make a good architecture with those constraints.

In other cases (say a browser-based app) you're writing simple programs that work by causing side effects in a large, complex and unreliable external system. In that case the real issue is "does the external system behave the way I want it to" and the nature of your testing is entirely different.

I've seen cases where people have wrecked the architecture of systems in the name of making them testable... but have never written the tests. Testability is another constraint on the design of a system. Yes, it's possible to make peace with testability, and in the best situation testability can improve the architecture of a program, but it can also lead people away from highly reliable and maintainable KISS approaches.


From: Tony Fisk (Jun 24 2009, at 16:49)

While I agree that maintenance is the major part of the job, I find it's actually rather difficult to introduce TDD at that stage if you haven't been applying it all along.

Testing monolithic code is part of the associated terror. Besides which, what are you testing against?

I wrote a little monologue of my own on TDD a little while back.

My conclusion:

"Like any infrastructure, it is always beneficial to provide unit testing. The most benefit is derived from installing it as early on in the project as possible. Like most infrastructures, there will be some who perceive this unproductive clutter as time consuming overhead. "Never mind these silly tests! We want to see measurable progress!" (usually measured by lines of code written* or number of features implemented)

Allow me to introduce such people to the concept of 'throughput', which sets the upper bound of your productivity as the value of goods/features/whathaveyous that have been passed on to the client.

The value of an untested feature, to a client, is ... zero. So, it doesn't matter how many of these you have rattled off in the past week, your net throughput is effectively... zero."

I disagree with those who say that TDD stifles creativity. What it *does* do is make you think about how to make it easier to test an interface... and that usually means identifying all those features that can be decoupled, which, I suggest, leads to a better design.


From: Ron Burk (Jun 24 2009, at 16:59)

It's fun to watch the programming culture constantly revolt against and then reinvent orthodoxy. When Parnas wrote about how to fake the waterfall model nobody was really doing, he was instigating the part of the cycle where people start to wonder if maybe other people really aren't following the orthodoxy either -- crucial, since what holds orthodoxy together is everyone believing that they are the only "bad" person doing those naughty things.

You can see in this thread the word "professionalism" (substitute "morality" with little gain/loss of substance) and even "sin" (used in jest, but not really!). The constant tension in human endeavour between planning/not planning, and rules/no rules plays out in programming with constant vigor, but alas, no awareness of the history and experience humans have already paid for in this category of dilemma. And somewhere, Watts Humphrey is clucking so hard his tongue hurts. :-)


From: Isaac Gouy (Jun 24 2009, at 17:27)

Let's step from anecdote to correlation and consider the differing importance of "an early prototype" and "integration or regression testing" to different aspects of software development.

MacCormack, A., Kemerer, C.F., Cusumano, M., and B. Crandall, "Trade-offs between Productivity and Quality in Selecting Software Development Practices." IEEE Software 20(5) 2003, 78-85

http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.61.1633


From: Daumier (Jun 24 2009, at 21:22)

I totally agree that #2 is the big win, and I more or less follow your process. The only thing I would add is that I try really hard to write unit tests before I have the lot of them running end to end.

The thing I noticed is that if I delay writing unit tests until after all the units are working together, then, because the system "already works", my subconscious enthusiasm for writing unit tests falls markedly, and so their quality and coverage fall, if they are written at all.

Whereas if I write the unit tests just after each unit, it's part of "getting everything to work", and so I am willing to put the effort into doing it.


From: njr (Jun 25 2009, at 00:19)

My process is similar to yours, Tim, though I do find that as time goes on the proportion of the time I write the tests before the code increases.

As a practical matter, one reason I often end up writing the tests immediately after the code is that I often want to check quite large, fiddly output. Experience teaches that if I generate that output by hand, (1) it takes *much* longer and (2) I almost always get it wrong. So I often write the code, get its output, carefully check it (really...), and then use it as the correct result. While some will undoubtedly view this as an even larger heresy than yours, for me this has proven to be faster, more accurate, more reliable and less frustrating.

Technically, I suppose, I usually do write a test that fails first, but a lot of the time that's because I put in some kind of NULL output in place of the true output, to be corrected later.
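
A sketch of that golden-output workflow; report_for and the expected/ path are invented stand-ins for whatever produces the large, fiddly output:

    require 'test/unit'
    require 'fileutils'

    class ReportTest < Test::Unit::TestCase
      GOLDEN = "expected/report.txt"

      # Stand-in for the real code under test.
      def report_for(month)
        "Report for #{month}\ntotal: 42\n"
      end

      def test_report_matches_carefully_checked_output
        actual = report_for("2009-06")
        if File.exist?(GOLDEN)
          assert_equal File.read(GOLDEN), actual
        else
          # First run: save the output to be checked by hand (really...);
          # once checked, it serves as the correct result thereafter.
          FileUtils.mkdir_p(File.dirname(GOLDEN))
          File.open(GOLDEN, "w") { |f| f.write(actual) }
          flunk "No expected output yet; wrote #{GOLDEN} for review"
        end
      end
    end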


From: Cedric Beust (Jun 26 2009, at 09:27)

My main objections to TDD are: 1) it promotes micro-design over macro-design, and 2) it's hard to apply in practice (i.e. for any code that is not a bowling card calculator or a stack). I'll write up a blog post expanding on these shortly.


From: Lennon (Jun 26 2009, at 14:38)

My own relationship to TDD is somewhat skewed by the fact that I've spent about 80% of my software career working in languages with a REPL, or at least prototyping designs in an interpreter.

My tests tend to be literally copied-and-pasted from the interpreter session, which removes much of the feeling that I'm maintaining too much code -- the tests are just a persistent artifact of the exploratory coding I've already done.

