TDD depression

published: Tue, 18-Oct-2005   |   updated: Tue, 18-Oct-2005

Over the past year or so, I've become more and more aware that the big companies that provide development tools just don't understand test-driven development (TDD). In fact, I'd go even further: a lot of companies that write software don't seem to get it either. So, I'm depressed.

Microsoft have produced, in Visual Studio 2005 Team System, functionality that generates unit tests from production code. Their Guidelines for Test-Driven Development contain these helpful rules:

6. Define the interfaces and classes for your feature or requirement. You can add a minimum of code, just enough to compile.

7. Generate tests from your interfaces and classes.

Well, it's complete crap, people: this is not TDD. TDD says:

1. Write a test that fails (Red)

2. Write code to make the test pass (Green)

3. Modify the code to remove duplication, to make it simpler and more expressive (Refactor)

Spot the difference: VS2005 seems to think that writing the code first is the way to go, whereas TDD says that the code is driven by writing the test first.
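
For the avoidance of doubt, here's roughly what that cycle looks like in practice. It's a minimal sketch in Python's built-in unittest rather than the .NET tooling under discussion, and price_with_tax is a made-up function for illustration: the test is written before the function exists, fails, and then drives the code into being.

    import unittest

    # Step 1 (Red): write the test first. At this point price_with_tax
    # doesn't exist, so the test fails; that failure is the point.
    class PriceWithTaxTests(unittest.TestCase):
        def test_adds_sales_tax_to_the_net_price(self):
            self.assertAlmostEqual(price_with_tax(100.0, 0.08), 108.0)

    # Step 2 (Green): write just enough code to make the test pass.
    def price_with_tax(net_price, tax_rate):
        return net_price * (1 + tax_rate)

    # Step 3 (Refactor): with the bar green, remove duplication, rename,
    # simplify; rerun the suite after every change.

    if __name__ == "__main__":
        unittest.main()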

To be fair, I must admit the page has a disclaimer to say that this is not real TDD (which, if you recall, is as much about evolving design as testing code).

And then we have Borland with its Application Lifecycle Management (ALM) suite of products. I'm reminded of a demo I saw at one of the BorCons, where they were proudly showing off the integration between this heavyweight let's-track-everything-the-developer-does system and some IDE. I gazed horrified as the "marketing guy" changed a requirement, the "software team lead" costed it out and assigned it to a developer, and the developer made the single-line change to make it happen and checked it in.

Where was the point where the developer said: you know, this change seems to indicate that this class is now too broad and has too much responsibility; I need to refactor it to this model instead, and since I have all the unit tests, I'm protected, etc., etc.? Nope. No intelligence or agility allowed in this world of waterfall ALM tools. They are not designed for agility; they are designed to control what the developer does. Fine if you have a team of crappy developers and want to keep them penned up, but for good developers? What about the developer who says: this requirement seems to point to this type of functionality; have we thought about this? Should I engineer for success or just code to make the ALM tool happy?

And then, a few weeks ago, I gave a presentation about unit tests to a group here at Configuresoft. My premise was simple: a unit test should test code or functionality that you write; it should not test code or systems you don't write. Furthermore, a unit test should ideally test a single independent piece of functionality; it should not test multiple pieces, since that's the purview of integration tests. In essence, for unit tests we try to remove as much dependency on other production or third-party code as we possibly can.

Everything seemed to go OK until I raised the topic of mock objects, fake objects, and stubs. Blam: big, big dissent and pushback. Essentially, the argument revolved around the amount of code being written that wasn't "production code", to wit: "the company doesn't pay us to write this kind of code, they pay us to write code the customers will use."
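
To make the objection concrete, here's the kind of thing I was proposing. It's a minimal sketch in Python's standard unittest, not our actual stack, and the names (StaleAccountReport, FakeAccountRepository) are invented for illustration: the fake stands in for the database, so the test exercises only the logic we wrote.

    import unittest

    # Hypothetical production code: the report depends on an abstract
    # "repository", not on a live SQL Server connection.
    class StaleAccountReport:
        def __init__(self, repository):
            self.repository = repository

        def count_stale(self, cutoff_year):
            return sum(1 for acct in self.repository.all_accounts()
                       if acct["last_login_year"] < cutoff_year)

    # The fake: a hand-rolled, in-memory stand-in for the real repository.
    class FakeAccountRepository:
        def __init__(self, accounts):
            self._accounts = accounts

        def all_accounts(self):
            return list(self._accounts)

    class StaleAccountReportTests(unittest.TestCase):
        def test_counts_only_accounts_older_than_the_cutoff(self):
            fake = FakeAccountRepository([
                {"last_login_year": 2003},
                {"last_login_year": 2005},
            ])
            report = StaleAccountReport(fake)
            # No database, no network, no configuration: just our logic.
            self.assertEqual(report.count_stale(2004), 1)

    if __name__ == "__main__":
        unittest.main()

Yes, the fake is code that no customer will ever run, but it's what lets the test run in milliseconds with no SQL Server in sight.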

No matter that I tried to point out that

  • customers pay us to write code that works, they don't want nasty surprises
  • we're not paid to test SQL Server
  • we don't test exception handling because we can't easily cause the exception to be thrown (a stub can force it, as the sketch after this list shows)
  • we're continually writing monolithic coupled code because we can't test in isolation
  • our testing is coupled to having a properly-configured data-populated SQL Server online
  • we reproduce bugs in the debugger and fix them, but don't have a test to check against regression
  • our testing is human-driven: first by the developers and second by QA
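
That point about exception handling is exactly where a stub earns its keep. Here's a minimal sketch using Python's unittest.mock (again, not our actual stack, and load_settings is a made-up function): the stub's query method is told to throw, so the error-handling path gets exercised without having to break a real server.

    import unittest
    from unittest import mock

    # Hypothetical production code: we want to verify our handling of a
    # failure, not provoke a real one in SQL Server.
    def load_settings(connection):
        try:
            return connection.query("SELECT * FROM settings")
        except ConnectionError:
            return {}  # fall back to defaults rather than crashing

    class LoadSettingsTests(unittest.TestCase):
        def test_falls_back_to_defaults_when_the_query_blows_up(self):
            stub = mock.Mock()
            stub.query.side_effect = ConnectionError("server unavailable")
            self.assertEqual(load_settings(stub), {})

    if __name__ == "__main__":
        unittest.main()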

So, I'm depressed. I'm obviously not getting the concept across at my own place of work, and equally the TDD world out there is in the same boat. I'm aware of the élitist attitude (or even the evangelist attitude) that TDD proponents are accused of having, but I don't know how to counter it. It seems evident to me that TDD is a great development methodology for producing working software; it definitely works for me, so much so that I'm now in this place where I get mindfunk if I can't run a suite of unit tests to validate what I'm doing.

The thing is, there are development teams out there that do get this. They understand the limitations of Big Design Up Front (BDUF). They love the confidence in their code that TDD gives them. They like evolving a design that is loosely coupled, efficient, and testable. ThoughtWorks is a prime example of this kind of company: they "get it" and are willing to push for it in their contracts. (Martin Fowler is the Chief Scientist at ThoughtWorks.)

Update (Tue 18-Oct-2005)

A reader pointed out that my original post seemed to indicate that Configuresoft's ECM product wasn't tested. On the contrary, it's extremely well tested. My point is that (1) a lot of the code is not supported by automated unit tests (we rely instead on integration testing to ferret out any bugs), and (2) we don't yet have a culture of writing proper automated unit tests, be it via TDD or any other methodology. The fact that mocks, fakes, and stubs got pushback is neither here nor there, really. Writing and relying on unit tests will naturally guide the developer to using these techniques.

Indeed, I would argue that a whole class of bugs cannot be found in ECM by any other means than integration testing. The reason is the wide variety of interconnected subsystems in ECM: there's the Agent that collects the data from the target workstation/server; this data gets sent to the Collector; the Collector passes it on to a router called SAS, which then passes it on to a data processor called ETL, which eventually loads it into the database. All of those entities are separate processes, and all need to be tested against each other (after all, if the Agent produced data that couldn't be understood by ETL, the system would be irretrievably broken).