In the last couple of years, there has been a lot of buzz around things like "Agile", "Scrum", "TDD", and other stupid stuff that wraps an acronym around common sense. Basically, they're all ways for people to make money by writing books telling other people how to develop software. The target market is non-technical managers, developers who aren't very good at their jobs, and executives who enjoy acronyms.

It's not all bad, though. Agile and scrum are basically fancy ways of saying "communicate more", and if all it takes to get "the business" talking to you is an overloaded word like "scrum", go for it. Mostly, though, I hear "scrum", "agile", and "TDD" used as cop-out explanations by developers for why they aren't very good at their jobs.

• The project is late because the requirements were constantly changing.
• We didn't catch the bug because we didn't have enough time for testing.
• 100% code coverage is impossible.

All of these issues can be solved by being better at your job. Learn how to communicate with your customers. Learn how to budget your time. Learn how to write tests. Scrum, agile, and TDD mitigate these issues; they don't solve them. They force you to communicate and write tests, but if you're not good at communicating or writing tests, doing more of it isn't going to help you.

Of course, if your manager is an idiot, then you're screwed, and you will always have these problems.

### Case Study: Sudoku Solver

Peter Norvig works at Google as Director of Research (I'm sure that actually means something), and before that he worked at NASA as some kind of AI genius/hacker. He's a smart guy who's been programming for a long time.

In Coders at Work, Norvig talks about a TDD experience of his involving an algorithm for solving Sudoku puzzles. It was interesting to me because I wrote a Java applet (shudder) to do the same thing for a college assignment: a problem involving recursion, threading, backtracking, memoization, and other stuff that academics love.

What happened is that he wrote an algorithm to solve them, and then someone else, from the TDD community, saw it and attempted to write the same algorithm using test-driven development: that is, by writing the tests first and allowing the tests to influence the design of the algorithm.

> It's also important to know what you're doing. When I wrote my Sudoku solver, some bloggers commented on that. They said, "Look at the contrast—here's Norvig's Sudoku thing and then there's this other guy, whose name I've forgotten, one of these test-driven design gurus. He starts off and he says, 'Well, I'm going to do Sudoku and I'm going to have this class and first thing I'm going to do is write a bunch of tests.' But then he never got anywhere. He had five different blog posts and in each one he wrote a little bit more and wrote lots of tests but he never got anything working because he didn't know how to solve the problem."

The problem that the TDD guy encountered was twofold:

1. He didn't know anything about Sudoku.
2. He tried to solve that by writing tests.

Tests are not a replacement for knowledge. That seems like a stupid statement, but it's something that a lot of TDD/Agile advocates forget. They might complain that we're not practicing scrum correctly, but practicing scrum correctly doesn't guarantee that anything will get better. Instead of blindly following a Wikipedia article to determine how to run your project or write code, figure out what's actually wrong, and then solve that in the way that makes the most sense. You might even discover that you start practicing scrum correctly as a byproduct of common sense. Common sense is, after all, why most development methodologies exist. Stuffing your current situation into the fad of the moment is not going to solve your problems, except maybe by accident, and I wouldn't hold my breath.

Anyway, back to Sudoku. The TDD guy tried to solve the Sudoku problem by writing tests for a problem he did not understand, which meant that whatever tests he was writing were totally worthless, which is something he actually admits in the very first article. After five articles, a great deal of code, and a great deal of tests, a Sudoku solver did not exist.

From the first article:

> Arguably this is a violation of YAGNI, but since I don't really know how to start or how to know when I'm done, writing some Spike code in TDD style seems to me to be a good way to get my feet wet.

WRONG. A good way to get your feet wet is to do some research on the problem (the internet is your friend) to make sure you're not heading in the totally wrong direction.

> My plan, subject as always to change, is to code something up in that way that I have, to see what happens.

FAIL. "To see what happens?" I'm glad I don't work with this guy. If you're randomly writing code to determine whether your plan is absolutely perfect or absolutely idiotic, you're doing it wrong.

Norvig solved the Sudoku problem by understanding the problem, understanding the solution, and then writing the code. He probably wrote some tests at some point, but they most definitely did not influence his design. What influenced his design was personal experience and a vast knowledge of search algorithms.
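To make "understanding the solution" concrete, here is a deliberately naive backtracking Sudoku solver sketched in Python. This is a toy, not Norvig's code; his real solver layers constraint propagation on top of a search like this, which is what makes it fast on hard puzzles. The point is that every line here comes from knowing how depth-first search works, not from a test.

```python
# Toy Sudoku solver: plain depth-first backtracking, no cleverness.
# Grids are 81-character strings, row by row, with '0' or '.' for blanks.

def solve(grid):
    cells = [0 if ch in '0.' else int(ch) for ch in grid]

    def ok(i, v):
        # Can value v go in cell i without clashing with its row,
        # column, or 3x3 box?
        r, c = divmod(i, 9)
        for j in range(81):
            if j == i or cells[j] != v:
                continue
            jr, jc = divmod(j, 9)
            if jr == r or jc == c or (jr // 3, jc // 3) == (r // 3, c // 3):
                return False
        return True

    def search(i):
        # Fill cells left to right; backtrack on dead ends.
        if i == 81:
            return True
        if cells[i]:                     # given clue, skip it
            return search(i + 1)
        for v in range(1, 10):
            if ok(i, v):
                cells[i] = v
                if search(i + 1):
                    return True
        cells[i] = 0                     # undo and backtrack
        return False

    return ''.join(map(str, cells)) if search(0) else None
```

Knowing why this brute-force version exists, and why it chokes on hard puzzles, is exactly the kind of domain knowledge that writing tests first never produced.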

Of course, you aren't going to have vast knowledge of every problem you're trying to solve, but that's not the point. The point is that writing tests first is not always the correct decision. If you don't know anything about the problem or how to solve it, writing a test is not going to help you. All it will do is make you write a lot of code that is probably worthless.

### Case Study: Wiki Engine

Recently, I decided to write a wiki engine. Why? Because I wanted to. I'm aware of the hundreds of other wiki engines, but it's an interesting problem, and reinventing the wheel is a lot of fun; just make sure you're doing it on your own time.

Anyway, I started and stopped several times, deleting large chunks and rewriting them as I went. That is probably not the best way to develop, but whatever. At one point, I wrote a test that looked similar to this:

```php
public function testHeaderGetsParsedCorrectly() {
    $wikitext = <<<WIKI
!Header level 1
!!Header level 2
WIKI;

    $expected = <<<HTML
<h1>Header level 1</h1>
<h2>Header level 2</h2>
HTML;

    $this->assertEquals($expected, $this->wikiEngine->toHtml($wikitext));
}
```


Nothing wrong with that. It's testing that the given wiki text gets transformed into the correct HTML. Pretty straightforward.

However, this gives me absolutely no insight into the design of the overall engine. I could easily write a function that does this, but will it be the correct design? Let's probe deeper.

I've written wiki engines before (they sucked). I've looked at other wiki engines and done a bit of research into lexical analysis and certain kinds of parsers. I know a little bit about how to write something that parses something else. I didn't really use much of that knowledge in my own personal wiki engine, because using yet another parser generator in a wiki engine written in PHP would be an impressive waste of my time, even from a purely academic point of view. But I digress. I was talking about why TDD was a bad idea in this situation.

All this test can tell me is that I need a function that transforms some text into some other text. How do I handle scoping? How do I handle block elements vs. inline elements? How do I handle escaping? Can the solutions to these problems be reused, or is each a one-off kind of thing that can be hard-coded in one place?

I ran into all of these problems after I had already written several tests. Everything worked, but the design was crap. There was no way I could handle a nested element (like, say, a string of bold text inside a list item). I had to completely rethink my design. The amusing part is that the tests did not change, only my design. TDD utterly failed to influence my design.
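To make the nesting problem concrete, here is the shape of the scope-stack idea, sketched in Python rather than in my PHP engine. The `__` bold markup matches the syntax above; the `''` italics token is invented for the example. Each opening token pushes a scope, a repeated token closes the innermost scope, and anything still open gets closed at EOF:

```python
# Sketch of a scope-stack inline parser. Illustrative markup only:
# '__' toggles bold, "''" toggles italics.
import re

TOKENS = {"__": "strong", "''": "em"}

def to_html(wikitext):
    out, stack = [], []                       # stack = currently open tags
    for piece in re.split(r"(__|'')", wikitext):
        tag = TOKENS.get(piece)
        if tag is None:
            out.append(piece)                 # plain text between tokens
        elif stack and stack[-1] == tag:
            out.append("</%s>" % stack.pop()) # token closes innermost scope
        else:
            stack.append(tag)                 # token opens a new scope
            out.append("<%s>" % tag)
    while stack:                              # auto-close open scopes at EOF
        out.append("</%s>" % stack.pop())
    return "".join(out)
```

The nesting case that broke my first design falls out of the stack discipline for free. No test told me that; understanding the data structure did.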

It could be argued that my tests were not good enough. I would argue that my tests are fine. The actual implementation of the engine is not something I care about. I don't care if I use a scope stack or keep track of nested elements some other way, which is why my tests are only interested in results, not design. I could write some tests that look like the following:

```php
public function testBoldIsAddedToScopeStack() {
    $wikitext = '__bold text__';
    $this->wikiEngine->toHtml($wikitext);
    $this->assertContains(array('type' => 'strong'), $this->wikiEngine->scopeStack);
}
```

Oh wait, that's an absolutely terrible design, for several reasons. First of all, the scope stack should not be publicly available. I suppose I could add a public scopePeek() method that just returns the last pushed scope, but why? To satisfy a unit test? Testing should influence design, but you shouldn't sacrifice design for testing. Don't make methods public because you can't test them otherwise. Secondly, toHtml() is not incremental. When I pass text into it, I want HTML returned. I can't just parse half of it and then check that it was parsed correctly; that makes no sense for a wiki engine. There isn't ever a time when you would want to parse something halfway. Again, I'd have to sacrifice design to satisfy my unit test. That's not going to happen. Another solution is to just not close the scope, something like this:

```php
$wikitext = '__bold text'; // no closing "__"
```


However, I want the engine to automatically close scopes when it reaches EOF. So the toHtml() method does that at the end automatically, which means that after I call toHtml() the scope stack will always be empty. I could move that functionality into another public function, but why would I want to expose that functionality publicly? There's no reason. I always want to close scopes; I don't want that decision left up to the implementer of the wiki engine.

In the end, there was no way for me to test that a scope got pushed onto the stack correctly. In fact, I realized that I really didn't want to test that anyway. What if I wanted to change the scope stack to a queue? Then I'd have dozens of breaking tests because the internal data structure is now FIFO instead of LIFO. Is that something that should be tested? In my opinion, absolutely not. As long as the parsing works correctly, I don't particularly care about the internal data structure of the engine. If performance is a concern, I can write tests to verify that parsing some hideously long whiny anti-TDD article takes at most x seconds. Then if a change to the internal data structure causes that test to break, I know I've got something to fix. In that way TDD could influence my design, but up front, there's no test that could have told me what my design should look like. I had to analyze problems that were hidden from the public interface and solve them without tests.
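In miniature, the results-only testing I'm describing looks like this (Python, with a toy bold-only engine invented for the example): the assertions touch nothing but the public output, so the internal bookkeeping can change from a stack to anything else without breaking them.

```python
# Toy engine for illustration: '__' toggles bold, and an unclosed bold
# scope is closed at EOF. Only to_html's output is part of the contract.

class WikiEngine:
    def to_html(self, wikitext):
        # Internal detail: chunks around '__' alternate between "outside"
        # and "inside" a bold scope. A stack, a queue, or this simple
        # alternation would all pass the same behavioral tests.
        parts = wikitext.split("__")
        html = []
        for i, part in enumerate(parts):
            if i > 0:
                html.append("</strong>" if i % 2 == 0 else "<strong>")
            html.append(part)
        if len(parts) % 2 == 0:          # a scope was left open at EOF
            html.append("</strong>")
        return "".join(html)
```

A test suite written this way survives a rewrite of the internals, which is exactly the property the scope-stack tests above lacked.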

This seems like a rather contrived example, but it's one that I encountered while trying to employ TDD to help me write some code (IRL, as it were). The fact is, this methodology just didn't work for this problem. It was creating hindrances and forcing me to sacrifice a solid, safe design for the sake of testing something that didn't really need to be tested. And it didn't even give me the help I needed, which was how to construct the internal engine so that nested elements were rendered correctly. The truth is that I needed more experience with parsing engines before I could confidently design a parser that correctly handled nested scopes. That required some trial and error before a design that solved the problem emerged, and TDD wasn't going to give me that no matter what kind of tests I wrote.

### Conclusion

I'm not here to bash testing. I'm all for testing, in ways that might surprise you: ways that make people develop better software. But some people just aren't suited for TDD, and not every problem requires a unit test. In most cases, common sense will solve more problems than writing a test ever would.

And if anyone actually looks at my wiki engine, I had specific goals in mind, which is why it's shoved in one giant class and uses arrays for stuff instead of objects. I don't want to hear it.