Thursday, 25 August 2011

Refactoring Kata fun

I've been working on a kata called "Tennis"*, which I find interesting, because it is quite quick to code, yet is a big enough problem to be worth doing. It's also possible to enumerate pretty much all the allowed scores, and get very comprehensive test coverage.

What I've found is that when I'm using TDD to solve the kata, I tend to enumerate only a very small number of the test cases. I generally end up with something like:

Advantage Player1
Win for Player1
Advantage Player2

I think that's enough to test drive a complete implementation, built up in stages. I thought it would be enough tests to also support refactoring the code, but I actually found it wasn't. After I'd finished my implementation and mercilessly refactored it for total readability, I went back and implemented exhaustive tests. To my horror I found three (of 33) that failed! I'd made a mistake in one of my refactorings, and none of my original tests found it. The bug only showed up with scores like Fifteen-Forty, Love-Thirty and Love-Forty, where my code instead reported a win for Player 2. (I leave it as an exercise for the reader to identify my logic error :-)

So what's the point of TDD? Is it to help you make your design good, or to protect you from introducing bugs when refactoring? Of course it should help with both, but I think doing this practice exercise showed me (again!) that it really is worth being disciplined and careful about refactorings. I also think I need to develop a better sense for which refactorings might not be well covered by the tests I have, and when I should add more.

This is something that my friend Andrew Dalke brings up when he criticises TDD. The iterative, incremental red-green-refactor rhythm can lull you into a false sense of security: you forget to stop, look at the big picture, and analyze whether the tests you have are sufficient. Nothing reminds you to add tests that should pass straight away, but might be needed when you refactor the code.

So in any case, I figured I needed to practice my refactoring skills. I've created comprehensive tests and three different "defactored" solutions to this kata, in Java and Python. You can get the starting code here. You can use this to practice refactoring with a full safety net, or, if you're feeling brave, without one. Try commenting out a good percentage of the tests and doing some major refactoring. When you bring all the tests back, will they still all pass?

I'm planning to try this exercise with my local python user group, GothPy, in a few weeks' time. I think it's going to be fun!

* Tennis Kata: write a program that, given how many points each player has won in a single game of tennis, tells you the score.
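For readers who haven't tried the kata, here is a minimal Python sketch of one possible solution. The function name and the exact score strings are my own choices, not the kata's official wording, and this is one straightforward shape for the logic rather than any of the "defactored" solutions mentioned above:

```python
# A sketch of the Tennis kata: map each player's point count to the called score.
# Names below (tennis_score, "Win for Player1", etc.) are assumptions for illustration.

SCORE_NAMES = ["Love", "Fifteen", "Thirty", "Forty"]

def tennis_score(p1, p2):
    """Return the score call for a game where player 1 has won p1 points
    and player 2 has won p2 points."""
    if p1 >= 4 or p2 >= 4:
        # Past "Forty" only deuce, advantage or a win are possible,
        # and which one depends solely on the point difference.
        diff = p1 - p2
        if diff >= 2:
            return "Win for Player1"
        if diff <= -2:
            return "Win for Player2"
        if diff == 1:
            return "Advantage Player1"
        if diff == -1:
            return "Advantage Player2"
        return "Deuce"
    if p1 == p2:
        return "Deuce" if p1 == 3 else SCORE_NAMES[p1] + "-All"
    return SCORE_NAMES[p1] + "-" + SCORE_NAMES[p2]

# The kinds of scores that exposed my refactoring bug:
print(tennis_score(1, 3))  # Fifteen-Forty
print(tennis_score(0, 2))  # Love-Thirty
```

Because the set of reachable scores is small, it is easy to go beyond the handful of TDD-driven cases and assert on every one of them, which is exactly the exhaustive safety net that caught my mistake.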

Tuesday, 9 August 2011

Books on automated testing

As I mentioned in my last post, I've recently taught a course in automated testing to a bunch of students at KYH. Before the course I spent some time looking for good course books for them. I looked at a few options and eventually decided on "The Art of Unit Testing: with Examples in .NET" by Roy Osherove, and "The RSpec Book" by Chelimsky et al.

I chose the unit testing book because Roy does a good job of describing the basics of test driven development, including simple mocks and stubs. The book is very practical and is full of insight from experience and code examples.

I also looked at "Pragmatic Unit Testing in C# with NUnit" by Andrew Hunt and David Thomas. I'm a big fan of their book "The Pragmatic Programmer", so I had high hopes for this one. Unfortunately I was rather disappointed with it. It talks about what good unit tests should look like, but not much about how you use Test Driven Development to create them.

I chose the RSpec Book because it has quite a bit of material about Cucumber and how it fits in to a Behaviour Driven Development process. I think the published literature on automated testing focuses too much on unit level tools, and there is not enough written about feature level tests and how to use them as part of the whole agile process.

I also looked at "Bridging the Communication Gap" by Gojko Adzic, which I think is an excellent introduction to using feature level tests as part of the agile process, but it is largely tool agnostic. There is a short chapter introducing some tools, including JBehave (a forerunner to Cucumber), Selenium and TextTest. It's a little out of date now, though, and for this course I wanted something with more detail.

I hope these short book reviews are useful.