Most cellphone and computer software is delivered late and over budget. The biggest contributing factor to cost bloat is building the wrong thing. So what software and business people need is "a shared understanding of what done looks like".
Test Driven Development is about design, conversations, and writing examples for a system that doesn't yet exist. It's not really about testing. However, once the system exists, your examples turn into tests, as a rather useful side effect.
A User Story is a promise of a conversation, and it is in that conversation that things go wrong. The customer and developer rarely agree on what "enough" and "done" look like, which leads to over- or under-engineering.
Dan suggests a format for User Story cards which aims to prevent this communication gap.
On the front of the User Story index card, you have the title and the narrative. The narrative consists of a sentence in this format:

As a [role]
I want [feature]
so that [benefit]

where the benefit should be something of real value to the business.
On the back of the card, you have a table with three columns:

Given this context | When I do this | Then this happens
Then you have 4 or 5 rows in the table, each detailing a scenario. (If you need more than that, the story is too big and should be split.)
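For example, the front of a card might read something like:

As a bank customer
I want to withdraw cash from an ATM
so that I don't have to queue in the branch

and a scenario row on the back might then be:

Given my account is in credit | When I request cash | Then the cash is dispensed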
Dan finds that in his work, this leads to conversations about User Stories in which "done" and "enough" are discussed and defined.
User Stories should be about activities, not features. To check that your User Story is an activity, try a thought experiment: could the story be implemented as a task performed by people on rollerblades, carrying paper? You must think about it as a business process, not a piece of software.
When creating the story cards, the whole team should be involved, but it is primarily the business/end-user stakeholders and business analysts who write the title and narrative on the cards. They then bring in a tester to help them write the scenarios.
Are people familiar with the V model of software testing? When it was conceived, the whole process was expected to take two years and span the entire project. Dan usually does it in two days, many times per project.
Then Dan offered to show us how to do BDD using plain JUnit. He requested a pair from the audience, so I volunteered. At this point my notes dry up, and I am working from memory, but I think the general idea is like this.
You talk about "behaviour specs", not tests. The words you use influence the way you think, and "behaviour specification" gives much better associations than "test".
Each behaviour specification should be named to indicate the behaviour it is specifying: not testCustomerAccountEmpty, but rather customerAccountShouldBeEmpty.
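With JUnit 4's @Test annotation the method name no longer has to start with "test", so the name can read as a plain sentence. A minimal sketch of what such a spec might look like (CustomerAccount is an invented class, just for illustration):

import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class CustomerAccountBehaviour {

    // reads as a sentence: "customer account should be empty"
    @Test
    public void customerAccountShouldBeEmpty() {
        CustomerAccount account = new CustomerAccount();
        assertTrue(account.isEmpty());
    }
}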
In the body of the spec, you can start out by typing in the prose of one of the scenarios from the user story, as comments:
// given we have a flimble containing a schmooz
// when we request the next available frooble
// then we are given a half-baked frooble and the schmooz
Then you can fill in code after the "given" comment. When you have code that does what the comment says, delete the comment. Repeat with the "when" and "then" comments.
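Put together, the finished spec might end up looking something like this (Flimble, Schmooz and Frooble are imaginary classes straight from the scenario; I have left the given/when/then comments in to show how the code maps back to the card, though following Dan's advice you would delete each comment once the code says the same thing):

import static org.junit.Assert.assertSame;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class FlimbleBehaviour {

    @Test
    public void shouldGiveHalfBakedFroobleAndSchmooz() {
        // given we have a flimble containing a schmooz
        Schmooz schmooz = new Schmooz();
        Flimble flimble = new Flimble(schmooz);

        // when we request the next available frooble
        Frooble frooble = flimble.nextAvailableFrooble();

        // then we are given a half-baked frooble and the schmooz
        assertTrue(frooble.isHalfBaked());
        assertSame(schmooz, frooble.getSchmooz());
    }
}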
In this way, you build up a behaviour specification that drives your development of the system. A few minutes later (hopefully) you have a system which implements the specification, and at that point your spec magically turns into a regression test which you can run. You can start calling it a test then if you like, but it is actually more helpful to your brain to continue thinking of it as a behaviour specification; it leads to much more constructive conversations about the system.