Monday, 28 December 2009

big UI changes and their effect on tests

I recently read this post in Brian Marick's blog, and it set me thinking. He's talking about a test whose intention in some way survived three major GUI revisions. The test code had to be rewritten each time, but the essence of it was retained. He says:

I changed the UI of an app and so… I had to rewrite the UI code and the tests/checks of the UI code. It might seem the checks were worthless during the rewrite (and in 1997, I would have thought so). But it turns out that all (or almost all) of those checks contained an idea that needed to be preserved in the new interface. They were reminders of important things to think about, and that (it seemed to me) repaid the cost of rewriting them.

That was a big surprise to me.

I'm not sure why Brian is so surprised about this. If the user intentions and business rules are the same, then some aspects of the tests should also be preserved. A change in UI layout or technology should mean superficial changes only. In fact, one of the main claims for PyUseCase is that, because the tests are written in a domain language decoupled from the specifics of the UI, they can survive major UI changes. In practice this means that when you rewrite the UI, you are saved the trouble of also rewriting the tests. So Geoff and I decided to write some code and see if this was true for the example Brian outlines.

In the blog post, there is only one small screenshot and some vague descriptions of the GUIs these tests are for, so we did some interpolation. I hope we have written an application that gets to the gist of the problem, although it is undoubtedly less beautiful and sophisticated than the one Brian was working on. All the code and tests are on Launchpad here.

We started by writing an application which I hope is like his current GUI. You select animals in a list, click "book" and they appear in a new list below. You select procedures from another list, and unsuitable animals disappear.



In my app, I had to make up some procedures: in this case "milking", which is unsuitable for Guicho (no udders on a gelding!), and "abdominocentesis", which is suitable for all animals (no idea what that is, but it was in Brian's example :-). Brian describes a test where an animal that is booked should not stay booked if you choose a procedure that is unsuitable for it, then change your mind and instead choose a procedure that it is suitable for.


select animals Guicho
book selected animals
choose procedure milking
choose procedure abdominocentesis
quit
This is a list of the actions the user must take in the GUI. So Guicho should disappear when you select "milking", and reappear as available, but not as booked, when you select "abdominocentesis". This information is not in the use case file, since it only documents user actions.
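
To make the rule concrete, here is a minimal sketch of the kind of domain logic we are testing. This is not the actual code on Launchpad: the Clinic class and all its names are made up for illustration, but it captures the booking rule the test checks.

class Clinic:
    def __init__(self, animals, suitability):
        # animals: name -> animal type, e.g. {"Guicho": "gelding"}
        # suitability: procedure -> set of animal names it can be performed on
        self.animals = animals
        self.suitability = suitability
        self.booked = set()
        self.procedure = None

    def available(self):
        # Animals listed as available for the currently chosen procedure
        if self.procedure is None:
            return sorted(self.animals)
        return sorted(self.suitability[self.procedure])

    def book(self, name):
        self.booked.add(name)

    def choose_procedure(self, procedure):
        self.procedure = procedure
        # The rule under test: animals unsuitable for the chosen procedure
        # lose their booking, and do not get it back later
        self.booked &= self.suitability[procedure]

clinic = Clinic({"Guicho": "gelding", "Misty": "mare"},
                {"milking": {"Misty"},
                 "abdominocentesis": {"Guicho", "Misty"}})
clinic.book("Guicho")
clinic.choose_procedure("milking")           # Guicho disappears and loses his booking
clinic.choose_procedure("abdominocentesis")  # Guicho reappears as available...
assert "Guicho" in clinic.available()
assert "Guicho" not in clinic.booked         # ...but not as booked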

The other part of the test is the UI log, which documents what the application actually does in response to the user actions. This log is auto-generated by pyUseCase. For this test, I won't repeat the whole file (you can view it here), but I will go through the important parts:

'select animals' event created with arguments 'Guicho'

'book selected animals' event created with arguments ''

Updated : booked animals with columns: booked animals ,
-> Guicho | gelding

This part of the log shows that Guicho is listed as booked.


'choose procedure' event created with arguments 'milking'

Updated : available animals with columns: available animals , animal type
-> Good Morning Sunshine | mare
-> Goat 3 | goat
-> Goat 4 | goat
-> Misty | mare

Updated : booked animals with columns: booked animals ,


So you see that after we select "milking" the lists of available and booked animals are updated, Guicho disappears, and the "booked animals" section is now blank. The log goes on to show what happens when we select "abdominocentesis":


'choose procedure' event created with arguments 'abdominocentesis'

Updated : available animals with columns: available animals , animal type
-> Good Morning Sunshine | mare
-> Goat 3 | goat
-> Goat 4 | goat
-> Guicho | gelding
-> Misty | mare

'quit' event created with arguments ''


i.e. the "available animals" list is updated and Guicho reappears, but the booked animals list is not updated. This means we know the application behaves as desired: booked animals that are not suitable for a procedure do not reappear as booked when another procedure is selected.

Ok, so far so good. What happens to the test when we completely re-jig the UI so that it instead looks like this?



Now there is no book button, and you book animals by ticking a checkbox. Selecting a procedure will remove unsuitable animals from the list in the same way as before. So now if you change your mind about the procedure, animals that reappear on the list should not be marked as booked, even if they were before they disappeared. There is no separate list of booked animals.
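
If the booking rule lives in a model like the sketch earlier, this redesign only changes how that model is presented: booking becomes ticking the checkbox on a row instead of selecting and pressing a button, and the same rule decides which rows are checked. Again just an illustration, not the real code:

def rows(clinic):
    # One row per available animal: checkbox state, animal name, animal type.
    # Ticking the checkbox on a row would simply call clinic.book(name).
    for name in clinic.available():
        yield (name in clinic.booked, name, clinic.animals[name])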

We took a copy of the tests and the code, updated the code, and then saw what we needed to do to the tests to make them work again. In the end it was reasonably straightforward. We didn't re-record or rewrite any tests. We just had to modify the use cases to remove the reference to the book button, and save new versions of the UI log to reflect the new UI layout. The use case part of the test looks like this now:


book animal Guicho
choose procedure milking
choose procedure abdominocentesis
quit

which is one line shorter than before, since we no longer have separate user actions for selecting and booking an animal.

So updating the tests to work with the changed UI consisted of:
  1. Remove the reference to the "book" button in the UI map file, since the button no longer exists.
  2. In the use case files for all tests, replace "select animals x, y" with a line for each animal: "book animal x" and "book animal y".
  3. Run the tests. All fail in an identical manner. Check the changes in the UI log file once, using a graphical diff tool (there is no need to look at every test, since TextTest groups identical failures together).
  4. Save the updated use cases and UI logs. (The spurious line "book selected animals" is removed from the use case files, since the button no longer exists.)
  5. Run the tests again. All pass.
The new UI log file looks like this:

'book animal' event created with arguments 'Guicho'

Updated : available animals with columns: is booked , available animals , animal type
-> Check box | Good Morning Sunshine | mare
-> Check box | Goat 3 | goat
-> Check box | Goat 4 | goat
-> Check box (checked) | Guicho | gelding
-> Check box | Misty | mare

'choose procedure' event created with arguments 'milking'

Updated : available animals with columns: is booked , available animals , animal type
-> Check box | Good Morning Sunshine | mare
-> Check box | Goat 3 | goat
-> Check box | Goat 4 | goat
-> Check box | Misty | mare

'choose procedure' event created with arguments 'abdominocentesis'

Updated : available animals with columns: is booked , available animals , animal type
-> Check box | Good Morning Sunshine | mare
-> Check box | Goat 3 | goat
-> Check box | Goat 4 | goat
-> Check box | Guicho | gelding
-> Check box | Misty | mare

'quit' event created with arguments ''
It is quite explicit that Guicho is marked as booked before he disappears, and not checked when he comes back. Updating the UI log file was very easy: we viewed it in a graphical diff tool, noted that the new checkbox column and the missing list of booked animals were as expected, and clicked "save" in TextTest.

I only actually had five tests, but updating them to cope with the changed UI was relatively straightforward, and it would still have been straightforward even if I had had 600 of them.

I'm quite pleased with the way PyUseCase coped in this case. I really believe that with this tool you can write your tests once and have them survive many generations of your UI. I think this toy example goes some way towards showing how.

Wednesday, 16 December 2009

PyUseCase 3.0

Geoff has been working really hard for the past few months, writing pyUseCase 3.0. It has some very substantial improvements over previous versions, and I am very excited about it. He's written about how it works here.

It's a tool for testing GUIs with a record-replay paradigm that actually works. Seriously, you can do agile development with these tests; they don't break the minute you change your GUI. The reason is that the tests are written in a high-level domain language, decoupled from the actual current layout of your GUI. The tool lets you create and maintain a mapping file from the current widgets to the domain language, and helps you keep it up to date.
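
The mapping below is not PyUseCase's real file format, just a sketch of the idea: widget identifiers on one side, domain-language event names on the other. The recorded tests refer only to the names on the right, so when widgets are renamed or rearranged you update the map rather than every test.

# Hypothetical widget-to-domain-language map, for illustration only
UI_MAP = {
    "button=book":        "book selected animals",
    "treeview=animals":   "select animals",
    "combobox=procedure": "choose procedure",
}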

In a way it's a bit like Robot, or Twist, or Cucumber, in that your tests end up being very human-readable. The main difference is the record-replay capability. Anyone who can use the application GUI can create a test and run it straight away. With those other tools, a programmer typically has to go away and map the domain language of the test into something that actually executes.

The other main way in which pyUseCase differs from other tools is how it checks that your application did the right thing. Instead of the test writer having to pick out particular aspects of the GUI and make assertions about what they should look like, pyUseCase records what the whole GUI looks like in a plain-text log. You then use TextTest to compare the log you get today with the one you originally recorded when you created the test. The test writer can concentrate on normal interaction with the GUI, and still have very comprehensive assertions built into the tests they create.
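
TextTest does the comparison for you, but the idea itself is no more exotic than diffing today's log against the one you saved. A rough sketch using only the Python standard library (the file names here are made up):

import difflib

def log_diff(recorded_file, current_file):
    # Compare today's auto-generated GUI log with the one saved when the test was created
    with open(recorded_file) as f:
        recorded = f.readlines()
    with open(current_file) as f:
        current = f.readlines()
    return list(difflib.unified_diff(recorded, current,
                                     fromfile=recorded_file, tofile=current_file))

diff = log_diff("gui_log.recorded", "gui_log.today")
if diff:
    print("".join(diff))    # behaviour changed: fix the code, or save the new log if the change is intended
else:
    print("no difference: the GUI behaved exactly as when the test was recorded")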

pyUseCase, together with TextTest, makes it really easy to create automated tests, without writing any code, that are straightforward to maintain and readable by application users. Geoff has been developing his approach to testing for nearly a decade, and I think it is mature enough now, and sufficiently far ahead of the competition, that it is going to transform the way we do agile testing.

:-D

Thursday, 10 December 2009

Jens Östergaard on Scrum

Today I listened to a presentation about "Scrum for Managers" from Jens Östergaard. He's a big, friendly Dane who grew up in Sweden, and now lives in the UK. I first met Jens at XP2003 in Genoa, when he had just run his first successful Scrum project. These days he spends his time flying around the world, teaching Scrum courses and coaching Scrum Masters. (He'll be doing 2 more CSM courses in Göteborg in the next 6 months, and speaking at Scandinavian Developer Conference).

One thing I noticed about his talk was that most things about Scrum hardly seem to have changed at all. Jens was largely using the same language and examples that are in the original books. The other thing that struck me was that Jens said nothing about the technical practices that are needed to make agile development work. In my world, you can't possibly hope to reliably deliver working software every sprint/iteration if you haven't got basic practices like continuous integration and automated testing in place. I asked Jens about this afterwards, and he said it was deliberate. Scrum is a project management framework that can be applied to virtually any field, not just software development. Therefore he didn't want to talk about software-specific practices.

When I first heard Ken Schwaber talk about Scrum (a keynote at XP2002), I'm fairly sure he included the XP developer practices. I can't find my notes from that speech, but I remember him being very fiery and enthusiastic, encouraging us to go out and convert the world to Scrum and XP (the word agile wasn't invented then).

Scrum has been hugely successful since then. Today we had a room full of project managers and line managers who all knew something about Scrum, many of whom are using it daily in their organizations. Scrum is relatively easy to understand and get going with at the project level, and has this CSM training course that thousands of people have been on. These are not bad things.

I do think that dropping the XP development practices entirely from the description of Scrum is unhelpful. I chatted with several people who are having difficulty getting Scrum to work in their organizations, and I think lack of developer practices, particularly automated testing, is compounding their problems. I think a talk given to software managers needs to say something about how developers might need coaching and training in new practices if they are going to succeed with Scrum.

Friday, 4 December 2009

Scandinavian Developer Conference 2010


The programme for Scandinavian Developer Conference has just been published. I think we have a fantastic line up of speakers this year. I am particularly pleased Michael Feathers, Brian Marick and Diana Larsen have agreed to join us, and that this year my husband Geoff is also a speaker.

I have met Michael and Diana at many XP conferences over the years, but I missed Brian Marick the one time I was at the agile conference in North America, so I'm particularly interested to hear what he has to say. He has been very influential in the testing community, and invented the idea of testing quadrants, which I think is a very helpful way of thinking about testing.

Michael Feathers is known for his book "Working Effectively with Legacy Code", which I reviewed early drafts of back in around 2004. He and I also competed together in "Programming with the Stars" at Agile2008. Michael works for Object Mentor, coaching teams in all things agile.

Diana Larsen is chairman of the Agile Alliance, and has written a book about retrospectives together with Esther Derby. I think I first met her at XP2005, when I attended her tutorial, which I remember as outstanding. It was very interactive, and all about communication skills and team building. Her job seems to be all about teaching the people skills needed for agile to work.

Geoff is going to be talking about TextTest, which goes from strength to strength, and about productive GUI testing with pyUseCase. Geoff has been doing an awful lot of work on this tool lately, and I am really excited about the possibilities it opens up for agile testing. I will have to write a separate post on that though, so watch this space :-)

Many of the other speakers are familiar faces whom I look forward to meeting up with again: Bill Wake (books about refactoring, XP and Ruby), Erik Lundh (the earliest Swedish XP coach), Niclas Nilsson (Ruby, programming guru), Jimmy Nilsson (Domain-Driven Design book), Neal Ford (Thoughtworks, productive programmer book), Thomas Nilsson (CTO, Responsive, Linköping), Ola Ellnestam (CEO, Agical, Stockholm), Marcus Ahnve (programming guru), Chris Hedgate (programming guru)...

I'm also very pleased that I'm going to be speaking again this year, after the success of my previous presentation on "clean code". This year I hope to talk about agile testing and how best to approach it.

One of the reasons I keep going back to the XP conference is the amount of interaction and discussion generated by the many workshops and open space sessions. There are very few straight talks, and those are either presentations of academic papers or keynotes. When I saw the proposed programme for SDC a couple of weeks ago, I felt it was lacking something. Eight parallel tracks of presentations is all very well, but where is the interaction, the whole reason to go to a conference rather than just watch presentations on InfoQ? So I proposed a ninth "track", devoted to discussion, called "conversation corner". Luckily my colleagues at Iptor, who are organizing the conference, liked my idea.

To get the conversations going, I am organizing four "fishbowl" style discussions, seeded by conference speakers. I've picked topics that interest me, and invited other conference speakers, who I think are also interested in these topics, to join me.

I am hoping that after participating in one or two of my fishbowls, some conference attendees might feel comfortable proposing discussions of their own. To that end there will be a board with timeslots and index cards, so people can write up their topic, assign it to an empty timeslot, and hence invite more people to join them.

It won't be full-blown open space: there will be no opening meeting with everyone, and no two-minute pitches proposing sessions. I won't be explaining the law of two feet or the open space rules. But it is a step in that direction, and I hope a complement to the organized speeches going on in the rest of the conference.

Perhaps you'd like to join us at the conference? Register here.