In my current assignment, I'm taking the role of "developer-in-test". I'm working in a large distributed development project, which is building new functionality on a large existing codebase. In practice, I work closely with the developers in the project and build automated tests for subsystems that previously had only manual tests. The developers can use these tests to support their work, and add new tests as they build new features.
My background is basically as a developer, so I have been reading up on testing. I found "Lessons Learned in Software Testing" by Kaner, Bach and Pettichord very helpful, and "Agile Testing" by Lisa Crispin and Janet Gregory helpful and very thorough. I find it interesting that the authors of the first book started out as developers and now classify themselves as testers, while Lisa and Janet have apparently always called themselves testers, although they clearly write a fair amount of code as part of their work.
Dave Nicolette recently wrote a blog post, "Merging the developer and tester roles", where he argues that Tester is just a specialization of Developer in the agile world, much as DBA (database administrator) is a specialization of Developer. He argues that agile teams need to be staffed with generalizing specialists: anyone can turn their hand to any task that is currently needed, while still having some tasks they perform with more skill than others.
I like Dave's viewpoint; it fits my experience. I can only write effective 2nd Quadrant tests (business-facing, supporting the team) if I understand what the developers need, and I do that best if I have done some development on that part of the system myself. To put it another way, I need to be just as competent at writing code as the other developers in the project I'm working in, but I also need additional skills to do with testing.
I like the term "developer-in-test" to describe a role writing and enabling 2nd Quadrant tests.
Having said all that, I'm not sure I agree with Dave that the Developer and Tester roles should always be merged. In my current assignment I'm also helping a group of testers, usability experts, technical writers and product owners to get going with exploratory testing. This testing falls into Q3 of the agile testing quadrants, and is quite different. You still need testing skills, but developer skills are mostly irrelevant. It's much more about understanding what the user is trying to achieve with the system, and how they view it.
I think there is a role for non-coding testers in Q3 testing. However, I don't think you'll get far with Q3 unless you have the other quadrants well covered with automated tests. So I think the majority of work for a tester in an agile environment is still going to involve test automation. Only the biggest projects will be able to afford to have non-coding testers.
Sunday, 21 February 2010
Sunday, 14 February 2010
XP2010 workshop and lightning talk
I've just heard that two of my proposals for XP2010 have been accepted, which means I will definitely be off to Trondheim in early June. I've heard Trondheim is very beautiful, and the XP conference is usually excellent, so I'm really looking forward to it. It will actually be my 8th XP conference!
I'm going to be running a half-day workshop, "Test Driven Development: Performing Art", which will be similar to the one I ran at XP2009 (which I blogged about here). I've put up a call for proposals on the codingdojo wiki, so do write to me if you're interested in taking part.
The other thing I'll be doing is a lightning talk "Making GUI testing productive and agile". This will basically be a brief introduction to PyUseCase with a little demo. Hopefully it will raise interest in this kind of approach.
Perhaps I'll see you there?
Friday, 5 February 2010
code coverage and tests
At GothPy yesterday, Geoff talked about code coverage and tests. Geoff has spent a lot of his evenings lately working on PyUseCase, getting the test coverage up to 100% (statement coverage), a feat he achieved last week. The evidence is available for all to see on the texttest site (which is updated daily, btw, so if it is not green and 100% the day you read this post, then clearly Geoff had a bad day yesterday).
I have limited experience of using coverage statistics to evaluate my tests, so it was interesting to hear Geoff summarize his findings. He thought it had been well worth the effort to get coverage to 100%: he'd found some bugs and some dead code, and improved his design along the way. Actually, saying he has 100% coverage is a statement that needs qualification. The tool he's been using - coverage.py - lets you mark lines of code with "# pragma: no cover", i.e. "don't count this line for coverage purposes". He's marked 37 of 3242 lines like this.
The reason for excluding these lines is mostly practical - due to the nature of the tool you can't test it automatically when it is in "interactive" mode without physically pressing the buttons yourself - so automated tests for that part are impossible. Some excluded lines are for error cases which should never occur, but for which it would be useful to have a good error message if they ever did.
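To make that concrete, here is a small sketch of my own (the function is hypothetical, not taken from PyUseCase) showing how such a line is marked; coverage.py recognises the comment and leaves that line out of its statistics:

    def save_config(path, text):
        # A hypothetical function, just to illustrate the pragma in use.
        try:
            with open(path, "w") as f:
                f.write(text)
        except IOError:  # pragma: no cover
            # An error case that should never occur in normal use and is
            # awkward to provoke from an automated test, but where a clear
            # message is valuable if it ever does happen.
            raise RuntimeError("could not write configuration to " + path)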
Overall, Geoff thinks coverage is very useful to help you identify:
- poorly tested areas of your code
- mistakes in your tests
- dead code
- refactoring opportunities
If you have a test for a feature and the code implementing that feature isn't covered, that's a clue there is a mistake in the test. Similarly, if your tests cover all your features and some code is not covered, maybe that code isn't important at all, and could safely be removed. Geoff's tests are not unit tests, they are testing the whole of PyUseCase, and that maybe makes a difference with this particular point. If I just had unit tests and a piece of code wasn't covered, I'm not sure I could as easily infer that it wasn't needed as part of a larger feature.
Refactoring opportunities can be identified from gaps in coverage too. The idea is that poorly tested code is a clue that it has other problems too. Perhaps you find two pieces of code are similar, and one copy has a gap in coverage. This could indicate they originate from copy-paste programming, and could be combined into one routine, with full test coverage.
Geoff had some tips for people who wanted to use coverage statistics to improve their tests (a minimal sketch of the measuring workflow follows the list):
- Don’t design your tests around coverage. Write appropriate tests, and then measure coverage.
- This applies even when working with coverage results. See the coverage report as containing clues for new tests, not commands.
- Use “# pragma: no cover” in your code to be explicit about code that you decide not to try to cover. Review these periodically.
- Don’t be fanatical about absolute numbers. Commands like “Aim for at least 85% coverage” are counterproductive. (You get what you measure).
- It’s always good to increase feasible coverage. It’s sometimes better to spend your limited time on other things. But if you don’t measure, you can’t make that decision effectively.
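For anyone who wants to try the measuring part, here is a minimal sketch of collecting statement coverage with coverage.py, the tool Geoff used. It is my own illustration, not his setup: exercise_code() is just a placeholder for whatever runs your tests, and the exact spelling of the entry point has varied a little between coverage.py releases.

    import coverage

    def exercise_code():
        # Placeholder for whatever exercises the code under test:
        # a unittest runner, a TextTest suite, or plain function calls.
        assert sum(range(4)) == 6

    cov = coverage.Coverage()  # spelled coverage.coverage() in older releases
    cov.start()
    exercise_code()
    cov.stop()
    cov.save()

    # Print a per-module statement coverage summary; treat the gaps as
    # clues for new tests, not as numbers to chase.
    cov.report()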
Inspired by Geoff's talk, I spent some time today trying to get some coverage numbers for the code and tests I'm working on at present. Unfortunately it seemed to be a bit tricky to get the coverage tool to work. The code isn't in Python, of course, and that may have something to do with it. Hopefully I'll sort it out and be able to write a new blog post about my own experiences with coverage statistics sometime soon.