I'm speaking next week at ScanDev on Tour in Stockholm on the subject of "Software Development Craftsmanship", and as part of my research I read both "The Clean Coder" by Robert C. Martin and "Apprenticeship Patterns" by Dave Hoover & Adewale Oshineye. These are very different books, but both aimed at less experienced software developers who want to learn about what it means to be a professional in the field. In this article I'd like to review them side by side. First some text from each preface on what the authors think the books are about:
Apprenticeship Patterns
"This book should help you through the tough decisions you face as a newcomer to the field of professional software development." (preface xi)
The Clean Coder
"This book is about software professionalism. It contains a lot of pragmatic advice" (preface xxii)
The Content
Both books contain a lot of personal stories and anecdotes from the authors' careers, and begin with a short autobiography. Some of the advice is also similar. Both advise you to practice with "Kata" exercises, to read widely and to find suitable mentors. I think that's mostly where the similarities end though.
Dave and Ade don't say much about how to handle unreasonable managers imposing impossible deadlines. Bob Martin devotes several chapters to this kind of issue: handling pressure, time management, estimation, making commitments and so on.
Dave and Ade talk more about how to get yourself into situations optimized for learning and progress in your career. They advise you to "Be the worst", "Find mentors", seek "Kindred Spirits". In other words, join a team where you're the least skilled but you'll be taught, look for mentors in many places, and get involved in the community.
Bob talks about a lot of specific practices and has detailed advice. He mentions that "... pairing is the most efficient way to solve a problem" (p164). Later in the chapter he suggests the optimal composition of job roles in a gelled team (p169). He also has some advice about how to successfully argue with your boss and go over their head when necessary (p35).
The Advice
Those few examples perhaps illustrate that these two books are miles apart when it comes to writing style, approach and world view. Dave&Ade have clearly spent a lot of time talking with other professionals about their material, acting on feedback and testing their ideas for validity in real situations. The book is highly collaborative and, while full of advice, is not prescriptive.
Bob Martin, on the other hand, loves to be specific, provocative and extreme in his advice. "QA should find nothing." (p114) "You should plan on working 60 hours per week." (p16) "Avoid the Zone." (p62) "The jury is in! ... TDD works." (p79) These are some of his more surprising pieces of advice, which I think are actually fairly doubtful propositions when taken to extremes like this. Mixed in are more reasonable statements: "You do not have to attend every meeting to which you are invited" (p123); "The professional developer is calm and decisive under pressure" (p150).
The way everything is presented as black-and-white, do-or-do-not-there-is-no-try, is actually pretty wearing after a while. He does it as a rhetorical device, to make you think and to promote healthy discussion, but I think it all too easily leads the reader to throw the baby out with the bathwater: I can't accept one of his recommendations, so I throw them all out.
Some of Dave&Ade's advice is actually just as hard to put into practice. Each of their patterns is followed by a call to action. Things like re-implementing a program you've written in an imperative language in a functional language (p21). Join or start a user group (p65). Solve the same coding exercise once a week for the next four weeks (p79). None of these things is particularly easy to do, but they seem to me to be interesting and useful challenges.
Collaboration
Bob has also clearly not collaborated very widely when preparing his material. One part that particularly sticks out for me is a footnote on page 75:
"I had a wonderful conversation with @desi (Desi McAdam, founder of DevChix) about what motivates women programmers. I told her that when I got a program working, it was like slaying the great beast. She told me that for her and other women she had spoken to, the act of writing code was an act of nurturing creation." (footnote, p75)
Has he ever actually run his "programming is slaying a great beast" thing past any other male programmers? Let me qualify that - non-fantasy-role-playing male programmers? Thought not. This is in enormous contrast to Dave&Ade, whose book is full of stories from other people backing up their claims.
Stories
Bob's book is full of stories from his own career, and he is very honest and open about his failures. This is a very brave thing to do, and I have a great deal of respect for him for doing so. It's also really interesting to hear about the history of what life was like when computers filled a room and people used punch cards to program them. Dave&Ade's stories are less compelling and not always as well written.
Bob's book is not just about his professional life; he shares his likes and dislikes. He recommends cycling or walking to recharge your energy, or "focus-manna" as he calls it (p127). Reading science fiction as a cure for writer's block (p66). Listening to "The Wall" while coding could be bad for your design (p63). When describing "Master" programmers he likens them to Scotty from Star Trek (p182).
All this is very cute and gives you a more rounded picture of what software professionalism is about. Maybe. Actually it really puts me off the idea. I know a lot of software developers like science fiction and fantasy role playing, but it really isn't mandatory. He usually says that you may have other preferences, and you don't have to do as he does, but I just don't think it helps all that much. The rest of the book is highly dogmatic about what you should and shouldn't do, and that attitude rubs off.
Conclusions
The bottom line is, I wouldn't recommend "The Clean Coder" to any young, inexperienced software developer, particularly not if she were a woman. Too much of it is written from a foreign culture, in a demanding tone, propounding overly extreme behaviour. The interesting stories and good pieces of advice are drowned out.
On the other hand, I would recommend "Apprenticeship Patterns". I think it is humbly written and anchored in real experience from a range of people. I agree with them when they say you need to read it twice to understand it. The first time to get an overview, the second time to understand how the patterns connect. It's not as easy to read as it might be. But still, I think the content is interesting, and it gives a good introduction to what being a professional software craftsman is about, and how to get there.
Friday, 14 October 2011
Wednesday, 21 September 2011
Code Retreat Stockholm
This weekend I was in Stockholm to facilitate a Code Retreat, organized by Peter Lind and sponsored by Valtech. We were about 40 coders gathered in the warm autumn sunshine early on a Saturday morning at Valtech's offices. (Do take a look at Peter's blog post about it, he has a photo too).
It's actually the first time I've ever attended a code retreat, let alone facilitated one, but I think it went pretty well. Corey Haines has written extensively about what should happen and what the facilitator should do. I think he's given a great gift to the community, not just by inventing the format, but also by documenting it thoroughly. I've previously led various coding dojos and "clean code day" events, but code retreat is somewhat different in format, if not in aim.
The reason for going to a code retreat is to practice your coding skills. By repeating the same exercise over and over, with different pairing partners, you have a chance to work on your coding habits. Do you pay attention to what your tests are telling you about your design? Do you remember to refactor regularly? Can you take really small steps when you need to?
For the day in Stockholm, we followed the tried and tested formula for a code retreat that Corey has laid out. I spent about 20 minutes introducing the day, the aims and the coding problem (Conway's Game of Life). Then we did 6 coding sessions, each with a short retrospective, and a longer retrospective at the end of the day. Each session comprised 45 minutes coding in pairs, 10 minutes retrospective in groups of 6-8, and 5 minutes to swap partners. I also began each coding session by reminding everyone of what we were supposed to be practicing, and highlighted a different "challenge" to add some variety. The challenges were things like:
- concentrate on writing really beautiful code, so the language looks like it was made for the problem *
- partition code at different levels of abstraction **
- think about TDD in terms of states and moves
- do TDD as if you meant it
- concentrate on refactoring in very small steps
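For readers who don't know the exercise, the heart of Conway's Game of Life is a tiny rules function, and a TDD session typically starts by test-driving exactly that. Here's a minimal sketch in Python (the names are my own, not anything we prescribed at the retreat):

```python
def next_state(alive, live_neighbours):
    """Conway's Game of Life rules for a single cell."""
    if alive:
        # A live cell survives with 2 or 3 live neighbours,
        # otherwise it dies of under- or over-population.
        return live_neighbours in (2, 3)
    # A dead cell comes to life with exactly 3 live neighbours.
    return live_neighbours == 3

# A test-driven session might begin with cases like these:
assert next_state(True, 1) is False   # under-population
assert next_state(True, 2) is True    # survival
assert next_state(True, 4) is False   # over-population
assert next_state(False, 3) is True   # reproduction
```

Even this small function gives plenty of scope for the challenges above: you can drive it state by state, refactor in tiny steps, or push the rules to a different level of abstraction.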
Each pairing session is just 45 minutes, and in that time you don't really have time to solve the whole kata, which is quite difficult to cope with. Most coders are very motivated by writing code that does something useful, and like to show off their finished designs at the end. To try to prevent that, Corey emphasizes that the end result isn't important, and that you should delete the code at the end of the session. I found that even with that rule, there was quite a lot of discussion of how the designs ended up, and some people even saved their code.
One of the things I encouraged people to try was working in an unfamiliar programming language, and although I specified "for 1 or 2 sessions", I was surprised to find how popular it was to do this. After the first session, when most people used Java, C#, Ruby or Python, there were more and more people coding in Clojure, JavaScript, Erlang and even Vim script. I think it got a bit out of hand, actually. It's hard to practice your coding habits and TDD skills when you're struggling with the language syntax and how to get the tests to run. Next time I facilitate I'll try to be clearer about using a familiar environment for most of the sessions.
One of the things I offered in the last session was using the cyberdojo, and three pairs agreed to try it. I had them working in Java and Ruby, switching pairs every 5 minutes, coding in a browser window. They complained about the browser experience compared with their IDEs, but they liked the feedback cyberdojo gives you. It shows how long you spend between running the tests, and whether the tests pass, fail or give a compiler error.
I'm not sure if it was a good idea to bring in the cyberdojo at the code retreat. One of the main things we discussed in the retrospective for that session was the resistance they all felt to changing the first test that was written at one of the three pairing stations. This test was too big and focussed on a boring part of the problem. Yet each person who "inherited" the code tried their best to make it pass, no-one started over with a better test. It's that kind of collaboration problem that the cyberdojo is good at highlighting. It's not so much a tool for improving your coding skills as improving your collaboration skills. This is good, but not really the purpose of the code retreat.
Thinking back over the day, I've also become a little uncertain about the "delete your code" rule. I understand why it's there, but it didn't seem to prevent people from trying to solve the whole problem in 45 minutes. By deleting the code, you also lose the opportunity to use analysis tools like those in the cyberdojo to give you some more feedback on how you're doing.
Outside of this code retreat, I've been trying out the codersdojo client quite a bit recently, to see if it gives a useful analysis of a coding session. Unlike cyberdojo, it lets you use your normal coding tools/IDE. So far it's still in beta testing and seems too buggy for me to recommend, but if you're lucky enough to successfully upload your coding session, you do get quite a good visualization of some of your coding habits. It will clearly show if you spend a long time between test runs, or if you spend a lot of time with failing tests.
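The kind of feedback these tools give is conceptually very simple: record a timestamp and a result for every test run, then look at the gaps. A minimal sketch of the idea in Python (my own illustration, not the codersdojo client or cyberdojo):

```python
import time

class SessionLog:
    """Records each test run so the gaps between runs can be
    visualized afterwards (illustrative sketch only)."""

    def __init__(self):
        self.events = []

    def record(self, passed):
        # Store when the tests were run and whether they passed.
        self.events.append({"time": time.time(), "passed": passed})

    def gaps(self):
        # Seconds elapsed between consecutive test runs.
        times = [e["time"] for e in self.events]
        return [later - earlier for earlier, later in zip(times, times[1:])]

log = SessionLog()
log.record(True)
log.record(False)
log.record(True)
assert len(log.gaps()) == 2  # three runs give two gaps
```

Long gaps, or long stretches of failing runs, are exactly the habits these visualizations make visible.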
So after my first code retreat, I'm feeling very encouraged that this is a good format for becoming a better coder, and I'd be happy to run one again. I'd like to try using coding visualization tools as part of the retrospective for each session. I'd also like to try setting the challenges before people have chosen a pairing partner, so they can find someone who also wants to work on the same challenge rather than just try a new language. Or maybe I just need to emphasize more that trying a new language isn't the focus of the day.
In any case, I hope this blog post shows that I learnt a lot from facilitating this code retreat, even if I didn't write a single line of code myself :-)
* "You can call it beautiful code when the code also makes it look like the language was made for the problem" -- Ward Cunningham quoted in "Clean Code" by Bob Martin.
** G6: Code at Wrong Level of Abstraction - advice from "Clean Code" by Bob Martin.
Thursday, 25 August 2011
Refactoring Kata fun
I've been working on a kata called "Tennis"*, which I find interesting, because it is quite quick to code, yet is a big enough problem to be worth doing. It's also possible to enumerate pretty much all the allowed scores, and get very comprehensive test coverage.
What I've found is that when I'm using TDD to solve the kata, I actually tend to enumerate only a very small number of the test cases. I generally end up with something like:
Love-All
Fifteen-All
Fifteen-Love
Thirty-Forty
Deuce
Advantage Player1
Win for Player1
Advantage Player2
I think that's enough to test drive a complete implementation, built up in stages. I thought it would be enough tests to also support refactoring the code, but I actually found it wasn't. After I'd finished my implementation and mercilessly refactored it for total readability, I went back and implemented exhaustive tests. To my horror I found three (of 33) that failed! I'd made a mistake in one of my refactorings, and none of my original tests found it. The bug only showed up with scores like Fifteen-Forty, Love-Thirty and Love-Forty, where my code instead reported a win for Player 2. (I leave it as an exercise for the reader to identify my logic error :-)
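To make this concrete, here's an illustrative scoring function in Python (a sketch of my own, not the actual kata code or the downloadable solutions), with a few of the tricky cases as assertions:

```python
def score(p1, p2):
    """Return the tennis game score, given points won by each player.
    Illustrative sketch only."""
    names = ["Love", "Fifteen", "Thirty", "Forty"]
    if p1 >= 4 or p2 >= 4:
        # Endgame: deuce, advantage or win, decided by the difference.
        diff = p1 - p2
        if abs(diff) >= 2:
            return "Win for Player%d" % (1 if diff > 0 else 2)
        if diff == 0:
            return "Deuce"
        return "Advantage Player%d" % (1 if diff > 0 else 2)
    if p1 == p2:
        return "Deuce" if p1 >= 3 else names[p1] + "-All"
    return names[p1] + "-" + names[p2]

assert score(0, 0) == "Love-All"
assert score(1, 3) == "Fifteen-Forty"   # the kind of score my refactoring broke
assert score(4, 3) == "Advantage Player1"
assert score(5, 3) == "Win for Player1"
```

Notice how easy it would be for a careless refactoring of the comparison logic to flip one of the asymmetric cases like Fifteen-Forty, while every test in my original short list still passes.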
So what's the point of TDD? Is it to help you make your design good, or to protect you from introducing bugs when refactoring? Of course it should help with both, but I think doing this practice exercise showed me (again!) that it really is worth being disciplined and careful about refactorings. I also think I need to develop a better sense for which refactorings might not be well covered by the tests I have, and when I should add more.
This is something that my friend Andrew Dalke brings up when he criticises TDD. The red-green-refactor iterative, incremental rhythm can lull you into a false sense of security, and means you forget to stop and look at the big picture, and analyze if the tests you have are sufficient. You don't get reminded to add tests that should pass straight away, but might be needed if you refactor the code.
So in any case, I figured I needed to practice my refactoring skills. I've created comprehensive tests and three different "defactored" solutions to this kata, in Java and Python. You can get the starting code here. You can use this to practice refactoring with a full safety net, or if you're feeling brave, without one. Try commenting out a good percentage of the tests and doing some major refactoring. When you bring all the tests back, will they still all pass?
I'm planning to try this exercise with my local python user group, GothPy, in a few weeks time. I think it's going to be fun!
* Tennis Kata: write a program that, given how many points each player has won in a single game of tennis, tells you the score.
Tuesday, 9 August 2011
Books on automated testing
As I mentioned in my last post, I've recently taught a course in automated testing to a bunch of students at KYH. Before the course I spent some time looking for good course books for them. I looked at a few options and eventually decided on "The Art of Unit Testing: with Examples in .NET" by Roy Osherove, and "The RSpec Book" by Chelimsky et al.
I chose the unit testing book because Roy does a good job of describing the basics of test driven development, including simple mocks and stubs. The book is very practical and is full of insight from experience and code examples.
I also looked at "Pragmatic Unit Testing in C# with NUnit" by Andrew Hunt and David Thomas. I'm a big fan of their book "The Pragmatic Programmer", so I had high hopes for this one. Unfortunately I was rather disappointed with it. It talks about what good unit tests should look like, but not much about how you use Test Driven Development to create them.
I chose the RSpec Book because it has quite a bit of material about Cucumber and how it fits in to a Behaviour Driven Development process. I think the published literature on automated testing focuses too much on unit level tools, and there is not enough written about feature level tests and how to use them as part of the whole agile process.
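For readers who haven't seen Cucumber, a feature level test is written in plain-language Gherkin that the tool maps onto code. A made-up example (not taken from the book) looks something like this:

```gherkin
Feature: Account login
  Scenario: Successful login
    Given a registered user "anna"
    When she logs in with a valid password
    Then she should see her account page
```

Each Given/When/Then line is matched to a step definition in code, which is what lets these feature descriptions double as executable tests in a BDD process.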
I also looked at "Bridging the Communication Gap" by Gojko Adzic, which I think is an excellent introduction to using feature level tests as part of the agile process, but it is largely tool agnostic. There is a short chapter introducing some tools, including JBehave (a forerunner of Cucumber), Selenium and TextTest. It's a little out of date now though, and for this course I wanted something with more detail.
I hope these short book reviews are useful.
Wednesday, 22 June 2011
Teaching a diverse bunch of Testers
I've just spent 3 weeks teaching a class of 11 students about automated testing, as part of a one year course in software testing. The course is organized by the local "Kvalificerade Yrkes Högskolan", KYH. (loosely translated: Skilled Trade University). The students come from all kinds of job backgrounds, from sitting in a supermarket checkout to driving trams to gardening, and most of them had never written a computer program before the course started.
The KYH tries to design their courses so that students will be competent enough to get a job by the end of them, so they work closely with local employers to set the curriculum and find teachers for the courses.
I was pleased to be asked to do this teaching job, since automated testing is one of my main areas of expertise, but at the same time I was quite daunted by the prospect. I've never taught non-programmers before, and I've certainly never had to set an exam or hand out grades. Before I agreed to do it, I spent some time talking to a friend of mine who has previously taught a different KYH course, and his story actually wasn't all that encouraging. It's hard work preparing the teaching materials, and some of the students will find it very difficult and need a lot of help and coaching. I decided it could be worth doing, anyway. I had some teaching materials prepared already, and I wanted the chance to invent more, try out some new ideas, and broaden my horizons.
Now that I've done the course I can attest that it really is hard work preparing lessons and exercises, and some of the students do need a lot of help. It is very rewarding though when they start to understand. I got a real kick out of going round the classroom seeing them all starting to write tests with Selenium and Cucumber, and answering their questions about Ruby and Page Objects and how to name tests and what to assert, and where to put the code and which parts to write tests for...
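The Page Object idea the students asked about can be shown without a real browser. In this sketch a stub stands in for Selenium's WebDriver, and all the names are mine, invented for illustration:

```python
class StubDriver:
    """Stands in for a Selenium WebDriver in this sketch."""

    def __init__(self):
        self.fields = {}

    def fill(self, name, value):
        self.fields[name] = value

    def submit(self):
        # Pretend the site greets any user who filled in a name.
        return "welcome" if self.fields.get("user") else "error"

class LoginPage:
    """Page Object: hides element lookups and driver calls behind
    intention-revealing methods, so tests read as user actions."""

    def __init__(self, driver):
        self.driver = driver

    def login_as(self, user, password):
        self.driver.fill("user", user)
        self.driver.fill("password", password)
        return self.driver.submit()

page = LoginPage(StubDriver())
assert page.login_as("anna", "secret") == "welcome"
```

The point of the pattern is that when the page layout changes, only the Page Object needs updating, not every test that logs in.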
I think by teaching this course I've learnt a lot myself about things like how to communicate ideas, give feedback and encouragement, and to set boundaries and manage expectations. I found marking their work much more interesting than I expected, too. What kinds of mistakes do inexperienced programmers make when doing TDD? Do they find it easier to write good tests with Selenium or Cucumber? Is there any correlation between testing skill and programming skill? (short answers - they don't refactor enough, Cucumber is way easier, and no, the correlation seems pretty weak)
So do I recommend getting involved? Absolutely! I think the IT industry in general needs more people in it from diverse backgrounds, and this is the kind of course that brings them in. If my experience is anything to go by, you'll work hard but you'll learn a lot from the students too. Networking with the other employers in the course Industry Reference Group is useful, and if I was looking to hire a junior tester I'd now know exactly who to ask first. Actually, who knows, in a few years some of my students might even be in a position to give me a job.
Don't just complain that it's hard to hire qualified people and/or people from diverse backgrounds. Get down to your local KYH equivalent and help them set up a course! I think that being a good software developer or tester is not restricted to only those with a degree in Computer Science. A course at a trade school where local employers get involved is good value for everyone.
The KYH tries to design their courses so that students will be competent enough to get a job by the end of them, so they work closely with local employers to set the curriculum and find teachers for the courses.
I was pleased to be asked to do this teaching job, since automated testing is one of my main areas of expertise, but at the same time I was quite daunted by the prospect. I've never taught non-programmers before, and I've certainly never had to set an exam or hand out grades. Before I agreed to do it, I spent some time talking to a friend of mine who has previously taught a different KYH course, and his story actually wasn't all that encouraging. It's hard work preparing the teaching materials, and some of the students will find it very difficult and need a lot of help and coaching. I decided it could be worth doing, anyway. I had some teaching materials prepared already, and I wanted the chance to invent more, try out some new ideas, and broaden my horizons.
Now that I've done the course I can attest that it really is hard work preparing lessons and exercises, and some of the students do need a lot of help. It is very rewarding though when they start to understand. I got a real kick out of going round the classroom seeing them all starting to write tests with Selenium and Cucumber, and answering their questions about Ruby and Page Objects and how to name tests and what to assert, and where to put the code and which parts to write tests for...
I think by teaching this course I've learnt a lot myself about things like how to communicate ideas, give feedback and encouragement, and to set boundaries and manage expectations. I found marking their work much more interesting than I expected, too. What kinds of mistakes do inexperienced programmers make when doing TDD? Do they find it easier to write good tests with Selenium or Cucumber? Is there any correlation between testing skill and programming skill? (short answers - they don't refactor enough, Cucumber is way easier, and no, the correlation seems pretty weak)
So do I recommend getting involved? Absolutely! I think the IT industry in general needs more people from diverse backgrounds, and this is the kind of course that brings them in. If my experience is anything to go by, you'll work hard, but you'll learn a lot from the students too. Networking with the other employers in the course's Industry Reference Group is useful, and if I were looking to hire a junior tester, I'd now know exactly who to ask first. Actually, who knows - in a few years some of my students might even be in a position to give me a job.
Don't just complain that it's hard to hire qualified people or people from diverse backgrounds. Get down to your local KYH equivalent and help them set up a course! I think that being a good software developer or tester isn't restricted to those with a degree in Computer Science. A course at a trade school where local employers get involved is good value for everyone.
Tuesday, 21 June 2011
Nordic Ruby and Diversity
This is the second time I've attended Nordic Ruby, you can read about what I thought last year here. This year I enjoyed the conference more, for several reasons. There were some small changes in the way it was organized, (on a Friday and Saturday instead of taking up a whole weekend), a better choice of speakers and topics, (less technical, more inspirational), and I knew more of the people there.
One of the themes of the conference was diversity, which I was very, very happy to see. There was an inspiring talk by Joshua Wehner on this topic, presenting some depressing statistics about the IT industry in general and open source software in particular. What struck me most was his point that women's participation is improving in many formerly male-dominated disciplines, like maths, physics and law, but in computing the situation was actually better 20 years ago than it is now. The curves are pointing the wrong way in our industry.
Having said that, there were slightly more women at the conference this year than last, I think I counted 4 of 150, compared with 2 of 90 last year. There were also far fewer references to science fiction movies from the speakers this year ;-)
Joshua did suggest several practical things we could do to reduce bias and positively encourage diversity. He's written about some of them in this blog post. Another one he mentioned that I liked was the "no asshole rule": if people engage in arrogant one-upmanship, talk down to others, and emphasize their superior programming abilities, they should be regarded as not just annoying, but actually incompetent. Developing software is a multi-faceted skill, and it takes a lot more than just writing good code to be a good software developer.
Joe O'Brien continued the diversity theme in his talk "Taking back education", basically arguing that a degree in computer science correlates very badly with being a good software developer, and that we should be finding ways to bring people with non-traditional backgrounds into our industry. He advocated that companies start apprenticeship programmes, while conceding that this model of education doesn't scale very well. He also talked about getting a group of companies together to set up a "code school", saying "forget universities when it comes to education [of software developers]. We're better at it."
I applaud his efforts to bring a more diverse range of people into the industry, and I think my recent experience teaching a group like this is relevant. I'll probably write a separate blog post about that experience, but basically I think the idea of a "code school" is a good one. Similar institutions probably already exist, and could add a course in software development to their programme of courses in practical skills. For this to happen, though, companies need to put in the time and energy to set them up, rather than just complaining that every job advert they post attracts only white male applicants aged 25-35, as if the lack of diversity weren't their fault.
Another talk that deserves a mention is the one by Joseph Wilk. He spoke about "The Limited Red Society", an idea that Joshua Kerievsky came up with. I heard Joshua speak about it at XP2009, and I thought Joseph did a very good job of explaining what it is and why it's important.
Basically the idea is that although you need your tests to go red during TDD, if they stay red for any length of time, it can get you into trouble. While they are red, you can't check in, ship your code, or change to working on a different task. This is one motivation for trying to measure, and limit, how much of the time your tests are red. It's also about more generally improving the feedback we get for ourselves while we work. Professional sports stars spend time analysing and visualizing their performances (where balls land on a tennis court, footballers' passing rates, etc.). We programmers could benefit from that kind of thing too.
Joseph has invented a tool that helps him to track his state when doing TDD. It's a simple monitoring program that makes a note every time he runs his tests. It's not as elaborate as the commercial tool offered by Joshua Kerievsky's company, but it does work with Ruby and Cucumber. Joseph also has his tool connected to his CI server, so that tests which have recently failed in his or his colleagues' checkouts are run first in the CI test run. He also gathers statistics about individual tests: how often they fail, and whether they are fixed without the production code needing to change - a way of spotting fragile tests.
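To make the idea concrete, here is a minimal sketch of that kind of monitoring - my own illustration, not Joseph's actual tool. It assumes all we do is log the outcome of every test run, and then measure how long the suite stayed red between consecutive runs:

```ruby
# A minimal "limited red" monitor: log every test run, then report how
# many seconds the suite spent red. (My own sketch, not Joseph Wilk's tool.)
class RedGreenLog
  Entry = Struct.new(:time, :status)

  def initialize
    @entries = []
  end

  # Record the outcome of one test run (green if it passed, red if not).
  def record(passed, at: Time.now)
    @entries << Entry.new(at, passed ? :green : :red)
  end

  # Seconds spent red, measured between consecutive recorded runs.
  def seconds_red
    @entries.each_cons(2).sum do |earlier, later|
      earlier.status == :red ? later.time - earlier.time : 0
    end
  end
end

log = RedGreenLog.new
log.record(false, at: Time.at(0))    # first run fails: we go red
log.record(true,  at: Time.at(90))   # 90 seconds later we're green again
log.record(true,  at: Time.at(200))  # still green
log.seconds_red                      # => 90.0
```

A real tool would hook into the test runner rather than being called by hand, but even this much data is enough to start asking how long you typically stay red.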
I think this kind of statistics gathering is really interesting, and Joseph will surely have more insights to share as he gathers more data and does more analysis. I've been experimenting with the tool provided by codersdojo.org for measuring my performance at code katas, but Joseph seems to be taking this all to the next level.
Overall I thoroughly enjoyed Nordic Ruby. (I still think it would be improved by some actual open space sessions though). I talked to loads of really interesting people, enjoyed good food and drink in comfortable surroundings, and listened to some people give excellent talks. Thanks for organizing a great conference, Elabs.
Labels:
agile,
conferences,
Ruby
Tuesday, 10 May 2011
TDD in terms of states and moves
The classic description of TDD that most people know is the 3 rules by Bob Martin. I think his rules are a very succinct description, and for a long time I've just relied on them, together with a picture of "red-green-refactor" to describe TDD to newcomers. More recently I've found value in expanding this description in terms of states and moves.
When I'm doing TDD in a coding dojo using the Randori form, I get people stepping up to take the keyboard who've never done it before, and I find it helps them understand what's going on if I explain which state we're in and what the legal moves are for that state. The picture I've used shows the states - start, getting to red, getting to green, refactoring, and done - and the moves between them.
I'd like to go through each state and some of the moves you can make in each.
I think before we start on a TDD coding session there is value in doing a small amount of analysis and planning. In the dojo, I'll spend around 15 minutes in this starting state before we start coding. We'll talk about the chosen kata so everyone hopefully understands the problem we're going to solve. We'll write a list of potential test cases on a whiteboard, and identify some kind of "guiding test". This is an overarching test that will probably take us a couple of hours to get to pass. It helps us to define the API we want to write, and the goal for our session. We may also talk a little about how we'll implement the solution, perhaps discuss a possible data structure or algorithm.
I know the group is ready to move on when we have sketched a guiding test or goal for the session, and have chosen the first test we'll try to make pass.
When we're getting to red, we're trying to set up a small achievable goal. We'll choose a simple test from our list, one that will help us towards our session goal (guiding test). This test should force us to address a weakness in our production code.
Starting by naming the test and then writing the assert statement helps us focus on what the missing functionality is. Then we can fill in the "act" and "arrange" parts of the test. I'm not sure who invented the "Arrange, Act, Assert" moniker for structuring tests (it's often attributed to Bill Wake), but the idea is that a test has three parts. In the "Arrange" part you set up the objects the system under test will interact with, in the "Act" part you call the production code, and in the "Assert" part you check it did what you expected.
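As a sketch of that structure (the `ShoppingCart` class and the names here are my own invention, not from any particular dojo), a Minitest test laid out in the three parts might look like this:

```ruby
# Minimal production code, just so the example is self-contained.
class ShoppingCart
  def initialize
    @prices = []
  end

  def add(price:)
    @prices << price
  end

  def total
    @prices.sum
  end
end

require "minitest/autorun"

class ShoppingCartTest < Minitest::Test
  # The test name states the missing functionality we're specifying.
  def test_total_sums_item_prices
    # Arrange: set up the objects the system under test will interact with
    cart = ShoppingCart.new
    cart.add(price: 10)
    cart.add(price: 5)

    # Act: call the production code
    total = cart.total

    # Assert: check it did what we expected
    assert_equal 15, total
  end
end
```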
In a compiled language it helps to fill in a bit of production code while we're writing the test, but only just enough to keep the compiler quiet. Once the test executes, and is red, you can move on.
This is where we flip over to the production code (I've seen newbies presented with a failing test trying to edit the test to make it green!).
If we can easily see what the implementation should be, we might just write it, but often it helps to return some kind of fake value, until we work out what the code should be. Sometimes in this state we find the test we wrote is too hard, and we need to get back to green by removing or commenting out the failing test. This is a sign we understand the problem better than we did before, which is a good thing. In that case, we'll go back to "Getting to red" and write a better test. Otherwise, we get to green by making the new test pass.
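For instance (a made-up FizzBuzz fragment of mine, not code from any particular session), the quickest route to green for a first test is often a faked return value:

```ruby
# Production code: just enough to make the single test below pass.
def fizzbuzz(n)
  "Fizz"  # an obvious fake - a later test will force a real implementation
end

require "minitest/autorun"

class FizzbuzzTest < Minitest::Test
  def test_three_is_fizz
    assert_equal "Fizz", fizzbuzz(3)
  end
end
```

The fake looks silly, but it gets us back to a green bar quickly, and the weakness it leaves behind is exactly what the next test will target.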
The refactoring move is where we want to remove fake implementations, refactor away code smells and improve code readability generally. Don't forget there may also be duplication and readability problems in the tests as well as the production code. The trick is to do a series of small refactorings that taken together lead to a better design. Try to stay only a few keystrokes away from a running system (green tests).
While we're looking for refactorings, we'll probably spot weaknesses in the production code implementation - functionality that is missing or cases that are not handled. This is a cue to note new test cases, not to fix the code. If we find a hard coded return value, say, we should be able to think of a new test case that will force us to write a better implementation.
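For example (my own illustration, not from the dojo), if a `leap_year?` method had been faked to always return `true`, a test for a non-leap year is the new case that forces the real rule:

```ruby
require "minitest/autorun"

# The real implementation the second test case forces us to write,
# replacing a hard-coded `true` left over from "getting to green".
def leap_year?(year)
  (year % 4).zero? && (!(year % 100).zero? || (year % 400).zero?)
end

class LeapYearTest < Minitest::Test
  def test_2012_is_a_leap_year       # passed even with the faked value
    assert leap_year?(2012)
  end

  def test_1900_is_not_a_leap_year   # the new case that exposes the fake
    refute leap_year?(1900)
  end
end
```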
We can move on when we're happy the design of the code is good (for the test cases so far).
At some point hopefully we'll find we can get our guiding test to pass, and/or that we're out of time and the coding session is over. We'll look through the code a final time, (as we check it in, maybe), then take a few minutes to reflect on what we learnt during this coding session. In the coding dojo, we'll spend some time discussing what was good and bad about this dojo session, and what we might do differently next time based on what we've learnt.
The cycle begins again with the next coding task or the next dojo session.
Labels:
Coding Dojo,
TDD