Listening again to a Polymorphic Podcast while I was out strolling around Sacramento at 5am on my morning walk, I heard commentary that struck a chord. The person being interviewed on the show was Kent Alstad, a developer with 20+ years of experience who’s published a book or two in the .NET space. When asked how he had adopted TDD, he started to fumble. His best answer was that he felt guilty about it, but on his most recent projects he hadn’t done that much actual testing with NUnit, because much of the application was in a GUI (ASP.NET, I would assume), and when it became hard to test, the testing got dropped.
This seems to be a continuing pattern of anecdotal evidence that I’m collecting. I see it in my own organization with my bleeding-edge developers who walk and talk unit tests, automated builds, etc., but let CruiseControl fail for weeks at a time. I see it interviewing developers, who to a person have at least JUnit on their resumes, but then admit they haven’t written a unit test in 2 years and can’t remember how JUnit is organized. And finally, I see it in myself when I sit down with a developer to tackle some issue, but I rarely have them write a test, because we can solve the problem without it and they don’t even have any tests in place to begin with.
So right this moment I feel one of my greatest failings has been getting my developers to adopt TDD, but I’m beginning to think I have a great amount of company here. The understanding of TDD is fairly wide, but the adoption in many shops is shallow enough for my 1-year-old daughter to feel comfortable in.
I’m still highly convinced it’s the right path to go down, and that the benefits outweigh the costs. It’s going to be harder than I thought, but if we can adopt TDD in our shop we’ll jump to the head of the pack.
It’s always important to take time off, so today I’m off to the beach in San Diego. I may post intermittently depending on wireless accessibility.
As a way of introducing a game, I’ve started to implement planning poker to estimate a Product Backlog, aka feature list, for two of my project teams. The experiments have gone fairly well so far. Basically everyone reviews the list of user stories, use cases, features, or tasks, hopefully beforehand, and then the poker rounds start:
- You hand out a stack of cards with numbers on them: 1, 3, 5, 8, 11, 13.
- You explain that the numbers are ‘story points’ and simply represent the relative complexity of a task.
- A ‘1’ might be creating an error message popup.
- A ‘5’ might be implementing validation of an HTML form.
- A ’13’ might be implementing some complex set of business rules in a rules engine that no one is familiar with.
- Then you start with the first item.
- Everyone selects the card they think corresponds to their estimate for that item.
- On the count of 3, everyone reveals.
- If everyone picked the same card, you’re done.
- If there are differences, people speak up about why they thought it was a 3 versus a 5, until everyone’s had a chance to comment.
- Then you repeat the process. At this point estimates generally start to converge. If need be, run a round or two more.
- Continue until you run out of items.
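The rounds above can be sketched as a small simulation. This is just an illustration, not anything from Mike Cohn’s material: the card deck follows the post, and the “estimator” callables are hypothetical stand-ins for team members, with the discussion step crudely modeled as everyone drifting toward the group average.

```python
# Hypothetical sketch of the planning-poker rounds described above.
CARDS = [1, 3, 5, 8, 11, 13]  # the deck from the post

def nearest_card(value):
    """Snap a raw estimate to the closest card in the deck."""
    return min(CARDS, key=lambda c: abs(c - value))

def play_item(item, estimators, max_rounds=3):
    """Run poker rounds for one backlog item until the votes converge."""
    for _ in range(max_rounds):
        votes = [nearest_card(estimate(item)) for estimate in estimators]
        if len(set(votes)) == 1:        # everyone revealed the same card
            return votes[0]
        # In a real session the high and low voters explain their reasoning;
        # here discussion is modeled as converging on the round's average.
        avg = sum(votes) / len(votes)
        estimators = [lambda _item, a=avg: a for _ in estimators]
    return nearest_card(avg)            # give up and take the converged value

# Example: three team members with different initial gut feelings.
team = [lambda item: 3, lambda item: 5, lambda item: 5]
print(play_item("validate the HTML form", team))  # converges to 5
```

The real value of the game is the discussion between rounds, which no simulation captures; the sketch only shows the mechanics of voting, revealing, and re-voting.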
The developers I’ve sprung this on so far seem to enjoy it, at least during the novelty period. The group today was a bit disappointed, though, that it didn’t involve real money and poker chips. You then use the estimated story points to plan your iterations based on what the team thinks they can do. I’m still experimenting with this, but the general idea comes from Mike Cohn, one of the heavyweights in the Scrum arena.
To give credit, I picked up this tip from Elizabeth Hendrickson:
To check whether people are actually reading long technical requirements or design documents, liberally sprinkle the text with ridiculous quotes, like “My HoverCraft is Full of Eels.”
The basic idea is that very few people actually read long technical documentation. So why do we create long technical documents? Elizabeth related that she’s been doing this for a while, but very few people ever catch the phrases, and thus very few people are reviewing these ‘important’ documents.
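If you wanted to apply the trick mechanically, sprinkling the canaries could look something like the sketch below. Everything here is invented for illustration, not anything Elizabeth described; the only detail from the post is the canary sentence itself.

```python
# Hypothetical helper: append a canary sentence after every n-th paragraph
# of a document, so you can later ask reviewers about the hovercraft.
def sprinkle(paragraphs, canary, every=5):
    """Return a copy of the document with the canary inserted periodically."""
    out = []
    for i, paragraph in enumerate(paragraphs, start=1):
        out.append(paragraph)
        if i % every == 0:              # drop a canary after every n-th paragraph
            out.append(canary)
    return out

doc = sprinkle(["Intro...", "Requirements...", "Design...", "Schedule..."],
               "My HoverCraft is Full of Eels.", every=2)
```

Anyone who asks about the eels has actually read the thing; silence tells you the review never happened.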
She said she once had a developer who came up to her after reading something she sent around for review and asked about this hovercraft thing. Being from the QA/testing side of the house, Elizabeth really liked working with that developer. As a manager I find myself creating very few of these documents, so I haven’t had a chance to try it out yet, but I expect the experiment to be very enjoyable.
While I understand the idea behind leaving out some details in example code in technical books, I just don’t agree with it in general. The reason for picking up a book on a particular framework, language, or technique is to be able to immerse yourself in some sample code. If the authors opt out by including incomplete examples or relying on you to download their example code, I feel just a bit cheated and frustrated. “Not all code examples from this book will be complete” is a bit of a cop-out when some of the incomplete examples are in the first few chapters, where the basic concepts are being introduced.
I learn primarily by example and reinforcement. That means when I crack open a book I expect to be able to type in the examples pretty much verbatim and get them to run successfully after I clean up my ever-present typos. Typing in the code helps me learn it, more so than just reading it or cutting and pasting it from the author’s examples. Short snippets of code are fine, but if the author bothers to show a real example class, it should contain pretty much everything you need to run it, or at least have it compile. The last couple of technical books I’ve read seem to assume you’ll just figure out all the stuff they left out, like the implementations of the 3 or 4 other classes the example depends on.
I have a limited amount of coding time between the daily meeting load, distractions, HR issues, and clearing obstacles for my developers. When I get home and my two young daughters are off to bed, I have a few hours a night to possibly go through a technical book. I don’t have time to waste chasing down little details that were left out of the examples, or imagining how to set up some example code by inventing an implementation. I realize this approach forces me to go looking for answers and do a lot of experimenting to get some examples to run, but far too much of that is frustration, not learning. I like rapid feedback loops, one of the reasons I enjoy TDD so much. Frustration is pretty much the opposite of what I’m looking for.
No books were named in this rant, but that’s partially because the phenomenon is so commonplace. A good counterexample in my recent experience would be Head First Design Patterns. I love that they give full class examples along with exercises that are left to the reader. That way I can get the concepts and then enjoy the challenge of the exercises. They of course add a lot of visual interest, crossword puzzles, and inside jokes.