One Hour Pairing
Today I blocked out an hour to sit with one of my developers and pair up on writing some unit tests and getting the Clover IDE plugin working with RAD 6.0. It was fairly productive for a single hour; only one test actually got written, but I learned some things:
- The Clover IDE plugin is pretty easy to set up even in RAD, but you have to remember to recompile the code without instrumentation if you want to look at a JSP page in the embedded WebSphere instance. Otherwise you get a nasty error warning you that Clover can't be found.
- Writing tests after the fact is always a bit more painful; it's the legacy code problem Michael Feathers describes in Working Effectively with Legacy Code.
- On this project we were writing test classes that each tested multiple classes, which may be a mistake, but in one hour I couldn't really evaluate it. We went ahead and wrote a new test class.
- We extend a `BaseTest` class that has most of the JSF setup code in it. It had a `testSimple()` test that was getting run by all the inheriting classes, so it was artificially inflating the number of tests. I took the obvious stand that we should just delete the test since it wasn't doing anything real. Then we started getting a failure running the tests for `BaseTest` in RAD (probably because it no longer had any tests). Rather than spend a lot of time investigating, we stuck it back in and moved on.
- Unit testing JSF is a royal pain. This was a good reminder. Knowing intellectually what is driving your developers crazy isn’t quite the same as sitting down and coding with it.
- I learned I need to do more of this.
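The inherited-test surprise is easy to reproduce. Here's a toy Java sketch (every class and method name is invented) of the mechanics: JUnit 3 discovers tests by reflecting over public methods whose names start with `test`, so a placeholder test on a base class pads every subclass's count, and a `TestCase` left with no test methods at all is reported by JUnit 3 as a failure ("No tests found"), which matches the surprise we hit in RAD.

```java
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

// Toy illustration; all names are hypothetical stand-ins for our code.
public class TestCountDemo {

    // Stand-in for our BaseTest: shared setup plus a placeholder test.
    static class BaseTest {
        public void testSimple() { /* asserts nothing useful */ }
    }

    // Every subclass inherits testSimple(), so each one reports an
    // extra "passing" test on top of its real ones.
    static class CartBeanTest extends BaseTest {
        public void testAddItem() { /* real assertions would go here */ }
    }

    // Mimics JUnit 3 discovery: collect public methods named test*.
    static List<String> discover(Class<?> c) {
        List<String> found = new ArrayList<>();
        for (Method m : c.getMethods()) {
            if (m.getName().startsWith("test")) {
                found.add(m.getName());
            }
        }
        return found;
    }

    public static void main(String[] args) {
        // CartBeanTest reports two tests even though it defines one.
        System.out.println("CartBeanTest runs: " + discover(CartBeanTest.class));
        // Delete testSimple() and BaseTest itself has zero tests;
        // JUnit 3 treats that as a failure rather than a no-op.
        System.out.println("BaseTest runs:     " + discover(BaseTest.class));
    }
}
```

This is why deleting the placeholder looked safe at the class level but broke the suite: the base class was still being picked up as a test class in its own right.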
Coding Without a Net
I knew better, but I decided to follow along with Agile Web Development with Rails without doing TDD. As I noted in an earlier post, you don't get to the testing section until about 150 pages in.
Apparently I was luckier than usual, as I didn't run into any show-stoppers before getting to the testing chapter. Then I had some hard drive corruption and had to recover from a backup. I ended up losing some of my code, but I just jumped back in test-free. Of course I fell on my face doing the old code-and-check-the-browser routine.
I made a little error in the shopping cart class that I just couldn't track down (something about a nil class). After about an hour of thrashing, I stepped back and wrote unit tests around it. Within half an hour I had found my logic error, courtesy of a typo.
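My cart code was Ruby, but the move translates directly. Here's a minimal sketch in plain Java (the class names and the particular typo are invented for illustration) of why a focused unit test corners this kind of nil/null error so much faster than clicking through the browser: the test exercises the model class directly, so the failure points at the cart rather than surfacing as a vague error two layers up in a view.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch; my real bug was in Ruby on Rails code.
public class CartTestDemo {

    static class Cart {
        private List<String> items;

        Cart(boolean withTypo) {
            if (withTypo) {
                // The kind of one-character slip that produced my "nil"
                // error: a stray declaration shadows the field, so the
                // field itself stays null.
                List<String> items = new ArrayList<>();
            } else {
                items = new ArrayList<>();
            }
        }

        void add(String sku) { items.add(sku); } // NPE if items is null
        int size() { return items.size(); }
    }

    // A focused unit test: exercise Cart in isolation. If it blows up
    // here, the bug is in the model, not the view plumbing.
    static boolean addWorks(boolean withTypo) {
        try {
            Cart cart = new Cart(withTypo);
            cart.add("BOOK-1");
            return cart.size() == 1;
        } catch (NullPointerException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("buggy cart passes: " + addWorks(true));
        System.out.println("fixed cart passes: " + addWorks(false));
    }
}
```

Run through a browser, the buggy version only shows up as a mysterious failure on the cart page; run as a unit test, it fails on the exact call that touched the uninitialized field.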
So I'm making better, faster progress now because I'm writing tests around everything. In the future, even when wading through sample code, I'm requiring myself to do TDD. Slower is faster.
TDD and Dual Entry Bookkeeping
In a talk at SD West 2006, Bob Martin mentioned the similarity between dual entry bookkeeping for accountants and test driven development for developers. Just as in dual entry bookkeeping, your production code checks the unit tests and the unit tests check the production code. At the end of the day everything should sum up to a simple green bar.
As for having to drop unit tests because you’re in a hurry:
One common issue I have found is that developers drop the discipline of TDD in the face of schedule pressure. “We don’t have time to write tests” I hear them say. Before I comment on the absurdity of this attitude, let me draw the parallel. Can you imagine an accounting department dropping dual entry bookkeeping because they’ve got to close the books on time? Even before SARBOX such a decision would be such a huge violation of professional ethics as to be unconscionable. No accountant who respected his profession, or himself, would drop the controls in order to make a date.
— Bob Martin
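The bookkeeping analogy can be sketched in a few lines of Java (the discount function and its expected values are invented for illustration). The point is that the test side records expectations derived independently of the implementation, so a mistake on either side of the ledger throws the books out of balance and turns the bar red:

```java
// Minimal sketch of the double-entry idea; names are hypothetical.
public class DoubleEntryDemo {

    // Production side of the ledger.
    static int applyDiscount(int cents, int percent) {
        return cents - cents * percent / 100;
    }

    // Test side: expected values worked out by hand, not copied from
    // the implementation. The code checks the tests and the tests
    // check the code; an error in either one breaks the balance.
    static void check(int actual, int expected, String what) {
        if (actual != expected) {
            throw new AssertionError(what + ": expected " + expected
                    + ", got " + actual);
        }
    }

    public static void main(String[] args) {
        check(applyDiscount(10000, 15), 8500, "15% off $100.00");
        check(applyDiscount(10000, 0), 10000, "0% off is identity");
        System.out.println("green bar");
    }
}
```

Change either the arithmetic in `applyDiscount` or one of the hand-computed expected values and the run fails, which is exactly the cross-checking property the analogy is after.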
Bringing Down the Hammer to Nail TDD
A few weeks ago at SD West 2006, during a tutorial session on RSpec with Dave Astels, a TDD discussion cropped up. It centered on how you introduce TDD to a development organization. Dave related a story about at least one client who took the top-down management approach; he described him as a great technical manager for having the courage to force TDD on the developers. His approach was:
- Moved the whole team out to a new colocated team room.
- Outfitted the team with brand new equipment.
- Brought in Dave to mentor/coach the team on TDD by doing a lot of pairing.
- Mandated that all code be unit tested.
- Mandated user stories and acceptance tests written into FitNesse.
At the end of the project the whole team was doing TDD. Sounds like a great approach, and developers generally get the idea of TDD better if you can pair a guru like Dave up with everyone.
I tend to be a bit more of an incrementalist, but it definitely gives me some ideas.
Gaming Testing Metrics
I read Larry Osterman's post Measuring Testers by Test Metrics Doesn't via The Best Software Writing Vol. I. It reminded me of a failed experiment from about a year ago on a death march project.
The scenario was simple enough. Our large project had been underway for 2.5 years and had been in defect resolution for at least 1.5 years (hence the death march). We had well north of 1,000 defects and were constantly falling behind on resolving them.
Given this, one of our senior developers suggested that an oft-forgotten field in our Mercury TestDirector bug tracker might really motivate faster bug fixing. The field was `Estimated Fix Time`, and since it wasn't required it was generally left blank. The suggestion was to bring it up at our daily stand-ups and explain that going forward all the developers should fill it out on their new defects with real estimates.
Not a bad idea in principle. Developers have to estimate the time for their assigned bugs and then they’ll naturally want to fix the defects within that timeframe. Guilt and professional pride will help re-motivate the developers on this great death march. So here’s what happened:
- Developers started reluctantly putting in estimated fix times.
- Most of those times were in multiples of 8 hours, since it was easier to estimate in days. Historically, many of the defects were really requirement changes or clarifications, so they required a lot of negotiation.
- This was a death march project, so no one felt all that guilty if they didn't manage to hit an estimated fix time.
- Everyone stopped filling out the field again within a few weeks because it wasn’t required.
So once again the lesson was reinforced: trying to drive behavior with metrics is likely to be a failed effort. I still like metrics, but mostly as a source of feedback. Negative reinforcement built on a single statistic tends to fail or lead to gaming the system.