JSF At First Glance

Richard Monson-Haefel blogs about looking into JSF. Some of his comments include:

“It really seems complicated to me in terms of configuration, debugging, and such things.”

“JSF strikes me as an over-engineered solution that meets the needs of a small percentage of enterprise web sites.”

He appears to be seeing JSF for what it is right now: interesting technology that isn’t that cleanly implemented. I really wonder how well JSF will succeed out in the world unless they can fix some of the difficulties with logging, debugging, and unit testing. That said, we’re still pushing hard to make it work on several projects. One developer even mused recently that using a component with some built-in paging functionality finally made him feel like there was something positive in JSF.

Test Driven Adoption Poll

Methods and Tools online magazine ran a poll on TDD within software organizations. The poll drew 460 total votes and asked: how is unit testing performed at your location?

So TDD adoption is still slow going, which matches my anecdotal experience as well. The nice thing is that if you can get there, you gain a large competitive advantage over the 59% of shops that don’t really do any unit testing. My guess is that informal unit testing means hardly any unit tests, or one developer trying to write them while everyone else ignores them entirely.

I’m not really sure what the documented unit test cases are. Does this just mean that the unit tests are described in some documentation? And does the documentation of execution mean the tests are run through CruiseControl or a similar tool? There’s no way to know from the poll.

My group is writing unit tests, typically after the fact, so it isn’t true TDD. Still, that’s our end goal, and a few developers are doing some TDD. We also have CruiseControl executing all our tests on every code check-in.
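For anyone who hasn’t seen what these after-the-fact tests look like, here’s a minimal Test::Unit sketch. The Cart class and its behavior are invented purely for illustration; the point is that tests like these run unattended on every check-in:

```ruby
require 'test/unit'

# Hypothetical class under test -- the name and behavior are
# invented for illustration, standing in for real application code.
class Cart
  def initialize
    @prices = []
  end

  def add(price)
    @prices << price
  end

  def total
    @prices.inject(0) { |sum, p| sum + p }
  end
end

# After-the-fact unit tests: written once Cart already existed,
# but still executed automatically by the build on every check-in.
class CartTest < Test::Unit::TestCase
  def test_empty_cart_totals_zero
    assert_equal 0, Cart.new.total
  end

  def test_total_sums_item_prices
    cart = Cart.new
    cart.add(5)
    cart.add(10)
    assert_equal 15, cart.total
  end
end
```

True TDD would flip the order: write `test_total_sums_item_prices` first, watch it fail, then write just enough of Cart to make it pass.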

The promising suggestion of this (unscientific, of course) poll is that TDD is making headway with developers despite the initial alien feeling of writing tests first.

Sprint Day #29 and Burndown Charts

On many of our Sprints on various Scrum projects a curious burst of productivity takes place. Without warning many tasks like the following are suddenly completed:

  • Document the migration process.
  • Fix defects.
  • Research indexed search options.

Generally most of these tasks were either not even started or had been in progress for most of the Sprint.

So on day #29, at the daily Scrum, the ScrumMaster stands up and starts asking the obvious questions: are we really going to do task X at all, and does it really impact the Sprint goal? Pretty quickly, piles of stalled tasks are removed; the team thought they were important at some point, but they’re no longer relevant or just don’t add any real value to the current Sprint. So when you print out the last burndown chart for the Sprint, a steep cliff shows up at the end, assuming you’re actually going to make your Sprint goal.

I’ve seen this happen on several different Scrum teams now. It does make the team’s progress a little less transparent: in the middle of the Sprint, the remaining work or the burndown chart often suggests the team is in danger of missing its goals, even though the team feels it’s progressing fine. Especially if you’re looking in from the outside, you may wrongly assume the Sprint is in big trouble and that corrective action needs to be taken immediately to negotiate some scope out of the Sprint.

When I’ve played the ScrumMaster role I’ve played it by ear since with daily Scrums and gentle corrections along the way I intuitively feel like I know where the project is. I also tend to use a big cork board to track progress which gives me a good visual. In the past on more traditional waterfall projects with Gantt charts I played it by ear as well since I never felt the Gantt chart was a true measure of where the project was.

My thought on how to improve this is to question the various tasks along the way and determine whether they should be pulled or considered complete earlier in the Sprint, keeping the burndown chart more accurate. All it should require is some discipline. I do wonder, though, whether the team will still want to hold onto tasks they think are necessary.
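The day-#29 cliff is easy to reproduce in a toy burndown calculation. Every task name, estimate, and the schedule below are invented for illustration; the point is just how stalled tasks flatten the line and then drop it all at once:

```ruby
# Stalled tasks: opened early, untouched all Sprint (hours are invented).
STALLED = {
  'Document the migration process'  => 8,
  'Fix defects'                     => 12,
  'Research indexed search options' => 6,
}

# One hypothetical task burns down steadily at an hour a day from a
# 20-hour estimate; the stalled tasks sit unchanged until day 29,
# when the team drops or closes them all at once.
def remaining_hours(day)
  steady  = [20 - day, 0].max
  stalled = day >= 29 ? 0 : STALLED.values.inject(0) { |sum, h| sum + h }
  steady + stalled
end

(26..30).each do |day|
  puts "day #{day}: #{remaining_hours(day)}h remaining"
end
```

Plotted over the Sprint, the line declines early, goes flat once only stalled work remains, and then drops vertically on day 29: exactly the cliff that shows up on the printed chart.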

Rake 0.7, Rails 1.0, appdoc target

Ran into a minor inconvenience going through Agile Web Development with Rails. Running the command:

rake appdoc

results in the following error:

rm -r doc/app
unrecognized option `--line-numbers --inline-source'
For help on options, try 'rdoc --help'
rake aborted!
exit

Didn’t take too long to find the source of the bug. Just change one line in /usr/lib/ruby/gems/1.8/rails-1.0.0/lib/tasks/documentation.rake. So:

rdoc.options << '--line-numbers --inline-source'

Becomes:

rdoc.options << '--line-numbers' << '--inline-source'
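Why the single-string version fails (my reading of it, at least): each element of the options array is handed to RDoc as one argument, the same way elements of ARGV arrive, with no shell-style word splitting. So RDoc sees a single option literally named `--line-numbers --inline-source` and rejects it. A minimal sketch of the difference:

```ruby
# Each element of an options array reaches the tool as ONE argument,
# just like an element of ARGV -- there is no shell-style word splitting.
bad = []
bad << '--line-numbers --inline-source'        # one element: a bogus option name

good = []
good << '--line-numbers' << '--inline-source'  # two elements: two valid options

puts bad.length    # => 1
puts good.length   # => 2
```

That’s why splitting the string into two `<<` appends fixes the rake task.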

A Green Rubber Band

Bob Martin now wears a small green rubber band. His explanation:

I wear a green band that I put on about six months ago. This band is a statement of professional ethics. The band signifies that I unit test my code and I know it works.

— Bob Martin SD West 2006

Bob went on to explain that testing 50% of your code isn’t enough; the target should be 100%, or at least the high 90s. Do you really want to explain to your customer that you only know half your code works?

I like the little green band idea, as an obvious visual. I’m not sure it will catch on in the general TDD world, but I like that Bob’s still focusing his considerable presenting skills on convincing developers to test.