Iteration Zero Versus Cycle Zero

About 9 months ago I tried to explain to one of my developers the idea of an iteration zero: a quick few weeks of organization and setup at the start of a project. I tried to find some references to point him to, but I didn’t manage to find any, even though I was convinced I’d read about it somewhere.

If Scott Ambler is right it’s an assumed idea for Agilists, but “little has been written about this subject.” His definition is:

Agilists refer to the initial iteration of a project as “Cycle 0,” during which you determine whether the project is feasible, setup the technical environment, start building the team, and do sufficient modeling to support these activities. Sounds like what you do on a traditional project? Absolutely. The difference, however, is agilists achieve the same goals with a lot less effort—Cycle 0 is typically a week or two at most.

It’s small anecdotal evidence, but of our three official Agile pilots, the one that went the smoothest had a good 3-4 weeks in Cycle 0. The other two leaped straight into Sprint #1 with minimal planning. The biggest pain seemed to be that very little initial work had gone into requirements (use cases, in our world). That made it hard for the developers and QA folks to do estimates for Sprint #1, and it’s taken a while to catch up.

Big Visible Charts

James Shore has a post on Informative Workspaces. He suggests using hand drawn charts and having everyone on the team update them where possible. Some of his ideas for charts include:

  • A chart showing pairing time.
  • A chart showing pairing combinations.
  • A team calendar.
  • A code coverage chart.
  • A chart of tests executed per second.
  • A chart of the lines of code in the system.
  • A chart of the age of the oldest open support request.
  • A chart of non-story work.
  • A chart of unavailable customer guesses.
  • A chart of customer and programmer interactions.

We have yet to do any serious experimentation with pairing, so we haven’t used charts for that. We have used charts for the team calendar and code coverage, and also for the number of unit tests and for tracking tasks. Reading this post gave me the idea to try a hand-drawn chart of our code coverage; I tend to use a stock Clover print-out and I only occasionally change it. I really like the idea of the team owning the chart and updating it as well. This does happen with the calendar and the task board, but I think it could be useful for some of these other charts.
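
As a side note, even a hand-updated chart needs the current number from somewhere. A minimal sketch of turning a series of weekly coverage percentages into ASCII bars, so whoever updates the wall chart can copy the latest bar over by hand (the percentages below are made-up examples, not real project data):

```python
# Turn weekly code-coverage percentages into a simple ASCII trend.
# The numbers here are illustrative placeholders, not real data.
weekly_coverage = [62, 64, 63, 68, 71]  # percent covered, oldest week first

def coverage_bars(percentages):
    """Return one 'week N  P% ###...' line per data point."""
    lines = []
    for week, pct in enumerate(percentages, start=1):
        bar = "#" * (pct // 2)  # one '#' per 2% of coverage
        lines.append(f"week {week:2d} {pct:3d}% {bar}")
    return lines

for line in coverage_bars(weekly_coverage):
    print(line)
```

The point isn’t the script itself; it’s that the team redraws the bar on the wall, which is what makes the chart theirs.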

Messy Picture: Java Web Frameworks

Matt Raible recently posted on web frameworks via another post by Tim O’Brien. His points were:

  • The Struts project has gotten confused by the split between Struts Shale and Struts Action (WebWork), which has hurt adoption.
  • “JSF continues to be the most over-hyped under-used framework in Javaland.”
  • He’s yet to “meet an unhappy WebWork fan.”
  • The learning curve on Tapestry is too high and it doesn’t maintain backwards compatibility.
  • Spring MVC is fine, but Matt has found “WebWork much more pleasurable to work with.”
  • Given the options, you should go with Struts Action 2 (aka WebWork).

I’d be happy if WebWork became the de facto Java web framework as a successor to Struts, but for now it’s still a wait-and-see game. In the meantime, after about 9 months with JSF, we continue to struggle with its six-phase lifecycle and its reliance on tooling. At least Ruby on Rails has forced the Java community to wake up and smell the coffee.

Works On My Machine

Continuous Integration is a good thing.

Hmmm, it works on my machine.

— Developer

I paired up with a developer yesterday to hook a new project into the build box and CruiseControl. We expected a few wrinkles, since this is the first project we’re hooking up to Maven instead of Ant. It should be just a simple **** tag and a few parameters.

We got past the parameter settings faster than expected, maybe 5 minutes on syntax errors like using

    mavenhome

instead of

    mvnhome

and forgetting to check out the project manually for the first run. So after five minutes it’s up and running, but it starts failing soon after getting started.
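
For reference, a minimal CruiseControl project entry using the maven2 builder might look something like the sketch below. The project name, paths, interval, and goal are all placeholders, and a real config would also need things like a modification set and log settings:

```xml
<cruisecontrol>
  <project name="myproject">
    <schedule interval="300">
      <!-- note mvnhome, not mavenhome -->
      <maven2 mvnhome="/usr/local/maven"
              pomfile="checkout/myproject/pom.xml"
              goal="clean install"/>
    </schedule>
  </project>
</cruisecontrol>
```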

So next we tried debugging by running the Maven goals directly in the checked-out codebase. Over and over again it pulls down about 10 dependent jars and then bails on the 11th. It’s not the same jar each time, so the error isn’t obvious. The current theory is that the build box is being throttled by the proxy server, but it was late, so we’ll get back to it in the morning. The build, of course, works on the developers’ machines.

Without hooking up continuous integration early in the project, we would have missed what may be a fairly subtle bug.

Code Review With Crucible Closed BETA

Yesterday I went through an install of Crucible on our development server. It runs on top of their Fisheye product. I’ve actually been waiting a long time to check it out.

Install and configuration weren’t bad: maybe an hour, mostly spent having Fisheye index part of our CVS repository. Just as an experiment I grabbed one random file and started a review. The workflow has a minimal number of stages, which is just fine.

create → approve → review → summarize → close

I set up the review and added about 5 comments. The other reviewer, who had only seen a quick demo of Crucible months ago, then responded to 4 of the comments, fixed one issue, and checked the file back in. I went back in, summarized the change, and closed the review.

The impressive part was that, with no real explanation needed, the whole review took about 30 minutes, done asynchronously. Even though this is a pretty early beta, it’s mature enough for us to start using for our reviews. And since it’s so lightweight, we’ll probably end up doing more reviews. When I went over and talked to the developer on the review, he mentioned we might not even need a formal review meeting.

Overall it appears to be a good fit for us, even if it took about 2 years from its initial announcement to get to this stage.