Piloting Two-Week Iterations

We just wrapped up our organization's first Scrum project using two-week iterations. Of course it was a proof-of-concept project and only ran three Sprints, but it gave us some idea of the possibilities. At the final retrospective, feelings on the team were mixed:

Pros

  • Kept everyone focused.
  • We got a lot done in each of the two-week iterations.

Cons

  • From a testing perspective, if anything slipped, the testers really got slammed at the end.
  • You really needed well-thought-out requirements before starting each Sprint in order to make the Sprint goal.

Overall the takeaway from the team was that two-week Sprints could work well for a short project, a project with straightforward requirements, or a really experienced team. For most larger projects they would feel more comfortable with 30-day Sprints.

This is only a single data point for us, but luckily, about a year and a half into our Scrum rollout, we're starting to acquire a lot more experienced teams.

Throw It On the Backlog

A familiar scenario played out today on one of our Scrum projects. The customer had been thinking about one of the administration screens and how it could be enhanced. The requirements for it had already been baselined for the Sprint, but they were really interested in the new functionality.

On this particular project, the last few times this has come up the developer has decided, "Well, OK, that isn't too big a change, I'm sure we can get it done in the Sprint anyway." I've had several conversations asking whether that made the most sense and whether we shouldn't put it on the backlog, but they always felt they could get it done and that it wasn't that big a change. Much of this is probably attributable to trying to serve the customer, in this case the product owner.

Over the last two Sprints that scenario played out in a predictable pattern: either the developer simply wasn't able to get all the changes in within the Sprint, or they delivered to QA only a day or two before the end of the Sprint, so the work wasn't tested within the Sprint. It looks like the third time around the lesson has sunk in.

I debated being hard-nosed and forcing the issue onto the backlog, but it's really up to the team, and since the Scrum Master on the project didn't balk, I think it ended up as a much more valuable lesson learned. They might not have understood had I simply forced the issue onto the backlog. Had I been the Scrum Master I might have thought differently, but as a manager I think it's better to let people make their own choices.

I think from now on we’ll see a lot more pushed onto the backlog and not forced into the current Sprint.

IBM’s Processor Value Units

Being a customer of Big Blue can be pretty painful sometimes. We recently learned that IBM has conveniently decided to radically restructure their licensing agreements around WebSphere. Their plan for making sure they capture all possible revenue is to charge per core on a processor and per processor type.

According to our contract they can just change things midstream like this without consultation. So our, say, 50 current processor licenses for WebSphere have now been converted using the following formula:

Current per Processor Entitlements x 100 = New Processor Value Unit Entitlements
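
The conversion itself is easy to sanity-check; here's a throwaway Ruby sketch (the flat multiplier of 100 is what applied to our existing hardware, and any per-core or per-family multipliers for other chips are my guess, not IBM's published table):

    # Sketch of the entitlement conversion as it was applied to us.
    # The flat 100 units per processor comes from our conversion notice;
    # other processor families will presumably get different multipliers.
    UNITS_PER_PROCESSOR = 100

    def processor_value_units(processor_entitlements)
      processor_entitlements * UNITS_PER_PROCESSOR
    end

    puts processor_value_units(50)   # => 5000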

Thus instead of 50 CPU licenses we now have the wonderful 5000 processor value units. The whole thing reminds me of the Bill Cosby Noah routine:

Big Blue: We shall convert your CPU licenses to processor value units.

Noah: Right… What's a processor value unit?

David Ogren thinks it’s the worst licensing model he’s seen yet:

But perhaps worst of all is what IBM just announced for it’s middleware pricing. They’ve brought back the idea of “power units” or MIPS based pricing, this time calling it “processor value units”. IBM portrays this as providing for more flexibility and simplicity in pricing. (I think flexibility in this context means “we can charge you more”.) Most disturbing is their announced intention to “differentiate licensing of middleware on processors .. [evolving] to differentiate processor families based on their relative performance”. Meaning that if a faster processor comes out, IBM plans on charging you more to run their software. Or they might charge you more to run on Sun SPARC chips than IBM chips.

As I remember, Oracle tried a similar scheme back in 2000-2001 and had to drop it after customers screamed and sales dropped. From a customer standpoint this is just a really problematic policy: as soon as you upgrade your hardware in the next few years, your licensing costs could jump through the roof, probably something your company never budgeted for.

I think IBM is just making the open source model even more tempting, whether with JBoss or even IBM's own Apache Geronimo. Suddenly my licensing problems go away, and I can deploy on any hardware I want or set up clusters without forking over the dollars. At the end of the day Big Blue is a services company, and this licensing model isn't my idea of service.

Starting a Tutorial With Testing

One of the minor disappointments of Ruby on Rails: Up and Running was that, like so many other texts, it didn't get to the testing topics until the last chapter. Their reasoning was:

We’ve come this far with our Photo Share web application, but we haven’t yet created any tests. In truth, this was deliberate. You had enough new things to learn as it was.

This isn't because the authors consider testing a "nice to have" topic, since they declare that:

…automated testing is probably the single most important thing you can do to increase the quality and reliability of your software.

It's just that everyone assumes testing still isn't a mainstream practice, so it's treated as a bit of an advanced topic to be looked at after you've debugged your way through all the tutorial features.
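
For what it's worth, an early-chapter test doesn't have to be daunting. Here's a minimal sketch of what a first functional test for the book's Photo Share app could look like, Rails 1.x style (the controller and action names are my assumptions, not the book's code):

    require File.dirname(__FILE__) + '/../test_helper'
    require 'photos_controller'

    # Hypothetical first test for the Photo Share tutorial app.
    # The PhotosController name and its index action are assumed here.
    class PhotosControllerTest < Test::Unit::TestCase
      def setup
        @controller = PhotosController.new
        @request    = ActionController::TestRequest.new
        @response   = ActionController::TestResponse.new
      end

      # The simplest useful check: the photo listing renders successfully.
      def test_index_renders
        get :index
        assert_response :success
        assert_template 'index'
      end
    end

Nothing in it depends on the later chapters; it just needs the test harness wired up from the start.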

I’ll be really glad when someone just starts the darn book with testing and without needing to put TDD in the title.

Testing in the Next Sprint

Elizabeth Hendrickson has been blogging quite a bit recently, which can only be a good thing. In a recent post she takes on the problem of getting all the testing done within a Sprint:

The team decided to relieve the pressure on the testers by moving the test effort into the next Sprint. So the features developed in Sprint 1 would be tested in Sprint 2. The features developed in Sprint 2 would be tested in Sprint 3. During each Sprint, the developers worked on the new features while the testers tested the features already developed.

On at least one project we've had exactly this solution proposed but not implemented. On several others it has happened for at least one Sprint, when the developers turned over the final bits of code right at the end of the Sprint, leaving little or no time to get the testing done within it.

Elizabeth has some ideas about how to avoid this situation:

  • QA/Test participates in the Sprint Planning Meeting.
  • Testing tasks are included in the Sprint plan.
  • Hands on testing begins the minute there’s code checked in and available to test.
  • Developers and testers collaborate on test automation.
  • The team deals with bugs immediately.

We already include QA as full members of the team, which means their tasks are included in each Sprint plan. And we fix defects as quickly as possible.

What we're not so good at yet is testing things as soon as they're coded and checked in. Often testers want completed code, GUI and all, before they start testing, and developers aren't looking for ways to make their work testable earlier.

Finally, on developers and testers collaborating on test automation, we've just gotten started using FitNesse for acceptance tests. Hopefully this will prove to be a major boon once we get further down the road.
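
To give a flavor of what that collaboration produces: a FitNesse acceptance test is just a wiki table that testers and the product owner can read and extend, backed by a small fixture class a developer writes. A made-up example (the fixture and column names here are hypothetical, not from our project):

    |AddPhotoToSlideshow|
    |slideshow      |photo       |added?|
    |Summer Vacation|beach.jpg   |yes   |
    |Summer Vacation|missing.jpg |no    |

The table is the shared artifact: testers and the product owner own the rows, and developers own the fixture code behind the column headings.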

The tough part of this collaboration is that developers expect QA to figure out how to test, not necessarily how to collaborate. It's a similar gripe to the developer who says, "I'll start coding when they've nailed down 100% of the requirements." On the test side, "the testers may balk at having to test unfinished code." Despite adopting Mercury's QuickTest Pro for our GUI-driven testing, we're four Sprints into one project with no automated GUI tests because the tester figures the user interface is still in too much flux.

So this is still a hard gap to close, but this approach gives me evidence we’re headed in the right direction.