Learned a new term today going through 19 Deadly Sins of Software Security. They give an example of a URL containing a strange-looking id parameter:
Turns out a simple run through a base64 decoder gives you:
The authors then refer to this as an “encrapted” password.
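For the curious, "decrypting" such a parameter is a one-liner. A minimal sketch in Java (the id value here is my own made-up example, not the actual one from the book):

```java
import java.util.Base64;

public class EncraptedDemo {
    public static void main(String[] args) {
        // Hypothetical "encrapted" parameter -- not the real value from the book.
        String idParam = "cGFzc3dvcmQxMjM=";
        // Base64 is an encoding, not encryption: anyone can reverse it.
        String decoded = new String(Base64.getDecoder().decode(idParam));
        System.out.println(decoded); // prints "password123"
    }
}
```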
James Carr has come up with a basic list of TDD anti-patterns, and while I’ve seen many of them, one still plagues many of our tests:
A test that requires a lot of work setting up in order to even begin testing. Sometimes several hundred lines of code is used to setup the environment for one test, with several objects involved, which can make it difficult to really ascertain what is tested due to the “noise” of all of the setup going on.
The core issue comes down to having to use JSF for the GUI layer and dealing with its many mock objects via Shale’s mocking package for JSF. Many tests can’t simply exercise a single method without first making sure FacesContext and many other stubs are set up. That makes the tests harder and more tedious to write, especially for developers who haven’t quite caught the TDD bug anyway. The setup might not be hundreds of lines, but it’s far more than you’d want for simple unit testing.
We’re still looking at ways to improve this, from refactoring even more code out of the JSF backing beans to bringing in some TDD heavyweights to mentor the team through the best way to deal with it.
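The refactoring idea boils down to pulling business logic out of the backing beans into plain classes that never touch JSF, so their tests need no FacesContext or Shale mock environment at all. A rough sketch (the class and method names are hypothetical, not our actual code):

```java
// Pure logic, no JSF imports -- testable with a one-line setup.
public class DiscountCalculator {
    public double discountedTotal(double total, boolean preferredCustomer) {
        double rate = preferredCustomer ? 0.10 : 0.0;
        return total * (1.0 - rate);
    }
}

// The backing bean shrinks to glue code; only *its* few remaining tests
// still need the Shale/JSF mock setup:
//
// public class OrderBean {
//     private final DiscountCalculator calc = new DiscountCalculator();
//     public String applyDiscount() {
//         this.total = calc.discountedTotal(total, customer.isPreferred());
//         return "confirm";  // JSF navigation outcome
//     }
// }
```

The thinner the beans get, the smaller the slice of the suite that pays the mock-setup tax.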
We just wrapped up our organization’s first Scrum project using two-week iterations. Of course it was a proof-of-concept project and only had three Sprints, but it gave us some idea of the possibilities. At the final retrospective, feelings on the team were mixed:
- Kept everyone focused.
- We got a lot done in each of the two-week iterations.
- From a testing perspective, if anything slipped they really got slammed at the end.
- You really needed well-thought-out requirements before starting each Sprint to make the Sprint goal.
Overall the takeaway from the team was that two-week Sprints could work well for a short project, one with straightforward requirements, or a really experienced team. For most larger projects they would feel more comfortable with 30-day Sprints.
It’s only a single data point for us at this point, but luckily, about 1.5 years into our Scrum rollout, we’re starting to build up a lot more experienced teams.
A familiar scenario played out today on one of our Scrum projects. The customer had been thinking about one of the administration screens and how it could be enhanced. The requirements for it had already been baselined for the Sprint, but they were really interested in the new functionality.
On this particular project, the last few times this has come up the developer has decided, “Well, OK, that isn’t too big a change, I’m sure we can get it done in the Sprint anyway.” I’ve had several conversations asking whether that made the most sense and whether we shouldn’t put it on the backlog instead, but they always felt they could get it done, and it wasn’t that big of a change. Much of this is probably attributable to trying to serve the customer, the product owner in this case.
Over the last two Sprints that scenario played out in a predictable pattern: the developer simply wasn’t able to get all the changes in within the Sprint, or delivered to QA only a day or two before the end of the Sprint, so the work wasn’t tested within the Sprint. Looks like the third time the lesson has sunk in.
I debated being hard-nosed and forcing the issue onto the backlog, but it’s really up to the team, and the Scrum Master on the project didn’t balk, so I think it ended up as a much more valuable lesson learned. They might not have understood had I simply forced the issue onto the backlog. As Scrum Master I might have thought differently, but as a manager I think it’s better to let people make their own choices.
I think from now on we’ll see a lot more pushed onto the backlog and not forced into the current Sprint.
Being a customer of Big Blue can be pretty painful sometimes. We recently learned that IBM has conveniently decided to radically restructure their licensing agreements around Websphere. Their plan to make sure they capture all possible revenue is to charge per core, with the rate varying by processor type.
According to our contract, they can change things midstream like this without consultation. So our (say) 50 current processor licenses for Websphere have now been converted with the following formula:
Current per Processor Entitlements x 100 = New Processor Value Unit Entitlements
Thus instead of 50 CPU licenses, we now have a wonderful 5,000 processor value units. The whole thing reminds me of the Bill Cosby Noah routine:
Big Blue: We shall convert your CPU licenses to processor value units.
Noah: Right… What’s a processor value unit?
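Joking aside, the conversion itself is just the arithmetic from the formula above; a throwaway sketch:

```java
public class PvuConversion {
    // Per IBM's formula: each existing per-processor entitlement
    // becomes 100 processor value units.
    static long toProcessorValueUnits(long processorEntitlements) {
        return processorEntitlements * 100;
    }

    public static void main(String[] args) {
        System.out.println(toProcessorValueUnits(50)); // prints 5000
    }
}
```

Same entitlements, bigger number; the real change is that future PVU rates per core can vary by processor family.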
David Ogren thinks it’s the worst licensing model he’s seen yet:
But perhaps worst of all is what IBM just announced for it’s middleware pricing. They’ve brought back the idea of “power units” or MIPS based pricing, this time calling it “processor value units”. IBM portrays this as providing for more flexibility and simplicity in pricing. (I think flexibility in this context means “we can charge you more”.) Most disturbing is their announced intention to “differentiate licensing of middleware on processors .. [evolving] to differentiate processor families based on their relative performance”. Meaning that if a faster processor comes out, IBM plans on charging you more to run their software. Or they might charge you more to run on Sun SPARC chips than IBM chips.
As I remember, Oracle tried a similar scheme back in 2000–2001 and had to drop it after their customers screamed and sales dropped. From a customer’s standpoint this is just a really problematic policy: as soon as you upgrade your hardware in the next few years, your licensing costs could jump through the roof, probably something your company never budgeted for.
I think IBM is just making the open source model even more tempting, whether with JBoss or even IBM’s own Apache Geronimo. Suddenly my licensing problems go away, and I can deploy on any hardware I want or set up clusters without forking over the dollars. At the end of the day Big Blue is a services company, and this licensing model isn’t my idea of service.