A developer admitted today, after finally getting back to writing tests, that:
I’m finding a lot of places where we should have checked for nulls in our code.
I may point the developer to the Introduce Null Object pattern tomorrow, since it can eliminate having to check for nulls all over the code base. Eventually we’ll write the tests first, but this is progress.
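For reference, here’s a minimal sketch of the Null Object pattern. The `Customer` example is entirely hypothetical, not code from our actual project; the point is just that callers get a safe do-nothing object back instead of null:

```java
// Minimal Null Object sketch. Customer, RealCustomer, NullCustomer, and
// CustomerRepository are hypothetical stand-ins for illustration only.
interface Customer {
    String getName();
    boolean isNull();
}

class RealCustomer implements Customer {
    private final String name;
    RealCustomer(String name) { this.name = name; }
    public String getName() { return name; }
    public boolean isNull() { return false; }
}

// A do-nothing implementation returned wherever we used to return null.
class NullCustomer implements Customer {
    public String getName() { return ""; }
    public boolean isNull() { return true; }
}

class CustomerRepository {
    // Callers no longer need a null check; they always get a usable Customer.
    Customer findByName(String name) {
        if ("Bob".equals(name)) {
            return new RealCustomer("Bob");
        }
        return new NullCustomer();
    }
}

public class NullObjectDemo {
    public static void main(String[] args) {
        CustomerRepository repo = new CustomerRepository();
        // No "if (customer != null)" guard needed at the call site.
        System.out.println(repo.findByName("Alice").getName().length());
    }
}
```

The trade-off is a few extra classes up front in exchange for deleting null checks at every call site.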
As I’ve mentioned before, our adoption of TDD is going more slowly than I anticipated. I’ve been religiously posting unit test counts on the wall of my cube for an entire Sprint now, and we’ve gone from zero to 39 unit tests in that time. Since that amounts to about 10% unit test coverage in Clover, we have a long way to go.
I don’t think the daily chart has done much to accelerate things, but now that the two developers on the project have had a chance to catch their breath, I have seen them paying a bit more attention to the Clover reports, especially the green bar. Now they’re saying they want to get to at least 50% and move the green bar up from its current lowly 10%. It’s even occurred to them that they can boost their Clover percentage just by setting a lot of properties in the beans. This isn’t exactly the outcome I want, but it does show they’re starting to think about how they can increase their unit test coverage.
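To illustrate the bean-property trick (with a made-up bean, not one from our project): a test like the following marks every line of the bean as covered in a tool like Clover while verifying essentially no behavior.

```java
// Hypothetical bean used only to show how accessor-only tests inflate
// line coverage without testing any real logic.
class AccountBean {
    private String owner;
    private double balance;
    public void setOwner(String owner) { this.owner = owner; }
    public String getOwner() { return owner; }
    public void setBalance(double balance) { this.balance = balance; }
    public double getBalance() { return balance; }
}

public class AccountBeanTest {
    public static void main(String[] args) {
        AccountBean bean = new AccountBean();
        // Every line of AccountBean is now "covered" ...
        bean.setOwner("test");
        bean.setBalance(100.0);
        // ... but the assertions only restate the setters.
        if (!"test".equals(bean.getOwner())) throw new AssertionError();
        if (bean.getBalance() != 100.0) throw new AssertionError();
    }
}
```

Coverage goes up, confidence doesn’t — which is exactly why the number alone isn’t the outcome I’m after.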
Similar to the short cycles of TDD itself, we’re starting to make those baby steps.
If they [your developers] value simplicity you probably want to go with a super platform.
According to the Burton Group:
- Super Platforms are: IBM, BEA, SAP, and Microsoft
- Rebel Platforms are things like: Hibernate, Spring, Struts, Tomcat, and LAMP.
Of course, I really thought much of the J2EE community had come to ‘rebel’ platforms because the wonderful vendor-driven EJB model was too heavyweight and complex. Entity beans are pretty much dead on arrival these days. And we’ve been dealing with WebSphere for 6 months now, only to be constantly reminded how complex it is. Everything from handling class loaders to logging problems with RAD makes it significantly more complex and troublesome than anything from the open source J2EE community. Case in point: we ran complex applications on JBoss up until now with relatively few issues.
Amazingly enough, Monson-Haefel points out how much better the support is with a super platform, including things like documentation. Let’s compare IBM’s documentation to, say, Hibernate’s or Spring’s.
IBM believes in Redbooks, which are a strange mixture of documentation and marketing. There are a few books on the market that IBM hasn’t written itself, but they’re mostly not exactly hot sellers. Then if you go digging in the developerWorks site you can sometimes find something relevant. On the other hand, Hibernate and Spring each have multiple fairly popular independent books from multiple technical presses. They also have a huge amount of documentation online, since these are popular frameworks that many developers actually use. I’d pick a popular open source framework for better documentation every time.
Monson-Haefel also seems to presuppose that you need ‘better’ developers to handle the rebel platforms, and that the super platforms are more of a drag-and-drop, model-driven, trained-monkey kind of development. Unfortunately, all our recent anecdotal evidence points to the opposite conclusion: even our brightest developers find themselves spending tons of time dealing with configuration instead of coding, and the drag-and-drop stuff generally sets you up for some pretty crappy code. And how many times are we going to be pointed to the just-out-of-reach holy grail of model-driven development where you hardly need a programmer?
I’ll take the simplicity of a rebel platform any day over the complexity and constraints of an enterprise super platform. I can see why this podcast was the lowest rated of the week for IT Conversations.
It occurred to me today while reading an article about Tiny Basic in the latest Dr. Dobbs that much of my childhood contempt for actual programming may have come from my propensity for syntax errors and the primitive nature of the editors I used back in my childhood.
I’m dyslexic, so spelling has always been difficult by nature. Modern IDEs and editors nicely include features like auto-complete, or at least syntax coloring and error suggestions, which greatly reduce the impact of not being a stellar speller. Enough so that for the last ten years or so I’ve always enjoyed pure coding, versus my earlier experiences.
I remember taking forever to work my way through simple BASIC programs on a TI 99/4A or the Commodore 64. I also remember working in vi and absolutely detesting it, and not just for its absolutely frustrating modal interface. I edited my first HTML in a raw text editor and quickly determined that not using a tool was for the birds. A simple download of BBEdit Lite and I was instantly more productive and making a lot fewer syntax errors, or at least able to spot them instantly. And that editor only had syntax coloring at the time.
Now I love working in IntelliJ IDEA with auto-complete, intention actions, and refactoring support. I probably make just as many syntax errors, but they’re identified so quickly I hardly notice. It certainly makes me a lot more productive. And I still hate vi. My old joke from Georgia Tech was that I knew the most important command in vi: Shift-Z-Z.
I’m in the process of converting all of my lab examples from a TDD/JUnit class, which include Bob Martin’s bowling game example, a simple bug tracker, and a golf game, over to Fitnesse. I picked up the book Fit for Developing Software to help me along, since I never found the built-in documentation quite sufficient.
The book helped, though it’s split into separate sections for business analysts and developers, so you do a lot of flipping back and forth. I have run into one small issue I haven’t been able to resolve yet in dealing with exceptions.
Exceptions are fairly easy to test for in your standard JUnit framework, and I figured there was probably an easy way to test for them in Fitnesse. Turns out you just put the keyword error in the table cell if the expected result is an error, such as searching for a bug that doesn’t exist in the bug tracker. The problem I’m having is that I can’t find any easy way to examine the message being returned. If I don’t check for error, it prints out the exception message and the related stack trace, which is very nice for debugging but not too useful for QA, and besides, I’m expecting it to fail. If I do check for error, then it’s all fine and dandy and the test passes, but the message gets swallowed. Hopefully I’ll run across an easy resolution to this as well.
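By contrast, here’s the standard JUnit 3-style idiom for checking both that an exception is thrown and what its message says. The `BugTracker` and `BugNotFoundException` classes are hypothetical stand-ins for the bug tracker lab example, and I’ve written it in plain Java (manual AssertionErrors instead of fail/assertEquals) so it stands alone:

```java
// Hypothetical stand-ins for the bug tracker lab example.
class BugNotFoundException extends RuntimeException {
    BugNotFoundException(String message) { super(message); }
}

class BugTracker {
    Object findBug(int id) {
        // Sketch: every lookup fails, to keep the example self-contained.
        throw new BugNotFoundException("No bug with id " + id);
    }
}

public class BugTrackerExceptionTest {
    public static void main(String[] args) {
        BugTracker tracker = new BugTracker();
        try {
            tracker.findBug(42);
            // Reaching this line means the expected exception never happened.
            throw new AssertionError("expected BugNotFoundException");
        } catch (BugNotFoundException e) {
            // Unlike Fitnesse's bare error cell, the message is inspectable here.
            if (!"No bug with id 42".equals(e.getMessage())) {
                throw new AssertionError("wrong message: " + e.getMessage());
            }
        }
    }
}
```

This is exactly the granularity I’m missing on the Fitnesse side: the catch block gets the message, where the error cell only records pass/fail.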
I actually think the fact that it throws an error should be good enough for acceptance testing by a business user, but I’m certain that our QA and business analysts will want to check the error message text exactly, since I’ve seen some of our requirements business rules that, for some reason, specify in text where a field and error message should appear on the page. This is despite the fact that there are already detailed HTML prototypes. If I get some time, maybe I’ll dig into the Fitnesse mailing list archives.