As a software manager and developer, I’ve followed the gradual adoption of DSLs as a mainstream technique. I’ve worked with numerous DSLs, including:
- Easymock, Mockito
- Grails, GORM
- Ant (as ugly as an XML DSL is)
RSpec was a wonder at the time compared to JUnit 3, where I spent the bulk of my testing time. I loved the ‘should’ syntactic sugar and the clean English definitions of the specs. Expressive tests are critical to maintaining a large codebase, and they almost require DSL implementations. Rails pushed the idea into a full-fledged web framework and made ugly implementations like JSF in Java land look like poor cousins. DSLs also helped with the adoption of newer dynamic languages like Ruby and Groovy, because method chaining in a language like Java doesn’t make for a nice DSL.
It’s taken a while for the literature to catch up to the use of DSLs among developers. Indeed, Martin Fowler has been working on his DSL book for 4-5 years. The great majority of DSL use is through frameworks and libraries, so currently very few engineers are coming up with their own DSLs. DSLs in Action by Debasish Ghosh is an attempt to show some options for taking on DSL development on your next project.
The book defines two types of DSLs:
- Internal DSLs – Rails, Grails
- External DSLs – generated with tools like ANTLR
The book covers both types, with a greater focus on the embedded internal DSLs that the reader is probably more familiar with. If the approach had been to focus more on external DSLs, I think the audience would have been more limited, as most developers are not ready to jump into the complexity of writing their own external DSLs and haven’t had the time to look at something like a DSL workbench such as JetBrains’ Meta-Programming System. I loved that the diagrams were hand-drawn whiteboard UML-lite style, as that’s a trend I strongly encourage in technical books.
The code samples do a good job of illustrating the various strengths of different languages you can use for implementing DSLs. The example domain is a Wall Street trading firm. The Java example looks like:
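The book’s actual listing isn’t reproduced here, but to give a feel for the method-chaining style, here is a rough sketch in plain Java; the class and method names are my own illustration, not the book’s code.

```java
// Hypothetical sketch of a Java method-chaining DSL for trade orders.
// Names here are illustrative only, not the book's actual listing.
class Order {
    private String instrument;
    private int quantity;
    private double limitPrice;
    private boolean allOrNone;

    // Static factory starts the chain
    static Order buy(int quantity, String instrument) {
        Order o = new Order();
        o.quantity = quantity;
        o.instrument = instrument;
        return o;
    }

    // Each builder method returns 'this' so calls can be chained
    Order atLimitPrice(double price) {
        this.limitPrice = price;
        return this;
    }

    Order allOrNone() {
        this.allOrNone = true;
        return this;
    }

    @Override
    public String toString() {
        return "buy " + quantity + " " + instrument
                + " at limit price " + limitPrice
                + (allOrNone ? " all-or-none" : "");
    }
}
```

A client would then write something like `Order.buy(200, "IBM").atLimitPrice(52.0).allOrNone()` — readable, but as the next paragraph notes, still noticeably Java.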
Typical method chaining and a bit verbose. Not exactly the code you hope to show to a technically savvy business user.
Much nicer, even if the curly brackets still bug me.
Lisp syntax, but very readable.
A nice syntax, parsing out all the strings. I found the actual implementation classes for this example hard to follow, but the resulting syntax might be worth the pain, though you can accomplish a similar syntax in Groovy or Ruby.
Note that this book assumes a polyglot programmer. The code examples include Java, Ruby, Groovy, Clojure, and Scala. If you’re still mostly a Java developer, this probably isn’t the book you want to start with, as the coding syntax, especially with Clojure and Scala, will make it difficult to follow. I have a long background in Java, Ruby, and Groovy, and reasonable exposure to Lisp-like languages, but I found the Scala examples and syntax jarring. It did help solidify my feeling that I just don’t like the syntax and complexity of Scala, at least on the surface. (To be fair, I probably need to spend some time with it; maybe it grows on you.)
The book does a good job of walking you through examples of implementing DSLs in various languages and the approaches you might take. In daily development the DSLs most likely to be implemented will be embedded DSLs, but the book does a good job of dealing with external DSLs and the complexity involved. Even the author admits, at the end of a chapter on implementing external DSLs using Scala’s parser combinators, that:
If you have reached this far in reading the chapter on parser combinators, you have really learnt a lot about one of the most advanced applications of functional programming in language design.
Overall I found the book helpful in expanding my overall understanding of DSLs, and I will probably reference it the next time I need to build a small DSL in Groovy or Ruby on a project.
I was working with some clients recently when one of them leaned back in his chair and announced:
“Well Paul’s leaving. I guess he finally got fed up.”
The group of developers and sysadmins were disappointed at the news. They wondered why he decided to leave, as it turned out he was a key champion from the QA group in pushing for a closer working relationship with development. He had a development background and had been key in moving the group from manual testing to working closely with developers on tests and adding automated regression suites.
Earlier the group had explained that they had adopted Scrum in the development group about 18 months prior, and it had been going fairly well, with now 5-6 Scrum teams. One of the biggest successes had been the closer work with testing. A familiar problem area had been getting the true product managers to attend the Scrums, as they largely delegated to business analysts and much was lost in the translation.
Apparently the QA team was going to take this hard, as Paul had been a champion of theirs in evolving their practices and fighting for respect for QA at the table. It sounded like he had pushed hard and been denied many things because of an unwillingness to imagine QA outside of its traditional role. This shop also had the Mercury suite of testing tools, which is often a sign of a focus on bug databases and record-and-playback style automation that doesn’t go nearly far enough in improving the effectiveness of QA.
I hope they succeed as the people I worked with all seemed bright and dedicated to improving things, but a couple of these items are classic warning signs in an Agile adoption that is likely to run out of steam.
- Agile champions like this QA developer throwing in the towel.
- Product managers delegating day to day involvement in the Scrums.
- Use of less-than-Agile tools like the Mercury suite.
- QA still having a real perception problem in the organization.
I certainly hope this test lead doesn’t turn out to be a canary in the coal mine for their Agile rollout.
After an incredible hype cycle back in 2005 many organizations took the plunge. We were going to ride SOA into a new highly productive development environment. The idea was we’d build business services and then start composing applications on the fly based on these service components. The reasons for diving headfirst into SOA included:
- Gartner and many tech magazines pushed hard that this was the wave of the future. We haven’t had a tech hype cycle like it since.
- At a high level the story was compelling, especially the idea of working closely with the business.
- Enterprise vendors needed a new story to sell complex, high cost software after ERP packages started having so many expensive failures, and app servers were largely commoditized by the success of JBoss.
- As a baseline XML had become well understood, and web services were a much easier integration story than CORBA.
So a large number of IT shops jumped in to build their SOA solutions. Some efforts crashed and burned completely and the organization walked away. Others fought through the learning curve, the mass of WS* specs, and delivered some useful services. Still others worked with large system integrators and delivered solutions that were little more than overly complex integration services with large XML payloads presented as a grand new SOA Architecture.
Many others sat out the whole SOA revolution. They didn’t go out and buy an expensive ESB/BPEL/Composite Application Suite. They let others go through the pain and tried to decide if it was worth it. In the Java space it was similar to the large number of shops who decided to pass on EJB and just stuck with web containers like Tomcat.
What I see now out in the field is a number of those IT shops taking a second look at SOA. There are a number of factors that make it worth the second look:
- Integrating with other organizations these days often assumes you can hook up to a web service they already have.
- The testing story has gotten better, even if much of it is still functional testing.
- The tool vendors have created reasonably decent tools, unlike the early generation that were plagued with bugs.
- Open source solutions like Camel and ActiveMQ are proven.
- The cloud computing meme is putting an emphasis on thinking in terms of services and utilizing them to create a complete application.
As I’m out at client sites over the next year, I expect to continue to see this trend increase, with many sites taking a second look at implementing some sort of SOA. (And yes, SOA is still a fuzzy term that I don’t hope to define here.)
I’ve been meaning to put together an example of all the Hamcrest assertions that were added to JUnit 4 back in 2007. My assumption, based on a number of recent client engagements, is that where unit testing is being done with JUnit, assertEquals() is still the default. I found developers were very enthused about the new assertThat() style if you showed them some examples. In order to better understand all of the new Core Matcher options, I put together a little tutorial example of all the defined matchers.
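To show the shape of the assertThat() style without assuming JUnit on the classpath, here is a toy re-implementation in plain Java. The real matchers live in org.hamcrest.CoreMatchers and the real assertThat() in org.junit.Assert; this stripped-down sketch just illustrates how the pieces compose into readable English-like assertions.

```java
// Toy sketch of the Hamcrest-style matcher API that ships with JUnit 4.
// The real versions live in org.hamcrest and org.junit.Assert; this
// simplified version only demonstrates the composable-matcher shape.
import java.util.Objects;

interface Matcher<T> {
    boolean matches(T actual);
    String describe();
}

class Matchers {
    // equalTo(value): matches when actual equals the expected value
    static <T> Matcher<T> equalTo(final T expected) {
        return new Matcher<T>() {
            public boolean matches(T actual) { return Objects.equals(actual, expected); }
            public String describe() { return "equal to " + expected; }
        };
    }

    // is(matcher): pure syntactic sugar for readability
    static <T> Matcher<T> is(Matcher<T> inner) {
        return inner;
    }

    // not(matcher): inverts another matcher
    static <T> Matcher<T> not(final Matcher<T> inner) {
        return new Matcher<T>() {
            public boolean matches(T actual) { return !inner.matches(actual); }
            public String describe() { return "not " + inner.describe(); }
        };
    }

    // assertThat(actual, matcher): fails with a readable message on mismatch
    static <T> void assertThat(T actual, Matcher<T> matcher) {
        if (!matcher.matches(actual)) {
            throw new AssertionError(
                "Expected: " + matcher.describe() + " but was: " + actual);
        }
    }
}
```

With static imports, a test then reads almost like English: `assertThat(balance, is(equalTo(100)));` — which is the readability win that gets developers excited about the style.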
Everyone has that trunk of old junk tucked away in the attic. It’s almost spring and time to think about getting organized, tossing out old junk, and having a garage sale. Apache has managed to create an online concept of a software attic. Old open source projects that have outlived their useful lives can be retired there. The concept is intuitive and useful in a world with hundreds of thousands of open source projects, many abandoned early in their lives.
I’m not sure how long the Apache Attic has been around, but I came across it accidentally. I’ve been doing a lot of architecture assignments recently, which often involves digging through big source code trees to get a sense of how their development has evolved. In Java land I run into dozens of different web frameworks, and I came across a project utilizing Apache Beehive. I recalled it was some extra XDoclet-like comment annotations on top of Struts that was adopted by WebLogic as its default framework years ago. I’m all too familiar with classic Struts, but now I needed to go look up Apache Beehive to see where the framework currently stood.
It didn’t take too long to arrive at the Apache Attic page. The mission was stated as such:
The Apache Attic was created in November 2008 to provide process and solutions to make it clear when an Apache project has reached its end of life.
Brilliant! Explaining that the time has come to evolve the codebase is much easier when Apache has officially retired the project. Unlike with so many other open source projects, I don’t have to do the usual investigation on arriving at some SourceForge page where it appears the project has been abandoned. I don’t have to argue that, despite a few check-ins in the last year, an active project will typically see hundreds of check-ins, and that this project is essentially dead. I don’t have to point out that despite having a plugin architecture for some framework, only, say, 5 plugins have ever been developed, and the last one was 3 years ago. And finally, my explanation to management has the official endorsement of Apache.
I’d like to encourage more of this in the future. I await the day when Struts Classic moves to the Apache attic.