A Project of Problems

My friend and colleague Iwona and I were talking this afternoon about our current project. It’s a complex integration exercise, joining four very different systems while surfacing a new interface through Portal. Iwona is an excellent team member, constructing a data model that is both durable and dynamic.

The conversation came around to how we’ve been tackling the problems as they come. Her preference would have been more upfront analysis and less refactoring. It came out that she believes we’ve been ‘designing the system to solve the next problem’. I bristled at the suggestion…but only for a moment (and the moment’s gone).

I thought about it and realized that’s exactly what we’re doing, and I don’t think I’d have it any other way. I’m not saying upfront analysis is a bad thing. Lightweight design meetings are a great way to start a project. They introduce everyone to the Domain and get lots of foundational knowledge on the table. I love that part of a project. I love getting to know a new Domain, forming a common Domain language and figuring out where the wins are. Conversations beyond that start returning diminishing value.

Under a certain light, projects can be viewed as nothing more than a series of problems needing to be solved. Isn’t that why we got into this in the first place, to solve problems? I know that’s where the draw was on my 1983 Apple II clone, solving the riddles that made the bloody thing work (I was 12 and the machine rarely worked without some coaxing…).

You often hear team members describe a pre-production problem as “Murphy’s Law strikes again” and yeah, I guess that’s one way of looking at it. To me, it means a problem wasn’t ready to appear. Now I know that sounds a bit too, I don’t know, abstract or philosophical to have any meaning. What I mean to say is that some problems are revealed only through the solving of others. You’ll never solve them all; you just need to solve enough to provide value to those who feel the problem’s pain the hardest.

So whether you spend the time designing the holes in which the fence posts will be placed or drawing a rough idea and putting the shovel into the ground…there are rocks, they weigh a lot and it feels amazing once they’re out.

Specifications

I was having lunch with Jeff and Ian at Café Mauro when the subject of JSR 168 portlets came up. It seems the architectural strategy at Jeff’s current client is to build all portlets in strict accordance with the specification (JSR 168).

A blanket statement like “all pieces of software are to be written as absolute derivatives of a specification” makes no sense to me. I see it as architectural idealism, a pattern that hinders the movement to build better software. And here’s why.

Specifications have authors, and guess where those authors work? They work for the large software vendors of the world. They work at SAP, they work at IBM, they work at Oracle…you get the idea. They co-author these specifications because they want to standardize ideas or approaches to building products. Once a specification is published, they release their implementation of it and customers buy it. It makes sense. They add functionality beyond the published version; some companies do a really good job, some fail miserably. That’s their problem (and their customers’, I suppose).

Let’s use Portal as an example. Here are three reasons why I will never insist my team build only JSR 168 portlets:

  1. It prevents them from leveraging what they own – they paid for a swack of work done by the diligent developers in their vendor’s labs…they should probably use it.
  2. It slows the software development cycle – spending my time hand crafting something that has already been built means I’m not hand crafting the stuff that hasn’t been built.
  3. It severs their support channel – “Ummm, IBM, Hi, it’s Jamie McIlroy calling. Yeah, I’m having some trouble with my real time awareness application within WebSphere…what’s that? Sametime? No, no…we wrote our own to comply with JSR 168…ummm, hello?!”

I can’t wait to sit in a meeting and hear someone tell a client they can’t have functionality they need because (a) it’s not part of a rather generic specification or (b) it would take too long to rebuild it to comply with the specification. Yeah, I’d love to support my colleague on that one… While I’m at it, I should probably ask the client to stop taking their medication (Drug Companies are corrupt) or writing to their mother. It’s not my place.

WebSphere Portal and its proprietary portlet specification (WPI) give us loads of good things to deliver to the business. Sametime awareness (not in the specification — never will be) and Click2Action (C2A, not portlet messaging: not in the specification — never will be) are two examples of helpful function points I have no problem getting out to the business.

This idealism is like asking your home builder not to use any manufactured parts in your new house; asking them to mill every banister spindle, baseboard and door jamb. While the idea may seem appealing, imagine what it would cost. Imagine how little you’d care about those hand crafted baseboards once you moved in. I imagine telling my wife how the obscene budget overruns are attributed to the hand blown glass windows I had the builder make. Your boss at work is like my boss at home. They don’t care where it comes from and they don’t want any surprises. They care that it works and that you did your homework before deciding to use it.


*This assumes I’m working for a customer and that customer isn’t in the business of writing JSR168 portlets.

Remembering Memory Management with CORBA

Java is a wonderful language, especially when compared to a container-native language like LotusScript. There are many more options, frameworks and design patterns to play with. The tools for writing Java agents make the days of writing LotusScript agents seem like forever ago. Have I discovered the silver bullet in the Domino world? Is this my Valhalla?

No.

Not by a long shot. Remember memory management? Yeah, it’s back. I tried writing a simple agent that processed a Domino view containing a little more than 20,000 records. The agent uses the POI framework to convert the view into an Excel workbook, on a nightly basis. Here’s the thing: Domino’s default heap size is 64 MB, and my agent was running out of memory. A simple increase in heap size got me around the issue but raised the concern that memory management is something I now need to keep an eye on when writing these Java agents. I know, I know, any (every) developer should always be thinking about memory management, but Domino’s LotusScript lulled me away from the concern.
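
For the curious, here’s roughly the shape of that agent. It’s a minimal sketch rather than the production code: the view name, field names and output path are placeholders, the spreadsheet side assumes POI’s HSSF classes, and the Domino side is the standard lotus.domino API. The recycle() calls are the part LotusScript never made me think about; skip them on a 20,000 record view and the default heap disappears in a hurry.

    import java.io.FileOutputStream;

    import lotus.domino.AgentBase;
    import lotus.domino.Database;
    import lotus.domino.Document;
    import lotus.domino.Session;
    import lotus.domino.View;

    import org.apache.poi.hssf.usermodel.HSSFRow;
    import org.apache.poi.hssf.usermodel.HSSFSheet;
    import org.apache.poi.hssf.usermodel.HSSFWorkbook;

    public class ExportViewAgent extends AgentBase {
        public void NotesMain() {
            try {
                Session session = getSession();
                Database db = session.getAgentContext().getCurrentDatabase();
                View view = db.getView("Nightly Export");     // placeholder view name

                HSSFWorkbook workbook = new HSSFWorkbook();
                HSSFSheet sheet = workbook.createSheet("Export");

                int rowIndex = 0;
                Document doc = view.getFirstDocument();
                while (doc != null) {
                    HSSFRow row = sheet.createRow(rowIndex++);
                    row.createCell((short) 0).setCellValue(doc.getItemValueString("Customer"));
                    row.createCell((short) 1).setCellValue(doc.getItemValueString("Amount"));

                    Document next = view.getNextDocument(doc);
                    doc.recycle();                            // hand the backend object back as we go
                    doc = next;
                }

                FileOutputStream out = new FileOutputStream("c:\\exports\\nightly.xls");
                workbook.write(out);
                out.close();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

The heap bump itself was a one-line notes.ini change (JavaMaxHeapSize, if memory serves), but the real lesson is in the loop: the garbage collector only helps if you hand things back to it.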

That being said, I still believe this is the way I’ll continue to write my Domino agents from this day forward. My main reason? I can write JUnit tests for them. A topic for another day…

Testing Overhead

I was enjoying a sandwich with my friend Lindsay the other day when he asked an age-old question:

“How much extra time do you think Test Driven Development adds to the build process of a software project?”

An excellent question, one I think about all the time. Once we established what he meant by build process, my answer was pretty easy:

“None”

As a test driven developer, when was the last time you were burned by a NullPointerException? I haven’t seen one in my domains for a couple of years. You catch that stuff at build time through the effective use of testing. You know the state of your objects before they start interacting with other objects.
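
To make that concrete, here’s a contrived example rather than anything from a real project; the Invoice class and its fields are made up for illustration, and the test assumes plain JUnit in the TestCase style:

    import java.util.ArrayList;
    import java.util.List;

    import junit.framework.TestCase;

    // Invented for illustration: the kind of object whose state I want pinned
    // down before it goes anywhere near the rest of the system.
    class Invoice {
        private final String customer;
        private final List lineItems = new ArrayList();

        Invoice(String customer) {
            if (customer == null) {
                throw new IllegalArgumentException("an invoice needs a customer");
            }
            this.customer = customer;
        }

        String getCustomer() { return customer; }
        List getLineItems()  { return lineItems; }
    }

    public class InvoiceTest extends TestCase {
        public void testNewInvoiceNeverHandsOutNulls() {
            Invoice invoice = new Invoice("Acme Ltd.");

            // If either of these fails I find out at build time, not as a
            // NullPointerException three layers deep in someone else's object graph.
            assertNotNull(invoice.getCustomer());
            assertNotNull(invoice.getLineItems());
            assertEquals(0, invoice.getLineItems().size());
        }
    }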

With some research, I’d wager that Test Driven Development actually saves time. The tests that drive my software were written by a customer. If my tests (and the software behind them) satisfy what the customer has written, I’m done. Finished. I can remember thinking I was done solving a customer’s problem only to later find I hadn’t fully comprehended it. When they write the tests and I code to them, we’re both aware of their expectations. That saves a tonne of time and, better than that, it frees up my time to solve more of their problems.

If you’re new to TDD and looking for a good book to take you through the preliminary steps, try Kent Beck’s Test Driven Development: By Example

Business Process Execution Language

Like a courting young man, I stumbled into the beauty of BPEL yesterday. The promise of a cost effective, standards based integration technology was too good to resist.

The adventure started through the installation wizard of IBM’s WebSphere Integration Developer. Hard stop. WID cannot be installed on the same machine as Rational Application Developer, my primary development environment at TransAlta.

“BPEL is just a standard, the tooling shouldn’t matter”, was my next thought. Off to Oracle I went.

I read Oracle’s 2 minute tutorial on their Eclipse based BPEL Designer.

So, 30 minutes and zero Oracle experience yielded the following:

  1. A Hello World BPEL
  2. A deployed BPEL that reads my existing SAP Integrated Web Services (big step)

Well done Oracle, I’m impressed. IBM could learn a thing or two from you about how to introduce a developer to their new technologies.

Proof of Concept – Paint the Eaves before the Doors

I was recently in a meeting with a client of mine and one of their most respected vendors. The vendor was proposing a “Proof of Concept” exercise, implementing a bare-bones version of their newest software.

The client proposed real-world issues which the final version of the software needed to address. The vendor baulked at including them in the PoC*, concerned the PoC would fail. Through a long series of conversations, the vendor agreed to add limited portions of these requirements. They insisted on a vast array of disclaimers in case it didn’t work.

I’m not trying to pick on the vendor; they were doing what we all do in these situations. They want to impress the client and drive as much functionality into the three-month PoC as possible. Martin Fowler touched on this issue in his blog the other day.

I agree with him and, in this case, would have liked to see the vendor approach the client by proving out their technical limitations first. I don’t see that happening, and I anticipate a PoC that fails to live up to the client’s expectations.

I owned a painting franchise in University. I often told painters to avoid painting the easy stuff first. Leave the ground floor doors and basement windows until the end. Get up on the high ladders and paint the eaves first. By tackling the toughest task first, you have an accurate indication of how the rest of the project will go.

I think back to those projects all the time and marvel at how painting a house draws such significant parallels with building software…


*PoC – Proof of Concept

Mocking a Web Service

I like the idea of Mock Objects, I like what they stand for and the development speed they enable. I typically write my own and that makes me feel a little like someone at the edge of the room. When I get the time to investigate the Mock frameworks available, I’m sure I’ll think of this post and wonder why I was making life more difficult.

The issue is mocking an object you’re expecting from a web service. You don’t have control of the object, so you can’t expect it to implement a desired interface (as I would normally do). You find yourself writing tests with real, hot-wired objects — this is wrong in more ways than I care to get into on a Friday morning.

My first thought was to extract an interface from the type generated by the web service. I could then create a mock type in my test package that implemented that interface. I gave myself a little pat on the back and went about my business writing some tests. I wrote tests for a day in my happy, ignorant little mock world, disconnected from the Service and speeding through my requirements.
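
Roughly what that first attempt looked like, with every name invented for the example (CustomerRecord stands in for whatever type your web service wizard generates):

    // CustomerData.java -- the interface I extracted by hand from the generated CustomerRecord
    public interface CustomerData {
        String getName();
        String getAccountNumber();
    }

    // MockCustomerData.java -- the mock living in my test package, built against
    // the hand-extracted interface rather than against the service itself
    public class MockCustomerData implements CustomerData {
        public String getName()          { return "Test Customer"; }
        public String getAccountNumber() { return "A-1234"; }
    }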

I’m lucky enough to sit around the corner from the fellow writing these Services. He stopped by my desk to ask whether his changes to the Service object had caused any problems in my project. Pride gathering beneath the surface, I was about to outline my mocking technique. Then an anxiety-filled “did I leave the iron on?” type of thought came roaring into my mind.

“I’m not sure Jeff, let me get back to you on that one”.

Of course it had an impact.

My interface was based on what the type looked like the day I extracted it. The type changed, but it held no true relationship with my interface (I couldn’t have the wizard make the Service object implement the new interface; it would break the moment I regenerated the client). I was testing an irrelevant piece of software… The Titanic was sinking and I was checking the sound system in the Grand Ballroom…

So here’s what I did.

I deleted the interface and had my mock extend the Service object itself. When the client is regenerated, any changes to the Service object’s structure will cause my mock to break; I get notified right away (when I try running the tests).
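
A sketch of that second attempt, using the same invented names as above; the real generated type is whatever your WSDL produces, and you never edit it by hand:

    // CustomerRecord.java -- stand-in for the class the wizard regenerates from the WSDL
    public class CustomerRecord {
        private String name;
        private String accountNumber;

        public String getName()                             { return name; }
        public void setName(String name)                    { this.name = name; }
        public String getAccountNumber()                    { return accountNumber; }
        public void setAccountNumber(String accountNumber)  { this.accountNumber = accountNumber; }
    }

    // MockCustomerRecord.java -- my mock, in the test package. Because it extends the
    // generated type and leans on its setters, regenerating the client after a service
    // change breaks this class (and the tests that use it) immediately.
    public class MockCustomerRecord extends CustomerRecord {
        public MockCustomerRecord() {
            setName("Test Customer");
            setAccountNumber("A-1234");
        }
    }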

It’s not the cleanest solution to this particular problem but it gets around that isolated feeling of ignorance I was starting to sense. It allows me to write software against something that looks and feels like the service object without being connected to the Service itself.