Remembering Memory Management with CORBA

Java is a wonderful language, especially compared to a container-native language like LotusScript. There are many more options, frameworks, and design patterns to play with. The tools for writing Java agents make the days of writing LotusScript agents seem like forever ago. Have I discovered the silver bullet in the Domino world? Is this my Valhalla?

No.

Not by a long shot. Remember memory management? Yeah, it’s back. I tried writing a simple agent that processed a Domino view containing a little more than 20,000 records. The agent uses the POI framework to convert the view into an Excel workbook on a nightly basis. Here’s the thing: Domino’s default Java heap size is 64 MB, and my agent was running out of memory. A simple increase in heap size got me around the issue, but it raised the concern that memory management is something I now need to keep my eye on when writing these Java agents. I know, I know, any (every) developer should always be thinking about memory management, but Domino’s LotusScript lulled me away from the concern.
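For anyone hitting the same wall: the heap ceiling for agents run by the Domino server’s JVM is set in notes.ini. A minimal sketch; the setting name is the standard Notes/Domino parameter, but verify the value format against your server version’s documentation before relying on it:

```ini
; notes.ini on the Domino server
; raise the Java heap from the 64 MB default (value here is in bytes)
JavaMaxHeapSize=268435456
```

The server (or at least the agent manager task) needs a restart before the new ceiling takes effect.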

That being said, I still believe this is the way I’ll continue to write my Domino agents from this day forward. My main reason? I can write JUnit tests for them. A topic for another day…

Testing Overhead

I was enjoying a sandwich with my friend Lindsay the other day when he asked an age-old question:

“How much extra time do you think Test Driven Development adds to the build process of a software project?”

An excellent question, one I think about all the time. Once we established what he meant by build process, my answer was pretty easy:

“None”

As a test-driven developer, when was the last time you were burned by a NullPointerException? I haven’t seen one in my domains for a couple of years. You catch that stuff at build time through the effective use of testing: you know the state of your objects before they start interacting with other objects.

With a little research, I’d wager you could show that Test Driven Development actually saves time. The tests that drive my software were written by a customer. If my tests (and the subsequent software) satisfy what the customer has written, I’m done. Finished. I can remember thinking I was done solving a customer’s problem only to find later that I hadn’t fully comprehended it. When they write the tests and I code to them, we’re both aware of their expectations. That saves a tonne of time and, better than that, it frees up my time to solve more of their problems.
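The NullPointerException point above can be made concrete. A minimal sketch in plain Java (JUnit left out so the example stands alone; the Invoice class and its customer field are hypothetical): a constructor guard plus a test-style check catches the bad state when the tests run, long before a NullPointerException surfaces in production.

```java
// Hypothetical domain object: fails fast instead of letting a null
// leak out and blow up later as a NullPointerException.
public class Invoice {
    private final String customer;

    public Invoice(String customer) {
        if (customer == null) {
            throw new IllegalArgumentException("customer must not be null");
        }
        this.customer = customer;
    }

    public String getCustomer() {
        return customer;
    }

    // Stand-in for a JUnit test: verify the guard and the happy path.
    public static void main(String[] args) {
        boolean rejected = false;
        try {
            new Invoice(null);
        } catch (IllegalArgumentException expected) {
            rejected = true;
        }
        boolean happy = "ACME".equals(new Invoice("ACME").getCustomer());
        System.out.println(rejected && happy ? "tests pass" : "tests fail");
    }
}
```

Once a guard like this is in place, the tests tell you the state of the object before it ever meets a collaborator.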

If you’re new to TDD and looking for a good book to take you through the preliminary steps, try Kent Beck’s Test Driven Development: By Example.

Business Process Execution Language

Like a courting young man, I stumbled into the beauty of BPEL yesterday. The promise of a cost-effective, standards-based integration technology was too good to resist.

The adventure started through the installation wizard of IBM’s WebSphere Integration Developer. Hard stop. WID cannot be installed on the same machine as Rational Application Developer, my primary development environment at TransAlta.

“BPEL is just a standard, the tooling shouldn’t matter”, was my next thought. Off to Oracle I went.

I read Oracle’s two-minute tutorial on their Eclipse-based BPEL Designer.

So, 30 minutes and zero Oracle experience yielded the following:
- A Hello World BPEL
- A deployed BPEL that reads my existing SAP Integrated Web Services (big step)

Well done, Oracle; I’m impressed. IBM could learn a thing or two from you about how to introduce a developer to new technologies.

Proof of Concept – Paint the Eaves before the Doors

I was recently in a meeting with a client of mine and one of their most respected vendors. The vendor was proposing a “Proof of Concept” exercise, implementing a bare-bones version of their newest software.

The client proposed real-world issues that the final version of the software needed to address. The vendor baulked at including them in the PoC*, concerned about the PoC failing. Through a long series of conversations, the vendor agreed to add limited portions of these requirements, insisting on a vast array of disclaimers in case it didn’t work.

I’m not trying to pick on the vendor; they were doing what we all do in these situations: trying to impress the client while driving as much functionality as possible into the three-month PoC. Martin Fowler touched on this issue in his blog the other day.

I agree with him and, in this case, would have liked to see the vendor approach the client by proving out the technical limitations first. I don’t see that happening, and I anticipate a PoC that fails to live up to the client’s expectations.

I owned a painting franchise in university. I often told painters to avoid painting the easy stuff first. Leave the ground-floor doors and basement windows until the end; get up on the high ladders and paint the eaves first. By tackling the toughest task first, you get an accurate indication of how the rest of the project will go.

I think back to those projects all the time and marvel at how painting a house draws such significant parallels with building software…


*PoC – Proof of Concept

Mocking a Web Service

I like the idea of Mock Objects, I like what they stand for and the development speed they enable. I typically write my own and that makes me feel a little like someone at the edge of the room. When I get the time to investigate the Mock frameworks available, I’m sure I’ll think of this post and wonder why I was making life more difficult.

The issue is mocking an object you receive from a web service. You don’t control the object, so you can’t expect it to implement a desired interface (as I would normally do). You find yourself writing tests with real, hot-wired objects, which is wrong in more ways than I care to get into on a Friday morning.

My first thought was extracting an interface from the type generated by the Web Service. I could then create a mocked type in my test package and implement that interface. I gave myself a little pat on the back and went about my business writing tests. I wrote tests for a day in my happy, ignorant little mock world, disconnected from the Service and speeding through my requirements.

I’m lucky enough to sit around the corner from the fellow writing these Services. He stopped by my desk to ask whether his changes to the Service object had caused any problems in my project. Pride gathering beneath the surface, I was about to outline my mocking technique when an anxiety-filled, “did I leave the iron on?” type of thought came roaring to mind.

“I’m not sure, Jeff; let me get back to you on that one.”

Of course it had an impact.

My interface was based on what the type looked like on the day I extracted it. The type had changed, but it held no true relationship with my interface (I couldn’t have the wizard make the Service object implement the new interface; the change would be lost the moment I regenerated the client). I was testing an irrelevant piece of software… The Titanic was sinking and I was checking the sound system in the Grand Ballroom…

So here’s what I did.

I deleted the interface and had my mock extend the Service object itself. When the client is regenerated, any changes to the Service object’s structure will cause my mock to break; I get notified right away (when I try running the tests).

It’s not the cleanest solution to this particular problem but it gets around that isolated feeling of ignorance I was starting to sense. It allows me to write software against something that looks and feels like the service object without being connected to the Service itself.
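A minimal sketch of the arrangement, with hypothetical names: CustomerService stands in for the wizard-generated service stub and CustomerRecord for its generated transfer object; in a real project both would come out of the WSDL-to-Java generator. The mock extends the generated stub directly, so when the regenerated stub changes shape, tests exercising the mock break right away instead of silently testing a stale interface.

```java
// Stand-ins for wizard-generated classes (hypothetical names).
class CustomerRecord {
    private String name;
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

class CustomerService {
    public CustomerRecord findCustomer(String id) {
        // The real stub would call the live service over the wire here.
        throw new IllegalStateException("not reachable in tests");
    }
}

// The mock extends the generated stub itself. If regeneration changes
// findCustomer's signature, callers going through the CustomerService
// type fall into the real (throwing) method and the tests fail loudly.
class MockCustomerService extends CustomerService {
    public CustomerRecord findCustomer(String id) {
        CustomerRecord canned = new CustomerRecord();
        canned.setName("Test Customer " + id);
        return canned;
    }
}

public class MockDemo {
    public static void main(String[] args) {
        CustomerService service = new MockCustomerService();
        System.out.println(service.findCustomer("42").getName());
    }
}
```

Tests talk to the mock through the generated type, which is what keeps the mock honest about the stub’s current shape.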

Agile SOA

Experience on Agile projects tells me to delay making those “what if the business will one day need (insert some piece of data) from us” decisions. You build what you need today and build it in a way that’s open and easy to change in the future. How does that translate into something as public as an Enterprise Web Service? You’re publishing a defined object at that point, something for other business units within an organization to consume. A change (adding that future piece of data) forces a change in the serializable object being used by your existing applications. You are now refactoring working applications (most of them outside your control) for the sake of upgrading a public object. Bad news.

Is this what BPEL is supposed to handle?

Off to get some answers…

The Perfect Project

While dodging cars and fighting the wind (I ride my bike to work), I started thinking about what components make for the perfect software development project.

Valuable Business Problem
Business problems fall into two categories: those worth solving and everything else. What makes a problem worth solving? One of two things happens when the system goes live: money is saved or money is made. One of these should present itself within the first quarter after the code goes live. Asking a client to wait longer than that means the project was likely too large in the first place and should have been reworked. A perfect Agile project would have the business measuring the money saved/made from one iteration by the time the next iteration is lighting up.

Easy Access to the Business Experts
I think this one is more common than people think. If you’re about to solve someone’s problem, they want to help. Now, they’re never keen to help you solve a problem not worth solving (see above). A legitimate problem? Every time.

An Agile project places different demands on the Business Expert; they are asked to behave differently. I have never seen a Business Expert shy away from this. You are asking them to help you help them. Maybe this post will lead to me meeting a Business Expert from hell, keen on ripping my head off for not knowing how Credit Liquidity or Risk Management works…

Open, Dynamic Team Members
My current project fits this in a lot of ways. Lots of people wearing lots of hats. Business Experts reading API docs and writing tests, Developers meeting with management to discuss ROI — an absolute pleasure. Everyone has interests outside the project: another key. Work is something you do from Monday to Friday, morning to late afternoon. I’m much happier discussing various hikes or how my Leafs did last night than how some developer worked all night on a problem they could have fixed had they discussed it with the other team members. Leave your ego at the door, we all make mistakes, we all miss stuff. When you stop thinking people expect you to be perfect, you’ve taken your first step in perfect’s direction.

Flexible Approach
If the requirement is moving people from Canada’s east coast to west coast and the year is 1871, yeah, not so flexible. You’re building a railroad and that’s that.

It’s 2005. Countless tools exist to solve the same problem. Try one. If it’s going to work, use it. When it stops working for you, change. If change means going back over work that’s been done and refactoring it, do it. Changing your code doesn’t mean it was wrong the first time through; it just means you’ve found a better way of working the problem.