Presentations

Paul Glen wrote a funny little list of things not to do when giving a presentation. While most of his points are fairly obvious (and correct), I don’t agree with all of them. I know it’s not a good idea to stray from the topic, but he contradicts himself a little: you should stay on topic, yet Paul also says you should gauge your audience to see if you’re hitting the mark. In my opinion (and experience), you need to improvise from time to time.

It’s also interesting to note his point about keeping opinions out of a presentation, especially when they’re presented as facts. Mr. Glen goes on to say an audience member gives a presenter 30 seconds before deciding whether the presentation is worth their attention (a hard thing to prove, and one that sounds a little like an opinion to me…). He was trying to emphasize the importance of a strong opening. If you saw Eric Evans present at XPAU in 2004, you’ll probably agree how inaccurate Mr. Glen’s assessment is. Eric opened quietly and meandered for a few minutes before finding his stride. I was happy to have the chance to meet Eric afterwards (while he signed my copy of Domain Driven Design) and let him know how incredible I found his talk. There wasn’t a strong opening and, if memory serves, not a very ‘strong’ closing either. Just a talk full of really, really interesting Domain-related experiences and findings.

As with everything, take the list with a grain of salt. It’s still nice to see folks in the industry pointing out the importance of a good presentation. I, for one, love delivering presentations on the things near and dear to me.

Refactoring

My wife and I took the kids out to Nipika for a four day weekend. It’s a wonderful place, I’d highly recommend it to anyone looking to get away from the hustle and bustle of city life for a while.

The last thing I did before leaving was export a .war file for release to my client’s Test Environment. I’m starting to learn the last thing you do prior to holidays is rarely the thing that causes the biggest upset while you’re gone. The second last thing you do prior to holidays…now that’s a different story.

As I prepared to export the .war and send it through the deployment process, I noticed something. I noticed a package within the code base with the following name: “org.transalta.creditservices.managedbeantests”.

I like to maintain two projects for every stream of production code: one for the code and one for the tests. That should explain why a package with ‘managedbeantests’ in its name seemed out of place in my deployment release. It’s not uncommon; it usually means I forgot to change the package name at the time of the class’s creation. Since I typically create a class from within the context of my ‘Test’ project, it stands to reason how this happened.

No big deal.

So just before exporting, I refactor the package by changing its name to match the other managedbeans, “org.transalta.creditservices.managedbeans”…done. Export .war, send to Tommy and it’s a 4 day weekend.

I return Monday morning to find a note from Tommy, the deployment administrator. To summarize the content of the note, “Jamie, your deployment didn’t work”.

What?!?! Impossible. It worked when I left, I remember deploying it to my development environment right before packaging it up and sending it to Tommy, he must be mistaken. A thread of emails later, he’s right. It’s not working in either Project Dev or Test. I fire up my development environment to prove that I’m not nuts, that it worked just before I left.

It didn’t work.

Again, what?!?!?

I spend 5 minutes thinking about the deployment and what may have gone wrong. I focus on the last thing I did before leaving (packaging the .war). A quick look at the code and I find an empty package, “org.transalta.creditservices.managedbeantests”. Ahhhhh…a clue.
I moved the class but didn’t delete the package (lucky much?). That was enough to trigger my memory, and a path to what may have happened lay ahead… The class I moved is a JSF managed bean. These are beans used within JSF and registered with the JSF context, which means, you guessed it, an explicit reference to the bean’s fully qualified class name within the faces-config.xml file (bloody XML!?!?!).
Oh, and by the way, nice exception logging, JSF: “cannot instantiate ReviewLogManagedBean” (not even the bean in question, just the first bean referenced in the faces-config.xml file). Almost as useful as Portal’s ‘AssertionFailed’ exception in the log…thanks, guys.

I pop open faces-config.xml and there it is, a reference to the old package location. How foolish could I have been? I changed the reference to the new location, ran it in Dev (it worked), deployed it through Tommy (it worked) and informed the customer (it worked).
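For anyone who hasn’t been burned by this, a managed bean is registered in faces-config.xml by its fully qualified class name, so the stale entry would have looked something like this (a hypothetical sketch; the bean name and scope are illustrative):

    <managed-bean>
      <managed-bean-name>reviewLogManagedBean</managed-bean-name>
      <!-- Stale reference: the class now lives in ...managedbeans,
           but the entry still points at ...managedbeantests -->
      <managed-bean-class>org.transalta.creditservices.managedbeantests.ReviewLogManagedBean</managed-bean-class>
      <managed-bean-scope>request</managed-bean-scope>
    </managed-bean>

Eclipse’s rename refactoring happily updated every Java reference and never looked inside this file.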

So rather than learn a lesson over and over, I thought I’d jot down my lessons learned on this little adventure.

1) Never make a change between final test and deployment, no matter how trivial you think it is. Make the change, rerun your local tests and deploy (Comp. Sci 101, I know, I know)

2) Refactoring tools within Eclipse 3.0.2 are good, just not good enough to know about references within an XML file. Eclipse 3.0.2 is old, I know, but it’s what IBM uses for Rational Application Developer (the tool of choice here).

3) When a container driven exception is thrown and you’re working with JSF, take a look at the faces-config.xml file. It’s worth a shot and really, that’s the heart of the framework.

4) Never make a change between final test and deployment, no matter how trivial you think it is. Make the change, rerun your local tests and deploy (Comp. Sci 101, I know, I know) [worth a second mention]

5) Don’t release anything within 2 hours of leaving on vacation, and if you must, leave yourself a note.

6) Don’t fry bacon in a cast iron frying pan over a bbq in the mountains if you don’t want to smell like bacon for the rest of the day.

A Project of Problems

My friend and colleague Iwona and I were talking this afternoon about our current project. It’s a complex integration exercise, joining 4 very different systems while surfacing a new interface through Portal. Iwona is an excellent team member constructing a data model that is both durable and dynamic.

The conversation came around to how we’ve been tackling the problems as they come. Her preference would have been more upfront analysis and less refactoring. It came out that she believes we’ve been ‘designing the system to solve the next problem’. I bristled at the suggestion…but only for a moment (and the moment’s gone).

I thought about it and realized that’s exactly what we’re doing, and I don’t think I’d have it any other way. I’m not saying upfront analysis is a bad thing. Lightweight design meetings are a great way to start a project. They introduce everyone to the Domain and get lots of foundation knowledge on the table. I love that part of a project. I love getting to know a new Domain, forming a common Domain language and figuring out where the wins are. Conversations beyond that start returning diminishing value.

Under a certain light, projects can be viewed as nothing more than a series of problems needing to be solved. Isn’t that why we got into this in the first place, to solve problems? I know that’s where the draw was on my 1983 Apple II clone: solving the riddles that made the bloody thing work (I was 12 and the machine rarely worked without some coaxing…).

You often hear team members describe a pre-production problem as “Murphy’s Law strikes again” and yeah, I guess that’s one way of looking at it. To me, it means the problem wasn’t ready to appear until then. I know that sounds a bit too, I don’t know, abstract or philosophical to have any meaning. What I mean to say is that some problems are revealed only through the solving of others. You’ll never solve them all; you just need to solve enough to provide value to those who feel the problem’s pain the hardest.

So whether you spend the time designing the holes in which the fence post will be placed or drawing a rough idea and putting the shovel into the ground…there are rocks, they weigh a lot and it feels amazing once they’re out.

Specifications

I was having lunch with Jeff and Ian at Café Mauro when the subject of JSR 168 portlets came up. It seems the architectural strategy at Jeff’s current client is to build all portlets in strict accordance with the specification (JSR 168).

A blanket statement that all pieces of software are to be written as absolute derivatives of a specification makes no sense to me. I see it as architectural idealism: a pattern that hinders the movement to build better software. And here’s why.

Specifications have authors and guess where those authors work? They work for the large software vendors of the world. They work at SAP, they work at IBM, they work at Oracle…you get the idea. They co-author these specifications because they want to standardize ideas or approaches to building products. Once the specification is published, they release their implementation of the specification and customers buy it. It makes sense. They add functionality to the published version; some companies do a really good job, some fail miserably. That’s their problem (and their customer’s I suppose).

Let’s use Portal as an example. Here are three reasons why I will never insist my team build only JSR 168 portlets:

  1. It prevents them from leveraging what they own – they paid for a swack of work done by the diligent developers in their vendor’s labs…they should probably use it.
  2. It slows the software development cycle – spending my time hand crafting something that has already been built means I’m not hand crafting the stuff that hasn’t been built.
  3. It severs their support channel – “Ummm, IBM, hi, it’s Jamie McIlroy calling. Yeah, I’m having some trouble with my real time awareness application within WebSphere…what’s that? Sametime? No, no…we wrote our own to comply with JSR 168…ummm, hello?!”

I can’t wait to sit in a meeting and hear someone tell a client they can’t have functionality they need because: a. it’s not part of a rather generic specification, or b. it would take too long to rebuild it to comply with the specification. Yeah, I’d love to support my colleague on that one… While I’m at it, I should probably ask the client to stop taking their medication (drug companies are corrupt) or writing to their mother. It’s not my place.

WebSphere Portal and its proprietary portlet API (WPI) have loads of good things we’re delivering to the business. Sametime awareness (not in the specification — never will be) and Click2Action (C2A, not portlet messaging: not in the specification — never will be) are two examples of helpful function points I have no problem getting out to the business.

This idealism is like asking your home builder not to use any manufactured parts in your new house: asking them to mill every bannister spindle, baseboard and door jamb. While the idea may seem appealing, imagine what it would cost. Imagine how little you’d care about those hand crafted baseboards once you moved in. I imagine telling my wife how the obscene budget overruns are attributed to the hand blown glass windows I had the builder make. Your boss at work is like my boss at home: they don’t care where it comes from and they don’t want any surprises. They care that it works and that you did your homework before deciding to use it.


*This assumes I’m working for a customer and that customer isn’t in the business of writing JSR168 portlets.

Remembering Memory Management with CORBA

Java is a wonderful language, especially when compared to a container native language like LotusScript. There are many more options, frameworks and design patterns to play with, and the tools for writing Java agents make the days of writing LotusScript agents seem like forever ago. Have I discovered the silver bullet of the Domino world? Is this my Valhalla?

No.

Not by a long shot. Remember memory management? Yeah, it’s back. I tried writing a simple agent that processes a Domino view containing a little more than 20,000 records. The agent uses the POI framework to convert the view into an Excel workbook on a nightly basis. Here’s the thing: Domino’s default Java heap size is 64MB, and my agent was running out of memory. A simple increase in heap size got me around the issue, but it raised the concern that memory management is something I now need to keep an eye on when writing these Java agents. I know, I know, every developer should always be thinking about memory management, but Domino’s LotusScript lulled me away from the concern.
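For the curious, the agent looks roughly like this (a minimal sketch, assuming a view named “Records” and a single “Subject” column; both are illustrative). The recycle() calls are the memory-management part: they release the backend Notes objects each Java wrapper holds, which otherwise pile up as you walk a 20,000-row view:

    import lotus.domino.*;
    import org.apache.poi.hssf.usermodel.*;
    import java.io.FileOutputStream;

    public class JavaAgent extends AgentBase {
        public void NotesMain() {
            try {
                Session session = getSession();
                AgentContext context = session.getAgentContext();
                View view = context.getCurrentDatabase().getView("Records");

                HSSFWorkbook workbook = new HSSFWorkbook();
                HSSFSheet sheet = workbook.createSheet("Records");

                short rowIndex = 0;
                Document doc = view.getFirstDocument();
                while (doc != null) {
                    HSSFRow row = sheet.createRow(rowIndex++);
                    row.createCell((short) 0).setCellValue(doc.getItemValueString("Subject"));

                    // Fetch the next document before recycling the current one;
                    // recycle() frees the backend memory the Java wrapper holds.
                    Document next = view.getNextDocument(doc);
                    doc.recycle();
                    doc = next;
                }

                FileOutputStream out = new FileOutputStream("records.xls");
                workbook.write(out);
                out.close();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }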

That being said, I still believe this is how I’ll write my Domino agents from this day forward. My main reason? I can write JUnit tests for them. A topic for another day…

Testing Overhead

I was enjoying a sandwich with my friend Lindsay the other day when he asked an age-old question:

“How much extra time do you think Test Driven Development adds to the build process of a software project?”

An excellent question, one I think about all the time. Once we established what he meant by build process, my answer was pretty easy:

“None”

As a test-driven developer, when was the last time you were burned by a NullPointerException? I haven’t seen one in my domains for a couple of years. You catch that stuff at build time through the effective use of tests. You know the state of your objects before they start interacting with other objects.
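The kind of test I mean is tiny and cheap (a hypothetical JUnit 3-style sketch; ReviewLog and its methods are invented for illustration): it pins down an object’s state before anything else gets to touch it.

    import junit.framework.TestCase;

    public class ReviewLogTest extends TestCase {

        public void testNewReviewLogStartsEmpty() {
            ReviewLog log = new ReviewLog();
            // The entries collection is created in the constructor and never
            // left null, so callers can't trip over a NullPointerException.
            assertNotNull(log.getEntries());
            assertEquals(0, log.getEntries().size());
        }
    }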

With some research, I’d wager that Test Driven Development actually saves time. The tests that drive my software were written by a customer. If my tests (and the software they exercise) satisfy what the customer has written, I’m done. Finished. I can remember thinking I was done solving a customer’s problem only to find later that I hadn’t fully understood it. When they write the tests and I code to them, we’re both aware of their expectations. That saves a tonne of time and, better still, it frees up my time to solve more of their problems.

If you’re new to TDD and looking for a good book to take you through the preliminary steps, try Kent Beck’s Test Driven Development: By Example.

Business Process Execution Language

Like a courting young man, I stumbled into the beauty of BPEL yesterday. The promise of a cost effective, standards based integration technology was too good to resist.

The adventure started through the installation wizard of IBM’s WebSphere Integration Developer. Hard stop. WID cannot be installed on the same machine as Rational Application Developer, my primary development environment at TransAlta.

“BPEL is just a standard, the tooling shouldn’t matter”, was my next thought. Off to Oracle I went.

I read Oracle’s 2-minute tutorial on their Eclipse-based BPEL Designer.

So, 30 minutes and zero Oracle experience yielded the following:

1. A Hello World BPEL process
2. A deployed BPEL process that reads my existing SAP integrated Web Services (big step)
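For anyone who hasn’t seen one, a hello-world process really is that small. Here’s a rough sketch of a BPEL4WS 1.1 process (the names are illustrative, and the partnerLink and variable declarations are trimmed for brevity):

    <process name="HelloWorld"
             targetNamespace="http://example.com/bpel/hello"
             xmlns="http://schemas.xmlsoap.org/ws/2003/03/business-process/">
      <!-- partnerLinks and variables declarations trimmed -->
      <sequence>
        <!-- Wait for a client request and start a new process instance -->
        <receive partnerLink="client" portType="tns:HelloWorld"
                 operation="process" variable="input" createInstance="yes"/>
        <!-- Copy the request payload straight into the response -->
        <assign>
          <copy>
            <from variable="input" part="payload"/>
            <to variable="output" part="payload"/>
          </copy>
        </assign>
        <!-- Send the response back to the caller -->
        <reply partnerLink="client" portType="tns:HelloWorld"
               operation="process" variable="output"/>
      </sequence>
    </process>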

Well done Oracle, I’m impressed. IBM could learn a thing or two from you about how to introduce a developer to their new technologies.

Proof of Concept – Paint the Eaves before the Doors

I was recently in a meeting with a client of mine and one of their most respected vendors. The vendor was proposing a “Proof of Concept” exercise, implementing a bare-bones version of their newest software.

The client proposed real-world issues that the final version of the software needed to address. The vendor baulked at including them in the PoC*, concerned the PoC would fail. Through a long series of conversations, the vendor agreed to add limited portions of these requirements. They insisted on a vast array of disclaimers in case it didn’t work.

I’m not trying to pick on the vendor; they were doing what we all do in these situations. They wanted to impress the client and drive as much functionality into the three month PoC as possible. Martin Fowler touched on this issue in his blog the other day.

I agree with him and, in this case, would have liked to see the vendor approach the client by proving out the technical limitations first. I don’t see that happening, and I anticipate a PoC that fails to live up to the client’s expectations.

I owned a painting franchise in University. I often told painters to avoid painting the easy stuff first. Leave the ground floor doors and basement windows until the end. Get up on the high ladders and paint the eaves first. By tackling the toughest task first, you have an accurate indication of how the rest of the project will go.

I think back to those projects all the time and marvel at how painting a house draws such significant parallels with building software…


*PoC – Proof of Concept

Mocking a Web Service

I like the idea of Mock Objects; I like what they stand for and the development speed they enable. I typically write my own, and that makes me feel a little like someone at the edge of the room. When I get the time to investigate the Mock frameworks available, I’m sure I’ll think of this post and wonder why I was making life more difficult.

The issue is mocking an object you’re expecting from a web service. You don’t have control of the type, so you can’t expect it to implement a desired interface (as I would normally do). You find yourself writing tests with real, hot wired objects — and that’s wrong in more ways than I care to get into on a Friday morning.

My first thought was extracting an interface from the type generated by the Web Service client. I could then create a mocked type in my test package and implement that interface. I gave myself a little pat on the back and went about my business writing some tests. I wrote tests for a day in my happy, ignorant little mock world, disconnected from the Service and speeding through my requirements.

I’m lucky enough to sit around the corner from the fellow writing these Services. He stopped by my desk to ask whether his changes to the Service object had caused any problems in my project. Pride gathering beneath the surface, I was about to outline my mocking technique when an anxiety-filled, “did I leave the iron on?” type of thought came roaring into my mind.

“I’m not sure, Jeff, let me get back to you on that one”.

Of course it had an impact.

My interface was based on what the type looked like the day I extracted it. The type changed without holding any true relationship to my interface (I couldn’t have the wizard make the Service object implement the interface; it would break the moment I regenerated the client). I was testing an irrelevant piece of software… The Titanic was sinking and I was checking the sound system in the Grand Ballroom…

So here’s what I did.

I deleted the interface and had my mock extend the Service object itself. When the client is regenerated, any change to the Service object’s structure will break my mock and I get notified right away (when I try running the tests).
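In code, the idea looks something like this (a hypothetical sketch; CreditReview stands in for whatever type the WSDL-to-Java wizard generates):

    // CreditReview.java: generated by the web service tooling;
    // regenerated (and possibly reshaped) whenever the WSDL changes.
    public class CreditReview {
        private String status;

        public String getStatus() { return status; }
        public void setStatus(String status) { this.status = status; }
    }

    // MockCreditReview.java: lives in the test project. Because it extends
    // the generated type, a structural change to CreditReview breaks this
    // class (and the tests) the moment the client is regenerated.
    public class MockCreditReview extends CreditReview {
        public MockCreditReview() {
            setStatus("APPROVED"); // canned value; no live service call needed
        }
    }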

It’s not the cleanest solution to this particular problem but it gets around that isolated feeling of ignorance I was starting to sense. It allows me to write software against something that looks and feels like the service object without being connected to the Service itself.

Agile SOA

Experience on Agile projects tells me to delay making those “what if the business will one day need (insert some piece of data) from us” decisions. You build what you need today and do it in a way that’s open and easy to change in the future. How does that translate into something as public as an Enterprise Web Service? At that point you’re publishing a defined object, something for other business units within an organization to consume. A change (adding that future piece of data) forces a change in the serializable object being used by your existing applications. You are now refactoring working applications (most of them outside your control) for the sake of upgrading a public object. Bad news.
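To make that concrete, here’s a hypothetical sketch (CustomerSummary and its fields are invented for illustration). The published type is exactly the thing you can’t quietly change:

    // Published as part of the web service contract: every consumer's
    // generated client is built against this exact shape.
    public class CustomerSummary implements java.io.Serializable {
        private String customerId;
        private String name;

        // Adding the speculative "one day we'll need it" field below changes
        // the WSDL schema, forcing every consuming application to regenerate
        // its client code; most of those applications aren't yours to fix.
        // private java.math.BigDecimal creditLimit;

        public String getCustomerId() { return customerId; }
        public void setCustomerId(String customerId) { this.customerId = customerId; }
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }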

Is this what BPEL is supposed to handle?

Off to get some answers…