Tuesday, November 13, 2007

Coding Style is more important than you think!

Recently I read a post about the important qualities in a developer. One of the points that was made is that the developer needs to subscribe to the coding style of the project, even if they don't agree with it.

I thought, that's interesting, and it's not something that I'd want to do. I mean, I like my coding style, it works for me and it has advantages.

Upon further reflection, especially after looking at the deleterious effects of having different coding styles on a project, I now think that having and enforcing a uniform coding style is very important.

But before I continue, let me first explain what I mean by coding style.

Coding style goes past code formatting and what is commonly known as coding standards, though I think coding standards should include, where possible, coding styles as well.

Elements that fall under coding style include:
  • whether to have multiple return statements in a method
  • whether to always use an iterator when iterating over a random access collection (such as a List)
Coding style goes further than that, especially if you factor in more API-specific elements, e.g. how to initialise a collection in Hibernate.

The advantage of a uniform coding style is similar to the advantage you get from utilising patterns. With a uniform coding style, pieces of code become easily recognisable and familiar. Comprehensibility is significantly enhanced, largely because I see patterns recurring in the code, analogous to what occurs when I use patterns in design: I am familiar with the pattern, so I know what the code is doing at that point. Furthermore, if a lot of thought has gone into the coding style, it will probably enforce the most readable, most efficient, just plain best way of doing things. A naive example: when iterating over a list, always use an iterator. That is a good practice because it allows the collection implementation to change to a Set or some other collection without causing compile errors.
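To make that iterator guideline concrete, here is a minimal sketch; the class and method names are illustrative, not from any real project:

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.Iterator;
import java.util.List;

public class IterationStyle {

    // Sums string lengths using an explicit Iterator. Because it relies
    // only on the Collection interface, callers can pass a List, a Set,
    // or any other Collection without any change here.
    static int totalLength(Collection<String> words) {
        int total = 0;
        for (Iterator<String> it = words.iterator(); it.hasNext(); ) {
            total += it.next().length();
        }
        return total;
    }

    // Index-based iteration compiles only against List; switching the
    // underlying collection to a Set would break every call site.
    static int totalLengthIndexed(List<String> words) {
        int total = 0;
        for (int i = 0; i < words.size(); i++) {
            total += words.get(i).length();
        }
        return total;
    }

    public static void main(String[] args) {
        List<String> words = Arrays.asList("uniform", "style");
        System.out.println(totalLength(words));        // prints 12
        System.out.println(totalLengthIndexed(words)); // prints 12
    }
}
```

The first method is the style the rule encourages: the collection type can change behind the scenes and no iterating code needs to be touched.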

But then, is the win worth it? What if I do not like the coding style? One of the team leaders at my company legislated that on his project you cannot use "iterator" or "iter" as a variable name; you have to use a more descriptive name. I did not agree with that and probably would not have subscribed to the requirement had I worked on a project with him. That would have been a counterproductive approach, however. There is no technical reason for using "iter" or "iterator" as variable names, so for the sake of code uniformity I should have subscribed to his request. Rather have uniform code than keep every team member ecstatic about what they are doing.

Powered by ScribeFire.

Monday, October 08, 2007

The value of information

Fellow developers have jokingly labeled me a book addict, and I can see where they are coming from. I think if it wasn't for the negative connotations attached to being an "addict" of anything, I would accept the moniker.

Every three months or so the withdrawal sets in, because I haven't bought a book in three months! So I go trawling Amazon looking for my next purchase.

There are currently a total of 10 books on my shelf here on my desk. Looking at them I mentally total up how much money is sitting there, at an average of $30 a book, that makes for about R2500 in total!

But every cent is worth it.

So what about the contention that there is already enough information available for free on the net? People who say that are patently not aware of how different "pay per view" information is. It is just soooo much better, and for a number of reasons, the most obvious being that with information you buy, the better the quality, the more people will buy it. Since free information is, by definition, freely available, quality is not put at a premium, because the author does not stand to gain from producing quality information.

That's not to say it's all bad. Both the Spring online reference and the Hibernate online reference are very good. But as their names suggest, they are more of the reference type of document. Bought information, as long as it's pitched at the right level, is usually so much more comprehensive in its treatment of the subject matter that it's comprehensible even to newbies, unlike much free information.

Furthermore, because I have gained so much value from the books I've bought, I believe that when it comes to hiring decisions, people should take the candidate's attitude to books and the like very seriously.

Without the books I have on my shelf, I'd be nowhere near where I am today, and when I don't have a book on a particular subject, I feel there's a hole that must be filled.



The major advantage Open Source has...

Open source software has been around a long time, and it has traditionally played second fiddle to closed source software. The differentiation here is between software that is free, where you get the source and can make modifications to it, and software that is not free, which you have to buy and whose source you cannot access.

Because open source software is freely available and thus does not require money to acquire, what tends to happen in a development shop is that developers recognise a need and immediately go trawling around looking for an open source solution to meet that need. If they find one, even if it does not do all of what they want, they download it and use it. The key issue here is that there is no need for approval for the use of this kind of software, because it is free. Clearly it is fulfilling a need, otherwise it would not have been sought in the first place.

It empowers the developers to find and use the tools they require.

If a tool is closed source, then you always have to weigh the value it might add against the cost to purchase. It also needs to go through an approval process, which might be fairly onerous, and the final decision is often made by a non-developer: does it add enough value to warrant the cost? Personally, I think companies need to be a lot more eager to spend money on closed source tools, because they've probably already saved money by using open source tools, Eclipse and NetBeans for example. What is more, as someone here pointed out, getting $20 cleared is just as difficult as getting $1000 cleared. So the cost is really not the issue. It is the _fact_ that money is required.

So come on, open source: build us developers more tools so we can use them.


Monday, July 30, 2007

Hudson Unleashed...

It has been more than a month now since I started using Hudson, and I thought I'd give an update on the progress and on what I've found...

Let me say, to kick things off, that Hudson has handled everything I've thrown at it. There are now upwards of 50 projects being built by Hudson, and it shows no sign of falling over or being thrashed too much. Furthermore, the front end is still as responsive and quick as ever.

It has a particularly active development community, with the primary developer active in the forums as well as in working on the project. A new version is released about twice a week.

Hudson supports dependent builds; in other words, a build can be kicked off by the completion of another build. So what I've done is set up each one of our projects in Hudson. Hudson monitors the repository for changes to those projects and then builds them. If there are dependent projects, those projects are also built. Because of this I can make the polling interval and "wait time" very short. Developers know within a minute or two whether a check-in broke the build (compile errors). Sometimes, unfortunately, there is a false positive because of ordering issues, but that is acceptable.

Furthermore, because of this facility I can always have an EAR ready to be deployed. I get to re-use the result of one build in another build. The primary mechanism I used to do this was for the build to copy the generated jar to a shared location on the drive. It would be better to use a tool such as Ivy to publish the jar, because then I could leverage Hudson's ability to distribute builds to other machines (v. nice).

I've since discovered, however, that if I publish the jar with Hudson, then I can use Hudson to distribute the jars. This works because Hudson publishes the jar at a static URL, and I can fetch the jar via that URL. It's great - a lot lighter than bringing in Ivy.

Furthermore, something very funky is that you can fingerprint jars. If you tell Hudson to fingerprint jars, it can tell you which builds used those jars. You need to fingerprint the jar both in the source project and in the dependent project for this to work. The key issue here is that you do not have to rename the jar every time. So if there is a build error in a downstream build and you suspect it's caused by an upstream project, you can tell exactly which upstream build number the downstream build is using.

Hudson is just done well, and often it is in the small things that this comes through. One example is Hudson's ability to run arbitrary batch/shell commands. The command you specify is not simply dumped to the console and run; rather, it is written to a batch file, and then that batch file is run. A very nice little consideration, because with one sweep you allow multiple commands to be run! Very useful. I was able to create a Hudson task to restart JBoss, for instance.

It's a great application. If you haven't got a Continuous Integration server running in your environment, do yourself a favour and set one up; if you do, and it's not Hudson, do yourself a favour and set it up anyway!


Wednesday, June 20, 2007

Hudson is cool...

We were using Continuum in our environment, and eventually the instability just got too much for me. It would kick off a build, and then halfway through, the build would fail. It was very irritating. Which is why, when I stumbled on Hudson, I was very glad; glad also because it is very, very good. Occasionally an open source product comes along that is so good, so feature rich and that just works, that you can't believe it is open source.

Some highlights of its features:

  • Handled everything I threw at it, including simultaneous test and compile builds
  • Ubiquitous AJAX, responsive and intuitive interface.
  • Detailed rendering of the changes included in each build
  • Detailed “blame”. In fact it will send special emails to the people who actually checked in code before a broken build.
  • Funky test reporting and result tracking - tracks the change in test result state between builds. Also tracks how long a test has been failing for
  • JIRA plugin available (very funky).
  • Still more features which I’m not using.
Today I experimented with its distributed build mechanism, and went "Wow!".

Would like to hear from any fellow hudson users.

btw, if you want me to compare it with Continuum, I'll tell you, with all due respect, that Continuum is Notepad and Hudson is Eclipse/IntelliJ/NetBeans.

Monday, May 21, 2007

Hello World Spring

For those programmers who think a spring is only something that can contract and expand, you're missing out on a particularly useful, and more and more essential, programming framework.

It is the first time in my six-year programming life that I have used Spring on a project, and I must say, there is no going back. Its benefits are incalculable. Regardless of the size of the application, the Spring Framework adds enormous value.

What exactly is Spring then, or more accurately, what is the Spring Framework?

Spring is a framework which enables the efficient development and maintenance of high quality applications. It is an infrastructure which handles many of the everyday tasks that developers would otherwise have to do themselves. It has many facets, with the various components both tightly integrated and complementary. But the biggest win with "spring-core", as it is affectionately known, is that it gives the application "Inversion of Control", otherwise known as Dependency Injection.

Let's do this by example...

Consider the following piece of code...

public class BusinessObject {

    private WindowsMailService mailService = new WindowsMailService();

    public void sendMail(String recipientAddress, String message) {
        try {
            mailService.dispatchMail(recipientAddress, message);
        } catch (Exception e) {
            LOG.error("send failed", e);
        }
    }
}
In the above example, if you have read Robert Martin's brilliant "Agile Software Development", you'd already be pointing out that the code violates the Dependency Inversion Principle, which says that dependencies should go "down" and not "up". In this case, if I wished to use a different mechanism to send the mail, maybe a LinuxMailService, or even to change the mode of transport, it would cause a change to this class; i.e. the "higher" class would require a change if I changed the "lower" class. In other words, changing details, the nuts and bolts of the mail send mechanism, would cause a change in the higher level business logic of the application. This is something we want to avoid. By the way, it's also violating the Open-Closed Principle.

It would be a lot better to abstract the message dispatch mechanism behind an interface, and so remove the dependency between the business logic and the message dispatch mechanism.
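A sketch of what that refactoring might look like. The interface and setter names follow the discussion here; the logging is simplified to System.err so the sketch is self-contained:

```java
// The abstraction: business logic depends only on this interface.
interface IMessageDispatchService {
    void dispatchMail(String recipientAddress, String message);
}

// One concrete detail; swapping in a LinuxMailService (or anything else)
// requires no change to BusinessObject.
class WindowsMailService implements IMessageDispatchService {
    public void dispatchMail(String recipientAddress, String message) {
        // platform-specific sending would go here
        System.out.println("sent to " + recipientAddress);
    }
}

public class BusinessObject {

    private IMessageDispatchService mailService;

    // Setter injection: Spring (or a test) supplies the implementation.
    public void setMessageService(IMessageDispatchService mailService) {
        this.mailService = mailService;
    }

    public void sendMail(String recipientAddress, String message) {
        try {
            mailService.dispatchMail(recipientAddress, message);
        } catch (Exception e) {
            System.err.println("send failed: " + e);
        }
    }
}
```

The dependency now points at the interface, so the high-level class is closed against changes in the mail-sending details.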

You may now be asking: so where does Spring fit in? Well, even if we abstracted via an interface, we would still need to get at the service that sends the mail somehow. This is where Spring comes in. If I were to use Spring to construct the above infrastructure, I would first add a property to our business object which takes an IMessageDispatchService interface, and then create a file which defines the infrastructure.

This XML file is provided to a Spring definition loader, which loads the file and creates a context. The context creates the beans defined in the file, business.logic and service.mail, and makes them available via a getBean(String) method. It instantiates each bean as the class specified, with any properties specified. In the above example it would call the setMessageService method on the BusinessObject with an instance of the mail service class. Thus the creation of the infrastructure has been abstracted out into this file. Plugging in another message dispatch service would mean coding the service and then changing the XML definition file. No change to the calling context, or any of its code, is necessary.
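For illustration, the definition file described here might look something like the following sketch; the package names are hypothetical, and the DOCTYPE/schema declaration at the top of a real Spring file is trimmed:

```xml
<beans>
    <!-- the low-level detail: a concrete mail service -->
    <bean id="service.mail" class="com.example.mail.WindowsMailService"/>

    <!-- the high-level business logic, wired to the service by reference;
         Spring calls setMessageService() with the service.mail bean -->
    <bean id="business.logic" class="com.example.BusinessObject">
        <property name="messageService" ref="service.mail"/>
    </bean>
</beans>
```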

So to show you...

IBusinessObject business = (IBusinessObject) context.getBean("business.logic");

You've probably already noticed that changing the mechanism used to send the mail is as simple as changing the definition file. As long as the service.mail bean implements the IMessageDispatchService interface, it can point to any class.

So then to give you a whirlwind summary of the advantages from my own experience, here goes.

  • Abstraction of the details

    Spring lends itself to a system where the high level components do not depend on the low level components.

  • Management of Dependencies

    When you create a Spring-managed bean, you can easily specify the dependencies (i.e. other beans) which that bean depends on. Spring will then build those various dependencies for you, and you end up with a fully built-up bean. Those beans should also be managed by Spring. You don't have to code the plumbing.

  • Simplifying and Enabling a good design

    One of the problems I've often found when designing an application is working out the relationships between the objects and keeping those relationships clean. I've often found myself simply making a singleton out of my service objects, thereby making them available wherever they are required. I've often felt uneasy about this, because I allow any object to have access to any other object and the relationships are not explicit; because it is a singleton, it can be referenced anywhere without limitations. Using Spring allows those relationships to be controlled and explicitly defined (in an XML file), while at the same time providing a mechanism for easy access to the services. The Spring developers do not recommend that you do too many of the context.getBean calls illustrated above. This should only be done at the "root" of your infrastructure.

  • Flexibility

    Spring uses an XML file to define the infrastructure. For this reason, changing the infrastructure is as simple as changing an XML file, and Spring provides quite a few efficiencies in how you can change this XML file, via a property file for example. On our project, for instance, we can change from container-managed transactions to Hibernate-managed transactions simply by changing a value in a properties file.

I think the key point about Spring is that it is a framework: a framework for building an application. It's analogous to a building. Spring is the frame, while the application is the bricks. Because you already have a framework, adding AOP, for instance, is simple; the framework enables it. It does a lot of the "heavy lifting" of management and infrastructure, leaving the developer free to work on the interesting bits.

An Example of spring in action...

On our project we have JUnit tests which test at the service level, and these tests run locally (outside the container and not via web services). Now, because the service level is what is presented to the outside world, this is the level exposed as web services, and there is thus a one-to-one mapping between the service level and the WSDLs. For this reason I felt it should be possible to switch those JUnit tests to run against a remote application server, and I achieved this switch by adding a bit of Spring infrastructure definition; I only had to change one line in the test code!

Spring has become as essential to my day-to-day life as a programmer as an IDE. I would not consider doing without it in any application, no matter what the size.

When I came onto the project I had never used it, but then I had never been involved in developing an enterprise application. Now I cannot be without it.

Monday, March 05, 2007

Reflective Revolution...

It has been interesting to see the evolution of the use of reflection. I think, though I may be wrong, that Java was the first platform to fully support reflection. At that stage I'm not sure the developers of Java realised what it would become. The birth and subsequent development of a technology often follows this pattern: it starts off as a feature used in one or two rare situations; as it gains traction and support, and most importantly performance, it moves into more mainstream applications.

In recent times, probably the last five or six years, reflection has become central to many applications. One of the original uses of reflection was Java serialization. The poster boy of reflection is probably Struts: translating an HTTP form submission and populating the relevant Java form bean would not be possible without reflection.

Many more applications were found where reflection adds value. For instance, it significantly simplifies runtime-enacted AOP, and it is an essential component of the rich object-relational mapping tools available on the Java platform.

I can hear the reader asking about performance... let me acknowledge that, certainly, historically that has been a factor, but the Java brains trust have expended many resources improving the situation. Each edition of the Java runtime has seen significant improvements in the performance of reflection, such that in 1.6 the penalty is negligible. In 1.3 it was slow, in 1.4 it was adequate, 1.5 brought it up to reasonable pace and 1.6 has improved on this further. It is slower than direct method calls whichever VM is used, but the value it adds more than makes up for the performance penalty. The old adage applies: design first, and optimise later. Because the performance penalty introduced by reflection is constant, in reality it will probably not be the biggest culprit.
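If you want a feel for the overhead on your own VM rather than taking this on faith, a rough sketch like the following times direct calls against Method.invoke. It is not a proper benchmark (no warmup, JIT effects vary), and the class and method names are illustrative:

```java
import java.lang.reflect.Method;

public class ReflectionOverhead {

    public static int square(int x) { return x * x; }

    // Calls square() via reflection; checked reflective exceptions are
    // wrapped so callers don't have to handle them.
    public static int squareViaReflection(int x) {
        try {
            Method m = ReflectionOverhead.class.getMethod("square", int.class);
            return (Integer) m.invoke(null, x);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    // Times 'iterations' direct calls against the same number of
    // reflective calls, returning { directNanos, reflectiveNanos }.
    public static long[] time(int iterations) {
        try {
            Method m = ReflectionOverhead.class.getMethod("square", int.class);
            long sink = 0;

            long t0 = System.nanoTime();
            for (int i = 0; i < iterations; i++) sink += square(i);
            long direct = System.nanoTime() - t0;

            t0 = System.nanoTime();
            for (int i = 0; i < iterations; i++) sink += (Integer) m.invoke(null, i);
            long reflective = System.nanoTime() - t0;

            if (sink == -1) System.out.println(sink); // defeat dead-code elimination
            return new long[] { direct, reflective };
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }
}
```

The reflective loop will be slower, but on a modern VM both numbers stay small enough that, as argued above, reflection is rarely the bottleneck.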

Reflection probably has an even greater contribution to make to modern programming. It is probably fair to say that developers do not often consider using it. Performance probably plays a role here, but so does a loss of explicitness: the code is not as self-explanatory, and errors may only appear at runtime, for instance. But there are applications where this loss of explicitness is not as much of a factor.

For instance, on our project we require the ability to copy between contexts, from DTOs to model objects, such as when updating a data-managed object from a DTO. Reflection is a useful tool in this case. Another useful, non-invasive use of reflection is comparing two objects for testing purposes. There are a number of factors to look at when considering reflection...

1. Does the solution to the problem require much repetitive "dog work" code?
Copying values from one object to another is the bread and butter of reflection. The Commons BeanUtils library is a useful addition to a developer's arsenal in this respect.
2. Will the reflective element be executed many times?
One of reflection's biggest value adds is that it can replace hand-written functionality and modularise it. If you can solve a task with reflection, you reduce the amount of hand-written code required and thus increase quality. The more hand-written code there is, the more bugs there will be.
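To make the "dog work" point concrete, here is a minimal sketch of the kind of reflective property copy that libraries like Commons BeanUtils provide in a far more robust form. The Dto and Model classes are purely illustrative:

```java
import java.lang.reflect.Method;

public class PropertyCopier {

    // Copies matching bean properties (getX -> setX) from source to target
    // via reflection; properties missing on the target are skipped.
    public static void copy(Object source, Object target) {
        for (Method getter : source.getClass().getMethods()) {
            String name = getter.getName();
            if (!name.startsWith("get")
                    || getter.getParameterTypes().length != 0
                    || name.equals("getClass")) {
                continue;
            }
            try {
                Method setter = target.getClass().getMethod(
                        "set" + name.substring(3), getter.getReturnType());
                setter.invoke(target, getter.invoke(source));
            } catch (NoSuchMethodException ignored) {
                // no matching setter on the target; skip this property
            } catch (ReflectiveOperationException e) {
                throw new RuntimeException(e);
            }
        }
    }
}

// Two illustrative bean classes standing in for a DTO and a model object.
class Dto {
    private String name;
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

class Model {
    private String name;
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}
```

One generic method like this replaces a hand-written copy method per DTO/model pair, which is exactly the hand-written code the second factor above says breeds bugs.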

What is interesting about the current situation in the project is that speed of development has become the driving force behind everything. We have got to speed up the process. Reflection is a particularly useful arrow in the quiver in this regard.

Tuesday, January 30, 2007

The story so far...

Apologies to those who have been watching this space and seeing no updates. In my defence I have been particularly busy...

So I have some catching up to do.

I have a number of things on my mind, I think I'll start with a little bit of a report back on the goings on on the project for the last few months.

When we started on the project, a WebSphere/RAD-based enterprise development with web services and session beans, we eschewed the default RAD/WAS approach of hand coding the WSDLs. They were deemed too complex, and to be honest, there is some truth in that. However, as an esteemed fellow developer pointed out ("web services duplication language"), there is a lot of duplication in a WSDL, so once the basic WSDL has been coded it is merely a case of copy and paste. Furthermore, if you follow the best practice of having one request object which encapsulates all the parameters, then once the methods have been set up (in the WSDLs) and the XSDs are outside of that file, changing the parameters of a method does not require editing the WSDL. In fact it is very seldom that I find myself editing a WSDL file.

Thus the first round was via a simplified WSDL where the XML containing the parameters was passed as a string. This string contained the method to be called as well as the parameters of the method. This approach was quite convenient because it meant only one EJB session bean and one web service (WSDL file); adding functionality at this level did not necessitate a new EJB/WSDL. Quite nice, but it is not particularly explicit (the outside world does not know what is available), and furthermore it was an off-standard approach.

Thus effort was made to break the back of the IBM WebSphere WSDL generation and web service "topology". It wasn't as difficult as initially thought. There were a number of teething problems brought on by the fact that the calling context was dotnet and thus had different requirements. Time was spent refining the process, and today we have automated object generation from the WSDLs. What is ironic, and should not be surprising, is that the whole WSDL-to-Java generation sub-system has nothing to do with RAD; it only has dependencies on WAS. The generation and WSDL tooling of RAD is, for want of a better word, crap.

This approach has worked well for us. We have Ant scripts to generate the DTOs from the WSDL, and it is nicely integrated into Eclipse (notice the emphasis, even though we're actually using RAD).

All very well, but the whole time RAD was holding us back in terms of productivity. The pressure from the business in terms of time frames increases all the time, and we found we were losing too much time to the idiosyncrasies of RAD.

To give you an example, my RAD installation can no longer refactor! You might ask: what? Yes, when I try to refactor, it throws an exception, some yarn about JSF! Wt_? You might ask: what have you done? And I would answer by challenging you to tell me what I could possibly have done. Put it this way: I did not delve into the class files of RAD and change them specifically to make it stop refactoring (there are about 6 gigs of RAD on my hard drive), so what else could I have done to make it stop refactoring? The second question I would ask is: how can software possibly let you disable some of its functionality in this way? It's like driving a car: when it suddenly breaks down, you don't ask the driver what they did to cause the breakdown. It is the wrong question. The very fact that I can cause my IDE to break down in some way is ridiculous.

So this week we embarked on a mission to use JBoss in development. We will still be deploying onto WAS (our build server), which the front end people will use to test against, but DEV will be on Eclipse/JBoss. This transition has gone better than I think you'd expect. JBoss is a quality product.

It is hundreds of times faster, more efficient, more lightweight - not that it can't mix it with the "big boys". It is not the monkey to WAS's elephant; it is in fact as nimble as the monkey and as strong as the elephant.

There have been minor differences between WAS and JBoss; this is to be expected. We had a minor issue today with JAX-RPC mapping differences. I do not foresee a show stopper in this endeavour.

That is an update from my side in terms of what's been going on. See my other post for some programming-specific ideas.

The role of the wiki in the dev team...

When I arrived on my current project, one of the first statements made was that we needed a wiki. I hadn't used one in the past, so I was a little intrigued by the concept and what it would mean. I felt wikis were the flavour of the week, but since they seemed likely to add value, a wiki was set up.

It quickly became apparent that a wiki is an essential part of the efficient operation of any software dev team and I think for a number of reasons.

The wiki plays an important role in the collection, storage, categorisation and dissemination of information (mostly technical) related to the project. For any software team to run efficiently information is required that can be shared by the team, and a wiki is probably the most efficient mechanism to do this.

I'm not sure exactly how the word wiki is defined or what makes a wiki a wiki, and not being online right now, I cannot look it up. So I'll give my take on the essential elements of a wiki.

A wiki is an application which serves information efficiently, allows information to be related and structured, and allows for easy editing of the information in the same context as it is viewed. Typically wikis run in a web architecture, though that is not fundamentally necessary.

The question then is: how has the wiki added value to the team, and how do you optimise that value?

I think the simple way to show the value of the wiki to a dev team is to look at what would happen if the wiki was removed.

Compare a previous large project that I was on, which did not have a wiki, with the current project, which does. The previous project suffers from inadequate documentation: the documentation it does have is either out of date, fragmented, duplicated in a number of places, or irrelevant. A wiki would have removed all those weaknesses. It would have...

  • Provided a centralised location for all documentation

  • Provided a tool to categorise and relate various pieces of information together

  • Allowed for the quick and easy updating of information, keeping it relevant.

Furthermore, in the past people would often ask, "Where should I put that information?" - the default answer now is "on the wiki". It is surprising how often the word comes up in everyday conversation.

Once you have been using a wiki for a few months it becomes indispensable.

However, there are a number of challenges...

  • Keeping things relevant - at the beginning of the project a lot of information is placed on the wiki. The rate of discovery of new information is high, so the wiki grows quickly. That information has relevance for its time, but because the rate of change on modern IT projects is high, it can quickly become dated. Once the information is no longer relevant, the motivation to maintain it is low. Take our project, for example: there is now a lot of information on the wiki which is irrelevant!

  • Wiki structure - this is arguably the biggest challenge, and I think potentially the most difficult, especially when things change - but getting it right can add a lot of value. If the information is appropriately structured, it can be found more easily; and if it can be found more easily, people are more willing to use the wiki, and thus information is better disseminated.

But one of the key things to remember is that there is no silver bullet for improving quality. A wiki is just one of the tools at the developer's disposal. It can't make people record pertinent information, nor can it make people read it. One of the problems with information is that, in order to be useful, it has to be read. It is no use if the only person who benefits from the wiki is the person who originally wrote the information down. Having said that, at least the information is then in two places (the developer's head and the wiki), so the risk around that information is reduced.

All the wiki and other development tools can do is make things easier for the developer. They can improve quality if used appropriately, and the wiki is an important, I would even say essential, tool for improving quality and efficiency in a dev team.

What is interesting is that I feel the large corporates have largely missed the boat as far as wiki software is concerned. Although the quality and maturity of wikis is high, I think there is still some space for improvement in that space - it's going to be interesting to see what kind of innovations materialise.

Trumpi raised the issue on his blog: How does one make wikis work?