Tuesday, September 08, 2009

Urgent or Important?

Recently I spent some time working at a new client (an old client for my company, but new to me). I spent 3 weeks there getting familiar with the environment and am now working off site.

I was struck by how "busy" the place was. People were on the phone constantly, thousands of mails were flying around, the project was way behind schedule, and the environment was a little mad. Chaotic would be a bit strong, but it is on its way there.

It just does not seem as if much thought, or critical, deep and creative thinking, has been brought to bear on the project. People are going from one crisis to the next, doing what is urgent all the time and failing to address what is important. Either they don't have the time, or they deem whatever is urgent to be important and can't really tell the difference!

What is interesting however, is that from the client's perspective, lots of things are happening. People are doing things, working hard, appearing very busy - but not necessarily getting things done.

Contrast that with an environment that is calm, smooth and running as planned. How would that look? A little boring in comparison. People would not appear busy at all. They would be sitting at their desks working, not talking too much on the phone, not stressed, and have everything under control. The client may interpret this as laziness, because we certainly don't appear so busy.

Thursday, August 20, 2009

Netbeans - a first really serious look

I have just started seriously using Netbeans for the first time - I've been forced to because it's what the client uses.

And I have to admit that, having used Eclipse up till now, I am definitely not enamoured with NetBeans at all. In fact it has done nothing to recommend itself. I constantly find myself going... oh, I wish I was using Eclipse.

Netbeans fans might be crying foul right now... and they have a right to, let me record some of my initial frustrations...

1. No inline variable
2. No inline method (though there is an extract method, called introduce method).
3. When you use the template shortcuts (for, create field, etc.) the tab does not leave the cursor where you want it to
4. It hides a lot - we're doing web service development, and you don't see any of the Ant XML files which are used to compile and build
5. No call hierarchy
6. The search facility is nowhere near Eclipse's
7. No Quick diff comparing repo version with local version (via the ruler)

A good way to explain NetBeans is to say it is opinionated software (I remember someone describing Ruby on Rails like that) - in other words, it's great if you obey the rules and want to stay within the confines of what it wants. I don't like the Ant scripts it generates and uses to build - I want to roll my own, as Eclipse lets me, and thus customise them the way I want to.

So what do I like about Netbeans?

It has very strong integration with GlassFish - it tells you a lot about the running GlassFish server you're deploying to. I also wish I could use the collaboration features, because I've seen that they are very good.

I'm going to have to continue using NetBeans, so I'll see if my opinion changes... I'll wait and see.

Wednesday, August 19, 2009

Why don't people understand the computer?

I am always fascinated when I observe people using a computer's user interface, even the user interface of the popular video game, Guitar Hero.

I have played Guitar Hero with a lot of "normal" (non-computer) people and have found it interesting to observe their interactions with it. This has shed light on something else I have been spending a lot of time thinking about, namely why people cannot understand or use a computer.

And Guitar Hero is a good example. Even something as (I think) simple as the Guitar Hero user interface causes people to go... I don't know what's going on, I can't use this. From my point of view, it could not be simpler: everything is self-explanatory and right in front of you. It gives you all the feedback and guidance you need to quickly navigate around, and I therefore could not understand at all why they find it so difficult - I just couldn't get into their heads.

Until now, that is, when I finally understood why they find it so difficult.

For these people, the user interface of Guitar Hero might as well be Chinese - that would be just as comprehensible to them! I understand the language of the computer and they don't; it is a question of language. They don't speak "computer", or should I say "read computer", since the "computer" language is mostly a read language. There are definitely different levels of ability in this "computer" language - the people I'm referring to here, though, have no clue. And so it makes sense that people can be "computer literate", able to read computer - it's a very appropriate label for someone who knows their way around a computer.

The conclusion to all this is that whereas before I could not understand why people - people who are not uneducated - cannot understand the simplest of interfaces, now I fully understand why they cannot operate them. They need to understand the language the interface is communicating in.

Monday, August 17, 2009

Development On a VM - who needs a desktop?

Recently, in our dev shop, we took delivery of a rather high-end server-class machine. We installed Linux and VMware on it. It is, needless to say, way faster than any of our desktops.

We created a VM, installed Ubuntu on it and then added it as a Hudson slave. With that VM, the time for the simple compile job is down from 45 seconds to 13 seconds (which gives you an idea of how fast it is).

We created another VM with Ubuntu and I started tinkering around on it. I installed Eclipse there and started using it for my development by exporting the desktop to my workstation. The remote desktop equivalent in VMware is very good - more than adequate responsiveness. My personal laptop/desktop is now just being used as a dumb terminal...

The advantages of developing this way:
  1. Hardware upgrade costs are significantly reduced - don't upgrade your developers' hardware when the next upgrade cycle starts; let them keep their current hardware (as a terminal) and give them a VM to develop on.
  2. Maximum value is extracted from available hardware - I reckon we could put about 4 or 5 developers on our new server, depending on the size of the applications being worked on. This means the server hardware will be very well exercised, with far fewer wasted CPU cycles.
  3. Configuration and downtime are almost non-existent - when a new developer arrives, just provision a new VM from the image and he's up and running. If a developer's image gets corrupted, same solution.
  4. Significant cost reduction - because economies of scale can be leveraged, and hardware is better utilised, hardware costs are significantly reduced.
The only disadvantage of this solution is the lack of mobility - you need to be on a fairly fast network to your beefy server for this to work; let's hope that line speeds become fast enough to handle this.

For dev shops which issue laptops this is less of a solution, but if you have a location full of desktops, don't upgrade them; just use them as dumb terminals and virtualise their requirements.

Monday, July 27, 2009

Evolving an Architecture

So, when last did someone ask you about a piece of architecture: how are we going to do this? And at that stage you, the architect, or whoever is involved in the architecture, weren't quite sure. Not because you're a bad architect, but because you don't yet have enough of an idea of what is required, and because right now that issue is not front and centre. In other words, it hasn't had the full treatment. You're currently sorting out another issue and will get to that one. Right now, the solution employed in the area asked about is "good enough". It is probably not going to go into production like that, i.e. it needs work, but for the rest of the project to continue, it's adequate.

I don't know how many times I've found myself defending an architectural construct in this manner. Yes, I'm aware of the issue and will address it. Furthermore I am forever finding myself changing the way things are.

The architecture of a system must be a fluid thing. It is something which evolves. As the designer you should always be willing to modify something because it has not met its requirements, or because you, or somebody else, has come up with something better. One of the characteristics of a good architect is that they are willing to take risks and change something for the better even though it might negatively affect the application in the short term. Developers, for example, might not like what you've done, but if you've done it for the right reasons they will see those reasons and buy into the change.

You will not get everything right at the start; accept that, choose what is important to get right now, defer the rest, but make sure you get to the rest. When developers ask you about the "rest", let them know why it is that way and that you're going to get to that issue. Often I've felt indignant at these kinds of questions that ask about the "holes"... but I shouldn't be: they give me an opportunity to give developers a greater understanding of the system, and an opportunity to gain an ally. I suppose, however, the ultimate is for developers to see where those deferred issues are and to know intuitively that they will be addressed.

Monday, June 22, 2009

A good design is mandatory if you want good performance.

Now, I know that there has already been much ink spilt over premature optimisation, but I thought I'd add my thoughts, mainly because it is still so relevant. I still see people not listening to Michael A. Jackson's classic law:
  • “The First Rule of Program Optimization: Don't do it.
  • The Second Rule of Program Optimization (for experts only!): Don't do it yet.”
Michael A. Jackson

I think the problem stems from a question of value. Developers still see code that runs fast as somehow better than code that runs slow - so they will prefer, for example, a solution that creates fewer objects, because they see the object-creating alternative as slow.

The "fast" version could be 10 times slower than the slow version, and 10 times sounds like a lot. Wow, you achieved a 10 times speed increase, that's got to be good right?

Not if the fast version takes 1 millisecond, right? The slow version will take 10 ms. I challenge anyone to notice a 9 ms difference.

I've never heard anyone complain about some process taking 9 ms longer to complete. 9 ms!

To give you an example...

Consider the code:

long p = System.currentTimeMillis();
Agent[] objects = new Agent[1];
for (int i = 0; i < objects.length; i++) {
    objects[i] = new za.co.sepia.ucc.core.model.Agent();
    objects[i].setAgentId("hello");
}
System.out.println(System.currentTimeMillis()-p);
This code takes less than 1 ms to execute (the printed difference is ZERO)... on my fairly powerful laptop with Java 6.

If I up the array size to 1000, it takes 31 ms...

I had to up the array size to 1000 to push the time taken up to 31 ms. An array size of 1 is faster than an array size of 1000; it is in fact more than 30 times faster! Wow, I got a 30x speed improvement by reducing the array size. However, even with such an enormous relative difference, the slow version still only takes 31 ms... That is 31 thousandths of a second.

Alright, granted, if you're doing hundreds, maybe thousands, of these kinds of operations then speed would start to be a factor. But how often, in your daily programming, do you encounter situations where you're doing the same operation 10000 times...?

What I would like to see is developers change their priorities in accordance with a third rule of optimisation, which I would like to add, and that is...

  • A well designed system is better than a badly designed system which runs 1% faster.
Why is it so important to have a well designed system with respect to performance?

Picture the following... at the end of your project you now want to increase performance. You have not optimised prematurely, but you do, unfortunately find that you now have a performance problem.

You run a profiler on your application and pinpoint the problem to a particular component. Because your system is loosely coupled with good encapsulation, you're able to rewrite that component quickly and easily, test it on its own to validate that the performance problem is gone, plug it back in and run the tests to validate your change.
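To make that concrete, here is a minimal sketch (with hypothetical names) of the kind of seam that makes such a rewrite cheap: callers depend only on an interface, so the slow implementation the profiler flagged can be replaced and tested in isolation.

import java.util.List;

// Callers depend only on this abstraction.
interface ReportGenerator {
    String generate(List<String> rows);
}

// The implementation the profiler flagged as slow.
class StringConcatReportGenerator implements ReportGenerator {
    public String generate(List<String> rows) {
        String report = "";
        for (String row : rows) {
            report += row + "\n"; // repeated concatenation copies the whole string each time
        }
        return report;
    }
}

// The rewritten component: same interface, unit testable on its own,
// and a drop-in replacement that requires no change to any caller.
class BufferedReportGenerator implements ReportGenerator {
    public String generate(List<String> rows) {
        StringBuilder report = new StringBuilder();
        for (String row : rows) {
            report.append(row).append("\n");
        }
        return report.toString();
    }
}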

But imagine if your system was badly designed...

While you would be able to pinpoint the source of the problem, chances are there are loads of dependencies on that component, such that a simple rewrite will prove difficult. Furthermore, you would not be able to unplug it easily and rewrite it, because there is tight coupling and little encapsulation. The bits of slow code are scattered all over the show. Maybe when you were designing you prioritised "performance"... funnily enough, though, where you expected performance problems, none materialised.

I will always prefer a well designed system over one which is "fast".

In my experience, if you prioritise good design over performance, you will get a performing system in any case.

Wednesday, June 10, 2009

Is Java Dead?

Recently, on a local Java User Group, the question was asked: should Java be my platform of choice? I was fairly surprised by the discussion that ensued. It was largely negative.

And why shouldn't it be? A language that is more than 10 years old, still hampered by its legacy, competing with cutting-edge dynamic and scripting languages which seem to have stolen a great deal of Java's shine. Java was derided for being inelegant, hard to understand, unpredictable and old-fashioned, while languages like Python were praised for their elegance: obviously better than Java, far easier to learn and use.

So the question is, is Java that bad?

One or two examples were presented, singling out autoboxing as a disaster, concurrency unpredictability, and that perennial source of mirth: the try/catch/finally "what exception is thrown?" puzzle. Personally, I think those "what happens in this piece of code that you'll never write yourself?" questions are a waste of time. People single out the handful of confusing scenarios that you'll just not hit in everyday coding (unless you go looking).
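For anyone who hasn't seen one of these puzzles, here is a contrived sketch of the kind of thing I mean (my own made-up example, not one from the discussion):

public class TryFinallyPuzzle {

    static int puzzle() {
        try {
            throw new RuntimeException("thrown from try");
        } finally {
            // A return in finally silently discards the exception from the try block.
            return 42;
        }
    }

    public static void main(String[] args) {
        // Prints 42 - no exception reaches the caller.
        System.out.println(puzzle());
    }
}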

The question is, what made Java so attractive in the first place, and can those properties carry it forward? It had built-in OO (yes, I know not _everything_ is an object, but close enough), it was simple, it had none of the superfluous syntactic sugar of C++, it hit the ground running with a useful API for GUI and other development, and it was cross-platform (it ran on a VM).

The factor that is very often underestimated in proclaiming Java's looming demise in the face of more modern dynamic and scripting languages is that only a small minority of developers actually keep up to date. Some have said it's in the region of 20%; in my experience it's probably about 15%. The rest don't teach themselves new technologies unless forced to, and I doubt they see the pitfalls of Java - the so-called inelegance is, to my mind, significantly overblown. Yes, autoboxing has problems, but I've already seen benefits from it which outweigh the problems.

It would be remiss of me not to point out that without the Java platform, the Java language would not be as viable as it is. In fact, some would say that without it Java would be on the wane, and possibly "dead". One thing is for sure: the platform is what makes Java so viable, and the only choice for any project of significant size and scale.

Personally, I cannot see Java fading away or even losing its dominant position in the market in the next 10 years. In ten years' time, things will largely be the same as they are now. The new kids on the block, however, will grow and start to cut into Java's domination, especially on the web and on the desktop, though I do think that Google is going to keep Java strong on the web (GWT and Google App Engine). Java will lose ground in these areas, but in its mainstay, the enterprise, it's not going anywhere.

Monday, June 01, 2009

Dependency Injection/Inversion of Control - are they the same?

Over the weekend I spoke to a friend about the SOLID principles. We talked at length about Inversion of Control and Dependency Injection and how they are not the same thing. Trumpi was saying that you use dependency injection to achieve inversion of control, and that made sense to me. Then I consulted another colleague about it, and he pointed out that in order to achieve inversion of control you will necessarily be doing some form of dependency injection. You will need to "inject" the dependency because the component requiring the dependency will only depend on an abstraction.

So in effect, they are the same thing. The different phrases describe the mechanism from different points of view: one describes the behaviour (dependency injection), the other the structure (inversion of control).
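A minimal sketch of the idea, with made-up names: the service depends only on an abstraction, and the concrete dependency is injected from outside. Control is inverted because the service never constructs its own collaborator.

interface MessageSender {
    void send(String message);
}

class EmailSender implements MessageSender {
    public void send(String message) {
        System.out.println("Emailing: " + message);
    }
}

class NotificationService {
    private final MessageSender sender;

    // Dependency injection via the constructor.
    NotificationService(MessageSender sender) {
        this.sender = sender;
    }

    void notifyUser(String message) {
        sender.send(message);
    }
}

public class Demo {
    public static void main(String[] args) {
        // The wiring happens out here (or in a container like Spring), not inside the service.
        NotificationService service = new NotificationService(new EmailSender());
        service.notifyUser("hello");
    }
}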

What I also discovered on opening the book Agile Software Development (which, I think, first explained the SOLID principles) is that it presents the 'D' of SOLID as "Dependency Inversion", i.e. a combination of inversion of control and dependency injection - which, to my mind, is a better moniker as it describes the mechanism more accurately.

Thursday, May 28, 2009

Mockito is the first mock framework you should consider

Having been stuck on JDK 1.4 for the last 3 years, it came as something akin to a coming of age to step into modern Java; and a most satisfying revelation has been Mockito.

If you have used EasyMock in the past you will be very pleased, and probably quite excited, when you encounter and start using Mockito. It is totally awesome.

It is simple, intuitive, terse and has some very powerful features. It will gracefully handle almost all of your mocking needs. For example, it is possible to stub out selected methods on a live object.

To give you an example of mockito in action, to whet your appetite, here is a code snippet.

ConfigParams params = Mockito.mock(ConfigParams.class);
Mockito.when(params.getInt(ConfigParamKey.AGENT_MATCH_TIMEOUT)).thenReturn(99);


It's fairly self-explanatory: when the method getInt is called with the parameter ConfigParamKey.AGENT_MATCH_TIMEOUT, the mock should return 99.

In case you were wondering how this all works, generics play an important role - which is why it is only available on JDK 5+.

As in EasyMock and JMock you can verify that the method was called.

It is probably worth pointing out that Mockito tries to be less strict with its mocking than other mock frameworks. For example, the default mock you get will accept any call - and unless you specifically verify (which you can do), not calling a method will not cause a test failure.
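To illustrate both points - verification and the lenient defaults - here is a small standalone sketch, using a java.util.List as the collaborator purely for illustration:

import java.util.List;

import org.mockito.Mockito;

public class MockitoVerifyExample {

    @SuppressWarnings("unchecked")
    public static void main(String[] args) {
        List<String> mockedList = Mockito.mock(List.class);

        mockedList.add("one");

        // Passes, because add("one") was called on the mock.
        Mockito.verify(mockedList).add("one");

        // Unstubbed calls simply return defaults (null, 0, false) rather than failing.
        System.out.println(mockedList.get(0)); // prints null
    }
}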

See the web site for more information, downloads and demos.

Tuesday, May 12, 2009

Best Practices are for people who don't know what they're doing

On my last project, I can remember occasionally reading through "best practices" of the various technologies we were using. Spring and hibernate spring to mind.

I can remember noting that we were not following a large number of the best practices. In fact, what is more interesting is that we had made those decisions without considering the so called "best practices".

However we did not make those decisions without considering all the variables. In each instance where we diverted from the best practice we did so for good reason and with our eyes open. In other words, we did not deviate from best practices because we were rebellious but because we knew what we were doing. In all cases we had a ready defense for not using the best practice.

I realised in doing this that the point of best practices is to protect the person who does not quite understand what they're doing, to make sure they don't really shoot themselves in the foot. For those people who understand the situation, best practices are useful because they provide a base to work from. If you deviate from best practices make sure you can defend your deviance.

Furthermore, I'd also like to add, that very often a so called best practice is someone's opinion. A few months ago I wrote on what I thought were hibernate "best practices". These are based on my experience and my opinion, they are my best practices not necessarily industry tried and tested best practices. So the label "best practice" is often a fairly loose term.

Best practices are for people who don't know what they're doing, but for people that do, the tried and tested ones are a good and worthy starting point that should only be deviated from for good reason.

Friday, May 08, 2009

Spring with annotations - in 2 minutes

Just today I had my first look at Spring with annotations. Up until now I have always had to use Spring on a 1.4 VM, so I was unable to take advantage of annotations.

Suffice to say, it is awesome!

Now I know for a lot of you, this might be old news, but it wasn't for me.

So you remember the old XML-based Spring configuration...

<bean id="serviceLayer" class="za.co.bbd.ct.ServiceLayer">
<property name="businessObject" ref="businessObject">
<property name="xmlConfigured" ref="xmlConfiguredX">
</property>


When using annotations, no nasty xml is required.
@Component
public class ServiceLayer {

    private IBusinessObject businessObject;
    private IXmlConfigured xmlConfigured;

    @Resource(name = "businessObject")
    public void setBusinessObject(IBusinessObject businessObject) {
        this.businessObject = businessObject;
    }

    @Resource(name = "xmlConfiguredX")
    public void setXmlConfigured(IXmlConfigured xmlConfigured) {
        this.xmlConfigured = xmlConfigured;
    }
}
Then in your Spring XML file the following is required. The good thing is that you can mix annotation- and XML-based configuration seamlessly.

<!-- tell spring to use annotation based configurations -->
<context:annotation-config />


<!-- tell spring where to find the beans -->
<context:component-scan base-package="za.co.bbd.ct" />

The default name for the bean is the simple class name (without the package) with its first letter in lower case; in other words, that "businessObject" bean is just a class annotated like this:
@Component
public class BusinessObject implements IBusinessObject {

}
And if you'd rather not specify the name of the reference and just match by type:

@Autowired
public void setBusinessObject(IBusinessObject businessObject) {
this.businessObject = businessObject;
}

Remember, you can mix XML- and annotation-based configuration: the definition for xmlConfiguredX above is still in the XML...
On a project I was on we used XDoclet to circumvent the lack of annotations (the project was on 1.4). We had 100K's worth of generated Spring XML configuration files. Annotations would have been so much better!

Tuesday, May 05, 2009

You don't know what you don't know...

When I started on my last project, a very large enterprise application, I did not know too much. I had not done enterprise programming before, I had not used web services, nor had I used Hibernate, all of which were used to the fullest on the project.

When I arrived, we spent six weeks finding out what challenges lay ahead, where we would have problems and on their solutions...

It turned out that none of what we did during that time was useful.

The problem was, we did not know what we did not know. That sounds like a redundant statement, but it's not. In other words, we did not know where the holes in our knowledge were. We did not know web services well enough to realise that we were going to have problems interfacing between .NET and WebSphere. We did not know Hibernate well enough to know that using inheritance and putting subtypes into collections was going to be problematic. We did not know that we would have to work very hard figuring out the issues with JNDI and session beans.

We wasted a lot of time because of this. We should have got someone in, not to help us with the problems, but simply to tell us where the problems were going to be. We had the ability to solve the problems - we proved that as we solved them - what we didn't have was the knowledge of where the problems would be.

Tuesday, April 28, 2009

twitter is cool...

So if you haven't figured out twitter I'll try and help...

Twitter is technically a "micro blogging service"... but that does not really do it justice.

To describe Twitter simply: it is a mechanism for people to distribute short pieces of text (140 characters) to people who have elected to receive them.

These text communiques are known as "tweets" and the people who receive them are "followers".

So then, what is the big deal?

The key difference is that to follow someone I do not require their permission - unlike Facebook, where both sides need to agree in order to receive status updates from each other.

So I could therefore follow Oprah Winfrey or Ashton Kutcher, or even Hugh Jackman.

So if you want to know what your hero is doing, thinking, or wanting to tell the world... why don't you look them up on Twitter and follow them? You'll not only know what they're thinking about the things you're interested in, you'll probably also know when they come back from a cycle.

Oh and btw, you can find me on twitter.

Thursday, April 16, 2009

Unit Testing, what do I need to get it right?

So you want to do unit testing... it's a noble and good goal that every development shop or environment should have.

And it is a goal that we had at the beginning of our project.

It didn't happen, or certainly not as I would have liked.

So then, what did I learn from our so-called failure? What are the mandatory requirements for making unit testing a success?
  1. Infrastructure to automatically run unit tests. In order to make unit testing work, you must have some facility to run the unit tests regularly. Preferably these should run fast and should run on every checkin. It is not enough to rely on the developer to run the unit tests before they check in their changes.
  2. A universal commitment to quality. Your whole development team needs to be fully and categorically committed to quality, and a working unit testing system is a necessary requirement to get there. For a unit testing system to work, developers need to care when tests fail - as soon as one unit test fails, your system is immediately less effective. A unit testing setup does not work when even one test is failing. You can't have just one developer, say the lead developer, telling the others to fix their unit tests.
  3. Management buy-in. The problem we had with getting unit tests fixed is that developers always made some excuse about how they couldn't fix them now - they just had this bug to fix or this use case to finish. Management need to realise that "the use case was delayed because I had unit tests to fix" is as good a reason as "there was a power failure and I couldn't use my PC".
The benefits of unit tests are proven, but I can't prove those benefits to you here. You have to do it 100% and pay the cost - and it's true, there is a cost - but the payoff from paying that price is worth it; you'll probably gain 3 times as much as you put in. But don't bother unless you've ticked the 3 boxes outlined above.

Tuesday, April 14, 2009

Commit comments... more is more

First off I must say that it has been a long time since I last posted - I have just been swamped with work on www.sanlamconnect.co.za.

The other day the issue of what is a good commit comment came up... and there are a number of approaches to this issue.

The big question is: should it stand alone? In other words, should it make sense by itself, or should it be read in conjunction with the diff? If you apply the former, the comment will repeat what the developer could ascertain from the diff and typically add more information. If you apply the latter, the developer is required to look at the diff to make sense of what is going on.

So the other day a fellow developer accused me of being too terse in my commit comment, and I was a little indignant at his criticism because I regarded my commit comments as adequate. I felt: why must I rehash what the reader can see from the diff? The comment, as far as I remember, was "fixing bug"...

I reasoned that if the reader has any sense they can look at the diff and figure out what the bug was and what I did to fix it.

But would it have killed me to put down maybe another 10 words of detail, like "fixed bug in execute target, the file extension was placed incorrectly"? Is that too much to ask? Another 10 words that would have taken me another second to write, and that will save the next person at least the time it takes to open the diff, not to mention the time it takes to make sense of it.

So, the conclusion is, write detailed commit comments. Definitely a paragraph is not necessary, but at least 10 words is a good start.

What about those developers that would have taken another minute to write those 10 words because they can't touch type... to them I say, tough, it's time you learnt.

Wednesday, April 08, 2009

The Double Importance of Unit Testing when more than one developer is involved

So now you have a team of 10 developers working on a project - I know it's a lot, but some projects have that number...

And a solution is required to a problem, and two of those developers require a solution to the same problem. For any number of reasons, both developers end up solving it - not in exactly the same way, mind you.

The result is that you have two solutions to the same problem.

Now what? You have 2 options...

1. You could leave the code as it is. This is, surprisingly enough, a viable option, since it's done often enough! But it's clearly not ideal, as it leaves you with duplicate code and duplicate functionality. If you consistently apply this approach then before long you'll have many, many instances where this occurs. Maybe inside that duplicate code there is another facility that is required and thus duplicated. And with 10 developers involved, there are far more cases where developers are unfamiliar with what other developers have done, so functionality is duplicated far more often. The more developers you have, the more duplication: each time a developer is added, the number of pairs of people who need to stay in sync grows - 1 pair with 2 developers, 3 pairs with 3, 6 with 4, and so on - so the probability of duplication rises sharply. You inevitably end up with multiple ways of solving the same problem, in code that differs in quality (different developers wrote each one).

2. You could, however, refactor so that both parties use the same functionality. A more appropriate solution, but easier said than done, right? It sounds good in theory but it is rarely practised. Why not? What if it doesn't work? What if there was one little consideration that one version depended on and the other version didn't have? Then the implementations are not exactly the same and cannot simply be amalgamated. How do I know the implementation that now depends on the consolidated code is going to work? Unless you have some facility to validate the changed behaviour, you don't really know. You could of course start up the application and exercise the functionality from both points of view to validate that both still work - a clunky and time-consuming process that will hold you back from making the change in the first place. That's where unit tests come in.

Unit tests let you remove duplication and refactor with confidence, allowing you to improve quality, conciseness, readability and delivery times.
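As a minimal sketch (with hypothetical names), this is the kind of test that lets you consolidate the two implementations with confidence: it pins down the behaviour both versions relied on, so the shared, refactored version can be validated without starting up the application.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class DiscountCalculatorTest {

    // Hypothetical consolidated utility that replaces the two duplicate implementations.
    static class DiscountCalculator {
        static double discountedPrice(double price, double discountPercent) {
            return price * (1 - discountPercent / 100.0);
        }
    }

    @Test
    public void appliesPercentageDiscount() {
        assertEquals(90.0, DiscountCalculator.discountedPrice(100.0, 10.0), 0.0001);
    }

    @Test
    public void zeroDiscountLeavesPriceUnchanged() {
        assertEquals(50.0, DiscountCalculator.discountedPrice(50.0, 0.0), 0.0001);
    }
}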

Saturday, January 24, 2009

Open source software is self service software...

Jeff Atwood at Coding Horror makes an interesting point about open source software being "self service" software. That immediately struck a chord with me, as I saw his point straight away.

But he didn't quite mean what I expected him to say. What he meant was that participating in open source projects is a self-service enterprise, and thus leaders of open source projects need to make their projects as easy as possible to serve yourself on. There are particular issues that go with any kind of self service.

But the same rings true for _users_ of open source software, and those users are exactly that: self-service users. With open source software the users help themselves; they fix their own bugs, find their own help and basically make their own way. They do not rely on a commercially driven entity in order to gain value from the software they are using.

And I think companies thus need to take this into account when they're considering whether to use open source software. At cinemas in South Africa there are self-service kiosks for buying movie tickets (I don't know if they have them elsewhere in the world) - and I really appreciate these. I do not have to wait in the queue, and I get to have my pick of where I want to sit.

However, only particular people are able to use these kiosks. They are not everyone's cup of tea. You need to be technologically savvy, and you need to be willing and able to back yourself - in other words, to try new things and have the confidence and wherewithal to see them through.

But let me say that not all open source is equally "self service". Some open source initiatives are supported by commercial entities, and some open source applications look and behave like their closed source equivalents - OpenOffice and the Apache web server being two simple examples.

In general, though, using open source is analogous to taking the self-service option when it is presented, and seeing things in that light helps to shed light on some of the challenges and value-adds of open source software.

Friday, January 09, 2009

Hibernate: What does update actually mean?

I work on a large enterprise application that uses Hibernate for its persistence.

I regularly find the equivalent of the following in the code base...
Person person = personDao.loadById(id);
person.setLastname("newlastname");
person.setFirstname("newFirstname");
personDao.update(person);
The update call in the above code snippet is unnecessary and useless.

Even after being on the project for more than 2 years I still find developers not understanding the purpose of update, and I think I understand why. The obvious thought process is: I've changed a stored object, so I now need to call update to save the change. Sounds reasonable enough? That, however, is not the purpose of update.
This particular use of update betrays a misunderstanding of the nature of Hibernate, namely that it is a "transparent" persistence framework. What that means is that persistence is made to be transparent to the developer. You should not have to worry about persistence issues: you load objects from the data store, you change the objects, and the changes are saved without any explicit save instructions. Imagine for a minute if you had to call update whenever you made a change - the increase in complexity would be significant, as it would mean having to track every object you change.

So then, if update is not for that, then what is it for?

The update method could have been named "attachAndPersistChanges" - that is a more accurate name, but probably a little unwieldy. It could also be described by the sentence "update this object in the data store". In other words, update is designed for objects that are not currently persistent.

The update method makes a non-persistent (detached) object persistent again and updates the data store with any changes present in that detached object. The object must not already be present in the level one (session-bound) cache, otherwise an exception will be thrown (for that situation, use merge).

It does nothing for objects that are already persistent. I do not know if there is a performance penalty for running update on an already persistent object; if there is, it's probably minimal. It does, however, clutter up the code with needless lines.
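To make the level one cache caveat above concrete, here is a sketch of the situation where update will fail and merge is the right call. The SessionFactory plumbing and the Person setters are assumed for the sake of the example.

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;

public class UpdateVersusMerge {

    // Assumes a configured SessionFactory and the Person entity from the snippets above.
    static void illustrate(SessionFactory sessionFactory) {
        Session session = sessionFactory.openSession();
        Transaction tx = session.beginTransaction();

        // A persistent instance with id 42 is already in the session's level one cache.
        Person loaded = (Person) session.get(Person.class, Long.valueOf(42));

        // 'detached' is a different instance with the same id,
        // e.g. one received from a web service call.
        Person detached = new Person();
        detached.setId(Long.valueOf(42));
        detached.setLastName("newLastname");

        // session.update(detached) would throw NonUniqueObjectException here, because
        // another instance with the same identifier is already in the session.
        // merge copies the detached state onto the persistent instance instead.
        Person merged = (Person) session.merge(detached);

        tx.commit();
        session.close();
    }
}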

Then when should I use it?

My current project is a large enterprise system with Java on the back end, .NET on the front end and web services in between. The front end is fairly light and thus does not know about the data model, so it does not operate directly on the data objects. If it did, there might be a case for using update: I've received an object from the front end with changes made by the user, and I need to persist those changes, so I call update on the object. The pattern we follow, however, is the one outlined in the first code snippet: load an object from the store and change it based on the data received from the front end - the changes are automatically persisted, with no update call.

But there is one case I've encountered where we do need to use update, and it illustrates an important thing to remember about Hibernate...

Consider the code... 
transactionController.startTransaction();
Person person = personDao.loadPersonById(personId);
person.setLastName("newLastname");
person.setFirstName("newFirstname");
transactionController.endTransaction();

transactionController.startTransaction();
person.setLastName("secondLastname");
transactionController.endTransaction();
I know it's a silly piece of code and you wouldn't find it like this in reality, but it illustrates the case simply. It sets up a situation where there are two transactions, and the second transaction is interacting with an object which the first transaction loaded.

What will the value of the person's last name be in the database? Believe it or not, it will be unaffected by the second setLastName call. It will be "newLastname". Why, what's going on?

The thing to remember here is that as far as Hibernate is concerned, from the perspective of the second transaction, that person object is detached. The session it was connected to is now closed, and any change made to it is made as if it were a normal POJO with no knowledge of the persistence store, even though a new transaction has been started. That object does not know about the new transaction.

And that is where update comes in. Update needs to be called on the person object in the second transaction to persist the change to the last name. This, as explained earlier, will attach the object to the new session and apply any required updates.
transactionController.startTransaction();
person.setLastName("secondLastname");
personDao.update(person);
transactionController.endTransaction();
On our project, that is the only legitimate context where update is appropriate and necessary.

Friday, November 07, 2008

When to inherit, when to compose?

I suppose if you're reading this you've read some of the material out there on when to inherit and when to compose. You're also probably familiar with the "is-a", "has-a" principles...

My brother Peter, who is a .NET programmer, commented on the issue and came up with a useful principle...

Don't do it.

Or more politely... When you think you need to inherit, you probably don't.

That is probably a good best-practice approach - I need to post about best practices and why I say that. The problem is that inheritance is abused, and the naive "is-a"/"has-a" rule is responsible for too many situations where inheritance is not the answer - composition, as in the sketch below, would have been more than adequate, without the constraints and complexity that inheritance introduces.
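A small, made-up illustration of the principle: instead of inheriting, hold the other object and delegate, exposing only what you actually need.

import java.util.ArrayList;
import java.util.List;

// If Stack extended ArrayList it would inherit dozens of methods that break the
// stack abstraction (get, add and remove at arbitrary positions, and so on).
// Composition keeps the interface honest.
public class Stack<T> {

    private final List<T> elements = new ArrayList<T>();

    public void push(T item) {
        elements.add(item);
    }

    public T pop() {
        if (elements.isEmpty()) {
            throw new IllegalStateException("stack is empty");
        }
        return elements.remove(elements.size() - 1);
    }

    public boolean isEmpty() {
        return elements.isEmpty();
    }
}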

It's a little like threading. If you can do it in one thread, then do it in one thread - you don't want the concurrency problems.

Wednesday, October 22, 2008

Developers do what they see...

This is probably the first really big project that I have been on, and I have learnt a lot. One of the key lessons is that the average developer is a sheep. He (or she) will do what they see, and very seldom will they go against that.

In fact, even if it's wrong, they will do what they see.

It has been quite remarkable.

Changing habits is not easy, especially on a complex enterprise project which has been going for 18 months with a very fast development cycle. There are _many_ different ways of doing almost anything in the code, so the poor developer has no idea which approach to follow.

It has thus been quite encouraging to see some habits changing - I introduced Commons Collections into the code base, used it in my code, and showed a few devs what it offers - and I'm now finding it is used more and more.

The other good thing about the way developers do what they see is that if what they see is good and right, those habits persist. I guess the opposite also applies: if what they see is low quality, they will extend that low quality.

