- “The First Rule of Program Optimization: Don't do it.
- The Second Rule of Program Optimization (for experts only!): Don't do it yet.” (Michael A. Jackson)
I think the problem stems from a question of values. Developers still see code that runs fast as somehow better than code that runs slow, so they will prefer, for example, a solution that creates fewer objects, because they see the object-heavy alternative as slow.
The "fast" version could be 10 times slower than the slow version, and 10 times sounds like a lot. Wow, you achieved a 10 times speed increase, that's got to be good right?
Not if the fast version takes 1 milli-second right? The slow version will take 10 ms. I challenge anyone to notice a 9 ms difference.
I've never heard anyone complain about some process taking 9 ms longer to complete. 9 ms!
To give you an example...
Consider the code:
long p = System.currentTimeMillis();
Agent[] objects = new Agent[1];
for (int i = 0; i < objects.length; i++) {
    objects[i] = new za.co.sepia.ucc.core.model.Agent();
    objects[i].setAgentId("hello");
}
System.out.println(System.currentTimeMillis() - p);

This code takes less than 1 ms to execute (the printed difference is ZERO)... on my fairly powerful laptop with Java 6.
If I up the array size to 1000, it takes 31 ms. An array size of 1 is in fact about 30 times faster than an array size of 1000. Wow, I got a 30x speed improvement by reducing the array size! Yet even with such an enormous relative difference, the slow version still only takes 31 ms... that is 31 thousandths of a second.
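As an aside on the measurement itself: currentTimeMillis only has millisecond resolution, which is why the printed difference is zero for a single allocation. If you actually want to time something that small, System.nanoTime is the better tool. Here is a minimal sketch of the same loop using it; the nested Agent class is a stand-in for the real za.co.sepia.ucc.core.model.Agent, assuming only a no-arg constructor and a setAgentId setter:

public class AllocationTiming {

    // Stand-in for za.co.sepia.ucc.core.model.Agent (assumed shape:
    // no-arg constructor plus a setAgentId setter, as in the loop above).
    static class Agent {
        private String agentId;

        void setAgentId(String agentId) {
            this.agentId = agentId;
        }
    }

    public static void main(String[] args) {
        int size = 1000;
        long start = System.nanoTime();
        Agent[] objects = new Agent[size];
        for (int i = 0; i < objects.length; i++) {
            objects[i] = new Agent();
            objects[i].setAgentId("hello");
        }
        long elapsedNanos = System.nanoTime() - start;
        // Report in milliseconds, keeping the sub-millisecond detail
        // that currentTimeMillis throws away.
        System.out.printf("%d allocations took %.3f ms%n",
                size, elapsedNanos / 1000000.0);
    }
}

None of which changes the point: whether it reads 0.3 ms or 31 ms, it is still far too small for anyone to notice.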
Alright, granted, if you're doing hundreds, maybe thousands, of these kinds of operations, then speed would start to be a factor. But how often, in your daily software programming, do you encounter situations where you're doing the same operation 10000 times?
What I would like to see is developers change their priorities, in accordance with a third rule of optimisation which I would like to add, and that is...
- A well designed system is better than a badly designed system which runs 1% faster.
Picture the following... at the end of your project you now want to increase performance. You have not optimised prematurely, but unfortunately you find that you now have a performance problem.
You run a profiler on your application and pinpoint the problem to a particular component. Because your system is loosely coupled with good encapsulation, you're able to rewrite that component quickly and easily, test it on its own to validate that the performance problem is gone, plug it back in, and run the tests to validate your change.
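To make that concrete, here is a hedged sketch of the kind of seam I mean; all the names (ReportGenerator and friends) are invented for illustration, not taken from any particular codebase:

public class SwappableComponentDemo {

    // Callers depend only on this interface, never on a concrete class.
    interface ReportGenerator {
        String generate(int[] data);
    }

    // The straightforward implementation the profiler flagged as slow.
    static class SimpleReportGenerator implements ReportGenerator {
        public String generate(int[] data) {
            long sum = 0;
            for (int value : data) {
                sum += value;
            }
            return "total=" + sum;
        }
    }

    public static void main(String[] args) {
        // Swapping in an optimised implementation is a one-line change
        // here; the rewrite itself stays contained behind the interface.
        ReportGenerator generator = new SimpleReportGenerator();
        System.out.println(generator.generate(new int[] {1, 2, 3}));
    }
}

The rewrite can be tested against the same interface on its own before being plugged back in, which is exactly the quick swap described above.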
But imagine if your system was badly designed...
While you would be able to pinpoint the source of the problem, chances are there are loads of dependencies on that component, such that a simple rewrite will prove difficult. Furthermore, you would not be able to unplug it easily and rewrite it, because there is tight coupling and little encapsulation. The bits of slow code are scattered all over the show. Maybe when you were designing you prioritised "performance"... funnily enough, though, where you expected performance problems, none materialised.
I will always prefer a well designed system over one which is "fast".
In my experience, if you prioritise good design over performance, you will get a performing system in any case.