I recently overheard a development leader talking with a team of programmers about the trade-off between the speed of developing working code and the effort required to improve the run-time performance of the code. His opinion was that it was not worth any extra effort to gain a few hundred milliseconds here or there. I found myself wanting to debate the position, but it was not the right venue.
In my opinion, a developer should not write inefficient code just because it is easier. However, a developer also must not tune code without evidence that the tuning effort will yield a meaningful improvement in the overall efficiency of the application. Guessing at where the hotspots are in an application usually leads to a lot of wasted effort.
When I talk about designing and writing efficient code, I am really stressing the process of thinking about the macro-level algorithm being used. Considering algorithmic efficiency (e.g., big-O complexity) and spending some time looking for options that would represent a big-O step change is where design and development performance effort belongs.
For instance, during initial design or coding, it is worth finding an O(log n) alternative to an O(n) solution. However, spending time searching for a slight constant-factor improvement in an algorithm that remains O(n) is likely a waste of time.
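To make that distinction concrete, here is a minimal sketch (in Python, with invented data) contrasting an O(n) linear scan with an O(log n) binary search over the same sorted list. The function names and data are hypothetical, purely for illustration:

```python
import bisect

# O(n): check each element in turn until we find the target.
def linear_search(sorted_ids, target):
    for i, value in enumerate(sorted_ids):
        if value == target:
            return i
    return -1

# O(log n): halve the search space at each step (via the
# standard-library bisect module, which assumes sorted input).
def binary_search(sorted_ids, target):
    i = bisect.bisect_left(sorted_ids, target)
    if i < len(sorted_ids) and sorted_ids[i] == target:
        return i
    return -1

ids = list(range(0, 1_000_000, 2))  # 500,000 sorted even numbers
# Both return the same index, but the binary search needs roughly
# 19 comparisons where the linear scan needs about 250,000.
assert linear_search(ids, 500_000) == binary_search(ids, 500_000)
```

Shaving a few comparisons off the linear scan's inner loop would still leave it O(n); switching the algorithm is the step change worth looking for at design time.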
Preemptive tuning is a guessing game: we are guessing how a compiler will optimize our code, when a processor will fetch and cache our executable, and where the actual hotspots will be. Unfortunately, our guesses are usually wrong. Perhaps the development team lead was really talking about this situation.
The tuning circumstances change once we have an application that can be tested. The question then becomes: how far do we go to address performance hotspots? In other words, how fast is fast enough? For me, the balance being sought is application delivery time versus user productivity, and the benefits of tuning can be valuable.