
Design and Build Effort versus Run-time Efficiency

I recently overheard a development leader talking with a team of programmers about the trade-off between the speed of developing working code and the effort required to improve the run-time performance of that code. His opinion was that it was not worth any extra effort to gain a few hundred milliseconds here or there. I found myself wanting to debate the position, but it was not the right venue.

In my opinion, a developer should not write inefficient code just because it is easier. However, a developer also must not tune code without evidence that the tuning effort will make a meaningful improvement to the overall efficiency of the application. Guessing at where the hotspots are in an application usually leads to a lot of wasted effort.

When I talk about designing and writing efficient code, I am really stressing the process of thinking about the macro-level algorithm being used. Considering algorithmic efficiency (e.g., big-O complexity) and spending some time looking for options that would represent a big-O step change is where design and development performance effort belongs.

For instance, during initial design or coding, it is worth finding an O(log n) alternative to an O(n) solution.  However, spending time searching for a slight improvement in an O(n) algorithm that is still O(n) is likely a waste of time.
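
To make the distinction concrete, here is a minimal sketch in Python (the account numbers and function names are my own illustration, not from any particular application). Replacing a linear scan with a binary search over sorted data is the kind of big-O step change worth finding; micro-tweaking the loop in the linear version is not.

    import bisect

    accounts = sorted([1001, 1007, 1013, 1020, 1042, 1077, 1101, 1204])

    def find_linear(target):
        """O(n): check every element until a match is found."""
        for account in accounts:
            if account == target:
                return True
        return False

    def find_binary(target):
        """O(log n): halve the search space each step; requires sorted input."""
        i = bisect.bisect_left(accounts, target)
        return i < len(accounts) and accounts[i] == target

    print(find_linear(1077), find_binary(1077))  # True True
    print(find_linear(9999), find_binary(9999))  # False False

For a small list like this the difference is negligible; the payoff appears as the data grows.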

Preemptive tuning is a guessing game; we are guessing how a compiler will optimize our code, when a processor will fetch and cache our executable, and where the actual hotspots will be. Unfortunately, our guesses are usually wrong. Perhaps the development team lead was really talking about this situation.

The tuning circumstances change once we have an application that can be tested. The question becomes: how far do we go to address performance hotspots? In other words, how fast is fast enough? For me, the balance being sought is between application delivery time and user productivity, and the benefits of tuning can be significant.

For example, consider an application used in a call center to look up information. This particular call center receives an average of 100 calls an hour, each averaging 3 minutes and requiring the operator to carry out 3 searches. This is not a particularly heavy volume for a call center, but I'm trying to be conservative.

It takes 5 representatives answering calls to keep up with this volume. Keeping up requires everything to operate perfectly: no system delays, no long-running calls, and so on. Otherwise a queue of calls will begin to build and callers will become frustrated, which is bad for business.

Suppose we shave just 500 milliseconds off each search in the lookup application. Since the application is used an average of 3 times per call, this would save 1.5 seconds per call. 500 milliseconds, half of a second, doesn't sound like much. Is this a meaningful improvement? Should we invest the effort to achieve it?

Applied over our 100 calls per hour this improvement would save 150 seconds, or 2.5 minutes per hour, representing about 0.8% of the 300 minutes of total call time. This would allow our 5 operators to handle the volume without a backlog building up and customers hanging up due to delays in answering.
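
For readers who want to verify the arithmetic, here is a quick sketch of the calculation using the figures above:

    # Working through the call center arithmetic from the example.
    calls_per_hour = 100
    searches_per_call = 3
    seconds_saved_per_search = 0.5
    call_length_seconds = 3 * 60

    saved_per_call = searches_per_call * seconds_saved_per_search      # 1.5 s
    saved_per_hour = calls_per_hour * saved_per_call                   # 150 s
    total_call_seconds_per_hour = calls_per_hour * call_length_seconds # 18,000 s

    print(saved_per_hour / 60)                                # 2.5 minutes
    print(100 * saved_per_hour / total_call_seconds_per_hour) # ~0.83 %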

On the development side, what is this worth? It may be difficult to arrive at a hard number, but a business stakeholder would certainly be able to articulate a rough ROI for such a change.

The basic concept to recognize is that the tuning effort is a one-time cost. The improved application accrues more savings every time it is used. If a developer's time costs three times as much as a call center representative's time, then a 40-hour effort to gain the 500 millisecond improvement would return the investment in less than a year. That calculation does not even consider customer goodwill!
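
As a rough sketch of that payback calculation (the operating schedule is my assumption for illustration; the payback period is sensitive to it):

    # A rough payback calculation for the example above. The operating-hours
    # figure is an assumption introduced here for illustration; the payback
    # period depends heavily on the call center's actual schedule.
    tuning_hours = 40
    developer_cost_multiple = 3          # developer time costs 3x a rep's time
    rep_minutes_saved_per_operating_hour = 2.5

    # Developer cost expressed in call-center-representative minutes:
    cost_in_rep_minutes = tuning_hours * developer_cost_multiple * 60  # 7,200

    # Operating hours needed to break even:
    break_even_hours = cost_in_rep_minutes / rep_minutes_saved_per_operating_hour
    print(break_even_hours)              # 2,880 operating hours

    # Assuming the center runs 10 hours a day, year-round:
    operating_hours_per_year = 10 * 365
    print(break_even_hours / operating_hours_per_year)  # ~0.79 years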

There are other ways to improve speed, most notably by upgrading hardware. For the example above, it is unlikely that a hardware upgrade netting the same 0.8% improvement would be cost-effective compared with the cost of code tuning. However, the option of upgrading or adding hardware must be considered alongside code tuning efforts, since hardware changes can sometimes deliver runtime improvements of orders of magnitude. I'll discuss a process for making such decisions in a future post.

So far I've focused on performance efficiency as a measure of runtime (execution) speed. Other performance improvements focus on resource usage. Reducing database load, memory consumption, file handle usage, and so forth may also benefit the system, yet may not be measurable in terms of execution speed. These improvements may pose a challenge when asking the business to budget for them. That, too, is a topic for another day.

A key lesson to take away from this discussion is that tuning must be done after there is a system to measure. Otherwise there is no way to assess the improvement or to be sure that a real, addressable bottleneck exists. After all, referring to the earlier example, if no identified bottleneck were adding 500 milliseconds of processing time, then any developer effort invested in removing an imagined bottleneck would be wasted.
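
As an illustration of measuring first, here is a minimal sketch using Python's built-in cProfile module; the functions are hypothetical stand-ins for a real call center application:

    # Measure before tuning: let the profiler, not intuition, identify
    # where time is actually spent. handle_call is a hypothetical stand-in
    # for the real application entry point.
    import cProfile
    import pstats
    import time

    def search_database(term):
        time.sleep(0.05)   # stands in for a real query

    def render_results(results):
        time.sleep(0.005)  # stands in for formatting/display work

    def handle_call():
        for term in ("name", "account", "history"):
            render_results(search_database(term))

    cProfile.run("handle_call()", "call.prof")
    # Print the functions that consumed the most cumulative time:
    pstats.Stats("call.prof").sort_stats("cumulative").print_stats(5)

In this toy version the profile would point squarely at search_database, which is exactly the kind of evidence that justifies a tuning effort.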
