
Archive for October, 2009

Design and Build Effort versus Run-time Efficiency

Saturday, October 17th, 2009

I recently overheard a development leader talking with a team of programmers about the trade-off between the speed of developing working code and the effort required to improve the run-time performance of that code.  His opinion was that it was not worth any extra effort to gain a few hundred milliseconds here or there.  I found myself wanting to debate the position, but it was not the right venue.

In my opinion a developer should not write inefficient code just because it is easier.  However, a developer must not tune code without evidence that the tuning effort will make a meaningful improvement to the overall efficiency of the application.  Guessing at where the hotspots are in an application usually leads to a lot of wasted effort.
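
As a concrete illustration (my sketch, not something from the conversation I overheard), gathering that evidence can be as simple as running the application under a profiler.  Here is a minimal example using Python’s standard cProfile module, where run_application is a hypothetical stand-in for the real entry point:

    import cProfile
    import pstats

    def run_application():
        # Hypothetical entry point standing in for the real application.
        total = 0
        for i in range(100000):
            total += i * i
        return total

    # Profile an actual run and report the functions that consume the
    # most time, rather than guessing at the hotspots.
    profiler = cProfile.Profile()
    profiler.enable()
    run_application()
    profiler.disable()
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)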

When I talk about designing and writing efficient code, I am really stressing the process of thinking about the macro-level algorithm being used.  Considering efficiency (e.g., big-O complexity) and spending some time looking for options that would represent a big-O step change is where design and development performance effort belongs.

For instance, during initial design or coding, it is worth finding an O(log n) alternative to an O(n) solution.  However, spending time searching for a slight improvement in an O(n) algorithm that is still O(n) is likely a waste of time.
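
To make the distinction concrete, here is a small sketch (my own example, assuming a large sorted list) of that kind of step change in Python: replacing an O(n) linear scan with an O(log n) binary search using the standard bisect module.

    import bisect

    sorted_values = list(range(0, 1000000, 2))  # a large, sorted list

    def contains_linear(values, target):
        # O(n): may examine every element.
        for value in values:
            if value == target:
                return True
        return False

    def contains_binary(values, target):
        # O(log n): halves the search space at each step.
        index = bisect.bisect_left(values, target)
        return index < len(values) and values[index] == target

    assert contains_linear(sorted_values, 123456) == contains_binary(sorted_values, 123456)

The payoff is structural: doubling the data doubles the work of the linear scan but adds only a single step to the binary search.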

Preemptive tuning is a guessing game; we are guessing how a compiler will optimize our code, when a processor will fetch and cache our executable, and where the actual hotspots will be.  Unfortunately, our guesses are usually wrong.  Perhaps the development team lead was really talking about this situation.
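
The antidote to guessing is measuring.  As a sketch (mine, not the team lead’s), Python’s timeit module can compare two candidate implementations directly instead of trusting our intuition about what the runtime will do:

    import timeit

    setup = "data = list(range(10000))"

    # A hand-rolled loop we might guess is fast...
    manual = timeit.timeit("t = 0\nfor x in data: t += x", setup=setup, number=1000)

    # ...versus the built-in, which the runtime may optimize in ways we
    # cannot predict by reading the source.
    builtin = timeit.timeit("sum(data)", setup=setup, number=1000)

    print("manual loop: %.4fs, built-in sum: %.4fs" % (manual, builtin))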

The tuning circumstances change once we have an application that can be tested.  The question becomes: how far do we go to address performance hotspots?  In other words, how fast is fast enough?  For me, the balance being sought is application delivery time versus user productivity, and the benefits of tuning can be valuable.

Net Neutrality: Is There a Reason for Concern?

Monday, October 12th, 2009

Lately the subject of net neutrality has garnered a lot of attention.  As businesses large and small create an ever-increasing set of offerings that require lots of bandwidth, there is concern that the Internet infrastructure may not be able to keep data flowing smoothly.

The core of the Internet’s bandwidth is provided by a few businesses.  These businesses exist to make money.  Fundamentally, when demand exceeds supply, the cost of the good or service goes up.  In this case those costs might appear as increased charges for access or as a slowing of one company’s data transfer relative to another’s.

As in many debates, there are two extreme positions, represented by individuals, companies and trade groups.  In this case the dimension being debated is whether there is a need to legislate a message-neutral Internet (Net Neutrality).

Being “neutral” means that all data flowing across the Internet would be given equal priority.  The data accessed by a doctor reading a CAT scan from a health records system would receive the same priority as the data streamed to someone watching a YouTube video.

Although the debate centers on whether net neutrality should be a requirement, the reasons for taking a position vary.  I’ll start with the concerns shared by those who want a neutral net to be guaranteed.

Why Net Neutrality is Important

The Internet has served as a large and level playing field.  With a very small investment, companies can create a web presence that allows them to compete as peers of companies many times their size.

Unlike the brick-and-mortar world, where the location, size, inventory, staff and ambiance of a store have direct monetary requirements, a web site is limited only by the creativity and effort of a small team with an idea.  If Amazon and Google had needed to create an infrastructure on par with Waldenbooks or IBM in order to get started, they would have had a much tougher journey.

Data on the Internet should be equally accessible.  It should not matter who my Internet Service Provider (ISP) is.  Nor should my ISP’s commercial relationships have a bearing on the service it provides me to access the services of my choice.

If I choose to use services from my ISP’s competitor, I should have access equivalent to using my ISP’s offering.  For instance, if I choose to use Google’s portal instead of my ISP’s portal, the data sent by Google must not be impeded in favor of customers requesting my ISP’s content.

Network discrimination would dismantle the fundamental design of the Internet.  One of the original design goals for the Internet was its ability to get data from one place to another without regard for the actual content.  In other words, the underlying transport protocol knows nothing about web pages, SSH sessions, videos, and Flash applications.  It is this service agnosticism that has allowed myriad services to be created without having to fundamentally reengineer the Internet backbone.
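
A tiny sketch makes the point (my example; the host and request are placeholders): a TCP connection carries whatever bytes the application hands it, and nothing in the transport layer knows or cares what those bytes mean.

    import socket

    # TCP moves opaque bytes; it has no notion of whether they encode a
    # web page, a video frame, or an SSH record.
    with socket.create_connection(("example.com", 80)) as conn:
        # We happen to speak HTTP here, but that is an agreement between
        # the two endpoints, invisible to the transport underneath.
        conn.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        print(conn.recv(1024))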

An infrastructure that once routinely carried only telnet and UUCP sessions now passes all manner of protocols and data.  Adding a layer of discrimination would require altering this higher-level agnosticism, since it is typically through inspection of the payload that one would decide to vary the level of service.  This would lead the Internet down a road away from its open, standards-based design.

There are other specific arguments made in favor of ensuring net neutrality.  In my opinion most of them relate to one of those I’ve mentioned.  So why oppose a net neutrality requirement?