
Archive for the ‘Software Development’ Category

Business Rules Forum Tutorials: Analytics and Events

Tuesday, November 3rd, 2009

This was the second of two pre-conference days offering a set of interesting tutorial sessions.  Although the choices were tough, I decided on Eric Siegel’s and David Luckham’s sessions.  Both were thought provoking.

Eric’s session, “Driving Decisions with Predictive Analytics: The Top Seven Business Applications” caught my attention due to its focus on analytics.  I have taken two data analysis courses as part of the Master’s program at Union Graduate College.  The courses, “Systems Modeling and Optimization” and “Data Mining” really piqued my interest in this field.

What was different about Eric’s presentation was its focus on real-world use of these techniques.  Understandably, he could not delve into the detail of a semester-long course.  He did a great job of introducing the basic concepts of data mining and then explored how these can be leveraged to build models that can then be used to drive business decisions.

Beyond explaining the basics of creating models (formatting data, choosing algorithms, training, testing), he discussed how the resulting model isn’t a magic bullet that will generate business rules.  Rather, the model provides the ability to make decisions, but those decisions must be created by the business.

I believe that leveraging predictive analytics will continue to grow as a key differentiator for businesses and a key feature of business rule engines.  Having a keen interest in this area, I look forward to helping businesses derive value from the growing set of analytical tools and techniques.

My afternoon session choice, delivered by David Luckham, was titled, “Complex Event Processing in An Event-Driven, Information World.”  Complex Event Processing (CEP) is not an area with which I am familiar and David’s presentation covered a broad cross-section of the field.

Professor Luckham (Emeritus) at Stanford has an amazing amount of knowledge regarding CEP.  He discussed its market, history, technology and his predictions for its future.  He flew through several presentations that make up a CEP course he teaches.  Given the amount of material he has on the topic, he allowed us to help tune his presentation based on our particular interests.

It is clear that he has a passion for CEP and a strong belief that it will grow into a core, and hence transparent, feature of all service-based networks.  He refers to this end state as “Holistic Event Processing” (HEP).

The power of the platform he describes would be amazing.  Although he did not compare the vision to mashups and environments such as Yahoo Pipes, the power of HEP would seem to extend well beyond the capabilities of those tools.

It will be interesting to see how this field and the products being created become part of our standard enterprise infrastructure.  There is a long way to go before we reach David’s vision.

Tomorrow the Business Rules Forum launches in earnest with lots of presentations and vendor demonstrations.  I’m looking forward to a variety of interesting discussions as the week goes on.


Design and Build Effort versus Run-time Efficiency

Saturday, October 17th, 2009

I recently overheard a development leader talking with a team of programmers about the trade-off between the speed of developing working code and the effort required to improve the run-time performance of the code.  His opinion was that it was not worth any extra effort to gain a few hundred milliseconds here or there.  I found myself wanting to debate the position but it was not the right venue.

In my opinion a developer should not write inefficient code just because it is easier.  However, a developer must not tune code without evidence that the tuning effort will make a meaningful improvement to the overall efficiency of the application.  Guessing at where the hotspots are in an application usually leads to a lot of wasted effort.
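
To make that concrete, here is a minimal sketch of my own (the names are invented) of gathering a crude timing before deciding anything is worth tuning; a real profiler is the more trustworthy tool:

```java
public class HotspotProbe {
    // Hypothetical stand-in for a region of code we suspect is slow.
    static long suspectedHotspot() {
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) {
            sum += i;
        }
        return sum;
    }

    public static void main(String[] args) {
        // Measure the suspect region instead of guessing where the time goes.
        long start = System.nanoTime();
        long result = suspectedHotspot();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("result=" + result + ", elapsed=" + elapsedMs + " ms");
    }
}
```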

When I talk about designing and writing efficient code I am really stressing the process of thinking about the macro-level algorithm that is being used.  Considering efficiency (e.g. big-O) and spending some time looking for options that would represent a big-O step change is where design and development performance effort belongs.

For instance, during initial design or coding, it is worth finding an O(log n) alternative to an O(n) solution.  However, spending time searching for a slight improvement in an O(n) algorithm that is still O(n) is likely a waste of time.
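
As a hypothetical illustration of that kind of step change, consider checking whether a sorted array contains a value (the names and data here are invented for the sketch):

```java
import java.util.Arrays;

public class LookupSketch {
    // O(n): scans every element until a match is found.
    static boolean containsLinear(int[] sorted, int target) {
        for (int value : sorted) {
            if (value == target) {
                return true;
            }
        }
        return false;
    }

    // O(log n): halves the search space on each comparison.
    static boolean containsBinary(int[] sorted, int target) {
        return Arrays.binarySearch(sorted, target) >= 0;
    }

    public static void main(String[] args) {
        int[] sorted = {2, 5, 8, 13, 21, 34};
        System.out.println(containsLinear(sorted, 13)); // true
        System.out.println(containsBinary(sorted, 13)); // true
    }
}
```

The point is the shape of the algorithm, not the constant factors; once the data is sorted, the O(log n) behavior comes almost for free.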

Preemptive tuning is a guessing game; we are guessing how a compiler will optimize our code, when a processor will fetch and cache our executable and where the actual hotspots will be.  Unfortunately our guesses are usually wrong. Perhaps the development team lead was really talking about this situation.

The tuning circumstances change once we have an application that can be tested.  The question becomes how far we go to address performance hotspots.  In other words, how fast is fast enough?  For me the balance being sought is application delivery time versus user productivity, and the benefits of tuning can be valuable. (more…)

A Decision Is No Place for Side-Effects

Monday, October 20th, 2008

As developers become more comfortable with their favorite syntax(es) they tend to favor terseness in coding.  This isn’t something limited to those writing programs.  The fact is, as people become more comfortable with a situation they look for ways to shorten interactions and bypass redundancies.

For example, as we become more comfortable with cooking, we may begin to approximate measurements.  We’ve measured a teaspoon of salt a hundred times.  We know that if we go slightly over or under it won’t ruin the recipe, so we just cup our hand and pour until it looks right.  At the gas pump I never press the “Pay by credit card” button.  Instead I insert my card and the machine recognizes what I am doing.  No need for the extra step. 

When communicating we shorten common phrases.  We may create compound words, contractions, or acronyms.  The goal is to reduce the overhead and increase entropy.  That is, we want each word we utter, or command we type, to do more for us.  When working with software we often refer to “power users” who know all the shortcuts to quickly access the program’s features.

Those who are new to the situation will need the more verbose and redundant approach.  Just because power users know how to bypass an application’s menus does not mean a novice can; the novice will still need the menus to help him or her learn the software.

In a similar light, experienced developers need to consider their junior brethren as programs are being created.  At a basic level the issue is that the software may be maintained by others and those developers may not be as experienced as the original authors. 

One high-risk programmatic shortening concerns side-effects in decisions. 
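
To illustrate the kind of shortening I mean (my own sketch, not code from any particular project), consider a counter update buried inside a condition:

```java
import java.util.Arrays;
import java.util.List;

public class SideEffectInDecision {
    private int attempts = 0;

    // Terse: the counter update is buried inside the decision.  Because of
    // short-circuit evaluation, attempts++ never runs when 'enabled' is
    // false, which is easy for a maintainer to miss.
    boolean terseCheck(boolean enabled, List<String> queue) {
        return enabled && attempts++ < 3 && !queue.isEmpty();
    }

    // Verbose: the side-effect is a separate statement and the decision only
    // reads state, so the counter is updated on every call.
    boolean verboseCheck(boolean enabled, List<String> queue) {
        attempts++;
        return enabled && attempts <= 3 && !queue.isEmpty();
    }

    public static void main(String[] args) {
        SideEffectInDecision demo = new SideEffectInDecision();
        List<String> queue = Arrays.asList("job");
        demo.terseCheck(false, queue);   // attempts stays 0
        demo.verboseCheck(false, queue); // attempts becomes 1
        System.out.println("attempts = " + demo.attempts);
    }
}
```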

(more…)

Unit Testing As a Standard Is Nondescript

Monday, September 8th, 2008

Often when I am discussing programming practices with developers, they are quick to mention their use of unit testing.  It is a badge of honor that they wear, and rightly so.  “I test my code.  I care about the quality of my work!”  Of course unit testing means that the test is exercising a small “unit” of code, typically a method or function.  Does the term tell us anything else?  Are all unit tests equivalent?

The tools and techniques developers use for unit tests differ, such as using frameworks versus more homegrown approaches, but many developers stress the importance of unit testing. However, saying that one subscribes to the use of unit tests is like saying that someone uses a motorized vehicle.  There is a lot of detail missing.

If you explore someone’s motorized vehicle it might turn out to be a motorcycle, car, train, boat, etc.  What is meant by motorized vehicle can vary widely making the term ineffective when trying to determine the vehicle’s ability to carry or tow something.  In the same fashion, if you dive more deeply into each person’s definition of unit tests, it is clear that one developer’s unit testing is not the same as another’s.

Here I’ll explore some details surrounding unit testing.  I’ll start off with my two invariants: predefined results and automation.

A test is less effective if its result is not defined before the test is run.  Each test must be defined with its input and expected result documented.  This means that no interpretation on the part of the tester can be involved.  The reason for this is simple: people can convince themselves that an answer is correct when they see it.  If a tester enters a test input without a definition of the correct answer, he or she may be willing to accept whatever result appears as correct.  It is human nature to accept the information we see presented.

A logical corollary is that unit tests should be automated.  In order to guarantee that the tester is not interpreting any results, let the computer do the check.  The result is a pass or a fail, with no murky gray area where the tester tries to decide whether the output is correct.  The automation also leads to the creation of a regression test suite that will grow with the code.  Finally, if the tests are automated, other tools can help us assess the completeness of the tests.
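
Here is a minimal sketch of what I mean, using JUnit-style assertions against a made-up discount calculation; the inputs and expected values are written down before the test ever runs, and the framework, not the tester, decides pass or fail:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class PriceCalculatorTest {

    // Hypothetical unit under test: applies a 10% discount to orders
    // of 100.00 or more.
    static double discountedTotal(double subtotal) {
        return subtotal >= 100.00 ? subtotal * 0.90 : subtotal;
    }

    @Test
    void discountAppliesAtThreshold() {
        // Expected result is fixed in advance; no interpretation involved.
        assertEquals(90.00, discountedTotal(100.00), 0.001);
    }

    @Test
    void noDiscountBelowThreshold() {
        assertEquals(99.99, discountedTotal(99.99), 0.001);
    }
}
```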

Beyond these two statements there are decisions that developers need to make about the types of unit tests that will be created.  These decisions impact the complexity of the tests that must be written as well as the number of tests required. (more…)

Program to the Interface

Monday, August 25th, 2008

I was teaching a class last week that introduced the attendees to a BPM tool. Since this particular BPM environment leverages the use of OO paradigms, basing the organization of its rules and workflow on classes, it is central to the tool’s operation to have an understanding of Object Oriented Design (OOD). When I teach this course I always spend a day covering OO terminology and concepts. This gives the class a common basis to proceed as we learn how to navigate, design, build and test within the BPM tool.

At one point during the week we were discussing an example of some code within the BPM tool that included a variable whose declaration (type) was an interface and whose definition, obviously, was a class. Although not key to the example, I pointed out the use of the interface declaration as a reminder regarding good practices, noting that developers should always “program to the interface”.

I continued on, but a student’s raised hand caught my eye. The individual asked me to repeat and elaborate on what I had mentioned. Although we had spent a day discussing OOD, it occurred to me that I may not have used that expression before. Even though we had discussed interfaces, heterogeneous collections, the Liskov Substitution Principle, and so forth, apparently I had not effectively stressed the value of using interfaces as data types in order to decouple our code from any particular implementation.

As we discussed the value of using interfaces as our data types the students caught on and were able to describe the advantages of this “rule”. Certainly Spring developers are familiar with the concept since Inversion of Control (IoC) is specifically used to decouple our code from other implementations.
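
As a rough sketch of that decoupling (the interface and class names here are invented), the service below knows only the interface, and the container, or a test, supplies the implementation:

```java
// Hypothetical interface and implementation for illustration only.
interface PaymentGateway {
    void charge(String accountId, double amount);
}

class TestGateway implements PaymentGateway {
    public void charge(String accountId, double amount) {
        System.out.println("charged " + accountId + " " + amount);
    }
}

// The service depends only on the interface; Spring's IoC container (or a
// plain constructor call in a unit test) supplies the implementation.
class BillingService {
    private final PaymentGateway gateway;

    BillingService(PaymentGateway gateway) {
        this.gateway = gateway;
    }

    void billCustomer(String accountId, double amount) {
        gateway.charge(accountId, amount);
    }

    public static void main(String[] args) {
        // Swapping gateways requires no change to BillingService itself.
        new BillingService(new TestGateway()).billCustomer("acct-42", 19.99);
    }
}
```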

Developers should favor the use of interfaces for data types rather than the implementation type even if it is not clear that another implementation will ever be leveraged or created. The practice of coding to interfaces will aid in consistently decoupling modules. The approach simplifies refactoring and allows for much more flexibility in future releases.
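
A small, hypothetical example of what this looks like in Java:

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class InterfaceTyping {
    public static void main(String[] args) {
        // Declared as the interface, defined with a concrete class.
        List<String> names = new ArrayList<>();
        names.add("Ada");
        names.add("Grace");

        // Swapping the implementation touches only this one line;
        // everything written against List is unaffected.
        List<String> alternative = new LinkedList<>(names);
        System.out.println(alternative);
    }
}
```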

(more…)