
Archive for the ‘Information Systems’ Category

Technology Luddite?

Thursday, February 11th, 2010

In a recent blog post, Tony Kontzer discusses a San Francisco Chronicle article about Jaron Lanier. The article describes Jaron’s concern about the limitations imposed on people by virtual reality and Web 2.0 structures, and mentions that some people have labeled Jaron a “Luddite”. Tony goes on to say that the term isn’t a bad one and that Luddites serve an important role, balancing the Pollyanna vision of technology’s value against its potential risks.

Although I agree with Tony’s defense of Jaron’s position, I think the “Luddite” term is being misused in Jaron’s case.  In fact, I disagree with an assessment that Jaron’s comments, as well as the well-articulated theme of his book, “You Are Not a Gadget,” equate to those of a technology Luddite.

Let us consider a definition.  Merriam-Webster includes in their definition of Luddite, “one who is opposed to especially technological change.”  However, Jaron’s point is not one that opposes technological change.  Instead, he is concerned that specific uses of technology and underlying limitations within the virtual (digital) world limit our human interaction and experience.  The limiting factors are imposed by computers and software.

Jaron’s thought process, bringing in examples from both his technology and musical backgrounds, does a great job of describing how computer programs constrain us. Developers have all felt the frustration of trying to add features to an existing program. Separate from the technologists’ issues, and this is key, computer hardware and software limitations also impose boundaries and set expectations for the people who interact with computers.

It is this latter aspect, the unintentional or intentional limiting of people’s uniqueness through the design and implementation of software, that concerns Jaron. I emphatically agree with him on this point! I believe most of us would accept that setting arbitrary boundaries around self-expression and creativity in the physical world can lower people’s quality of life. If the digital world does likewise, might we end up in the same place?


Project H.M.

Sunday, December 6th, 2009

I have been following the work being done by The Brain Observatory at UCSD to carefully section the brain of patient H.M. The patient, whose identity was protected while he was living, is known as the most studied amnesiac.  His amnesia was caused by brain surgery he underwent when he was 27 years old.

Screenshot from the live broadcast of Project H.M.'s brain slicing process

I won’t redocument his history; it is widely available on various websites, a few of which I’ll list at the end of this posting. For me, this study is fascinating because of the completely open way the work is being done. The sectioning of the brain was broadcast in real time on UCSD’s website, the process being followed is discussed in an open forum, and the data collected will be freely available. To me this shows the positive way the web can be leveraged.

I spend so much time in the world of commercial and proprietary software solutions that I sometimes end up with a distorted view of how the web is used.  Most of my interactions on the web are in the creation of applications that are owned and controlled by companies whose content is only available to individuals with some sort of financial relationship with the web site owner.

Clearly sites like Wikipedia make meaningful content available at no cost to the user.  However, in the case of this work at UCSD, there is an enormous expense in terms of equipment and people in order to collect, store, refine and publish this data.  This is truly a gift being offered to those with an interest in this field.  I’m sure that other examples exist and perhaps a valuable service would be one that helps to organize such informational sites.

If you are interested in more information about H.M. and the project at UCSD, here are some relevant websites:

Business Rules Forum 2009 Winds Down

Friday, November 6th, 2009

With the vendors gone, the main hall seemed spacious during the morning keynote and lunchtime presentations. James Taylor [of "Smart (Enough) Systems" fame] delivered the keynote address. He always has interesting insights on the state of agile systems design, particularly designs that leverage automated decisioning and workflow processes.

James made the point that systems need to be more agile given the higher levels of uncertainty our businesses now face. The need to adjust and react becomes more critical as our business strategies and goals flex with a changing environment. Essentially he seemed to be saying that businesses should not reduce their efforts to be agile during this economic downturn. Rather, it is more important to increase agility in order to respond quickly to shifting opportunities.

Following the keynote I attended Brian Dickinson’s session titled, “Business Event Driven Enterprises Rule!”  The description of the session in the conference guide had caught my attention since it mentioned “event partitioning” which was a phrase I had not heard used in terms of designing automated solutions for businesses.

I was glad that I went.  Brian was an energetic speaker!  It was clear that he was deeply committed and passionate about focusing on events rather than functionality when considering process automation.  The hour-long session flew by and it was apparent that we hadn’t made a dent in what he really wanted to communicate.

Brian was kind enough to give attendees a copy of his book, “Creating Customer Focused Organizations,” which I hope expands on the premise of his presentation today. Although event-driven thinking is quite natural when building event-driven UIs and multi-threaded applications, I have not spent time focusing on events when designing the business and database tiers of applications. For me, the first step in working with his central tenets will be to try applying them, at least in a small way, on my next architecture project.
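As a rough illustration of what focusing on business events, rather than a fixed functional flow, can look like in a middle tier, here is a minimal publish/subscribe sketch. The event, handlers and bus below are my own invented names, not taken from Brian’s book; the point is simply that each response to an event is partitioned into its own handler instead of being wired into a predefined sequence of function calls.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class EventPartitionDemo {

    // Hypothetical business event raised when an order arrives.
    static class OrderReceived {
        final String orderId;
        final double amount;
        OrderReceived(String orderId, double amount) {
            this.orderId = orderId;
            this.amount = amount;
        }
    }

    // Handlers subscribe to event types instead of being called from a fixed flow.
    interface EventHandler<E> {
        void handle(E event);
    }

    static class EventBus {
        private final Map<Class<?>, List<EventHandler<?>>> handlers = new HashMap<>();

        <E> void subscribe(Class<E> type, EventHandler<E> handler) {
            handlers.computeIfAbsent(type, t -> new ArrayList<>()).add(handler);
        }

        @SuppressWarnings("unchecked")
        <E> void publish(E event) {
            for (EventHandler<?> h :
                    handlers.getOrDefault(event.getClass(), Collections.emptyList())) {
                ((EventHandler<E>) h).handle(event);
            }
        }
    }

    public static void main(String[] args) {
        EventBus bus = new EventBus();
        // Each business response to the event lives in its own handler.
        bus.subscribe(OrderReceived.class,
                e -> System.out.println("Reserve inventory for order " + e.orderId));
        bus.subscribe(OrderReceived.class,
                e -> System.out.println("Check credit for amount " + e.amount));
        bus.publish(new OrderReceived("A-100", 250.00));
    }
}
```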

Business Rules Forum: In the Groove

Thursday, November 5th, 2009

The second day of the BRF is typically the most active. People are arriving throughout day 1 and start heading out on day 3. I’m attending RuleML, which follows on the heels of the BRF, so I’ll be here for all of it.

The morning keynote was delivered by Stephen Hendrick (IDC).  His presentation was titled, “BRMS at a Cross Roads: the Next Five Years.”  It was interesting hearing his vision of how BRMS vendors will need to position their offerings in order to be relevant for the future needs of businesses.

I did find myself wondering whether his vision was somewhat off in terms of timing. The move to offer unified (or at least integrated) solutions based on traditional BRMS, Complex Event Processing, Data Transformation and Analytics seemed well beyond where I find many clients to be in leveraging their existing BRMS capabilities.

Between discussions with attendees and the projects that Blue Slate’s customers hire us for, the current state of affairs seems to be more about understanding how to begin using a BRMS. I find many clients are just getting effective governance, rule harvesting and infrastructure support for BRMS integration started. Discussions of more complex functionality are premature for these organizations.

As usual, there were many competing sessions throughout the day that I wanted to attend.  I had to choose between these and spending some in-depth time with a few of the vendors.  One product that I really wanted to get a look at was JBoss Rules (Red Hat).

Similar to most Red Hat offerings there are free and fee-based versions of the product.  Also, as is typical between the two versions, the fee-based version is aimed at enterprises that do not want to deal with experimental or beta aspects of the product, instead preferring a more formal process of periodic production-worthy upgrades.  The fee-based offering also gets you support, beyond the user groups available to users of the free version.

The naming of the two versions is not clear to me. I believe that the fee-based version is called JBoss Rules while the free download is called JBoss Drools, owing to the fact that Red Hat used Drools as the basis for its rule engine offering. The Drools suite includes BPM, BRM and event processing components. My principal focus was the BRMS to start.

The premier open source rules offering (in my opinion) has come a long way since I last tried it over a year ago. The feature set includes a version control repository for the rules, somewhat user-friendly rule editing forms and a test harness. Work is underway to support templating for rules, which is vital for creating rules that business users can maintain easily. I will be downloading and working with this rule engine again shortly!
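For anyone curious what working with the engine looks like in code, here is a minimal sketch based on the Drools 5.x knowledge API that was current around this time (class names differ in later releases); the Order fact and the discount.drl rule file are hypothetical stand-ins for whatever facts and rules a real project would define.

```java
import org.drools.KnowledgeBase;
import org.drools.KnowledgeBaseFactory;
import org.drools.builder.KnowledgeBuilder;
import org.drools.builder.KnowledgeBuilderFactory;
import org.drools.builder.ResourceType;
import org.drools.io.ResourceFactory;
import org.drools.runtime.StatefulKnowledgeSession;

public class DiscountRulesDemo {

    // Hypothetical fact class that the rules in discount.drl reason over.
    public static class Order {
        private final double total;
        private double discount;
        public Order(double total) { this.total = total; }
        public double getTotal() { return total; }
        public double getDiscount() { return discount; }
        public void setDiscount(double discount) { this.discount = discount; }
    }

    public static void main(String[] args) {
        // Compile the rule source (DRL) found on the classpath.
        KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
        kbuilder.add(ResourceFactory.newClassPathResource("discount.drl"),
                     ResourceType.DRL);
        if (kbuilder.hasErrors()) {
            throw new IllegalStateException(kbuilder.getErrors().toString());
        }

        // Assemble a knowledge base and run the rules against a fact.
        KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase();
        kbase.addKnowledgePackages(kbuilder.getKnowledgePackages());
        StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession();
        try {
            Order order = new Order(1200.00);
            ksession.insert(order);
            ksession.fireAllRules();
            System.out.println("Discount applied: " + order.getDiscount());
        } finally {
            ksession.dispose();
        }
    }
}
```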

Business Rules Forum: Full Fledged Kickoff!

Wednesday, November 4th, 2009

Today the Business Rules Forum (BRF) kicked off for its 12th year.  Gladys Lam welcomed us and set the stage for an enlightening and engaging three days.  Jim Sinur (Gartner) gave the keynote address.  His expertise surrounding the entire field of Business Process Management (BPM), Business Rules Management (BRM) and Complex Event Processing (CEP) gives him significant insight into the industry and trends.

Jim’s talk was a call to action for product vendors and practitioners: the world has changed fundamentally, and being able to leverage what he called “weak signals” and the myriad events arriving from many sources is becoming a requirement for successful business operations. As always his talk was accompanied by a little humor and a lot of excellent supporting material.

During the day I attended three sessions and two of the vendor “Fun Labs”.  For me, the most interesting session of the ones I attended was given by Graham Witt (Ajlion).  He discussed his success with creating an approach of allowing business users to document rules using a structured natural language.  His basis was SBVR, but he reduced the complexity to create a practical solution.

Graham did a great job of walking us through a set of definitions for fact model, term, fact type and so forth. Using our understanding of these basic components of a structured rule, he explored how one can take an ambiguous statement, leverage the structure inherent in the fact model, and create an unambiguous statement that is still completely understandable to the business user.

His approach of creating a template for each type of rule made sense as a very effective way to give business users the flexibility to express different kinds of rules while staying within a structured syntax. It certainly seems like an approach worth exploring for getting us closer to a DRY (Don’t Repeat Yourself) process that carries rules from the requirements into the design and implementation phases of a rules-based project.
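To make the template idea concrete, here is a small sketch of what filling a per-rule-type template from fact-model terms might look like. The template text and vocabulary are my own invented examples, not Graham’s SBVR-based patterns; the point is only that a fixed sentence skeleton plus terms from the fact model yields rules that are structured yet still readable by the business.

```java
import java.util.Map;

public class RuleTemplateDemo {

    // A template for one rule type: an obligation ("must") constraint.
    // Placeholders in braces are filled with terms drawn from the fact model.
    static final String OBLIGATION_TEMPLATE =
            "Each {term} must have {quantity} {relatedTerm} if {condition}.";

    static String fill(String template, Map<String, String> slots) {
        String result = template;
        for (Map.Entry<String, String> slot : slots.entrySet()) {
            result = result.replace("{" + slot.getKey() + "}", slot.getValue());
        }
        return result;
    }

    public static void main(String[] args) {
        // The business user supplies only the terms; the structure is fixed.
        String rule = fill(OBLIGATION_TEMPLATE, Map.of(
                "term", "order",
                "quantity", "at least one",
                "relatedTerm", "order line",
                "condition", "the order has been submitted"));
        System.out.println(rule);
        // Each order must have at least one order line if the order has been submitted.
    }
}
```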

The vendor labs were also interesting.  I attended one run by Innovations Software Technology and another by RuleArts. (more…)

Business Rules Forum Tutorials: Analytics and Events

Tuesday, November 3rd, 2009

This was the second of two pre-conference days offering a set of interesting tutorial sessions.  Although the choices were tough, I decided on Eric Siegel’s and David Luckham’s sessions.  Both were thought provoking.

Eric’s session, “Driving Decisions with Predictive Analytics: The Top Seven Business Applications” caught my attention due to its focus on analytics.  I have taken two data analysis courses as part of the Master’s program at Union Graduate College.  The courses, “Systems Modeling and Optimization” and “Data Mining” really piqued my interest in this field.

What was different about Eric’s presentation was its focus on real-world use of these techniques.  Understandably, he could not delve into the detail of a semester-long course.  He did a great job of introducing the basic concepts of data mining and then explored how these can be leveraged to build models that can then be used to drive business decisions.

Beyond explaining the basics of creating models (formatting data, choosing algorithms, training, testing), he discussed how the resulting model isn’t a magic bullet that will generate business rules. Rather, the model provides the ability to make decisions, but those decisions must be defined by the business.
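A toy sketch of that division of labor, with invented numbers: the model produces a score, while the threshold and the action taken at that score are business choices, not outputs of the mining algorithm.

```java
public class ChurnDecisionDemo {

    // Stand-in for a trained predictive model: given a customer's features, it
    // returns an estimated probability of churn. The coefficients are made up
    // purely for illustration.
    static double churnScore(int monthsSinceLastOrder, int supportTickets) {
        double z = -2.0 + 0.15 * monthsSinceLastOrder + 0.40 * supportTickets;
        return 1.0 / (1.0 + Math.exp(-z));   // logistic score in [0, 1]
    }

    public static void main(String[] args) {
        double score = churnScore(10, 3);

        // The business, not the model, decides which score warrants which action,
        // e.g. by weighing the cost of outreach against the revenue at risk.
        final double OUTREACH_THRESHOLD = 0.60;
        if (score >= OUTREACH_THRESHOLD) {
            System.out.printf("Score %.2f: route customer to retention team%n", score);
        } else {
            System.out.printf("Score %.2f: no action%n", score);
        }
    }
}
```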

I believe that leveraging predictive analytics will continue to grow as a key differentiator for businesses and a key feature of business rule engines. Having a keen interest in this area, I look forward to helping businesses derive value from the growing set of analytical tools and techniques.

My afternoon session choice, delivered by David Luckham, was titled, “Complex Event Processing in An Event-Driven, Information World.”  Complex Event Processing (CEP) is not an area with which I am familiar and David’s presentation covered a broad cross-section of the field.

Professor Luckham (Emeritus) at Stanford has an amazing amount of knowledge regarding CEP.  He discussed its market, history, technology and his predictions for its future.  He flew through several presentations that make up a CEP course he teaches.  Given the amount of material he has on the topic, he allowed us to help tune his presentation based on our particular interests.

It is clear that he has a passion around CEP and a strong belief that it will grow into a core, hence transparent, feature of all service-based networks. He refers to this end state as “Holistic Event Processing” (HEP).

The power of the platform he describes would be amazing. Although he did not compare the vision to mashups and environments such as Yahoo Pipes, the power of HEP would seem to extend well beyond the operation of those tools.
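To give a flavor of the kind of pattern a CEP engine detects (my own toy example, not drawn from David’s material): a “complex” event, such as a suspected brute-force attack, is derived from a window of simple events, here three failed logins for the same account within 60 seconds.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

public class FailedLoginDetector {

    static final int THRESHOLD = 3;
    static final long WINDOW_MILLIS = 60_000;

    // Recent failed-login timestamps, kept per account.
    private final Map<String, Deque<Long>> failures = new HashMap<>();

    /** Feed one simple event; returns true when the complex event is detected. */
    boolean onFailedLogin(String account, long timestampMillis) {
        Deque<Long> window = failures.computeIfAbsent(account, a -> new ArrayDeque<>());
        window.addLast(timestampMillis);
        // Slide the window: discard failures older than 60 seconds.
        while (!window.isEmpty() && timestampMillis - window.peekFirst() > WINDOW_MILLIS) {
            window.removeFirst();
        }
        return window.size() >= THRESHOLD;
    }

    public static void main(String[] args) {
        FailedLoginDetector detector = new FailedLoginDetector();
        System.out.println(detector.onFailedLogin("alice", 0));       // false
        System.out.println(detector.onFailedLogin("alice", 10_000));  // false
        System.out.println(detector.onFailedLogin("alice", 20_000));  // true: complex event
    }
}
```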

It will be interesting to see how this field and the products being created become part of our standard enterprise infrastructure.  There is a long way to go before we reach David’s vision.

Tomorrow the Business Rules Forum launches in earnest with lots of presentations and vendor demonstrations.  I’m looking forward to a variety of interesting discussions as the week goes on.


Business Rules Forum Tutorial: Smart Use of Rules in Process

Monday, November 2nd, 2009

I was fortunate to be able to attend Kathy Long’s (Innovative Process Consulting, Inc.) presentation centered on the importance of working with business rules in the context of process. This was a preconference tutorial for the 12th Annual International Business Rules Forum. She did an excellent job of walking us through her experiences and thinking concerning process modeling and business rules. The three-hour session flew by!

Kathy spent a fair amount of time discussing four ways we typically model processes: Process Decomposition, Swim Lanes (e.g. BPMN light), Full BPMN and IGOE (both as high-level and as full flow models). IGOE (Inputs, Guides, Outputs and Enablers) was new to me, and I think she anticipated that the majority of the attendees would not be familiar with its use.

We took some time applying the IGOE approach in a couple of exercises, which helped us understand how these diagrams are used. She pointed out some interesting strengths of IGOEs versus swim lane-based flow diagrams. I would recommend that people who document processes take a look at IGOEs; they seem to be a useful tool in the analyst’s arsenal.
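As a rough way to capture what an IGOE records for an activity (my own simplified representation, not Kathy’s notation), each activity can be described by what it consumes, what guides or constrains it, what it produces, and what resources enable it:

```java
import java.util.List;

public class IgoeDemo {

    // A simplified record of one activity's IGOE: Inputs, Guides, Outputs, Enablers.
    record Igoe(String activity,
                List<String> inputs,    // what the activity consumes or transforms
                List<String> guides,    // policies, rules and regulations that direct it
                List<String> outputs,   // what it produces
                List<String> enablers)  // people, systems and facilities that perform it
    {}

    public static void main(String[] args) {
        Igoe adjudicateClaim = new Igoe(
                "Adjudicate insurance claim",
                List.of("Submitted claim", "Member eligibility data"),
                List.of("Benefit plan rules", "State regulations"),
                List.of("Adjudicated claim", "Payment instruction"),
                List.of("Claims examiner", "Claims processing system"));
        System.out.println(adjudicateClaim);
    }
}
```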

Beyond the exploration of process documentation, Kathy discussed the need to consider business rules within the context of process. She pointed out that we often embed decisions within our process flows as if they were separate from the processes themselves. Of course this is wrong; the decisions occur as part of the process.

She showed us how simplifying the flows, by removing the decision logic, makes a diagram clearer and more useful as documentation. Rather than being tied to low-level business rules, the process stands on its own.
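In code terms, this is the same move as pulling an inline decision out of a process step and delegating it to a rules or decision service. The interface and claim example below are hypothetical, intended only to show that the process logic no longer changes when the low-level rules do.

```java
public class ClaimProcessDemo {

    // Minimal fact passed to the decision.
    static class Claim {
        final double amount;
        final boolean preAuthorized;
        Claim(double amount, boolean preAuthorized) {
            this.amount = amount;
            this.preAuthorized = preAuthorized;
        }
    }

    // The decision is owned by the rules layer, not by the process flow.
    interface ApprovalDecisionService {
        boolean isAutoApprovable(Claim claim);
    }

    // One possible rule set; swapping it out does not change the process.
    static class SimpleApprovalRules implements ApprovalDecisionService {
        public boolean isAutoApprovable(Claim claim) {
            return claim.preAuthorized && claim.amount < 500.00;
        }
    }

    // The process step simply asks for the decision and routes on the answer.
    static void processClaim(Claim claim, ApprovalDecisionService decisions) {
        if (decisions.isAutoApprovable(claim)) {
            System.out.println("Auto-approve and schedule payment");
        } else {
            System.out.println("Route to examiner work queue");
        }
    }

    public static void main(String[] args) {
        processClaim(new Claim(250.00, true), new SimpleApprovalRules());
    }
}
```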

I cannot do justice to everything Kathy walked us through in the course of the three-hour session. You should check out her articles published in the Business Rules Journal (http://www.brcommunity.com/).

I am looking forward to more preconference tutorial sessions tomorrow!

Design and Build Effort versus Run-time Efficiency

Saturday, October 17th, 2009

I recently overheard a development leader talking with a team of programmers about the trade-off between the speed of developing working code and the effort required to improve the run-time performance of the code.  His opinion was that it was not worth any extra effort to gain a few hundred milliseconds here or there.  I found myself wanting to debate the position but it was not the right venue.

In my opinion a developer should not write inefficient code just because it is easier.  However, a developer must not tune code without evidence that the tuning effort will make a meaningful improvement to the overall efficiency of the application.  Guessing at where the hotspots are in an application usually leads to a lot of wasted effort.

When I talk about designing and writing efficient code I am really stressing the process of thinking about the macro-level algorithm being used. Considering efficiency (e.g. big-O complexity) and spending some time looking for options that represent a big-O step change is where design- and development-time performance effort belongs.

For instance, during initial design or coding, it is worth finding an O(log n) alternative to an O(n) solution.  However, spending time searching for a slight improvement in an O(n) algorithm that is still O(n) is likely a waste of time.
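As a concrete, deliberately simple illustration of that kind of step change: replacing a repeated linear scan with a binary search or a hash-based lookup changes a membership test from O(n) to roughly O(log n) or O(1), which matters far more than shaving constant factors off the scan itself.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class LookupComplexityDemo {

    // O(n) per lookup: scans the whole array in the worst case.
    static boolean containsLinear(int[] ids, int target) {
        for (int id : ids) {
            if (id == target) return true;
        }
        return false;
    }

    // O(log n) per lookup, after a one-time O(n log n) sort.
    static boolean containsBinary(int[] sortedIds, int target) {
        return Arrays.binarySearch(sortedIds, target) >= 0;
    }

    public static void main(String[] args) {
        int[] ids = {42, 7, 19, 3, 88, 61};

        System.out.println(containsLinear(ids, 61));    // true, O(n)

        int[] sorted = ids.clone();
        Arrays.sort(sorted);
        System.out.println(containsBinary(sorted, 61)); // true, O(log n)

        // O(1) expected per lookup once the set is built.
        Set<Integer> idSet = new HashSet<>();
        for (int id : ids) idSet.add(id);
        System.out.println(idSet.contains(61));         // true, O(1) expected
    }
}
```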

Preemptive tuning is a guessing game; we are guessing how a compiler will optimize our code, when a processor will fetch and cache our executable and where the actual hotspots will be.  Unfortunately our guesses are usually wrong. Perhaps the development team lead was really talking about this situation.

The tuning circumstances change once we have an application that can be tested. The question becomes how far we go to address performance hotspots. In other words, how fast is fast enough? For me the balance being sought is application delivery time versus user productivity, and the benefits of tuning can be valuable.

Net Neutrality: Is There a Reason for Concern?

Monday, October 12th, 2009

Lately the subject of net neutrality has garnered a lot of attention. As businesses large and small create an ever-increasing set of offerings that require lots of bandwidth, there is concern that the Internet infrastructure may not be able to keep data flowing smoothly.

The core of the Internet’s bandwidth is provided by a few businesses.  These businesses exist to make money.  Fundamentally, when demand exceeds supply the cost of the good or service goes up.  In this case those costs might appear as increased charges for access or a slowing of one company’s data transfer versus another.

As in many debates there are two extreme positions, represented by individuals, companies and trade groups. In this case the question being debated is whether there is a need to legislate a message-neutral Internet (Net Neutrality).

Being “neutral” means that all data flowing across the Internet is given equal priority. The data accessed by a doctor reading a CAT scan from a health records system would receive the same priority as the data for someone watching a YouTube video.

Although the debate surrounds whether net neutrality should be a requirement, the reasons for taking a position vary.  I’ll start with concerns being shared by those that want a neutral net to be guaranteed.

Why Net Neutrality is Important

The Internet has served as a large and level playing field.  With a very small investment, companies can create a web presence that allows them to compete as peers of companies many times their size.

Unlike the brick-and-mortar world where the location, size, inventory, staff and ambiance of a store have direct monetary requirements, a web site is limited by the creativity and effort of a small team with an idea.  If Amazon and Google had needed to create an infrastructure on par with Waldenbooks or IBM in order to get started they would have had a much tougher journey.

Data on the Internet should be equally accessible. It should not matter who my Internet Service Provider (ISP) is, nor should my ISP’s commercial relationships have a bearing on the service it provides me to access the services of my choice.

If I choose to use services from my ISP’s competitor, I should have access equivalent to using my ISP’s offering. For instance, if I choose to use Google’s portal instead of my ISP’s portal, the data sent by Google must not be impeded in favor of customers requesting my ISP’s content.

Network discrimination would dismantle the fundamental design of the Internet.  One of the original design goals for the Internet was its ability to get data from one place to another without regard for the actual content.  In other words, the underlying transport protocol knows nothing about web pages, SSH sessions, videos, and Flash applications.  It is this service agnosticism that has allowed myriad services to be created without having to fundamentally reengineer the Internet backbone.

An infrastructure that once routinely carried only telnet and UUCP sessions now passes all manner of protocols and data. Adding a layer of discrimination would require altering this higher-level agnosticism, since it is typically through inspection of the payload that one would decide to vary the level of service. This would lead the Internet down a road away from its open, standards-based design.
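A toy sketch of the distinction (entirely my own illustration, not a real router implementation): a neutral forwarding decision looks only at addressing information in the header, while a discriminating one has to peek into the payload to classify the service before prioritizing or throttling it.

```java
public class ForwardingDemo {

    static class Packet {
        final String destinationAddress;
        final byte[] payload;   // the transport layer does not interpret this
        Packet(String destinationAddress, byte[] payload) {
            this.destinationAddress = destinationAddress;
            this.payload = payload;
        }
    }

    // Neutral forwarding: the decision uses only header information.
    static String nextHopNeutral(Packet p) {
        return lookupRoute(p.destinationAddress);
    }

    // Discriminating forwarding: requires inspecting the payload to guess the
    // service (video, web, etc.) and then prioritizing or throttling it.
    static String nextHopDiscriminating(Packet p) {
        String guessedService = classifyPayload(p.payload);   // deep packet inspection
        String route = lookupRoute(p.destinationAddress);
        return "video".equals(guessedService) ? route + " (low-priority queue)" : route;
    }

    static String lookupRoute(String destination) {
        return "via gateway-1 to " + destination;             // placeholder routing table
    }

    static String classifyPayload(byte[] payload) {
        return payload.length > 1000 ? "video" : "web";       // crude stand-in classifier
    }

    public static void main(String[] args) {
        Packet p = new Packet("203.0.113.7", new byte[1500]);
        System.out.println(nextHopNeutral(p));
        System.out.println(nextHopDiscriminating(p));
    }
}
```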

There are other specific arguments made in favor of ensuring net neutrality. In my opinion most of them relate to the ones I’ve mentioned. So why oppose the requirement of net neutrality?

Why Do So Many Information Systems Implementations Fail and What Can Be Done to Improve Our Success Rate?

Thursday, May 7th, 2009

Information Systems (IS) implementations normally fail due to a lack of ownership, planning and execution by the organization.  The software and hardware tend to do what they are supposed to do.  Their features and limitations are typically known, at least if we take the time to investigate.  However, it is the organizational issues among and between business units and teams that actually require most of the effort when running an IS project.

The root causes of IS project failures include weak scoping, lack of executive management ownership, poor project management, lack of scope controls, improper infrastructure and inappropriate technology choices.

Weak scoping leads to a project whose requirements are too broad to be met by any single system. Similarly, the team will be too broad, with differing opinions as to the ultimate purpose of the project and therefore of the application. After all, if team members each interpret the goal(s) of the project differently, it will be difficult and time consuming to arrive at consensus on each aspect of the project.

Lack of executive management ownership leaves the project team without an effective sponsor.  Having such a sponsor helps mitigate the various issues that will arise as the project team seeks to design and implement a new system.  Maintaining focus on the business goals of the system along with championing the system and breaking down barriers between groups are major functions for the executive owner.

Project management is key to delivering on any sort of solution, technology or otherwise.  Knowing the team roles, responsibilities, timelines and dependencies allows for issues to be identified proactively and resolved in a timely manner.  Exit strategies must be defined that rely on understanding the current project risks.  Without effective project management the actual status of the project remains hidden.
