
Archive for the ‘Architecture’ Category

JavaOne 2010 Concludes

Saturday, September 25th, 2010

My last two days at JavaOne 2010 included some interesting sessions as well as some time spent in the pavilion.  I’ll mention a few of the session topics I found interesting along with some of the products that I intend to check out.

I attended a session on creating a web architecture focused on high-performance with low-bandwidth.  The speaker was tasked with designing a web-based framework for the government of Ethiopia.  He discussed the challenges that are presented by that country’s infrastructure – consider network speed on the order of 5Kbps between sites.  He also had to work with an IT group that, although educated and intelligent, did not have a lot of depth beyond working with an Oracle database’s features.

His solution allows developers to create fully functional web applications that keep exchanged payloads under 10K.  Although I understand the logic of the approach in this case, I’m not sure the technique would be practical in situations without such severe bandwidth and skill set limitations.

A basic theme during his talk was to keep the data and logic tightly co-located.  In his case it is all located in the database (PL/SQL) but he agreed that it could all be in the application tier (e.g. NoSQL).  I’m not convinced that this is a good approach to creating maintainable high-volume applications.  It could be that the domain of business applications and business verticals in which I often find myself differs from the use cases that are common to developers promoting the removal of tiers from the stack (whether removing the DB server or the mid-tier logic server).

One part of his approach with which I absolutely concur is pushing processing onto the client.  Using the client’s CPU seems like common sense to me.  The work lies in balancing that with security and bandwidth.  However, it can be done and I believe we will continue to find more effective ways to leverage all that computing power.

I also enjoyed a presentation on moving data between a data center and the cloud to perform heavy and intermittent processing.  The presenters did a great job of describing their trials and successes with leveraging the cloud to perform computationally expensive processing on transient data (e.g. they copy the data up each time they run the process rather than pay to store their data).  They also provided a lot of interesting information regarding options, advantages and challenges when leveraging the cloud (Amazon EC2 in this case).


Strange, Our Enterprise Architecture Continues to Operate

Wednesday, September 15th, 2010

For years we’ve been hearing about the importance of Enterprise Architecture (EA) frameworks.  The message from a variety of sources such as Zachman, TOGAF, HL7 and others is that businesses have to do an incredible amount of planning, documenting, discussing, benchmarking, evaluation (feel free to insert more up-front work here) before they will have a good basis to implement their IT infrastructure.  Once implemented, all the documentation must be maintained, updated, verified, expanded, improved (once again, insert more ongoing documentation management work here).  Oh, by the way, your company may want some actual IT work aligned with its core operations to be accomplished as part of all this investment.  I don’t believe such a dependency is covered well in any of the EA material.

I have always struggled with these EA frameworks.  Their overhead seems completely unreasonable. I agree that planning the IT infrastructure is necessary.  This is no different than planning any sort of infrastructure.  Where I get uncomfortable is in the incredible depth and precision these frameworks want to utilize.  IT infrastructures do not have the complete inflexibility of buildings or roads.  Computer systems have a malleability that allows them to be adapted over time, particularly if the adjustments are in line with the core design.

Before anyone concludes that I do not believe in having a defined IT architecture let me assure you that I consistently advocate having a well-planned and documented IT architecture to support the enterprise.  A happenstance of randomly chosen and deployed technologies and integrations is inefficient and expensive.  I just believe that such planning and documentation do not need to be anywhere near as heavyweight as the classical EA frameworks suggest.

So you can imagine, based on this brief background, that I was not particularly surprised when the Zachman lawsuit and subsequent response from Stan Locke (Metadata Systems Software) failed to stop EA progress within Blue Slate or any of our clients.  I’m not interested in rehashing what a variety of blogs have already discussed regarding the lawsuit.  My interest is simply that there may be more vapor in the value of these large frameworks than their purveyors would suggest.


Successful Process Automation: A Summary

Monday, July 26th, 2010

InformationWeek Analytics (http://analytics.informationweek.com/index) invited me to write about the subject of process automation.  The article, part of their series covering application architectures, was released in July of this year.  It provided an opportunity for me to articulate the key components that are required to succeed in the automation of business processes.

Both the business and IT are positioned to make or break the use of process automation tools and techniques.  The business must redefine its processes and operational rules so that work may be automated.  IT must provide the infrastructure and expertise to leverage the tools of the process automation trade.

Starting with the business, there must be clearly defined processes by which work gets done.  Each process must be documented, including the points where decisions are made.  The rules for those decisions must then be documented.  Repetitive, low-value and low-risk decisions are immediate candidates for automation.

A key milestone that must be reached in order to extract sustainable and meaningful value from process automation is Straight Through Processing (STP).  STP requires that work arrive from a third party and be automatically processed, returning a final decision and any necessary output (letter, claim payment, etc.) without a person being involved in handling the work.
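To make the idea concrete, here is a minimal sketch of the kind of gate that separates straight-through work from work routed to a person.  The WorkItem class, the routing thresholds and the Route outcomes are all invented for the illustration; in practice these rules would live in a BRMS rather than in code.

```java
// Hypothetical illustration of a straight-through-processing (STP) gate.
// WorkItem, Route and the thresholds below are invented for this sketch.
public class StpGate {

    public enum Route { AUTO_APPROVE, AUTO_DENY, MANUAL_REVIEW }

    public static class WorkItem {
        public final double amount;   // monetary exposure of the request
        public final double risk;     // risk score from 0.0 (low) to 1.0 (high)
        public WorkItem(double amount, double risk) {
            this.amount = amount;
            this.risk = risk;
        }
    }

    // Low-value, low-risk work is decided automatically; everything else
    // falls out of the STP path and lands in a person's queue.
    public Route route(WorkItem item) {
        if (item.amount <= 500.00 && item.risk < 0.2) {
            return Route.AUTO_APPROVE;
        }
        if (item.risk > 0.9) {
            return Route.AUTO_DENY;
        }
        return Route.MANUAL_REVIEW;
    }

    public static void main(String[] args) {
        StpGate gate = new StpGate();
        System.out.println(gate.route(new WorkItem(120.00, 0.05)));  // AUTO_APPROVE
        System.out.println(gate.route(new WorkItem(25000.00, 0.4))); // MANUAL_REVIEW
    }
}
```

The STP rate is then simply the fraction of work items that never hit the MANUAL_REVIEW branch.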

Most businesses begin using process automation tools without achieving any significant STP rate.  This is fine as a starting point so long as the business reviews the manual work, identifies groupings of work, focuses on the largest groupings (large may be based on manual effort, cost or simple volume) and looks to automate the decisions surrounding that group of work.  As STP is achieved for some work, the review process continues as more and more types of work are targeted for automation.

The end goal of process automation is to have people involved in truly exceptional, high-value, high-risk business decisions.  The business benefits by having people attend to items that truly matter rather than dealing with a large amount of background noise that lowers productivity, morale and client satisfaction.

All of this is great in theory but requires an information technology infrastructure that can meet these business objectives.


Database Refactoring and RDF Triples

Wednesday, May 12th, 2010

One of the aspects of agile software development that may lead to significant angst is the database.  Unlike refactoring code, the refactoring of the database schema involves a key constraint – state!  A developer may rearrange code to his or her heart’s content with little worry since the program will start with a blank slate when execution begins.  However, the database “remembers.”  If one accepts that each iteration of an agile process produces a production release then the stored data can’t be deleted as part of the next iteration.

The refactoring of a database becomes less and less trivial as project development continues.  While developers have IDEs to refactor code, change packages, and alter build targets, there are few tools for refactoring databases.

My definition of a database refactoring tool is one that assists the database developer by remembering the database transformation steps and storing them as part of the project – e.g. part of the build process.  This includes both the schema changes and data transformations.  Remember that the entire team will need to reproduce these steps on local copies of the database.  It must be as easy to incorporate a peer’s database schema changes, without losing data, as it is to incorporate the code changes.

These same data-centric complexities exist in waterfall approaches when going from one version to the next.  Whenever the database structure needs to change, a path to migrate the data has to be defined.  That transformation definition must become part of the project’s artifacts so that the data migration for the new version is supported as the program moves between environments (test, QA, load test, integrated test, and production).  Also, the database transformation steps must be automated and reversible!

That last point, the ability to roll back, is a key part of any rollout plan.  We must be able to back out changes.  It may be that the approach to a rollback is to create a full database backup before implementing the update, but that assumption must be documented and vetted (e.g. the approach of a full backup to support the rollback strategy may not be reasonable in all cases).
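As a rough sketch of what I mean, imagine each schema change checked into the project as a forward and a reverse SQL script, applied one step at a time against a version table so every environment replays the same history.  The file layout, the schema_version table and the assumption of one statement per script are illustrative choices for this sketch, not a specific tool’s conventions.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.stream.Stream;

// Sketch of a migration runner: each refactoring step lives in the project as
// a pair of SQL files, e.g. 003_add_status_column.up.sql and
// 003_add_status_column.down.sql (one statement per file for simplicity).
public class MigrationRunner {

    public static void main(String[] args) throws Exception {
        // args[0] = JDBC URL, args[1] = target schema version
        try (Connection conn = DriverManager.getConnection(args[0])) {
            migrateTo(conn, Path.of("db/migrations"), Integer.parseInt(args[1]));
        }
    }

    static void migrateTo(Connection conn, Path dir, int target) throws Exception {
        try (Statement st = conn.createStatement()) {
            st.execute("CREATE TABLE IF NOT EXISTS schema_version (version INT NOT NULL)");
            int current = currentVersion(st);
            // Roll forward one step at a time so every environment (dev, QA,
            // load test, production) replays exactly the same history...
            while (current < target) {
                current++;
                st.execute(readStep(dir, current, "up"));
                st.execute("INSERT INTO schema_version VALUES (" + current + ")");
            }
            // ...and roll back the same way, using the reverse scripts.
            while (current > target) {
                st.execute(readStep(dir, current, "down"));
                st.execute("DELETE FROM schema_version WHERE version = " + current);
                current--;
            }
        }
    }

    static int currentVersion(Statement st) throws Exception {
        try (ResultSet rs = st.executeQuery("SELECT MAX(version) FROM schema_version")) {
            return rs.next() ? rs.getInt(1) : 0;
        }
    }

    static String readStep(Path dir, int version, String direction) throws Exception {
        String prefix = String.format("%03d_", version);
        String suffix = "." + direction + ".sql";
        try (Stream<Path> files = Files.list(dir)) {
            Path script = files
                    .filter(p -> p.getFileName().toString().startsWith(prefix)
                            && p.getFileName().toString().endsWith(suffix))
                    .findFirst()
                    .orElseThrow(() -> new IllegalStateException("missing step " + version));
            return Files.readString(script);
        }
    }
}
```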

This database refactoring issue becomes very tricky when dealing with multiple versions of an application.  The transformation of the database schema and data must be done in a defined order.  As more and more data is stored, the process consumes more storage and processing resources.  This is the ETL side-effect of any system upgrade.  Its impact is simply felt more often (e.g. potentially during each iteration) in an agile project.

As part of exploring semantic technology, I am interested in contrasting this to a database that consists of RDF triples.  The semantic relationships of data do not change as often (if at all) as the relational constructs.  Many times we refactor a relational database as we discover concepts that require one-to-many or many-to-many relationships.

Is an RDF triple-based database easier to refactor than a relational database?  Is there something about the use of RDF triples that reduces the likelihood of a multiplicity change leading to a structural change in the data?  If so, using RDF as the data format could be a technique that simplifies the development of applications.  For now, let’s take a high-level look at a refactoring use case.
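As a small illustration of why I suspect the answer may be “yes,” consider a person’s phone number going from single-valued to multi-valued.  In a relational schema that typically means a new child table plus a data migration; with RDF triples it is simply another statement.  The sketch below uses Apache Jena, and the URIs are made up for the example.

```java
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Property;
import org.apache.jena.rdf.model.Resource;

// Illustration only: moving from "a person has one phone" to "a person has
// many phones" requires no structural change in an RDF store -- we simply
// assert additional triples. The namespace below is invented for the example.
public class PhoneTriples {
    public static void main(String[] args) {
        String ns = "http://example.com/schema#";
        Model model = ModelFactory.createDefaultModel();

        Resource alice = model.createResource("http://example.com/person/alice");
        Property phone = model.createProperty(ns, "phone");

        alice.addProperty(phone, "518-555-0100");  // the original single value
        alice.addProperty(phone, "518-555-0199");  // going to "many" is just another triple

        model.write(System.out, "TURTLE");
    }
}
```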


Business Ontologies and Semantic Technologies Class

Sunday, May 9th, 2010

Last week I had the pleasure of attending Semantic Arts’ training class entitled, “Designing and Building Business Ontologies.”  The course, led by Dave McComb and Simon Robe, provided an excellent introduction to semantic technologies and tools as well as coverage of ontological best practices.  I thoroughly enjoyed the 4-day class and achieved my principal goals in attending; namely, to understand the semantic web landscape, including technologies such as RDF, RDFS, OWL and SPARQL, as well as the current state of tools and products in this space.

Both Dave and Simon have a deep understanding of this subject area.  They also work with clients using this technology so they bring real-world examples of where the technology shines and where it has limitations.  I recommend this class to anyone who is seeking to reach a baseline understanding of semantic technologies and ontology strategies.

Why am I so interested in semantic web technology?  I am convinced that structuring information such that it can be consumed by systems, in ways more automated than current data storage and association techniques allow, is required in order to achieve any meaningful advancement in the field of information technology (IT).  Whether wiring together web services or setting up ETL jobs to create data marts, too much IT energy is wasted on repeatedly integrating data sources; essentially wiring related information together by hand because the computer is unable to wire it together autonomously!
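To hint at what that autonomy could look like, here is a hedged sketch using Apache Jena’s query API: once two systems publish their data as RDF against a shared vocabulary, a single SPARQL query spans both sources without a bespoke integration job.  The file names and vocabulary are invented for the example.

```java
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.QuerySolution;
import org.apache.jena.query.ResultSet;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.riot.RDFDataMgr;

// Sketch: two data sources exported as RDF (file names and vocabulary are
// assumptions) merged into one model and queried together with SPARQL,
// with no hand-written integration code between the two schemas.
public class CrossSourceQuery {
    public static void main(String[] args) {
        Model crm = RDFDataMgr.loadModel("crm-export.ttl");
        Model billing = RDFDataMgr.loadModel("billing-export.ttl");
        Model combined = ModelFactory.createUnion(crm, billing);

        String sparql =
            "PREFIX ex: <http://example.com/schema#> " +
            "SELECT ?name ?balance WHERE { " +
            "  ?customer ex:name ?name . " +
            "  ?customer ex:accountBalance ?balance . " +
            "}";

        try (QueryExecution qe = QueryExecutionFactory.create(sparql, combined)) {
            ResultSet results = qe.execSelect();
            while (results.hasNext()) {
                QuerySolution row = results.next();
                System.out.println(row.get("name") + " owes " + row.get("balance"));
            }
        }
    }
}
```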


Business Rules Forum 2009 Winds Down

Friday, November 6th, 2009

With the vendors gone the main hall seemed spacious during the morning keynote and lunchtime presentations.  James Taylor [of “Smart (Enough) Systems” fame] delivered the keynote address.  He always has interesting insights regarding the state of agile systems design, whether leveraging automated decisioning or workflow processes.

James made the point that systems need to be more agile given higher levels of uncertainty with which our businesses deal.  The need to adjust and react is more critical as our business strategies and goals flex to the changing environment.  Essentially he seemed to be saying that businesses should not reduce their efforts to be agile during this economic downturn.  Rather, it is more important to increase agility in order to respond quickly to shifting opportunities.

Following the keynote I attended Brian Dickinson’s session titled, “Business Event Driven Enterprises Rule!”  The description of the session in the conference guide had caught my attention since it mentioned “event partitioning” which was a phrase I had not heard used in terms of designing automated solutions for businesses.

I was glad that I went.  Brian was an energetic speaker!  It was clear that he was deeply committed and passionate about focusing on events rather than functionality when considering process automation.  The hour-long session flew by and it was apparent that we hadn’t made a dent in what he really wanted to communicate.

Brian was kind enough to give attendees a copy of his book, “Creating Customer Focused Organizations,” which I hope expands on the premise of his presentation today.  Although event-driven design is quite natural when building UIs and multi-threaded applications, I have not spent time focused on events when designing the business and database tiers of applications.  For me, the first step in working with his central tenets will be to try applying them, at least in a small way, on my next architecture project.

Business Rules Forum: In the Groove

Thursday, November 5th, 2009

The second day of the BRF is typically the most active.  People are arriving throughout day 1 and start heading out on day 3.  I’m attending RuleML, which follows on the heels of the BRF, so I’ll be here for all of it.

The morning keynote was delivered by Stephen Hendrick (IDC).  His presentation was titled, “BRMS at a Cross Roads: the Next Five Years.”  It was interesting hearing his vision of how BRMS vendors will need to position their offerings in order to be relevant for the future needs of businesses.

I did find myself wondering whether his vision was somewhat off in terms of timing.  The move to offer unified (or at least integrated) solutions based on traditional BRMS, Complex Event Processing, Data Transformation and Analytics seemed well beyond where I find many clients to be in terms of leveraging existing BRMS capabilities.

Between discussions with attendees and work on the projects Blue Slate’s customers hire us to deliver, the current state of affairs seems to be more about understanding how to begin using a BRMS.  I find many clients are just getting effective governance, rules harvesting and infrastructure support for BRMS integration started.  Discussions surrounding more complex functionality are premature for these organizations.

As usual, there were many competing sessions throughout the day that I wanted to attend.  I had to choose between these and spending some in-depth time with a few of the vendors.  One product that I really wanted to get a look at was JBoss Rules (Red Hat).

Similar to most Red Hat offerings, there are free and fee-based versions of the product.  As is typical, the fee-based version is aimed at enterprises that do not want to deal with experimental or beta aspects of the product, instead preferring a more formal process of periodic production-worthy upgrades.  The fee-based offering also gets you support, beyond the user groups available to users of the free version.

The naming of the two versions is not clear to me.  I believe that the fee-based version is called JBoss Rules while the free download is called JBoss Drools, owing to the fact that Red Hat used Drools as the basis for its rule engine offering.  The Drools suite includes BPM, BRM and Event Processing components.  My principal focus was the BRMS to start.

The premier open source rules offering (my opinion) has come a long way since I last tried it over a year ago.  The feature set includes a version control repository for the rules, somewhat user-friendly rule editing forms and a test harness.  Work is underway to support templating for rules, which is vital for creating rules that can be maintained easily by business users.  I will be downloading and working with this rule engine again shortly!
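For anyone curious what embedding the engine looks like, the outline below is based on the Drools 5-era API as I recall it.  The rule file name and the Claim fact class are placeholders of my own, so treat this as a rough sketch rather than a recipe.

```java
import org.drools.KnowledgeBase;
import org.drools.KnowledgeBaseFactory;
import org.drools.builder.KnowledgeBuilder;
import org.drools.builder.KnowledgeBuilderFactory;
import org.drools.builder.ResourceType;
import org.drools.io.ResourceFactory;
import org.drools.runtime.StatefulKnowledgeSession;

// Rough outline of loading and firing rules with Drools 5.
// The DRL file name and the Claim fact class are placeholders for this sketch.
public class ClaimRules {

    public static class Claim {
        public double amount;
        public boolean approved;
    }

    public static void main(String[] args) {
        // Compile the rule source (DRL) that the rule repository would manage.
        KnowledgeBuilder builder = KnowledgeBuilderFactory.newKnowledgeBuilder();
        builder.add(ResourceFactory.newClassPathResource("rules/claims.drl"), ResourceType.DRL);
        if (builder.hasErrors()) {
            throw new IllegalStateException(builder.getErrors().toString());
        }

        KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase();
        kbase.addKnowledgePackages(builder.getKnowledgePackages());

        // Insert facts and let the engine decide.
        StatefulKnowledgeSession session = kbase.newStatefulKnowledgeSession();
        try {
            Claim claim = new Claim();
            claim.amount = 250.00;
            session.insert(claim);
            session.fireAllRules();
            System.out.println("Approved? " + claim.approved);
        } finally {
            session.dispose();
        }
    }
}
```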

Business Rules Forum: Full Fledged Kickoff!

Wednesday, November 4th, 2009

Today the Business Rules Forum (BRF) kicked off for its 12th year.  Gladys Lam welcomed us and set the stage for an enlightening and engaging three days.  Jim Sinur (Gartner) gave the keynote address.  His expertise surrounding the entire field of Business Process Management (BPM), Business Rules Management (BRM) and Complex Event Processing (CEP) gives him significant insight into the industry and trends.

Jim’s talk was a call to action for product vendors and practitioners that the world has changed fundamentally and being able to leverage what he called “weak signals” and myriad events from many sources was becoming a requirement for successful business operations.  As always his talk was accompanied with a little humor and a lot of excellent supporting material.

During the day I attended three sessions and two of the vendor “Fun Labs”.  For me, the most interesting session of the ones I attended was given by Graham Witt (Ajlion).  He discussed his success with creating an approach of allowing business users to document rules using a structured natural language.  His basis was SBVR, but he reduced the complexity to create a practical solution.

Graham did a great job of walking us through a set of definitions for fact models, terms, fact types and so forth.  Using our understanding of the basic components of a structured rule, he explored how one can take ambiguous statements, leverage the structure inherent in the fact model, and create an unambiguous statement that was still completely understandable to the business user.

His approach of creating templates for each type of rule made sense as a very effective method to give the business user the flexibility of expressing different types of rules while staying within a structured syntax.  This certainly seems like an approach to be explored for getting us closer to a DRY (Don’t Repeat Yourself) process that moves rules from the requirements into the design and implementation phases of a rules-based project.
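A toy sketch of the template idea (my own simplification, not Graham’s actual notation): each rule type gets a sentence pattern with slots, and the slots may only be filled with terms drawn from the fact model, which is what keeps the resulting statement unambiguous.

```java
import java.util.Map;
import java.util.Set;

// Toy illustration of a structured-natural-language rule template.
// The template text, vocabulary and example rule are invented; a real
// SBVR-based approach would be considerably richer.
public class RuleTemplate {

    // Terms the business has defined in its fact model; anything outside this
    // vocabulary is rejected, keeping the resulting rule unambiguous.
    private static final Set<String> FACT_MODEL_TERMS =
            Set.of("claim", "claimant", "policy", "submission date", "effective date");

    private final String pattern;

    public RuleTemplate(String pattern) {
        this.pattern = pattern;
    }

    public String instantiate(Map<String, String> slots) {
        String rule = pattern;
        for (Map.Entry<String, String> slot : slots.entrySet()) {
            if (!FACT_MODEL_TERMS.contains(slot.getValue())) {
                throw new IllegalArgumentException(
                        "'" + slot.getValue() + "' is not a term in the fact model");
            }
            rule = rule.replace("<" + slot.getKey() + ">", slot.getValue());
        }
        return rule;
    }

    public static void main(String[] args) {
        RuleTemplate obligation = new RuleTemplate(
                "Each <subject> must have exactly one <attribute>.");
        System.out.println(obligation.instantiate(
                Map.of("subject", "claim", "attribute", "submission date")));
        // -> Each claim must have exactly one submission date.
    }
}
```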

The vendor labs were also interesting.  I attended one run by Innovations Software Technology and another by RuleArts.

Business Rules Forum Tutorials: Analytics and Events

Tuesday, November 3rd, 2009

This was the second of two pre-conference days offering a set of interesting tutorial sessions.  Although the choices were tough, I decided on Eric Siegel’s and David Luckham’s sessions.  Both were thought provoking.

Eric’s session, “Driving Decisions with Predictive Analytics: The Top Seven Business Applications” caught my attention due to its focus on analytics.  I have taken two data analysis courses as part of the Master’s program at Union Graduate College.  The courses, “Systems Modeling and Optimization” and “Data Mining” really piqued my interest in this field.

What was different about Eric’s presentation was its focus on real-world use of these techniques.  Understandably, he could not delve into the detail of a semester-long course.  He did a great job of introducing the basic concepts of data mining and then explored how these can be leveraged to build models that can then be used to drive business decisions.

Beyond explaining the basics around creating models (formatting data, algorithm choices, training, testing) he discussed how the resulting model isn’t a magic bullet that will generate business rules.  Rather, the model provides the ability to make decisions, but those decisions must be defined by the business.
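In code terms, the distinction might look like the sketch below: the model supplies a probability, while the business supplies the rule that turns the probability into an action.  The churn model, thresholds and actions are all invented for illustration.

```java
import java.util.function.ToDoubleFunction;

// Illustration of "the model scores, the business decides": the predictive
// model is just a function producing a churn probability; the decision rule
// wrapped around it (thresholds and actions) belongs to the business.
public class ChurnDecision {

    public record Customer(int monthsSinceLastOrder, int supportTickets) {}

    // Stand-in for a trained model -- in practice this would be a scoring
    // service or an exported model, not a hand-written formula.
    static final ToDoubleFunction<Customer> CHURN_MODEL =
            c -> Math.min(1.0, 0.05 * c.monthsSinceLastOrder() + 0.10 * c.supportTickets());

    // The business-owned decision rule layered on top of the model's score.
    static String decide(Customer c) {
        double score = CHURN_MODEL.applyAsDouble(c);
        if (score > 0.7) return "offer retention discount";
        if (score > 0.4) return "schedule account-manager call";
        return "no action";
    }

    public static void main(String[] args) {
        System.out.println(decide(new Customer(10, 3)));  // offer retention discount
        System.out.println(decide(new Customer(1, 0)));   // no action
    }
}
```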

I believe that leveraging predictive analytics will continue to grow as a key differentiator for businesses and a key feature leveraged in business rule engines.  Having a keen interest in this area, I look forward to assisting businesses derive value from the growing set of analytical tools and techniques.

My afternoon session choice, delivered by David Luckham, was titled, “Complex Event Processing in An Event-Driven, Information World.”  Complex Event Processing (CEP) is not an area with which I am familiar and David’s presentation covered a broad cross-section of the field.

Professor Luckham (Emeritus) at Stanford has an amazing amount of knowledge regarding CEP.  He discussed its market, history, technology and his predictions for its future.  He flew through several presentations that make up a CEP course he teaches.  Given the amount of material he has on the topic, he allowed us to help tune his presentation based on our particular interests.

It is clear that he has a passion around CEP and a strong belief that it will grow into a core, hence transparent, feature of all service-based networks.  He refers to this end state as “Holistic Event Processing” (HEP).

The power of the platform he describes would be amazing.  Although he did not compare the vision to Mashups and environments such as Yahoo Pipes, the power of HEP would seem to extend and go well beyond the operation of those tools.

It will be interesting to see how this field and the products being created become part of our standard enterprise infrastructure.  There is a long way to go before we reach David’s vision.

Tomorrow the Business Rules Forum launches in earnest with lots of presentations and vendor demonstrations.  I’m looking forward to a variety of interesting discussions as the week goes on.


Net Neutrality: Is There a Reason for Concern?

Monday, October 12th, 2009

Lately the subject of net neutrality has garnered a lot of attention.  As businesses large and small create an ever-increasing set of offerings that require lots of bandwidth, there is concern that the Internet infrastructure may not be able to keep data flowing smoothly.

The core of the Internet’s bandwidth is provided by a few businesses.  These businesses exist to make money.  Fundamentally, when demand exceeds supply the cost of the good or service goes up.  In this case those costs might appear as increased charges for access or a slowing of one company’s data transfer versus another.

As in many debates, there are two extreme positions represented by individuals, companies and trade groups.  In this case the dimension being debated is whether there is a need to legislate a message-neutral Internet (Net Neutrality).

The meaning of being “neutral” is that all data flowing across the Internet would be given equal priority.  The data being accessed by a doctor reading a CAT scan from a health records system would receive the same priority as someone watching a YouTube video.

Although the debate surrounds whether net neutrality should be a requirement, the reasons for taking a position vary.  I’ll start with concerns being shared by those that want a neutral net to be guaranteed.

Why Net Neutrality is Important

The Internet has served as a large and level playing field.  With a very small investment, companies can create a web presence that allows them to compete as peers of companies many times their size.

Unlike the brick-and-mortar world where the location, size, inventory, staff and ambiance of a store have direct monetary requirements, a web site is limited by the creativity and effort of a small team with an idea.  If Amazon and Google had needed to create an infrastructure on par with Waldenbooks or IBM in order to get started they would have had a much tougher journey.

Data on the Internet should be equally accessible.  It should not matter who my Internet Service Provider (ISP) is.  Nor should my ISP’s commercial relationships have a bearing on the service it provides to me to access the services of my choice.

If I choose to use services from my ISP’s competitor I should have access equivalent to using my ISP’s offering.  For instance, if I choose to use Google’s portal versus my ISP’s portal, the data sent by Google must not be impeded in favor of customers requesting my ISP’s content.

Network discrimination would dismantle the fundamental design of the Internet.  One of the original design goals for the Internet was its ability to get data from one place to another without regard for the actual content.  In other words, the underlying transport protocol knows nothing about web pages, SSH sessions, videos, and Flash applications.  It is this service agnosticism that has allowed myriad services to be created without having to fundamentally reengineer the Internet backbone.

An infrastructure that used to routinely carry only telnet and UUCP sessions now passes all manner of protocols and data.  Adding a layer of discrimination would require altering this higher-level agnosticism, since it is typically through inspection of the payload that one would arrive at decisions regarding varying the level of service.  This would lead the Internet down a road away from its open, standards-based design.

There are other specific arguments made in favor of ensuring net neutrality.  In my opinion most of them relate to one of these I’ve mentioned.  So why oppose the requirement of net neutrality?