
Archive for the ‘Software Composition’ Category

Initial Time to Build? Vision to Release in Days? Those Aren’t Relevant Measures for Business Agility!

Tuesday, April 15th, 2014

I routinely receive emails, tweets and snail mail from IT vendors that focus on how their solution accelerates the creation of business applications. They quote executives and technology leaders, citing case studies that compare the time to build an application on their platform versus others. They claim that this speed to release proves that their platform, tool or solution is “better” than the competition, and that it will provide similar value for my business’ application needs. The focus of these advertisements is consistently “how long did it take to initially create some application.”

This speed-to-create metric is pointless for a couple of reasons. First, an experienced developer will be fast when throwing together a solution using his or her preferred tools. Second, an application spends years in maintenance compared with the relatively brief time spent building its first version.

Build it fast!

Years ago I built applications for GE in C. I was fast. Once I had a good set of libraries, I could build applications for turbine parts catalogs in days. This was before windowing operating systems. There were frameworks from companies like Borland that made it trivial to create an interactive interface. I moved on to Visual Basic and SQLWindows development and was equally fast at creating client-server applications for GE’s field engineering team. I progressed to C++ and created CGI-based web applications. Again, building and deploying applications in days. Java followed, and I created and deployed applications using the leading edge Netscape browser and Java Applets in days and eventually hours for trivial interfaces.

Since 2000 I’ve used BPM and BRM platforms such as PegaRULES, Corticon, Appian and ILOG. I’ve developed applications using frameworks like Struts, JSF, Spring, Hibernate and the list goes on. Through all of this, I’ve lived the euphoria of the initial release and the pain of refactoring for release 2. In my experience not one of these platforms has simplified the refactoring of a weak design without a significant investment of time.

Speed to initial release is not a meaningful measure of a platform’s ability to support business agility. There is little pain in version 1 regardless of the design thought that goes into it. Agility is about versions 2 and beyond. Specifically, we need to understand what planning and practices during prior versions are necessary to promote agility in future versions.

(more…)

Semantics in the Cognitive Corporation™ Framework

Tuesday, August 14th, 2012

The graphic depicting the Cognitive Corporation™ does not highlight the use of semantic technology.  Semantic technology serves two key roles in the Cognitive Corporation™ – data storage (part of Know) and data integration, which connects all of the concepts.  I’ll explore the integration role since it is a vital part of supporting a learning organization.

In my last post I talked about the fact that integration between components has to be based on the meaning of the data, not simply passing compatible data types between systems.  Semantic technology supports this need through its design.  What key capabilities does semantic technology offer in support of integration?  Here I’ll highlight a few.

Logical and Physical Structures are (largely) Separate

Semantic technology loosens the tie between the logical and physical structures of the data compared with a relational database.  In a relational database it is the physical structure (columns and tables), along with the foreign keys, that maintains the relationships in the data.  Just think back to relational database design class: in a normalized database, all of the column values are related to the table’s key.

This tight tie between data relationships (logical) and data structure (physical) imposes a steep cost if a different set of logical data relationships is desired.  Traditionally, we create data marts and data warehouses to allow us to represent multiple logical data relationships.  These are copies of the data with differing physical structures and foreign key relationships.  We may need these new structures to allow us to report differently on our data or to integrate with different systems which need the altered logical representations.

With semantic data we can take a physical representation of the data (our triples) and apply different logical representations in the form of ontologies.  To be fair, the physical structure (subject->predicate->object) forces certain constraints on the ontology, but a logical transformation is far simpler than a physical one even with such constraints.
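As a rough illustration of this point, the sketch below applies two different ontologies (two logical views) to the same set of triples without copying or restructuring the underlying data.  It is a minimal example using current Apache Jena APIs; the file names and ontology contents are hypothetical stand-ins.

    import org.apache.jena.rdf.model.InfModel;
    import org.apache.jena.rdf.model.Model;
    import org.apache.jena.rdf.model.ModelFactory;
    import org.apache.jena.reasoner.ReasonerRegistry;
    import org.apache.jena.riot.RDFDataMgr;

    public class LogicalViews {
        public static void main(String[] args) {
            // One physical representation of the data (the triples)...
            Model data = RDFDataMgr.loadModel("people-data.ttl");                 // hypothetical file
            // ...and two alternative logical representations (ontologies)
            Model reportingOntology = RDFDataMgr.loadModel("reporting.owl");      // hypothetical file
            Model integrationOntology = RDFDataMgr.loadModel("integration.owl");  // hypothetical file

            // The same triples viewed through different ontologies, with no physical restructuring
            InfModel reportingView = ModelFactory.createInfModel(
                    ReasonerRegistry.getOWLReasoner(), reportingOntology, data);
            InfModel integrationView = ModelFactory.createInfModel(
                    ReasonerRegistry.getOWLReasoner(), integrationOntology, data);

            reportingView.write(System.out, "TURTLE");
        }
    }

Swapping the logical view is just a matter of pairing the same data with a different ontology; there is no equivalent of building a new data mart with its own physical schema.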

(more…)

Cognitive Corporation™ Innovation Lab Kickoff!

Friday, August 10th, 2012

I am excited to share the news that Blue Slate Solutions has kicked off a formal innovation program, creating a lab environment which will leverage the Cognitive Corporation™ framework and apply it to a suite of processes, tools and techniques.  The lab will use a broad set of enterprise technologies, applying the learning organization concepts implicit in the Cognitive Corporation’s™ feedback loop.

I’ve blogged a couple of times (see references at the end of this blog entry) about the Cognitive Corporation™.  The depiction has changed slightly but the fundamentals of the framework are unchanged.

[Cognitive Corporation™ depiction]

The focus is to create a learning enterprise, where the learning is built into the system integrations and interactions. Enterprises have been investing in these individual components for several years; however, they have not truly been integrating them in a way that promotes learning.

By “integrating” I mean allowing the system to understand the meaning of the data being passed between them.  Creating a screen in a workflow (BPM) system that presents data from a database to a user is not “integration” in my opinion.  It is simply passing data around.  This prevents the enterprise ecosystem (all the components) from working together and collectively learning.

I liken such connections to my taking a hand-written note in a foreign language, which I don’t understand, and typing the text into an email for someone who does understand the original language.  Sure, the recipient can read it, but I, representing the workflow tool passing the information from database (note) to screen (email) in this case, have no idea what the data means and cannot possibly participate in learning from it.  Integration requires understanding.  Understanding requires defined and agreed-upon semantics.

This is just one of the Cognitive Corporation™ concepts that we will be exploring in the lab environment.  We will also be looking at the value of these technologies within different horizontal and vertical domains.  Given our expertise in healthcare, finance and insurance, our team is well positioned to use the lab to explore the use of learning BPM in many contexts.

(more…)

Semantic Web Summit (East) 2010 Concludes

Thursday, November 18th, 2010

I attended my first semantic web conference this week, the Semantic Web Summit (East) held in Boston.  The focus of the event was how businesses can leverage semantic technologies.  I was interested in what people were actually doing with the technology.  The one and a half days of presentations were informative and diverse.

Our host was Mills Davis, a name that I have encountered frequently during my exploration of the semantic web.  He did a great job of keeping the sessions running on time as well as engaging the audience.  The presentations were generally crisp and clear.  In some cases the speaker presented a product that utilizes semantic concepts, describing its role in the value chain.  In other cases we heard about challenges solved with semantic technologies.

My major takeaways were: 1) semantic technologies work and are being applied to a broad spectrum of problems and 2) the potential business applications of these technologies are vast and ripe for creative minds to explore.  This all bodes well for people delving into semantic technologies: there is an infrastructure of tools and techniques available upon which to build, along with broad opportunities to benefit from applying them.

As a CTO with 20+ years focused on business environments, including application development, enterprise application integration, data warehousing, and business intelligence, I identified most closely with the sessions geared around intra-business and B2B uses of semantic technology.  There were other sessions looking at B2C which were well done but not applicable to the world in which I currently find myself working.

Talks by Dennis Wisnosky and Mike Dunn were particularly focused on the business value that can be achieved through the use of semantic technologies.  Further, they helped to define basic best practices that they apply to such projects.  Dennis in particular gave specific information around his processes and architecture while talking about the enormous value that his team achieved.

Heartening to me was the fact that these best practices, processes and architectures are not significantly different from those used in other enterprise system endeavors.  So we don’t need to retool all our understanding of good project management practices and infrastructure design; we just need to internalize where semantic technology best fits into the technology stack.

(more…)

Creating RDF Triples from a Relational Database

Thursday, August 5th, 2010

In an earlier blog entry I discussed the potential reduction in refactoring effort if our data is represented as RDF triples rather than relational structures.  As a way to give myself easy access to RDF data and to work more with semantic web tool features I have created a program to export relational data to RDF.

The program is really a proof-of-concept.  It takes a SQL query and converts the resulting rows into assertions of triples.  The approach is simple: given a SQL statement and a chosen primary key column (PK) to represent the instance for the exported data, assert triples with the primary key column value as the subject, the column names as the predicates and the non-PK column values as the objects.

Here is a brief sample taken from the documentation accompanying the code.

  • Given a table named people with the following columns and rows:
       id    name    age
       --    ----    ---
       1     Fred    20
       2     Martha  25
  • And a query of:  select id, name, age from people
  • And the primary key column set to: id
  • Then the asserted triples (shown using Turtle and skipping prefixes) will be:
       dsr:PK_1
          a       owl:Thing , dsr:RdbData ;
          rdfs:label "1" ;
          dsr:name "Fred" ;
          dsr:age "20" .

       dsr:PK_2
          a       owl:Thing , dsr:RdbData ;
          rdfs:label "2" ;
          dsr:name "Martha" ;
          dsr:age "25" .

You can see that the approach represents a quick way to convert the data.
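For readers who want a rough sense of how such an export could be implemented, here is a minimal sketch (not the actual proof-of-concept code) that pairs JDBC with current Apache Jena APIs.  The dsr namespace URI and class name are hypothetical placeholders.

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.ResultSetMetaData;
    import java.sql.SQLException;
    import java.sql.Statement;
    import org.apache.jena.rdf.model.Model;
    import org.apache.jena.rdf.model.ModelFactory;
    import org.apache.jena.rdf.model.Property;
    import org.apache.jena.rdf.model.Resource;
    import org.apache.jena.vocabulary.OWL;
    import org.apache.jena.vocabulary.RDF;
    import org.apache.jena.vocabulary.RDFS;

    public class RdbToRdf {
        static final String NS = "http://example.com/dsr#";   // hypothetical namespace URI

        public static Model export(Connection conn, String sql, String pkColumn) throws SQLException {
            Model model = ModelFactory.createDefaultModel();
            model.setNsPrefix("dsr", NS);
            Resource rdbData = model.createResource(NS + "RdbData");
            try (Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(sql)) {
                ResultSetMetaData meta = rs.getMetaData();
                while (rs.next()) {
                    // The primary key column's value becomes the subject
                    String pkValue = rs.getString(pkColumn);
                    Resource subject = model.createResource(NS + "PK_" + pkValue);
                    subject.addProperty(RDF.type, OWL.Thing);
                    subject.addProperty(RDF.type, rdbData);
                    subject.addProperty(RDFS.label, pkValue);
                    for (int i = 1; i <= meta.getColumnCount(); i++) {
                        String column = meta.getColumnName(i);
                        if (column.equalsIgnoreCase(pkColumn)) {
                            continue;   // PK already used as the subject and label
                        }
                        // Column name becomes the predicate; column value becomes a literal object
                        Property predicate = model.createProperty(NS, column);
                        subject.addProperty(predicate, rs.getString(i));
                    }
                }
            }
            return model;
        }
    }

Running the people query from the example above through a routine like this would yield triples along the lines of those shown.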

(more…)

My First Semantic Web Program

Saturday, June 5th, 2010

I have created my first slightly interesting, to me anyway, program that uses some semantic web technology.  Of course I’ll look back on this in a year and cringe, but for now it represents my understanding of a small set of features from Jena and Pellet.

The basis for the program is an example program that is described in Hebeler, Fischer et al.’s book “Semantic Web Programming” (ISBN: 047041801X).  The intent of the program is to load an ontology into three models, each running a different level of reasoner (RDF, RDFS and OWL), and output the resulting assertions (triples).

I made a couple of changes to the approach used in the book’s sample.  First, I allow any supported input file format to be loaded automatically (you don’t have to tell the program what format is being used).  Second, I report the actual differences between the models rather than just showing all the resulting triples.
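A rough sketch of that idea might look like the following.  It uses current Apache Jena APIs rather than the book’s original code, and for brevity it relies on Jena’s built-in reasoners (my program pairs Jena with Pellet); the input file is supplied on the command line and its format is detected automatically.

    import org.apache.jena.rdf.model.InfModel;
    import org.apache.jena.rdf.model.Model;
    import org.apache.jena.rdf.model.ModelFactory;
    import org.apache.jena.reasoner.ReasonerRegistry;
    import org.apache.jena.riot.RDFDataMgr;

    public class ReasonerDiff {
        public static void main(String[] args) {
            // Format (RDF/XML, Turtle, N-Triples, ...) is detected automatically
            Model base = RDFDataMgr.loadModel(args[0]);

            // RDFS-level and OWL-level inference over the same base (RDF-level) model
            InfModel rdfs = ModelFactory.createInfModel(ReasonerRegistry.getRDFSReasoner(), base);
            InfModel owl = ModelFactory.createInfModel(ReasonerRegistry.getOWLReasoner(), base);

            // Report only the differences between reasoning levels, not every resulting triple
            System.out.println("Added by RDFS reasoning:");
            rdfs.difference(base).write(System.out, "TURTLE");
            System.out.println("Added by OWL reasoning beyond RDFS:");
            owl.difference(rdfs).write(System.out, "TURTLE");
        }
    }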

As I worked on the code, which is currently housed in one uber-class (that’ll have to be refactored!), I realized that there will be lots of reusable “plumbing” code that comes with this type of work.  Setting up models with various reasoners, loading ontologies, reporting triples, interfacing to triple stores, and so on will become nuisance code to write.

Libraries like Jena help, but they abstract at a low level.  I want a semantic workbench that makes playing with the various libraries and frameworks easy.  To that end I’ve created a Sourceforge project called “Semantic Workbench”.

I intend for the Semantic Workbench to provide a GUI environment for manipulating semantic web technologies. Developers and power users would be able to use such a tool to test ontologies, try various reasoners and validate queries.  Developers could use the workbench’s source code to understand how to utilize frameworks like Jena or reasoner APIs like that of Pellet.
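To give a concrete sense of the kind of check the workbench is meant to wrap in a GUI, here is a minimal command-line sketch that validates and runs a SPARQL query against an ontology using Jena’s ARQ API; the file name is a hypothetical placeholder.

    import org.apache.jena.query.Query;
    import org.apache.jena.query.QueryExecution;
    import org.apache.jena.query.QueryExecutionFactory;
    import org.apache.jena.query.QueryFactory;
    import org.apache.jena.query.ResultSetFormatter;
    import org.apache.jena.rdf.model.Model;
    import org.apache.jena.riot.RDFDataMgr;

    public class QueryCheck {
        public static void main(String[] args) {
            Model model = RDFDataMgr.loadModel("insurance.owl");   // hypothetical example ontology
            String sparql = "SELECT ?s ?label "
                    + "WHERE { ?s <http://www.w3.org/2000/01/rdf-schema#label> ?label } "
                    + "LIMIT 10";
            Query query = QueryFactory.create(sparql);   // throws QueryParseException if the syntax is bad
            try (QueryExecution qe = QueryExecutionFactory.create(query, model)) {
                ResultSetFormatter.out(System.out, qe.execSelect());   // print the results as a text table
            }
        }
    }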

I invite other interested people to join the Sourceforge project. The project’s URL is: http://semanticwb.sourceforge.net/

On the data side, in order to have a rich semantic test data set to utilize, I’ve started an ontology that I hope to grow into an interesting example.  I’m using the insurance industry as its basis.  The rules around insurance and the variety of concepts should provide a rich set of classes, attributes and relationships for modeling.  My first version of this example ontology is included with the sample program.

Finally, I’ve added a semantic web section to my website where I’ll maintain links to useful information I find as well as sample code or files that I think might be of interest to other developers.  I’ve placed the sample program and ontology described earlier in this post on that page along with links to a variety of resources.

My site’s semantic web page’s URL is: http://monead.com/semantic/
The URL for the page describing the sample program is: http://monead.com/semantic/proj_diffinferencing.html

Database Refactoring and RDF Triples

Wednesday, May 12th, 2010

One of the aspects of agile software development that may lead to significant angst is the database.  Unlike refactoring code, the refactoring of the database schema involves a key constraint – state!  A developer may rearrange code to his or her heart’s content with little worry since the program will start with a blank slate when execution begins.  However, the database “remembers.”  If one accepts that each iteration of an agile process produces a production release, then the stored data can’t be deleted as part of the next iteration.

The refactoring of a database becomes less and less trivial as project development continues.  While developers have IDEs to refactor code, change packages, and alter build targets, there are few tools for refactoring databases.

My definition of a database refactoring tool is one that assists the database developer by remembering the database transformation steps and storing them as part of the project – e.g. part of the build process.  This includes both the schema changes and data transformations.  Remember that the entire team will need to reproduce these steps on local copies of the database.  It must be as easy to incorporate a peer’s database schema changes, without losing data, as it is to incorporate the code changes.

These same data-centric complexities exist in waterfall approaches when going from one version to the next.  Whenever the database structure needs to change, a path to migrate the data has to be defined.  That transformation definition must become part of the project’s artifacts so that the data migration for the new version is supported as the program moves between environments (test, QA, load test, integrated test, and production).  Also, the database transformation steps must be automated and reversible!

That last point, the ability to rollback, is a key part of any rollout plan.  We must be able to back out changes.  It may be that the approach to a rollback is to create a full database backup before implementing the update, but that assumption must be documented and vetted (e.g. the approach of a full backup to support the rollback strategy may not be reasonable in all cases).
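To make the idea concrete, here is a small hypothetical sketch (plain Java, not any particular migration tool) of transformation steps that live with the project, are applied in order, and can be reversed.  The table, versions and SQL are illustrative only.

    import java.sql.Connection;
    import java.sql.Statement;
    import java.util.List;

    public class MigrationRunner {
        // A single, reversible schema or data transformation step
        record Migration(String version, String apply, String rollback) {}

        // Steps are kept in source control with the code, so every developer and every
        // environment (test, QA, load test, integrated test, production) replays the
        // same transformations in the same order.
        static final List<Migration> STEPS = List.of(
            new Migration("2010.05.01",
                "ALTER TABLE people ADD COLUMN email VARCHAR(255)",
                "ALTER TABLE people DROP COLUMN email"),
            new Migration("2010.05.12",
                "UPDATE people SET email = LOWER(name) || '@example.com' WHERE email IS NULL",
                "UPDATE people SET email = NULL")   // coarse reversal, for illustration only
        );

        public static void applyAll(Connection conn) throws Exception {
            try (Statement stmt = conn.createStatement()) {
                for (Migration m : STEPS) {
                    stmt.execute(m.apply());                 // forward transformation
                }
            }
        }

        public static void rollbackAll(Connection conn) throws Exception {
            try (Statement stmt = conn.createStatement()) {
                for (int i = STEPS.size() - 1; i >= 0; i--) {
                    stmt.execute(STEPS.get(i).rollback());   // reverse, in the opposite order
                }
            }
        }
    }

Whether the rollback is a scripted reverse step, as sketched here, or a full database backup, the point is that the choice is recorded and vetted as part of the project’s artifacts.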

This database refactoring issue becomes very tricky when dealing with multiple versions of an application.  The transformation of the database schema and data must be done in a defined order.  As more and more data is stored, the process consumes more storage and processing resources.  This is the ETL side-effect of any system upgrade.  Its impact is simply felt more often (e.g. potentially during each iteration) in an agile project.

As part of exploring semantic technology, I am interested in contrasting this to a database that consists of RDF triples.  The semantic relationships of data do not change as often (if at all) as the relational constructs.  Many times we refactor a relational database as we discover concepts that require one-to-many or many-to-many relationships.

Is an RDF triple-based database easier to refactor than a relational database?  Is there something about the use of RDF triples that reduces the likelihood of a multiplicity change leading to a structural change in the data?  If so, using RDF as the data format could be a technique that simplifies the development of applications.  For now, let’s take a high-level look at a refactoring use case.
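As a high-level sketch of why the question is interesting (not the specific use case explored in the rest of the post), consider moving from “a person has one phone number” to “a person has many” when the data is stored as triples.  In Jena terms this is just another assertion with the same predicate; the namespace and property names below are hypothetical.

    import org.apache.jena.rdf.model.Model;
    import org.apache.jena.rdf.model.ModelFactory;
    import org.apache.jena.rdf.model.Property;
    import org.apache.jena.rdf.model.Resource;

    public class MultiplicityExample {
        public static void main(String[] args) {
            String ns = "http://example.com/demo#";            // hypothetical namespace
            Model model = ModelFactory.createDefaultModel();
            Resource fred = model.createResource(ns + "Fred");
            Property phone = model.createProperty(ns, "phone");

            // Today the requirement is "one phone number per person"
            fred.addProperty(phone, "555-0100");

            // When the requirement becomes one-to-many, we simply assert more triples
            // with the same predicate -- no new table, join table, or foreign key change
            fred.addProperty(phone, "555-0199");

            model.write(System.out, "TURTLE");
        }
    }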

(more…)

Business Ontologies and Semantic Technologies Class

Sunday, May 9th, 2010

Last week I had the pleasure of attending Semantic Arts’ training class entitled, “Designing and Building Business Ontologies.”  The course, led by Dave McComb and Simon Robe, provided an excellent introduction to semantic technologies and tools as well as coverage of ontological best practices.  I thoroughly enjoyed the 4-day class and achieved my principal goals in attending, namely to understand the semantic web landscape, including technologies such as RDF, RDFS, OWL, and SPARQL, as well as the current state of tools and products in this space.

Both Dave and Simon have a deep understanding of this subject area.  They also work with clients using this technology so they bring real-world examples of where the technology shines and where it has limitations.  I recommend this class to anyone who is seeking to reach a baseline understanding of semantic technologies and ontology strategies.

Why am I so interested in semantic web technology?  I am convinced that structuring information such that it can be consumed by systems, in ways more automated than current data storage and association techniques allow, is required in order to achieve any meaningful advancement in the field of information technology (IT).  Whether wiring together web services or setting up ETL jobs to create data marts, too much IT energy is wasted on repeatedly integrating data sources, essentially manually wiring together related information because the computer cannot wire it together autonomously!

(more…)

Technology Luddite?

Thursday, February 11th, 2010

In a recent blog post, Tony Kontzer discusses a San Francisco Chronicle article about Jaron Lanier.  The article covers Jaron’s concern regarding limitations imposed on people by virtual reality and Web 2.0 structures.  It also mentions that some people have labeled Jaron a “Luddite”.  Tony goes on to say that the term isn’t a bad one and that Luddites serve an important role, balancing the Pollyanna vision of technology’s value against its potential risks.

Although I agree with Tony’s defense of Jaron’s position, I think the “Luddite” term is being misused in Jaron’s case.  In fact, I disagree with an assessment that Jaron’s comments, as well as the well-articulated theme of his book, “You Are Not a Gadget,” equate to those of a technology Luddite.

Let us consider a definition.  Merriam-Webster includes in their definition of Luddite, “one who is opposed to especially technological change.”  However, Jaron’s point is not one that opposes technological change.  Instead, he is concerned that specific uses of technology and underlying limitations within the virtual (digital) world limit our human interaction and experience.  The limiting factors are imposed by computers and software.

Jaron’s thought process, bringing in examples from both his technology and musical backgrounds, does a great job of describing how computer programs constrain us.  Developers have experienced this frustration when trying to add features to an existing program.  Separate from the technologists’ issues, and this is key, computer hardware and software limitations also impose boundaries and set expectations for people who interact with computers.

It is this latter aspect, the unintentional or intentional limiting of people’s uniqueness due to the design and implementation of software, that concerns Jaron.  I emphatically agree with him on this point!  I believe that most of us would accept that setting arbitrary boundaries around self-expression and creativity in the physical world can lower people’s quality of life.  If the digital world does likewise, might we end up in the same place?

(more…)

Business Rules Forum Tutorials: Analytics and Events

Tuesday, November 3rd, 2009

This was the second of two pre-conference days offering a set of interesting tutorial sessions.  Although the choices were tough, I decided on Eric Siegel’s and David Luckham’s sessions.  Both were thought provoking.

Eric’s session, “Driving Decisions with Predictive Analytics: The Top Seven Business Applications” caught my attention due to its focus on analytics.  I have taken two data analysis courses as part of the Master’s program at Union Graduate College.  The courses, “Systems Modeling and Optimization” and “Data Mining” really piqued my interest in this field.

What was different about Eric’s presentation was its focus on real-world use of these techniques.  Understandably, he could not delve into the detail of a semester-long course.  He did a great job of introducing the basic concepts of data mining and then explored how these can be leveraged to build models that can then be used to drive business decisions.

Beyond explaining the basics around creating models (formatting data, algorithm choices, training, testing), he discussed how the resulting model isn’t a magic bullet that will generate business rules.  Rather, from the model comes the ability to make decisions, but those decisions must be created by the business.

I believe that predictive analytics will continue to grow as a key differentiator for businesses and as a key feature within business rule engines.  Having a keen interest in this area, I look forward to helping businesses derive value from the growing set of analytical tools and techniques.

My afternoon session choice, delivered by David Luckham, was titled, “Complex Event Processing in An Event-Driven, Information World.”  Complex Event Processing (CEP) is not an area with which I am familiar and David’s presentation covered a broad cross-section of the field.

Professor Luckham (Emeritus) at Stanford has an amazing amount of knowledge regarding CEP.  He discussed its market, history, technology and his predictions for its future.  He flew through several presentations that make up a CEP course he teaches.  Given the amount of material he has on the topic, he allowed us to help tune his presentation based on our particular interests.

It is clear that he has a passion around CEP and a strong belief that it will grow into a core, hence transparent, feature of all service-based networks.  He refers to this end state as “Holistic Event Processing” (HEP).

The power of the platform he describes would be amazing.  Although he did not compare the vision to mashups and environments such as Yahoo Pipes, the power of HEP would seem to extend well beyond the operation of those tools.

It will be interesting to see how this field and the products being created become part of our standard enterprise infrastructure.  There is a long way to go before we reach David’s vision.

Tomorrow the Business Rules Forum launches in earnest with lots of presentations and vendor demonstrations.  I’m looking forward to a variety of interesting discussions as the week goes on.
