
Posts Tagged ‘system integration’

Medicaid Managed Care Congress Conversations Highlight the Value of Data Federation

Thursday, May 22nd, 2014

Photo of Scott, Chris and Dave at MMCC 2014

This week I had the opportunity to attend the Medicaid Managed Care Congress (MMCC) in Baltimore, MD and the privilege of speaking with a variety of leaders from provider, payer, and services organizations. With me from Blue Slate Solutions were Scott Van Buren and Chris Garber. A common theme we heard as we spoke with the attendees was the challenge of bringing data together from multiple sources and making sense of that information.

Medicaid is potentially the most complex government program that exists in the United States. There are federal and state aspects as well as portions that are handled at a local level. Some funding and services are defined as required while others are optional. The financial models’ formulas involve many variables. In short, there are numerous challenges in Medicaid, including the dual-eligible changes that seek to address the service disconnects that often exist when a person is eligible for both Medicare and Medicaid.

Combining data from providers, payers, patients, government entities and the community is necessary in order to optimize the quality of care provided to each patient. The definition of “provider” itself continues to expand, covering not just the medical needs of a person but also the various social services that are so important to the holistic care of an individual.

As we listened to people and talked about their data challenges, we were also able to walk them through the Data Unleashed™ approach. The iterative learn-as-you-go process resonated across the board, whether people represented patient advocacy groups, provider organizations or healthcare plans. The capability to start small, obtain value quickly and adapt rapidly to changing environments fits the complexities of Medicaid well.

Data Unleashed Front End Screenshot

If you would like to learn more about our agile and lightweight approach to accessing data from across your enterprise in order to quickly begin creating meaningful reporting and analytics, please check out dataunleashed.com for descriptions, videos and case studies. We’d also appreciate the opportunity to host a webinar with your team where we can explore Data Unleashed™ in more depth and discuss your specific data challenges.

Why Isn’t Everybody Doing It?

Monday, April 28th, 2014

That is a very dangerous question for a leader to ask when evaluating options. Yet it is one I hear far too often in the healthcare realm. It encapsulates a rejection of innovation, evolution and learning all in one terse, often rhetorical, question.

A common context for this question, often prefaced by “If this is so great…,” is a discussion of semantics and semantic technology. Although these concepts are not new to some industries, such as media, they are foreign to many healthcare organizations. Yet we know that healthcare payers and providers alike struggle with massive data integration and data analytics challenges just like media conglomerates.

The needs to combine siloed information, drive an analytics mindset throughout an organization, and support the flexibility of a constantly changing IT environment are common in large healthcare organizations. Repeated attempts by organizations to meet these needs betray a lack of consensus on how best to achieve a valuable result.

Further, the implication that how most organizations solve a problem is optimal ignores the fact that best practices must change over time. The best way to solve a problem last year may not be the same this year. The healthcare industry is changing, the physical world of servers, networks, disk drives, memory is changing, and the expectations of members are changing. What was infeasible years ago becomes commonplace. Relational databases were all but unworkable in the 1970s due to a lack of experienced DBAs, slow disk drives, slow processors and limited memory.

In the same way, semantic formalization and graph databases were too new and limited to deal with large data sets until people gained expertise with ontologies while system hardware benefitted from another generation of Moore’s law. In the face of ongoing innovation, the question leaders should ask when approaching a challenge is, “What advancements have been made since the last time we looked at this problem?”

Leadership requires leading, not following. Leaders mentor their organizations through change in order to reach new levels of success. Leadership is based on learning, open-mindedness, creativity and risk-taking. The question, “Why isn’t everybody doing it?” is the antithesis of leadership and has no place there. In fact, if everybody is doing something, a leader would be better off asking, “How do we get ahead of what everybody is doing?”

Leaders must be at the forefront of pushing for better, faster, cheaper. Questioning the status quo, looking for new opportunities and seeking to leapfrog the competition: these are key foci for leadership.

As a leader, the next time you find yourself limiting your willingness to explore an option because everybody isn’t doing it, keep in mind that calculators, computers, automobiles, elevators, white boards, LED light bulbs, Google maps, telephones, the Internet, 3-D printing, open heart surgery, and many other concepts that are now accepted or gaining traction had a day when only one person or organization was “doing it.” Challenge yourself and your organization to find new options, new best practices and new paradigms for advancing your strategy and goals.

Semantic Technology – When Should Your Enterprise Consider Adopting It?

Monday, July 8th, 2013

At this year’s Semantic Technology and Business Conference in San Francisco, Mike Delaney and I presented a session discussing semantic technology adoption in the enterprise entitled, When to Consider Semantic Technology for Your Enterprise. The talk centered on three key messages: 1) describing semantic technology as it relates to enterprise data and applications; 2) discussing where semantic technology augments current data persistence and access technologies; and 3) highlighting situations that should lead an enterprise to begin using semantic technology as part of its enterprise architecture.

To allow a broader audience to benefit from our session, we are creating a set of videos based on our original presentation. These are being released as part of Blue Slate Solutions’ Experts Exchange Series.  Each video will be 5 to 10 minutes in length and will focus on one of the sub-topics from the presentation.

Here is the overall agenda for the video series:

1. Introduction - Meet the presenters and the topic
2. What? - Define semantic technology in the context of these videos
3. What’s New? - Compare semantic technology to relational and NoSQL technologies
4. Where? - Discuss the ecosystem and maturity of vendors in the semantic technology space
5. Why? - Explain the enterprise strengths of semantic technology
6. When? - Identify opportunities to exploit semantic technology in the enterprise
7. When Not? - Avoid misusing semantic technology
8. Case Study - Look at one of our semantic technology projects
9. How? - Get started with semantic technology


We’ll release a couple of videos every other week, so be on the lookout during July and August as the series is completed. We would appreciate your feedback on the information as well as hearing about your experiences deploying semantic technology as part of an enterprise’s application architecture.

The playlist for the series is located at: http://www.youtube.com/playlist?list=PLyQYGnkKpiugIl0Tz0_ZlmeFhbWQ4XE1I and will be updated with the new videos as they are released.


Semantics in the Cognitive Corporation™ Framework

Tuesday, August 14th, 2012

When the Cognitive Corporation™ is depicted as a graphic, the use of semantic technology is not highlighted.  Semantic technology serves two key roles in the Cognitive Corporation™ – data storage (part of Know) and data integration, which connects all of the concepts.  I’ll explore the integration role since it is a vital part of supporting a learning organization.

In my last post I talked about the fact that integration between components has to be based on the meaning of the data, not simply passing compatible data types between systems.  Semantic technology supports this need through its design.  What key capabilities does semantic technology offer in support of integration?  Here I’ll highlight a few.

Logical and Physical Structures are (largely) Separate

Semantic technology loosens the tie between the logical and physical structures of the data compared with a relational database.  In a relational database it is the physical structure (columns and tables), along with the foreign keys, that maintains the relationships in the data.  Just think back to relational database design class: in a normalized database all of the column values are related to the table’s key.

This tight tie between data relationships (logical) and data structure (physical) imposes a steep cost if a different set of logical data relationships is desired.  Traditionally, we create data marts and data warehouses to allow us to represent multiple logical data relationships.  These are copies of the data with differing physical structures and foreign key relationships.  We may need these new structures to allow us to report differently on our data or to integrate with different systems which need the altered logical representations.

With semantic data we can take a physical representation of the data (our triples) and apply different logical representations in the form of ontologies.  To be fair, the physical structure (subject->predicate->object) forces certain constraints on the ontology, but a logical transformation is far simpler than a physical one even with such constraints.
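As a concrete illustration, here is a minimal sketch (assuming a current Apache Jena release; all names in the ex: namespace are hypothetical) of a single physical triple being interpreted under two different ontologies, each providing its own logical view without copying or restructuring the data:

    import org.apache.jena.rdf.model.*;
    import org.apache.jena.vocabulary.RDFS;

    public class LogicalViewsSketch {
        static final String EX = "http://example.com/ns#";

        public static void main(String[] args) {
            // Physical data: a single triple that never changes.
            Model data = ModelFactory.createDefaultModel();
            Resource alice = data.createResource(EX + "Alice");
            Property primaryPhysician = data.createProperty(EX + "primaryPhysician");
            alice.addProperty(primaryPhysician, data.createResource(EX + "DrBob"));

            // Logical view 1: primaryPhysician is a kind of careProvider relationship.
            Model clinicalOntology = ModelFactory.createDefaultModel();
            clinicalOntology.add(primaryPhysician, RDFS.subPropertyOf,
                    clinicalOntology.createProperty(EX + "careProvider"));

            // Logical view 2: the same predicate is a kind of businessContact relationship.
            Model billingOntology = ModelFactory.createDefaultModel();
            billingOntology.add(primaryPhysician, RDFS.subPropertyOf,
                    billingOntology.createProperty(EX + "businessContact"));

            // Apply each ontology to the same physical triples using an RDFS reasoner.
            InfModel clinicalView = ModelFactory.createRDFSModel(clinicalOntology, data);
            InfModel billingView = ModelFactory.createRDFSModel(billingOntology, data);

            // Each view infers a different higher-level relationship; the data was never
            // copied or restructured the way a new data mart or warehouse would require.
            System.out.println(clinicalView.contains(alice, clinicalView.getProperty(EX + "careProvider")));
            System.out.println(billingView.contains(alice, billingView.getProperty(EX + "businessContact")));
        }
    }

Each inference model derives a different higher-level relationship from the same stored triple, which is exactly the logical-over-physical flexibility described above.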

(more…)

Semantic Technology and Business Conference, East 2011 – Reflections

Wednesday, December 7th, 2011

I had the pleasure of attending the Semantic Technology and Business Conference in Washington, DC last week.  I have a strong interest in semantic technology and its capabilities to enhance the way in which we leverage information systems.  There was a good selection of topics discussed by people with a variety of  backgrounds working in different verticals.

To begin the conference I attended the half-day “Ontology 101” session presented by Elisa Kendall and Deborah McGuinness.  They indicated that this presentation has been given at each semantic technology conference and that interest remains strong, the implication being that new people continue to want to understand this art.

Their material was very useful, and if you are someone looking to get a grounding in ontologies (what are they? how do you go about creating them?) I recommend attending this session the next time it is offered.  Both leaders clearly have deep experience and expertise in this field.  Also, the discussion was not tied to a specific technology (e.g. RDF), so it was applicable regardless of underlying implementation details.

I wrapped up the first day with Richard Ordowich, who discussed the process of reverse engineering semantics (meaning) from legacy data.  The goal of such projects is to harmonize the meaning of information across the enterprise.

A point he stressed was that a business really needs to be ready to start such a journey.  This type of work is very hard and very time consuming.  It requires an enterprise-wide discipline.  He suggests that before working with a company on such an initiative one should ask for examples of prior enterprise program success (e.g. something like BPM or SDLC).

Fundamentally, a project that seeks to harmonize the meaning of data across an enterprise requires organization readiness to go beyond project execution.  The enterprise must put effective governance in place to operate and maintain the resulting ontologies, taxonomies and metadata.

The full conference kicked off the following day.  One aspect that jumped out for me was that a lot of the presentations dealt with government-related projects.  This could have been a side effect of the conference being held in Washington, DC, but I think it is more indicative that spending on this technology is weighted more heavily toward the public sector than private industry.

Because these projects were government-centric, I found any claims of “value” suspect.  A project can be valuable, or show value, without being cost effective.  Commercial businesses have gone bankrupt even though they delivered value to their customers.  More exposure of positive-ROI commercial projects will be important to help accelerate the adoption of these technologies.

Other than the financial aspect, the presentations were incredibly valuable in terms of presenting lessons learned, best practices and in-depth tool discussions.  I’ll highlight a few of the sessions and key thoughts that I believe will assist as we continue to apply semantic technology to business system challenges.

(more…)

The Cognitive Corporation™ – An Introduction

Monday, September 26th, 2011

Given my role as an enterprise architect, I’ve had the opportunity to work with many different business leaders, each focused on leveraging IT to drive improved efficiencies, lower costs, increase quality, and broaden market share throughout their businesses.  The improvements might involve any subset of data, processes, business rules, infrastructure, software, hardware, etc.  A common thread is that each project seeks to make the corporation smarter through the use of information technology.

As I’ve placed these separate projects into a common context of my own, I’ve concluded that the long-term goal of leveraging information technology must be for it to support cognitive processes.  I don’t mean that the computers will think for us; rather, IT solutions must work together to allow a business to learn, corporately.

The individual tools that we utilize each play a part.  However, we tend to utilize them in a manner that focuses on isolated and directed operation rather than incorporating them into an overall learning loop.  In other words, we install tools that we direct without asking them to help us find better directions to give.

Let me start with a definition: similar to thinking beings, a cognitive corporation™ leverages a feedback loop of information and experiences to inform future processes and rules.  Fundamentally, learning is a process that involves taking known facts and experiences and combining them to create new hypotheses, which are tested in order to derive new facts, processes and rules.  Unfortunately, we don’t often leverage our enterprise applications in this way.
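To make the loop concrete, here is a purely conceptual sketch of that feedback cycle in code; every type and method name is hypothetical and is not drawn from any particular product or framework:

    import java.util.List;

    // Conceptual sketch only: the learning loop described above, expressed as code.
    interface LearningLoop<FACT, RULE> {
        List<FACT> observe();                           // gather known facts and experiences
        RULE hypothesize(List<FACT> facts);             // combine them into a new hypothesis
        boolean test(RULE candidate, List<FACT> facts); // test the hypothesis
        void adopt(RULE candidate);                     // feed validated rules back into operations
    }

    class CognitiveCycle<FACT, RULE> {
        // One pass of the loop: facts inform a hypothesis, the hypothesis is tested, and only
        // validated rules flow back into the processes and systems that produced the facts.
        void runOnce(LearningLoop<FACT, RULE> loop) {
            List<FACT> facts = loop.observe();
            RULE candidate = loop.hypothesize(facts);
            if (loop.test(candidate, facts)) {
                loop.adopt(candidate);
            }
        }
    }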

We have many tools available to us in the enterprise IT realm.  These include database management systems, business process management environments, rule engines, reporting tools, content management applications, data analytics tools, complex event processing environments, enterprise service buses, and ETL tools.  Individually, these components are used to solve specific, predefined issues with the operation of a business.  However, this is not an optimal way to leverage them.

If we consider that these tools mimic aspects of an intelligent being, then we need to leverage them in a fashion that manifests the cognitive capability in preference to simply deploying a point-solution.  This involves thinking about the tools somewhat differently.

(more…)

Semantic Web Summit (East) 2010 Concludes

Thursday, November 18th, 2010

I attended my first semantic web conference this week, the Semantic Web Summit (East) held in Boston.  The focus of the event was how businesses can leverage semantic technologies.  I was interested in what people were actually doing with the technology.  The one and a half days of presentations were informative and diverse.

Our host was Mills Davis, a name that I have encountered frequently during my exploration of the semantic web.  He did a great job of keeping the sessions running on time as well as engaging the audience.  The presentations were generally crisp and clear.  In some cases the speaker presented a product that utilizes semantic concepts, describing its role in the value chain.  In other cases we heard about challenges solved with semantic technologies.

My major takeaways were: 1) semantic technologies work and are being applied to a broad spectrum of problems and 2) the potential business applications of these technologies are vast and ripe for creative minds to explore.  This all bodes well for people delving into semantic technologies since there is an infrastructure of tools and techniques available upon which to build while permitting broad opportunities to benefit from leveraging them.

As a CTO with 20+ years focused on business environments, including application development, enterprise application integration, data warehousing, and business intelligence, I identified most closely with the sessions geared around intra-business and B2B uses of semantic technology.  There were other sessions looking at B2C which were well done but not applicable to the world in which I currently find myself working.

Talks by Dennis Wisnosky and Mike Dunn were particularly focused on the business value that can be achieved through the use of semantic technologies.  Further, they helped to define basic best practices that they apply to such projects.  Dennis in particular gave specific information around his processes and architecture while talking about the enormous value that his team achieved.

Heartening to me was the fact that these best practices, processes and architectures are not significantly different from those used with other enterprise system endeavors.  So we don’t need to retool all our understanding of good project management practices and infrastructure design; we just need to internalize where semantic technology best fits into the technology stack.

(more…)

Successful Process Automation: A Summary

Monday, July 26th, 2010

InformationWeek Analytics (http://analytics.informationweek.com/index) invited me to write about the subject of process automation.  The article, part of their series covering application architectures, was released in July of this year.  It provided an opportunity for me to articulate the key components that are required to succeed in the automation of business processes.

Both the business and IT are positioned to make or break the use of process automation tools and techniques. The business must redefine its processes and operational rules so that work may be automated.  IT must provide the infrastructure and expertise to leverage the tools of the process automation trade.

Starting with the business, there must be clearly defined processes by which work gets done.  Each process must be documented, including the points where decisions are made.  The rules for those decisions must then be documented.  Repetitive, low-value and low-risk decisions are immediate candidates for automation.

A key milestone on the path to sustainable and meaningful value from process automation is the Straight Through Processing (STP) rate.  STP requires that work arrive from a third party and be processed automatically, returning a final decision and any necessary output (letter, claim payment, etc.) without a person being involved in handling the work.

Most businesses begin using process automation tools without achieving any significant STP rate.  This is fine as a starting point so long as the business reviews the manual work, identifies groupings of work, focuses on the largest groupings (large may be based on manual effort, cost or simple volume) and looks to automate the decisions surrounding that group of work.  As STP is achieved for some work, the review process continues as more and more types of work are targeted for automation.
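As a rough illustration of that review cycle, here is a minimal sketch (assuming a recent Java release; the WorkItem type and its fields are hypothetical) that measures the STP rate and identifies the largest grouping of manual work to target next:

    import java.util.*;
    import java.util.stream.*;

    public class StpReview {
        record WorkItem(String type, boolean processedWithoutHumanTouch) {}

        public static void main(String[] args) {
            List<WorkItem> completed = List.of(
                    new WorkItem("address-change", true),
                    new WorkItem("claim-payment", false),
                    new WorkItem("claim-payment", false),
                    new WorkItem("enrollment", true));

            // STP rate: share of work that finished with no human touch.
            long automated = completed.stream()
                    .filter(WorkItem::processedWithoutHumanTouch).count();
            double stpRate = (double) automated / completed.size();
            System.out.printf("STP rate: %.0f%%%n", stpRate * 100);

            // Group the remaining manual work by type; the largest group is the next
            // automation candidate (volume is used here, but cost or effort works too).
            Map<String, Long> manualByType = completed.stream()
                    .filter(w -> !w.processedWithoutHumanTouch())
                    .collect(Collectors.groupingBy(WorkItem::type, Collectors.counting()));
            manualByType.entrySet().stream()
                    .max(Map.Entry.comparingByValue())
                    .ifPresent(e -> System.out.println("Next automation target: " + e.getKey()));
        }
    }

The grouping key and the measure of "largest" would of course reflect whatever the business considers most significant: manual effort, cost or simple volume.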

The end goal of process automation is to have people involved only in truly exceptional, high-value, high-risk business decisions.  The business benefits by having people attend to items that truly matter rather than dealing with a large amount of background noise that lowers productivity, morale and client satisfaction.

All of this is great in theory but requires an information technology infrastructure that can meet these business objectives.

(more…)

My First Semantic Web Program

Saturday, June 5th, 2010

I have created my first slightly interesting, to me anyway, program that uses some semantic web technology.  Of course I’ll look back on this in a year and cringe, but for now it represents my understanding of a small set of features from Jena and Pellet.

The basis for the program is an example described in Hebler, Fischer et al.’s book “Semantic Web Programming” (ISBN: 047041801X).  The intent of the program is to load an ontology into three models, each running a different level of reasoner (RDF, RDFS and OWL), and to output the resulting assertions (triples).

I made a couple of changes to the approach taken in the book’s sample.  First, I allow any supported input file format to be loaded automatically (you don’t have to tell the program what format is being used).  Second, I report the actual differences between the models rather than just showing all the resulting triples.
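For readers who want a feel for the approach, here is a minimal sketch of the same idea; it is not the actual program, it substitutes Jena’s bundled RDFS and OWL reasoners for Pellet so it runs with Jena alone, and the ontology file name is a placeholder:

    import org.apache.jena.rdf.model.*;
    import org.apache.jena.reasoner.ReasonerRegistry;
    import org.apache.jena.riot.RDFDataMgr;

    public class ReasonerDiffSketch {
        public static void main(String[] args) {
            // RDFDataMgr determines the serialization from the file itself, so any supported
            // input format loads without being named explicitly.
            Model base = RDFDataMgr.loadModel("insurance-example.owl");

            InfModel rdfs = ModelFactory.createRDFSModel(base);
            InfModel owl = ModelFactory.createInfModel(ReasonerRegistry.getOWLReasoner(), base);

            // Report only the assertions each reasoner adds, rather than every resulting triple.
            report("RDFS adds", rdfs.difference(base));
            report("OWL adds beyond RDFS", owl.difference(rdfs));
        }

        static void report(String label, Model added) {
            System.out.println(label + ": " + added.size() + " statements");
            added.listStatements().forEachRemaining(System.out::println);
        }
    }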

As I worked on the code, which is currently housed in one uber-class (that’ll have to be refactored!), I realized that there will be lots of reusable “plumbing” code that comes with this type of work.  Setting up models with various reasoners, loading ontologies, reporting triples, interfacing to triple stores, and so on will become nuisance code to write.

Libraries like Jena help, but they abstract at a low level.  I want a semantic workbench that makes playing with the various libraries and frameworks easy.  To that end I’ve created a Sourceforge project called “Semantic Workbench”.

I intend for the Semantic Workbench to provide a GUI environment for manipulating semantic web technologies. Developers and power users would be able to use such a tool to test ontologies, try various reasoners and validate queries.  Developers could use the workbench’s source code to understand how to utilize frameworks like Jena or reasoner APIs like that of Pellet.

I invite other interested people to join the Sourceforge project. The project’s URL is: http://semanticwb.sourceforge.net/

On the data side, in order to have a rich semantic test data set to utilize, I’ve started an ontology that I hope to grow into an interesting example.  I’m using the insurance industry as its basis.  The rules around insurance and the variety of concepts should provide a rich set of classes, attributes and relationships for modeling.  My first version of this example ontology is included with the sample program.

Finally, I’ve added a semantic web section to my website where I’ll maintain links to useful information I find as well as sample code or files that I think might be of interest to other developers.  I’ve placed the sample program and ontology described earlier in this post on that page along with links to a variety of resources.

My site’s semantic web page’s URL is: http://monead.com/semantic/
The URL for the page describing the sample program is: http://monead.com/semantic/proj_diffinferencing.html

Database Refactoring and RDF Triples

Wednesday, May 12th, 2010

One of the aspects of agile software development that may lead to significant angst is the database.  Unlike refactoring code, the refactoring of the database schema involves a key constraint – state!  A developer may rearrange code to his or her heart’s content with little worry since the program will start with a blank slate when execution begins.  However, the database “remembers.”  If one accepts that each iteration of an agile process produces a production release then the stored data can’t be deleted as part of the next iteration.

The refactoring of a database becomes less and less trivial as project development continues.  While developers have IDEs to refactor code, change packages, and alter build targets, there are few tools for refactoring databases.

My definition of a database refactoring tool is one that assists the database developer by remembering the database transformation steps and storing them as part of the project – e.g. part of the build process.  This includes both the schema changes and data transformations.  Remember that the entire team will need to reproduce these steps on local copies of the database.  It must be as easy to incorporate a peer’s database schema changes, without losing data, as it is to incorporate the code changes.

These same data-centric complexities exist in waterfall approaches when going from one version to the next.  Whenever the database structure needs to change, a path to migrate the data has to be defined.  That transformation definition must become part of the project’s artifacts so that the data migration for the new version is supported as the program moves between environments (test, QA, load test, integrated test, and production).  Also, the database transformation steps must be automated and reversible!

That last point, the ability to rollback, is a key part of any rollout plan.  We must be able to back out changes.  It may be that the approach to a rollback is to create a full database backup before implementing the update, but that assumption must be documented and vetted (e.g. the approach of a full backup to support the rollback strategy may not be reasonable in all cases).
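Here is a minimal sketch of the idea, assuming plain JDBC and a recent Java release; the SQL, the table names and the schema_version bookkeeping are all hypothetical:

    import java.sql.Connection;
    import java.sql.SQLException;
    import java.sql.Statement;
    import java.util.List;

    public class MigrationRunner {
        // Each schema change is a scripted, ordered step that knows how to apply itself
        // and how to back itself out.
        record Migration(int version, String apply, String rollback) {}

        // Steps live with the project so every team member, and every environment,
        // replays the same transformations in the same order.
        static final List<Migration> STEPS = List.of(
                new Migration(1,
                        "ALTER TABLE policy ADD COLUMN effective_date DATE",
                        "ALTER TABLE policy DROP COLUMN effective_date"),
                new Migration(2,
                        "UPDATE policy SET effective_date = created_date WHERE effective_date IS NULL",
                        // Rolling this step back loses information; as noted above, such
                        // steps may require a backup-based strategy instead.
                        "UPDATE policy SET effective_date = NULL"));

        static void upgrade(Connection conn, int currentVersion) throws SQLException {
            try (Statement stmt = conn.createStatement()) {
                for (Migration m : STEPS) {
                    if (m.version() > currentVersion) {
                        stmt.execute(m.apply());          // forward transformation
                        recordVersion(stmt, m.version()); // remember where we are
                    }
                }
            }
        }

        static void rollbackTo(Connection conn, int targetVersion, int currentVersion) throws SQLException {
            try (Statement stmt = conn.createStatement()) {
                for (int i = STEPS.size() - 1; i >= 0; i--) { // reverse order on the way down
                    Migration m = STEPS.get(i);
                    if (m.version() <= currentVersion && m.version() > targetVersion) {
                        stmt.execute(m.rollback());
                        recordVersion(stmt, m.version() - 1);
                    }
                }
            }
        }

        static void recordVersion(Statement stmt, int version) throws SQLException {
            stmt.execute("UPDATE schema_version SET version = " + version);
        }
    }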

This database refactoring issue becomes very tricky when dealing with multiple versions of an application.  The transformation of the database schema and data must be done in a defined order.  As more and more data is stored, the process consumes more storage and processing resources.  This is the ETL side-effect of any system upgrade.  Its impact is simply felt more often (e.g. potentially during each iteration) in an agile project.

As part of exploring semantic technology, I am interested in contrasting this to a database that consists of RDF triples.  The semantic relationships of data do not change as often (if at all) as the relational constructs.  Many times we refactor a relational database as we discover concepts that require one-to-many or many-to-many relationships.
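For instance, the one-to-many case just mentioned barely registers in a triple store. A minimal Jena sketch (hypothetical ex: namespace and data) makes the point:

    import org.apache.jena.rdf.model.*;

    // Moving from "a member has one phone number" to "a member has many" simply adds
    // triples with the same predicate; no structural (schema) change is required.
    public class MultiplicitySketch {
        public static void main(String[] args) {
            String ex = "http://example.com/ns#";
            Model model = ModelFactory.createDefaultModel();
            Resource member = model.createResource(ex + "Member123");
            Property phone = model.createProperty(ex + "phoneNumber");

            // Version 1 of the application assumed a single phone number.
            member.addProperty(phone, "518-555-0100");

            // Version 2 allows many; the "refactoring" is simply more triples.
            member.addProperty(phone, "518-555-0101");
            member.addProperty(phone, "518-555-0199");

            // Consumers that iterate over the property keep working unchanged.
            member.listProperties(phone)
                  .forEachRemaining(stmt -> System.out.println(stmt.getString()));
        }
    }

Constraints such as “at most one phone number” would live in the ontology (for example, an OWL cardinality restriction) rather than in the physical storage structure, which is part of why the stored shape of the data tends to be more stable.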

Is an RDF triple-based database easier to refactor than a relational database?  Is there something about the use of RDF triples that reduces the likelihood of a multiplicity change leading to a structural change in the data?  If so, using RDF as the data format could be a technique that simplifies the development of applications.  For now, let’s take a high-level look at a refactoring use case.

(more…)