
Archive for the ‘Tools and Applications’ Category

MongoDB and Java – Powerful Complementary Platforms

Tuesday, May 31st, 2016

I have found that including MongoDB in the design of Java applications allows me a valuable level of flexibility in meeting client objectives. I have created an initial open source project on GitHub, JavaMongo, with the goal of providing working examples of Java and MongoDB integration. A secondary goal is to include development best practices, such as using testing frameworks and good coding style.

This posting is intended to give a little background on why I find Java and MongoDB to be useful tools in my software development arsenal and then to introduce the JavaMongo project. Future postings will include videos walking developers through the examples as well as the frameworks being used (such as JUnit, Cobertura and Checkstyle).

Background

Java is a ubiquitous platform for creating business applications. It has proven itself across a wide range of use cases, from small point solutions to large generalized solution stacks. The variety of libraries, frameworks and tools for designing, building, testing and managing Java applications provides significant benefits to companies building solutions using Java. However, an application without ready access to data isn’t particularly useful. As enterprise-scale database options have broadened to include NoSQL, those creating Java-based solutions must take advantage of these new data options in order to benefit from their strengths.

MongoDB is a great NoSQL platform that can be used to provide additional capabilities to your applications. MongoDB is a document store that has proven its reliability, scalability and ease of integration across numerous small and large-scale applications. Its value and focus complement the way we use relational databases for online transaction processing (OLTP) and offer advantages over the way we use relational databases for data marts and warehouses.

A point of clarification before proceeding: I’m not here to say that MongoDB is better than some other data product, or, more generally, that document stores are better than relational databases. I find such arguments meaningless without a specific use case or project goal. These technologies are different and have individual strengths and weaknesses in the face of a specific set of project objectives.

I have found that MongoDB plugs in well when I need a place to federate data (structured, semi-structured and unstructured). Given a common platform, it simplifies the work required to build and alter connections between attributes. If you’ve looked at other information about my background you’ll see that I find the use of semantic technology to be incredibly valuable for data federation and classification. MongoDB as a flexible repository plays well with semantics. At the end of this post I’ll give you a small example of that.

JavaMongo Project

The JavaMongo project is intended to provide Java developers with working examples of Java and MongoDB integrations. Over time I expect a variety of common situations to be demonstrated, with associated documentation explaining the use case and the resulting implementation.

In order to have some interesting data to work with, I’m using data sets that my company releases to the public domain. To work with the JavaMongo examples you’ll need to import that data into your MongoDB instance. For more information about downloading and importing the sample data, see the discussion on MongoDB Collection of Honeypot Data on my NoSQL topic page.
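To make that import step concrete, here is a rough sketch using the mongoimport utility that ships with MongoDB. The database, collection and file names below are placeholders; the actual names to use are described on the NoSQL topic page.

mongoimport --db education --collection honeypot --file honeypot_data.json

The command loads each JSON document in the file into the named collection, creating the database and collection if they do not already exist.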

The initial JavaMongo project contains a basic README file with information on running the example code. Instead of rehashing that information in this post, I’d like to walk through the basic operations being demonstrated in the example code. The main class we’ll explore is BasicStatistics (us.daveread.education.mongo.honeypot.BasicStatistics).

As you know, a Java program starts execution with the main() method. We see that the first step the BasicStatistics main() method takes is to create an instance of the BasicStatistics class.

BasicStatistics Constructor

The constructor code goes through the entire process of connecting to a MongoDB database, accessing a collection and running a query on data in the collection.

First, an instance of MongoClientOptions is created. This class allows us to configure certain client-side options related to the connection; I’ll get into more detail on these in future examples. In this case, the program simply sets the connection timeout to 2000 milliseconds (2 seconds) so that the program won’t hang for a long time if the instance is not available. You wouldn’t make the timeout this short in a production environment, but it helps for debugging our local environment by failing fast if something is wrong.
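As a minimal sketch of that sequence using the MongoDB Java driver (the database and collection names below are placeholders rather than the exact ones used in JavaMongo):

import com.mongodb.MongoClient;
import com.mongodb.MongoClientOptions;
import com.mongodb.ServerAddress;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;

public class ConnectTimeoutSketch {
    public static void main(String[] args) {
        // Fail fast: give up on the initial connection after 2 seconds
        MongoClientOptions options = MongoClientOptions.builder()
                .connectTimeout(2000)
                .build();

        MongoClient client = new MongoClient(
                new ServerAddress("localhost", 27017), options);

        try {
            // Placeholder database and collection names
            MongoDatabase db = client.getDatabase("education");
            MongoCollection<Document> collection = db.getCollection("honeypot");
            System.out.println("Documents in collection: " + collection.count());
        } finally {
            client.close();
        }
    }
}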


Medicaid Managed Care Congress Conversations Highlight the Value of Data Federation

Thursday, May 22nd, 2014

[Photo: Scott, Chris and Dave at MMCC 2014]

This week I had the opportunity to attend the Medicaid Managed Care Congress (MMCC) in Baltimore, MD and the privilege of speaking with a variety of leaders from provider, payer, and services organizations. With me from Blue Slate Solutions were Scott Van Buren and Chris Garber. A common theme we heard as we spoke with the attendees was the challenge of bringing data together from multiple sources and making sense of that information.

Medicaid is potentially the most complex government program that exists in the United States. There are federal and state aspects as well as portions that are handled at a local level. Some funding and services are defined as required while others are optional. The financial models’ formulas involve many variables. In short, there are numerous challenges in Medicaid, including the dual-eligible changes that seek to address the service disconnects that often exist when a person is eligible for both Medicare and Medicaid.

Combining data from providers, payers, patients, government entities and the community is necessary in order to optimize the quality of care provided to each patient. The definition of provider continues to expand as well, covering not just the medical needs of a person but also bringing the various social services, so important to the holistic care of an individual, under the umbrella of “provider.”

As we listened to people and talked about their data challenges we were also able to walk them through the Data Unleashed™ approach. The iterative learn-as-you-go process resonated across the board, whether people represented patient advocacy groups, provider organizations or healthcare plans. The capability to start small, obtain value quickly and adapt rapidly to changing environments fits the Medicaid complexities well.

[Screenshot: Data Unleashed™ front end]

If you would like to learn more about our agile and lightweight approach to accessing data from across your enterprise in order to quickly begin creating meaningful reporting and analytics, please check out dataunleashed.com for descriptions, videos and case studies. We’d also appreciate the opportunity to host a webinar with your team where we can explore Data Unleashed™ in more depth and discuss your specific data challenges.

How Does Semantic Technology Enable Agile Data Analytics?

Friday, April 25th, 2014

I’m glad you asked. Scott Van Buren and I will be presenting a DATAVERSITY webinar entitled, Using Semantic Technology to Drive Agile Analytics, on exactly that topic. Scheduled for May 14, 2014 (and available for replay afterwards), this webinar will highlight key semantic technology capabilities and how those provide an environment for data agility.

We will focus most of the webinar on a case study that demonstrates the agility of semantic technology being used to conduct data analysis within a healthcare payer organization. Healthcare expertise is not required in order to understand the case study.

As we look into several iterations of data federation and analysis, we will see the effectiveness of bringing the right subset of data together at the right time for a particular data-centric use. This concept translates well to businesses that have multiple sets of data or applications, including data from third parties, and seek to combine relevant subsets of that information for reporting or analytics. Further, we will see how this augments data warehousing projects, where the lightweight and agile data federation approach informs the warehouse design.

Please plan to join us virtually on May 14 as we describe semantic technology, lightweight data federation and agile data analytics. There will also be time for you to pose questions and delve into areas of interest that we do not cover in our presentation.

The webinar registration page is: http://content.dataversity.net/051414BlueslateWebinar_DVRegistrationPage.html

We look forward to having the opportunity to share our data agility thoughts and experiences with you.

San Jose and the SemTechBiz 2014 Conference, Here I Come!

Friday, April 18th, 2014

I am thrilled to have been invited back to participate at the Semantic Technology and Business (SemTechBiz) conference. This is the premier US conference for learning about, exploring and getting your hands on semantic technology. I’ll be part of a Blue Slate team (including Scott Van Buren and Michael Delaney) that will be conducting a half-day hands-on workshop, Integrating Data Using Semantic Technology, on August 19, 2014. Our mission is to have participants use semantic technology to integrate, federate and perform analysis across several data sources.

We have some work to do to iron out our overall use case, pulling from work we have done with several clients. At a minimum we’ll be working with database schemas, ontologies, reasoners and data analytics tools. It will be a fun and educational experience for attendees.

I’ll post more specifics once the SemTechBiz agenda is published and we have finalized the workshop structure. I hope to see you this August 19-21 in San Jose for our workshop and the amazing learning opportunities throughout the conference.

For more information on the conference, visit its website: http://semtechbizsj2014.semanticweb.com/index.cfm

An Approach to Annotating PowerPoint Slides During a Presentation Using Wacom’s Intuos Tablet

Friday, February 14th, 2014

[Photo: Wacom Intuos tablet]

I recently purchased a Wacom Intuos tablet to connect to my computer as a tool to allow real-time annotation on slides during a presentation, whether recorded or delivered live. Using the mouse or touchpad was too limiting and magnified my already poor penmanship.

Once I had the tablet hooked up it was easy to annotate using PowerPoint’s own menus for selecting the mode (pen, highlighter) and colors. However, accessing those features required traversing an on-screen menu each time I selected the pen or wanted to change its color. This was a real nuisance, creating an on-screen distraction and a presentation delay every time.

I hunted around for an alternative one-click shortcut approach and could not find one. Maybe I missed it, but I finally decided to see if I could use PowerPoint macros and active shapes to give me a simple way to select pens and colors. I did get it to function, and it works well for my purposes.

I’ve documented and demonstrated what I did in a short video. I’m sharing it in case others are looking for an option to do something similar. Once you get the framework in place, it provides the flexibility to use macros for more than just pen color control, but I’ll leave that to the reader’s and viewer’s imagination.

The video is located at: http://monead.com/video/WacamIntuosTabletandPptAnnotations/

The basic pen color macros that I use in the video are located at: http://monead.com/ppt_pen_macros.txt

I’d enjoy hearing if you have alternative ways to accomplish this or find interesting ways to apply the technique to other presentation features. Happy presenting!

Cognitive Corporation™ Innovation Lab Kickoff!

Friday, August 10th, 2012

I am excited to share the news that Blue Slate Solutions has kicked off a formal innovation program, creating a lab environment which will leverage the Cognitive Corporation™ framework and apply it to a suite of processes, tools and techniques.  The lab will use a broad set of enterprise technologies, applying the learning organization concepts implicit in the Cognitive Corporation’s™ feedback loop.

I’ve blogged a couple of times (see references at the end of this blog entry) about the Cognitive Corporation™.  The depiction has changed slightly but the fundamentals of the framework are unchanged.

[Depiction: Cognitive Corporation™]

The focus is to create a learning enterprise, where the learning is built into the system integrations and interactions. Enterprises have been investing in these individual components for several years; however, they have not truly been integrating them in a way that promotes learning.

By “integrating” I mean allowing the systems to understand the meaning of the data being passed between them.  Creating a screen in a workflow (BPM) system that presents data from a database to a user is not “integration” in my opinion.  It is simply passing data around.  This prevents the enterprise ecosystem (all the components) from working together and collectively learning.

I liken such connections to my taking a hand-written note in a foreign language, which I don’t understand, and typing the text into an email for someone who does understand the original language.  Sure, the recipient can read it, but I, representing the workflow tool passing the information from database (note) to screen (email) in this case, have no idea what the data means and cannot possibly participate in learning from it.  Integration requires understanding.  Understanding requires defined and agreed-upon semantics.

This is just one of the Cognitive Corporation™ concepts that we will be exploring in the lab environment.  We will also be looking at the value of these technologies within different horizontal and vertical domains.  Given our expertise in healthcare, finance and insurance, our team is well positioned to use the lab to explore the use of learning BPM in many contexts.


Semantic Technology and Business Conference, East 2011 – Reflections

Wednesday, December 7th, 2011

I had the pleasure of attending the Semantic Technology and Business Conference in Washington, DC last week.  I have a strong interest in semantic technology and its capabilities to enhance the way in which we leverage information systems.  There was a good selection of topics discussed by people with a variety of backgrounds working in different verticals.

To begin the conference I attended the half-day “Ontology 101” session presented by Elisa Kendall and Deborah McGuinness.  They indicated that this presentation has been given at each semantic technology conference and that interest remains strong.  The implication is that new people continue to want to understand this art.

Their material was very useful and if you are someone looking to get a grounding in ontologies (what are they?  how do you go about creating them?) I recommend attending this session the next time it is offered.  Both leaders clearly have deep experience and expertise in this field.  Also, the discussion was not tied to a technology (e.g. RDF) so it was applicable regardless of underlying implementation details.

I wrapped up the first day with Richard Ordowich, who discussed the process of reverse engineering semantics (meaning) from legacy data.  The goal of such projects is to achieve harmonization of data across the enterprise.

A point he stressed was that a business really needs to be ready to start such a journey.  This type of work is very hard and very time consuming.  It requires an enterprise wide discipline.  He suggests that before working with a company on such an initiative one should ask for examples of prior enterprise program success (e.g. something like BPM, SDLC).

Fundamentally, a project that seeks to harmonize the meaning of data across an enterprise requires organization readiness to go beyond project execution.  The enterprise must put effective governance in place to operate and maintain the resulting ontologies, taxonomies and metadata.

The full conference kicked off the following day.  One aspect that jumped out for me was that a lot of the presentations dealt with government-related projects.  This could have been a side effect of the conference being held in Washington, DC, but I think it is more indicative that spending on this technology is weighted more heavily toward the public rather than the private sector.

With the projects being government-centric, I found any claims of “value” suspect.  A project can be valuable, or show value, without being cost effective.  Commercial businesses have gone bankrupt even though they delivered value to their customers.  More exposure of positive-ROI commercial projects will be important to help accelerate the adoption of these technologies.

Other than the financial aspect, the presentations were incredibly valuable in terms of presenting lessons learned, best practices and in-depth tool discussions.  I’ll highlight a few of the sessions and key thoughts that I believe will assist as we continue to apply semantic technology to business system challenges.


Using ARQoid for Android-based SPARQL Query Execution

Thursday, December 1st, 2011

I was recently asked about the SPARQL support in Sparql Droid and whether it could serve as a way for other Android applications to execute SPARQL queries against remote data sources.  It could be used in this way but there is a simpler alternative I’d like to discuss here.

On the Android platform it is actually quite easy to execute SPARQL against remote SPARQL endpoints, RDF data and local models.  The heavy lifting is handled by Androjena’s ARQoid, an Android-centric port of HP’s Jena ARQ engine.

Both engines (the original and the port) do a great job of simplifying the execution of SPARQL queries and consumption of the resulting data.  In this post I’ll go through a simple example of using ARQoid.  Note that all the code being shown here is available for download.  This post is based specifically on the queryRemoteSparqlEndpoint() method in the com.monead.androjena.demo.arqoid.SparqlExamples class.

Setup

To begin, some environment setup needs to be done in order to have a properly configured Android project ready to use ARQoid.

First, obtain the ARQoid JAR and its dependencies.  This is easily accomplished using the download page on the ARQoid Wiki and obtaining the latest ARQoid ZIP file.  Unzip the downloaded archive.  Since I’m discussing an Android application I’d expect that you would have created an Android project and that it contains a libs directory where the JAR files should be placed.

Second, add the JAR files to the classpath for your Android project.  I use the ADT plugin for Eclipse to do Android development.  So to add the JARs to my project I choose the Project menu item, select Properties, choose Build Path, select the Libraries tab, click the Add JARs… button, navigate to the libs directory, select the JAR files and click OK on the open dialogs.

Third, set up a minimal Android project.  The default layout, with a small change to its definition, will work fine.

Overview

Now we are ready to write the code that uses ARQoid to access some data.  For this first blog entry I’ll focus on a trivial query against a SPARQL endpoint.  There would be some small differences if we wanted to query a local model or a remote data set.  Those will be covered in follow-on entries.

Here is a list of the ARQoid classes we will be using for this initial example:

  • com.hp.hpl.jena.query.Query – represents the query being executed
  • com.hp.hpl.jena.query.Syntax – represents the query syntaxes supported by ARQoid
  • com.hp.hpl.jena.query.QueryFactory – creates a Query instance based on supplied parameters such as the query string and syntax definition
  • com.hp.hpl.jena.query.QueryExecution – provides the service to execute the query
  • com.hp.hpl.jena.query.QueryExecutionFactory – creates a QueryExecution instance based on supplied parameters such as a Query instance and SPARQL endpoint URI
  • com.hp.hpl.jena.query.ResultSet – represents the returned data and metadata associated with the executed query
  • com.hp.hpl.jena.query.QuerySolution – represents one row of data within the ResultSet.

We’ll use these classes to execute a simple SPARQL query that retrieves some data associated with space exploration.  Talis provides an endpoint that we can use to access some interesting space exploration data.  The endpoint is located at http://api.talis.com/stores/space/services/sparql.
The query we will execute is:

SELECT ?dataType ?data
WHERE {
  <http://nasa.dataincubator.org/launch/1961-012> ?dataType ?data.
}

This query will give us a little information about Vostok 1 launched by the USSR in 1961.
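Putting the pieces together, here is a minimal sketch of executing that query with the classes listed above. This is a simplified standalone version rather than the downloadable queryRemoteSparqlEndpoint() method itself, so the class name and structure are illustrative; in a real Android application you would also run this off the UI thread.

import com.hp.hpl.jena.query.Query;
import com.hp.hpl.jena.query.QueryExecution;
import com.hp.hpl.jena.query.QueryExecutionFactory;
import com.hp.hpl.jena.query.QueryFactory;
import com.hp.hpl.jena.query.QuerySolution;
import com.hp.hpl.jena.query.ResultSet;
import com.hp.hpl.jena.query.Syntax;

public class SparqlEndpointSketch {
    private static final String ENDPOINT =
        "http://api.talis.com/stores/space/services/sparql";

    private static final String QUERY_TEXT =
        "SELECT ?dataType ?data WHERE { "
            + "<http://nasa.dataincubator.org/launch/1961-012> ?dataType ?data . }";

    public static void main(String[] args) {
        // Parse the query string into a Query instance using standard SPARQL syntax
        Query query = QueryFactory.create(QUERY_TEXT, Syntax.syntaxSPARQL);

        // Bind the query to the remote SPARQL endpoint
        QueryExecution execution = QueryExecutionFactory.sparqlService(ENDPOINT, query);

        try {
            // Run the SELECT and iterate the solutions (rows)
            ResultSet results = execution.execSelect();
            while (results.hasNext()) {
                QuerySolution solution = results.nextSolution();
                System.out.println(solution.get("dataType") + " -> " + solution.get("data"));
            }
        } finally {
            execution.close(); // release the underlying HTTP resources
        }
    }
}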


The Cognitive Corporation™ – Effective BPM Requires Data Analytics

Tuesday, October 25th, 2011

The Cognitive Corporation is a framework introduced in an earlier posting.  The framework is meant to outline a set of general capabilities that work together in order to support a growing and thinking organization.  For this post I will drill into one of the least mature of those capabilities in terms of enterprise solution adoption – Learn.

Business rules, decision engines, BPM, complex event processing (CEP): these all evoke images of computers making speedy decisions to the benefit of our businesses.  The infrastructure, technologies and software that provide these solutions (SOA, XML schemas, rule engines, workflow engines, etc.) support the decision automation process.  However, they don’t know what decisions to make.

The BPM-related components we acquire provide the how of decision making (send an email, route a claim, suggest an offer).  Learning, supported by data analytics, provides a powerful path to the what and why of automated decisions (send this email to that person because they are at risk of defecting, route this claim to that underwriter because it looks suspicious, suggest this product to that customer because they appear to be buying these types of items).

I’ll start by outlining the high level journey from data to rules and the cyclic nature of that journey.  Data leads to rules, rules beget responses, responses manifest as more data, new data leads to new rules, and so on.  Therefore, the journey does not end with the definition of a set of processes and rules.  This link between updated data and the determination of new processes and rules is the essence of any learning process, providing a key function for the cognitive corporation.


The Cognitive Corporation™ – An Introduction

Monday, September 26th, 2011

Given my role as an enterprise architect, I’ve had the opportunity to work with many different business leaders, each focused on leveraging IT to drive improved efficiencies, lower costs, increase quality, and broaden market share throughout their businesses.  The improvements might involve any subset of data, processes, business rules, infrastructure, software, hardware, etc.  A common thread is that each project seeks to make the corporation smarter through the use of information technology.

As I’ve placed these separate projects into a common context of my own, I’ve concluded that the long term goal of leveraging information technology must be for it to support cognitive processes.  I don’t mean that the computers will think for us, rather that IT solutions must work together to allow a business to learn, corporately.

The individual tools that we utilize each play a part.  However, we tend to utilize them in a manner that focuses on isolated and directed operation rather than incorporating them into an overall learning loop.  In other words, we install tools that we direct without asking them to help us find better directions to give.

Let me start with a definition: similar to thinking beings, a cognitive corporation™ leverages a feedback loop of information and experiences to inform future processes and rules.  Fundamentally, learning is a process: it involves taking known facts and experiences and combining them to create new hypotheses, which are tested in order to derive new facts, processes and rules.  Unfortunately, we don’t often leverage our enterprise applications in this way.

We have many tools available to us in the enterprise IT realm.  These include database management systems, business process management environments, rule engines, reporting tools, content management applications, data analytics tools, complex event processing environments, enterprise service buses, and ETL tools.  Individually, these components are used to solve specific, predefined issues with the operation of a business.  However, this is not an optimal way to leverage them.

If we consider that these tools mimic aspects of an intelligent being, then we need to leverage them in a fashion that manifests the cognitive capability in preference to simply deploying a point-solution.  This involves thinking about the tools somewhat differently.
