
Archive for the ‘Software Development’ Category

Destination Reached: CISSP

Friday, July 2nd, 2010

I am happy to report that I have been awarded the Certified Information Systems Security Professional (CISSP) credential by the International Information Systems Security Certification Consortium, (ISC)².

I started pursuing the certification in mid-2009, got serious about studying early this year (2010), took the exam in late April, was notified that I passed and had my background endorsed in May, updated my resume for an auditor in early June, and was awarded the CISSP designation at the end of June.

I felt that this certification was important both professionally and personally.

Professionally, the certification serves as validation that I have a solid and broad understanding of information systems security.  People who have worked with me know that I have been focused on IS security for many years.

Whether performing security-centered code reviews, fixing flawed implementations or teaching designers and developers how to improve the security of their systems, I have been on a mission to mentor and train people to observe effective security practices and principles.  I’ve also had operational responsibility for system infrastructures.  With that experience I was able to pass GIAC’s GSEC and Red Hat’s RHCE exams several years ago.

Personally, the process of studying and passing the exam allowed me to pursue and attain a non-trivial goal.  I am enrolled and taking classes toward my master’s degree, but completing that work will require several more years of part time attendance.  Setting and achieving intermediate goals helps to keep me focused and learning.

If you are wondering what the CISSP is all about, please read on.


My First Semantic Web Program

Saturday, June 5th, 2010

I have created my first slightly interesting (to me, anyway) program that uses some semantic web technology.  Of course I’ll look back on this in a year and cringe, but for now it represents my understanding of a small set of features from Jena and Pellet.

The basis for the program is an example described in Hebler, Fischer, et al.’s book “Semantic Web Programming” (ISBN: 047041801X).  The intent of the program is to load an ontology into three models, each running a different level of reasoner (RDF, RDFS, and OWL), and to output the resulting assertions (triples).

I made a couple of changes to the book sample’s approach.  First, I allow any supported input file format to be loaded automatically (you don’t have to tell the program what format is being used).  Second, I report the actual differences between the models rather than just showing all the resulting triples.
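For readers who want to see the shape of the code, here is a stripped-down sketch of the idea (my illustration, not the program itself, using the Jena 2 and Pellet 2 APIs current as of this writing; the input file name is hypothetical):

```java
import org.mindswap.pellet.jena.PelletReasonerFactory;

import com.hp.hpl.jena.ontology.OntModel;
import com.hp.hpl.jena.ontology.OntModelSpec;
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.util.FileManager;

public class ReasonerDiff {
    public static void main(String[] args) {
        // FileManager guesses the serialization from the file name,
        // so any supported format loads without being declared up front.
        Model base = FileManager.get().loadModel("insurance.owl"); // hypothetical file

        // The same assertions wrapped in three levels of reasoning.
        OntModel rdf  = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM, base);           // no inference
        OntModel rdfs = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM_RDFS_INF, base);  // RDFS rules
        OntModel owl  = ModelFactory.createOntologyModel(PelletReasonerFactory.THE_SPEC, base); // OWL via Pellet

        // Report only what each stronger reasoner adds, not every triple.
        System.out.println("Triples added by RDFS reasoning:");
        rdfs.difference(rdf).write(System.out, "N-TRIPLE");

        System.out.println("Triples added by OWL (Pellet) beyond RDFS:");
        owl.difference(rdfs).write(System.out, "N-TRIPLE");
    }
}
```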

As I worked on the code, which is currently housed in one uber-class (that’ll have to be refactored!), I realized that there will be lots of reusable “plumbing” code that comes with this type of work.  Setting up models with various reasoners, loading ontologies, reporting triples, interfacing to triple stores, and so on will become nuisance code to write.

Libraries like Jena help, but they abstract at a low level.  I want a semantic workbench that makes playing with the various libraries and frameworks easy.  To that end I’ve created a SourceForge project called “Semantic Workbench.”

I intend for the Semantic Workbench to provide a GUI environment for manipulating semantic web technologies. Developers and power users would be able to use such a tool to test ontologies, try various reasoners and validate queries.  Developers could use the workbench’s source code to understand how to utilize frameworks like Jena or reasoner APIs like that of Pellet.

I invite other interested people to join the SourceForge project. The project’s URL is: http://semanticwb.sourceforge.net/

On the data side, in order to have a rich semantic test data set to utilize, I’ve started an ontology that I hope to grow into an interesting example.  I’m using the insurance industry as its basis.  The rules around insurance and the variety of concepts should provide a rich set of classes, attributes and relationships for modeling.  My first version of this example ontology is included with the sample program.
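To give a flavor of the kind of modeling involved (a toy fragment of my own, not the actual ontology file; the namespace is made up), a policy/policyholder relationship can be declared in a few lines of Jena:

```java
import com.hp.hpl.jena.ontology.ObjectProperty;
import com.hp.hpl.jena.ontology.OntClass;
import com.hp.hpl.jena.ontology.OntModel;
import com.hp.hpl.jena.rdf.model.ModelFactory;

public class InsuranceSeed {
    public static void main(String[] args) {
        OntModel model = ModelFactory.createOntologyModel();
        String ns = "http://example.com/insurance#"; // made-up namespace

        // Two classes and the relationship between them.
        OntClass policy = model.createClass(ns + "Policy");
        OntClass holder = model.createClass(ns + "PolicyHolder");

        ObjectProperty holds = model.createObjectProperty(ns + "holdsPolicy");
        holds.setDomain(holder);
        holds.setRange(policy);

        model.write(System.out, "RDF/XML-ABBREV");
    }
}
```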

Finally, I’ve added a semantic web section to my website where I’ll maintain links to useful information I find as well as sample code or files that I think might be of interest to other developers.  I’ve placed the sample program and ontology described earlier in this post on that page along with links to a variety of resources.

My site’s semantic web page’s URL is: http://monead.com/semantic/
The URL for the page describing the sample program is: http://monead.com/semantic/proj_diffinferencing.html

Database Refactoring and RDF Triples

Wednesday, May 12th, 2010

One of the aspects of agile software development that may lead to significant angst is the database.  Unlike refactoring code, the refactoring of the database schema involves a key constraint – state!  A developer may rearrange code to his or her heart’s content with little worry since the program will start with a blank slate when execution begins.  However, the database “remembers.”  If one accepts that each iteration of an agile process produces a production release then the stored data can’t be deleted as part of the next iteration.

The refactoring of a database becomes less and less trivial as project development continues.  While developers have IDEs to refactor code, change packages, and alter build targets, there are few tools for refactoring databases.

My definition of a database refactoring tool is one that assists the database developer by remembering the database transformation steps and storing them as part of the project (e.g., as part of the build process).  This includes both the schema changes and the data transformations.  Remember that the entire team will need to reproduce these steps on local copies of the database.  It must be as easy to incorporate a peer’s database schema changes, without losing data, as it is to incorporate code changes.
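As a concrete (if toy-sized) illustration of what I mean, the sketch below applies numbered SQL scripts in order and records each one in a version table, so every developer’s copy of the database can be brought forward identically.  The H2 connection string, the migrations directory, and the one-statement-per-script convention are all assumptions of mine, and paired “down” scripts would provide the rollback path:

```java
import java.nio.file.*;
import java.sql.*;
import java.util.*;

public class MigrationRunner {
    public static void main(String[] args) throws Exception {
        // Hypothetical local developer database (H2 driver assumed on the classpath).
        try (Connection db = DriverManager.getConnection("jdbc:h2:./devdb")) {
            try (Statement s = db.createStatement()) {
                s.execute("CREATE TABLE IF NOT EXISTS schema_version (version INT PRIMARY KEY)");
            }

            // Scripts are named 001-up.sql, 002-up.sql, ... and hold one statement each.
            List<Path> scripts = new ArrayList<>();
            try (DirectoryStream<Path> dir =
                     Files.newDirectoryStream(Paths.get("migrations"), "*-up.sql")) {
                for (Path p : dir) scripts.add(p);
            }
            Collections.sort(scripts);

            int current = currentVersion(db);
            for (Path script : scripts) {
                int version = Integer.parseInt(script.getFileName().toString().split("-")[0]);
                if (version <= current) continue; // already applied on this copy

                try (Statement s = db.createStatement()) {
                    s.execute(new String(Files.readAllBytes(script)));
                    s.execute("INSERT INTO schema_version VALUES (" + version + ")");
                }
            }
        }
    }

    private static int currentVersion(Connection db) throws SQLException {
        try (Statement s = db.createStatement();
             ResultSet rs = s.executeQuery("SELECT COALESCE(MAX(version), 0) FROM schema_version")) {
            rs.next();
            return rs.getInt(1);
        }
    }
}
```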

These same data-centric complexities exist in waterfall approaches when going from one version to the next.  Whenever the database structure needs to change, a path to migrate the data has to be defined.  That transformation definition must become part of the project’s artifacts so that the data migration for the new version is supported as the program moves between environments (test, QA, load test, integrated test, and production).  Also, the database transformation steps must be automated and reversible!

That last point, the ability to rollback, is a key part of any rollout plan.  We must be able to back out changes.  It may be that the approach to a rollback is to create a full database backup before implementing the update, but that assumption must be documented and vetted (e.g. the approach of a full backup to support the rollback strategy may not be reasonable in all cases).

This database refactoring issue becomes very tricky when dealing with multiple versions of an application.  The transformation of the database schema and data must be done in a defined order.  As more and more data is stored, the process consumes more storage and processing resources.  This is the ETL side-effect of any system upgrade.  Its impact is simply felt more often (e.g. potentially during each iteration) in an agile project.

As part of exploring semantic technology, I am interested in contrasting this with a database that consists of RDF triples.  The semantic relationships of data do not change as often (if at all) as the relational constructs.  Many times we refactor a relational database as we discover concepts that require one-to-many or many-to-many relationships.

Is an RDF triple-based database easier to refactor than a relational database?  Is there something about the use of RDF triples that reduces the likelihood of a multiplicity change leading to a structural change in the data?  If so, using RDF as the data format could be a technique that simplifies the development of applications.  For now, let’s take a high-level look at a refactoring use case.
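To make the multiplicity question concrete (a toy example of my own, with a made-up namespace): moving from “an insured has one phone number” to “an insured has many” typically forces a relational schema change and a data migration, while in a triple model the same predicate simply appears more than once.

```java
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.rdf.model.Property;
import com.hp.hpl.jena.rdf.model.Resource;

public class MultiplicityDemo {
    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();
        String ns = "http://example.com/insurance#"; // made-up namespace

        Resource insured = model.createResource(ns + "insured1");
        Property phone   = model.createProperty(ns, "phoneNumber");

        // One value or many, the structure of the store is unchanged:
        // each value is just another triple with the same predicate.
        insured.addProperty(phone, "555-0100");
        insured.addProperty(phone, "555-0199");

        model.write(System.out, "N-TRIPLE");
    }
}
```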


Business Ontologies and Semantic Technologies Class

Sunday, May 9th, 2010

Last week I had the pleasure of attending Semantic Arts’ training class entitled “Designing and Building Business Ontologies.”  The course, led by Dave McComb and Simon Robe, provided an excellent introduction to semantic technologies and tools as well as coverage of ontological best practices.  I thoroughly enjoyed the 4-day class and achieved my principal goals in attending: namely, to understand the semantic web landscape, including technologies such as RDF, RDFS, OWL, and SPARQL, as well as the current state of tools and products in this space.

Both Dave and Simon have a deep understanding of this subject area.  They also work with clients using this technology so they bring real-world examples of where the technology shines and where it has limitations.  I recommend this class to anyone who is seeking to reach a baseline understanding of semantic technologies and ontology strategies.

Why am I so interested in semantic web technology?  I am convinced that structuring information so that systems can consume it, in ways more automated than current data storage and association techniques allow, is required for any meaningful advancement in the field of information technology (IT). Whether wiring together web services or setting up ETL jobs to create data marts, too much IT energy is wasted on repeatedly integrating data sources; essentially, we manually wire together related information because the computer cannot yet wire it together autonomously!


Encapsulation, It Isn’t Just For Your Public Interface

Tuesday, March 23rd, 2010

Encapsulation, one of Object Orientation’s (OO) “Big Three” (or four if you include composition), is the concept most often forgotten when I ask an interview candidate to define the key tenets of OO.  Giving the benefit of the doubt, perhaps it is considered “obvious” and hence not necessarily related to OO design in the person’s mind.  Once I bring it up, though, there is usually agreement that it is an important aspect of achieving significant value from OO design.

Classically, encapsulation, also called information hiding, “serves to separate the contractual interface of an abstraction and its implementation.”1 The idea is that the user of the functionality only knows about the public interface (contractual interface) and has no knowledge, nor any ability to tie itself, to the implementation.  The implementation includes both data representation and, effectively, algorithm.

Many times I’ll get an alternate definition; essentially the respondent will define encapsulation as “an approach which allows an object’s behavior to be called without the caller having knowledge of how the behavior is implemented.”  This seems very close to the classical definition but misses the key point of the “contractual interface.”

I could argue that when any method is called, the caller doesn’t have knowledge of how the method is implemented; it just gets a value back.  The missing aspect, the key aspect, is the constraint regarding which methods the caller may call, i.e., the public interface.

What started me on this topic was a recent conversation with a peer regarding read-only objects.  Before I get into specifics, let me baseline the traditional encapsulation approach in a typical object.
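As a starting point, here is what I mean by the traditional approach (a minimal sketch of my own, not production code): the representation is private, and callers are constrained to the contractual interface.

```java
public class Account {
    // Representation detail: hidden from callers, free to change
    // (e.g., to BigDecimal) without breaking the contract.
    private long balanceInCents;

    // The contractual interface: the only way callers may interact.
    public void deposit(long cents) {
        if (cents <= 0) {
            throw new IllegalArgumentException("deposit must be positive");
        }
        balanceInCents += cents;
    }

    public double getBalance() {
        return balanceInCents / 100.0;
    }
}
```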


Cut Waste, Not Costs

Monday, March 15th, 2010

As I read more and more about the Toyota debacle I’m struck by an apparently myopic management drive to cut costs.  In the case of Toyota it appears that cost cutting extended into quality cutting.  A company once known for superb quality had methodically reduced that aspect of their output.  This isn’t just conjecture; it seems that people inside the company had been aware of a decline in quality due to a focus on reducing costs.1 Is there a general lesson to consider?

I believe the failure is one of misplaced focus. The focus when Toyota began cutting costs was to remove waste.  That waste could be found throughout their manufacturing processes.  Wasted materials, productivity, tooling, and equipment were all identified as Toyota’s management and workers struck out on a journey to reduce waste and improve productivity.  They ushered in a set of practices that others would soon adopt.

Head back to the 1950s and you’ll find Taiichi Ohno2 hard at work addressing myriad manufacturing shortcomings at Toyota.  Mr. Ohno is really the father of lean manufacturing and just-in-time inventory management.  He didn’t name them as such.  He was just trying to remove waste from the entire manufacturing process.  By the late 1990s these concepts had become standard operating procedure at many firms.

It makes sense that a business would focus on reducing waste.  Although it may require effort to remove waste without reducing productivity, one would expect a leaner process to yield an overall efficiency gain.  It would also seem that quality does not benefit from waste.  After many years of experience with these principles, companies have found that an approach of using only the resources that are needed, when they are needed, provides a sound basis for their operations.  So what happened at Toyota?  They apparently went beyond cutting waste.


Testing, 1-2-3, Testing

Thursday, February 18th, 2010

During the past several months I’ve had an interesting experience working with Brainbench.  As you may know, Brainbench (a part of Previsor) offers assessment tests and certifications across a wide range of subjects.  They cover many technical and non-technical areas.  I have taken Brainbench exams myself and I have seen them used as a component within a hiring process.  However, I did not understand how these exams were created.

That mystery ended for me late last year when I received an email looking for technologists to assist in validating a new exam that Brainbench was creating to cover Spring version 2.5.  Being curious about the test creation process, I applied for the advertised validator role.  I was pleasantly surprised when they contacted me with an offer for the role of test author instead.

I will not delve into Brainbench’s specific exam creation approach since I assume it is proprietary and I want to respect their intellectual property.  What I found was a very well-planned and thorough process.  Having a background in education and a strong interest in teaching and mentoring, I know the challenge of creating a meaningful assessment, and their process focuses squarely on producing an accurate and well-considered exam.

I believe that I am quite knowledgeable regarding Spring.  I have used many of its features for work and personal projects.  The philosophies supported by the product (encouraged, not prescribed) address many areas of coding that help reduce clutter, decouple implementations, and simplify testing.  As a true fan of Spring’s feature set, I found it challenging to decide which aspects were most important when assessing an individual’s knowledge of the overall framework.
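To illustrate the decoupling I have in mind (my own toy example, not material from the exam): a service that depends only on an interface can have its implementation wired in by the Spring container, which is exactly what makes it easy to substitute a stub in tests.

```java
// Each type would live in its own file in a real project.
interface ClaimRepository {
    double totalClaims(String policyId);
}

public class ClaimService {
    private final ClaimRepository repository;

    // Constructor injection: Spring supplies an implementation, so this
    // class never names one, and a test can pass in a stub directly.
    public ClaimService(ClaimRepository repository) {
        this.repository = repository;
    }

    public boolean isHighRisk(String policyId) {
        return repository.totalClaims(policyId) > 10000.0;
    }
}
```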

Business Rules Forum 2009 Winds Down

Friday, November 6th, 2009

With the vendors gone, the main hall seemed spacious during the morning keynote and lunchtime presentations.  James Taylor [of “Smart (Enough) Systems” fame] delivered the keynote address.  He always has interesting insights regarding the state of affairs in agile systems design, leveraging both automated decisioning and workflow processes.

James made the point that systems need to be more agile given higher levels of uncertainty with which our businesses deal.  The need to adjust and react is more critical as our business strategies and goals flex to the changing environment.  Essentially he seemed to be saying that businesses should not reduce their efforts to be agile during this economic downturn.  Rather, it is more important to increase agility in order to respond quickly to shifting opportunities.

Following the keynote I attended Brian Dickinson’s session titled, “Business Event Driven Enterprises Rule!”  The description of the session in the conference guide had caught my attention since it mentioned “event partitioning” which was a phrase I had not heard used in terms of designing automated solutions for businesses.

I was glad that I went.  Brian was an energetic speaker!  It was clear that he was deeply committed and passionate about focusing on events rather than functionality when considering process automation.  The hour-long session flew by and it was apparent that we hadn’t made a dent in what he really wanted to communicate.

Brian was kind enough to give attendees a copy of his book, “Creating Customer Focused Organizations,” which I hope expands on the premise of his presentation today.  Although quite natural as a design paradigm when building event-driven UIs and multi-threaded applications, I have not spent time focusing on events when designing the business and database tiers of applications.  For me, the first step in working with his central tenets will be to try applying them, at least in a small way, on my next architecture project.

Business Rules Forum: In the Groove

Thursday, November 5th, 2009

The second day of the BRF is typically the most active.  People are arriving throughout day 1 and start heading out on day 3.  I’m attending RuleML, which follows on the heels of the BRF, so I’ll be here for all of it.

The morning keynote was delivered by Stephen Hendrick (IDC).  His presentation was titled, “BRMS at a Cross Roads: the Next Five Years.”  It was interesting hearing his vision of how BRMS vendors will need to position their offerings in order to be relevant for the future needs of businesses.

I did find myself wondering whether his vision was somewhat off in terms of timing.  The move to offer unified (or at least integrated) solutions based on traditional BRMS, Complex Event Processing, Data Transformation, and Analytics seemed well beyond where many of the clients I encounter are in terms of leveraging existing BRMS capabilities.

Between discussions with attendees and work on the projects for which Blue Slate’s customers hire us, the current state of affairs seems to be more about understanding how to begin using a BRMS.  I find many clients are just getting effective governance, rules harvesting, and infrastructure support for BRMS integration started.  Discussions surrounding more complex functionality are premature for these organizations.

As usual, there were many competing sessions throughout the day that I wanted to attend.  I had to choose between these and spending some in-depth time with a few of the vendors.  One product that I really wanted to get a look at was JBoss Rules (Red Hat).

As with most Red Hat offerings, there are free and fee-based versions of the product.  As is typical, the fee-based version is aimed at enterprises that do not want to deal with experimental or beta aspects of the product, preferring instead a formal process of periodic production-worthy upgrades.  The fee-based offering also gets you support beyond the user groups available to users of the free version.

The naming of the two versions is not clear to me.  I believe that the fee-based version is called JBoss Rules while the free download is called JBoss Drools, owing to the fact that Red Hat used Drools as the basis for its rule engine offering.  The Drools suite includes BPM, BRM, and Event Processing components.  My principal focus was the BRMS to start.

The premier open source rules offering (my opinion) has come a long way since I last tried it over a year ago.  The feature set includes a version control repository for the rules, somewhat user-friendly rule editing forms, and a test harness.  Work is underway to support templating for rules, which is vital for creating rules that can be maintained easily by business users.  I will be downloading and working with this rule engine again shortly!
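For anyone who wants to repeat the experiment, the core of the Drools 5 API of this era boils down to a few calls.  This is a minimal sketch of my own (the claims.drl rule file and the inserted fact are placeholders):

```java
import org.drools.KnowledgeBase;
import org.drools.KnowledgeBaseFactory;
import org.drools.builder.KnowledgeBuilder;
import org.drools.builder.KnowledgeBuilderFactory;
import org.drools.builder.ResourceType;
import org.drools.io.ResourceFactory;
import org.drools.runtime.StatefulKnowledgeSession;

public class DroolsSmokeTest {
    public static void main(String[] args) {
        // Compile a rule file found on the classpath.
        KnowledgeBuilder builder = KnowledgeBuilderFactory.newKnowledgeBuilder();
        builder.add(ResourceFactory.newClassPathResource("claims.drl"), ResourceType.DRL);
        if (builder.hasErrors()) {
            throw new IllegalStateException(builder.getErrors().toString());
        }

        // Assemble a knowledge base and run the rules against a fact.
        KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase();
        kbase.addKnowledgePackages(builder.getKnowledgePackages());

        StatefulKnowledgeSession session = kbase.newStatefulKnowledgeSession();
        try {
            session.insert(new Object()); // a real domain fact would go here
            session.fireAllRules();
        } finally {
            session.dispose();
        }
    }
}
```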

Business Rules Forum: Full Fledged Kickoff!

Wednesday, November 4th, 2009

Today the Business Rules Forum (BRF) kicked off for its 12th year.  Gladys Lam welcomed us and set the stage for an enlightening and engaging three days.  Jim Sinur (Gartner) gave the keynote address.  His expertise surrounding the entire field of Business Process Management (BPM), Business Rules Management (BRM) and Complex Event Processing (CEP) gives him significant insight into the industry and trends.

Jim’s talk was a call to action for product vendors and practitioners that the world has changed fundamentally and being able to leverage what he called “weak signals” and myriad events from many sources was becoming a requirement for successful business operations.  As always his talk was accompanied with a little humor and a lot of excellent supporting material.

During the day I attended three sessions and two of the vendor “Fun Labs.”  For me, the most interesting session of the ones I attended was given by Graham Witt (Ajilon).  He discussed his success with an approach that allows business users to document rules using a structured natural language.  His basis was SBVR, but he reduced the complexity to create a practical solution.

Graham did a great job of walking us through a set of definitions for fact model, term, fact type, and so forth. Using our understanding of the basic components of a structured rule, he explored how one can take ambiguous statements, leverage the structure inherent in the fact model, and create an unambiguous statement that is still completely understandable to the business user.  To invent an example in that spirit: a vague requirement like “policies need a holder” becomes the structured rule “each policy must have exactly one policyholder.”

His approach of creating templates for each type of rule made sense as a very effective method to give the business user the flexibility of expressing different types of rules while staying within a structured syntax.  This certainly seems like an approach to be explored for getting us closer to a DRY (Don’t Repeat Yourself) process that moves rules from the requirements into the design and implementation phases of a rules-based project.

The vendor labs were also interesting.  I attended one run by Innovations Software Technology and another by RuleArts.