
Archive for May, 2010

Angels and Saints, Patience Please

Monday, May 31st, 2010

“I like the silent church before the service begins, better than any preaching.” -Ralph Waldo Emerson

This quote has been running through my head a lot as I’ve been spending time alone in church.  Our church music director will be away for the first three Sundays in June and asked if I would be willing to take the reins during his absence.  This isn’t the first time I’ve done this, but I believe it is the longest stint I’ve had.

My initial thinking, and usual approach, when playing the service is to use the piano to accompany everything.  This is where I feel safest.  I spend a lot of time at the piano, working with the Children’s Choir, rehearsing with the Brass Kickers and accompanying myself for a variety of solos and duets.

However, I’ve begun to feel compelled to use the organ.  Its variety of colors, range of tones and wide dynamic range cannot be approached by the piano.  Although I love the sound of a piano accompanying a solo voice, the organ adds significant sonic breadth, especially when accompanying hymns.

In fact, I believe that it is the flatness of verse after verse of a hymn played on the piano that has continually pushed me to move out of my personal comfort zone and explore the organ as a more versatile and ultimately more appropriate instrument for such situations.  To be sure, it is now taking me an inordinate amount of time to prepare for a service.

When using the piano, all I needed to do was learn to play the notes.  Now I need to worry about the voicings for each verse.  Studying the text to suggest color and dynamics adds work.  Basic tasks such as figuring out which manual to use for each verse and configuring piston settings so that they are convenient to access while playing also add complexity for someone who does not use the instrument often.

I have great respect for those who make such planning and preparation look easy.  I cannot imagine doing this week after week, at least not with a separate full-time job.  I would guess that over time one would get to know the instrument and have a more organized approach to this process.  For me there is a great deal of experimentation, figuring out which ranks extend into which octaves and which timbres sound good together.

To be sure, it is an amazing experience to fiddle with such decisions and hear the difference in the feeling evoked by a given hymn.  Played with certain stops, the piece is upbeat.  Change the sounds and it is suddenly reflective or pensive.  In fact, my biggest risk is probably over-using the breadth of sounds and dynamics.

For instance, the carillon seems like a great choice for bringing out a melody, perhaps to introduce a hymn.  However, it would probably be tiresome for the congregation if used every week.  Also, it is tempting to approach some hymns with a powerful accompaniment.  I enjoy hearing the reverberation at the end of the piece while practicing.  However, the congregation shouldn’t be in a shouting match with the organ, so I’ll need to tame my “Virgil Fox-ness.”

This will be an interesting month of Sundays for me and the congregation.  I pray we will each find enjoyment and meaning during the worship time spent together.

At the least, I hope those worshiping don’t come away saying, “I like the silent church before the service begins, better than Dave’s organ playing.”


Database Refactoring and RDF Triples

Wednesday, May 12th, 2010

One of the aspects of agile software development that may lead to significant angst is the database.  Unlike refactoring code, the refactoring of the database schema involves a key constraint – state!  A developer may rearrange code to his or her heart’s content with little worry since the program will start with a blank slate when execution begins.  However, the database “remembers.”  If one accepts that each iteration of an agile process produces a production release, then the stored data can’t be deleted as part of the next iteration.

The refactoring of a database becomes less and less trivial as project development continues.  While developers have IDEs to refactor code, change packages, and alter build targets, there are few tools for refactoring databases.

My definition of a database refactoring tool is one that assists the database developer by remembering the database transformation steps and storing them as part of the project – e.g. part of the build process.  This includes both the schema changes and data transformations.  Remember that the entire team will need to reproduce these steps on local copies of the database.  It must be as easy to incorporate a peer’s database schema changes, without losing data, as it is to incorporate the code changes.
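To make the idea concrete, here is a minimal sketch (not any particular product) of what “remembering the transformation steps” could look like: migration steps live in the project as an ordered list of named functions, and a version table records which steps have already been applied, so a teammate can bring a local database up to date without losing data.  All table and step names here are invented for illustration.

```python
# Sketch of a migration runner: ordered steps stored with the project,
# applied exactly once, tracked in a schema_version table.
import sqlite3

def migration_001_create_customer(conn):
    conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")

def migration_002_add_email(conn):
    conn.execute("ALTER TABLE customer ADD COLUMN email TEXT")

# The ordered list is a project artifact, checked in alongside the code.
MIGRATIONS = [
    ("001_create_customer", migration_001_create_customer),
    ("002_add_email", migration_002_add_email),
]

def migrate(conn):
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (name TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT name FROM schema_version")}
    for name, step in MIGRATIONS:
        if name not in applied:
            step(conn)
            conn.execute("INSERT INTO schema_version (name) VALUES (?)", (name,))
    conn.commit()

conn = sqlite3.connect(":memory:")
migrate(conn)
migrate(conn)  # running again is a no-op: the database remembers the steps
print([r[0] for r in conn.execute("SELECT name FROM schema_version ORDER BY name")])
```

Because the runner is idempotent, the same command works whether a database is brand new or one step behind, which is exactly what a team of developers with local copies needs.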

These same data-centric complexities exist in waterfall approaches when going from one version to the next.  Whenever the database structure needs to change, a path to migrate the data has to be defined.  That transformation definition must become part of the project’s artifacts so that the data migration for the new version is supported as the program moves between environments (test, QA, load test, integrated test, and production).  Also, the database transformation steps must be automated and reversible!

That last point, the ability to roll back, is a key part of any rollout plan.  We must be able to back out changes.  It may be that the approach to a rollback is to create a full database backup before implementing the update, but that assumption must be documented and vetted (e.g. the approach of a full backup to support the rollback strategy may not be reasonable in all cases).
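A reversible migration can be sketched as an up/down pair, where “up” changes the schema and transforms the data, and “down” reverses both without resorting to a full backup.  The example below (invented table and column names, SQLite for self-containment) splits a full_name column and then puts it back together; the “down” step uses the copy-and-rename pattern since older SQLite versions cannot drop columns directly.

```python
# Sketch of a reversible migration: "up" splits full_name into two columns
# and migrates existing rows; "down" reverses the data transformation and
# restores the original schema via the copy-and-rename pattern.
import sqlite3

def up(conn):
    conn.execute("ALTER TABLE person ADD COLUMN first_name TEXT")
    conn.execute("ALTER TABLE person ADD COLUMN last_name TEXT")
    for pid, full in conn.execute("SELECT id, full_name FROM person").fetchall():
        first, _, last = full.partition(" ")
        conn.execute("UPDATE person SET first_name=?, last_name=? WHERE id=?",
                     (first, last, pid))

def down(conn):
    # Rebuild the old shape and re-derive full_name from the split columns.
    conn.execute("CREATE TABLE person_old (id INTEGER PRIMARY KEY, full_name TEXT)")
    conn.execute("INSERT INTO person_old "
                 "SELECT id, first_name || ' ' || last_name FROM person")
    conn.execute("DROP TABLE person")
    conn.execute("ALTER TABLE person_old RENAME TO person")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, full_name TEXT)")
conn.execute("INSERT INTO person (full_name) VALUES ('Ada Lovelace')")
up(conn)
print(conn.execute("SELECT first_name, last_name FROM person").fetchone())
down(conn)
print(conn.execute("SELECT full_name FROM person").fetchone())
```

Note that “down” must reverse the data transformation as well as the schema change; dropping the new columns alone would lose the names entirely.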

This database refactoring issue becomes very tricky when dealing with multiple versions of an application.  The transformation of the database schema and data must be done in a defined order.  As more and more data is stored, the process consumes more storage and processing resources.  This is the ETL side-effect of any system upgrade.  Its impact is simply felt more often (e.g. potentially during each iteration) in an agile project.

As part of exploring semantic technology, I am interested in contrasting this to a database that consists of RDF triples.  The semantic relationships of data do not change as often (if at all) as the relational constructs.  Many times we refactor a relational database as we discover concepts that require one-to-many or many-to-many relationships.

Is an RDF triple-based database easier to refactor than a relational database?  Is there something about the use of RDF triples that reduces the likelihood of a multiplicity change leading to a structural change in the data?  If so, using RDF as the data format could be a technique that simplifies the development of applications.  For now, let’s take a high-level look at a refactoring use case.
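The multiplicity question can be illustrated without any triple-store machinery at all, by modeling triples as plain (subject, predicate, object) tuples.  The names below (ex:alice, ex:hasPhone) are invented placeholders, not a real vocabulary.

```python
# A toy "database" of RDF-style triples as (subject, predicate, object) tuples.
triples = {
    ("ex:alice", "ex:hasPhone", "555-0001"),
}

# Requirement change: a person may now have several phone numbers.  In a
# relational schema this one-to-many shift typically forces a new phone table
# plus a data migration; with triples we simply assert another statement
# using the same predicate -- no structural change, no migration.
triples.add(("ex:alice", "ex:hasPhone", "555-0002"))

phones = sorted(o for s, p, o in triples
                if s == "ex:alice" and p == "ex:hasPhone")
print(phones)  # both numbers coexist under the unchanged predicate
```

This is the intuition behind the question above: because a triple store imposes no column-level cardinality, a one-to-many discovery changes the data, not the structure.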


Business Ontologies and Semantic Technologies Class

Sunday, May 9th, 2010

Last week I had the pleasure of attending Semantic Arts’ training class entitled, “Designing and Building Business Ontologies.”  The course, led by Dave McComb and Simon Robe, provided an excellent introduction to semantic technologies and tools as well as coverage of ontological best practices.  I thoroughly enjoyed the 4-day class and achieved my principal goals in attending; namely, to understand the semantic web landscape, including technologies such as RDF, RDFS, OWL, and SPARQL, as well as the current state of tools and products in this space.

Both Dave and Simon have a deep understanding of this subject area.  They also work with clients using this technology so they bring real-world examples of where the technology shines and where it has limitations.  I recommend this class to anyone who is seeking to reach a baseline understanding of semantic technologies and ontology strategies.

Why am I so interested in semantic web technology?  I am convinced that structuring information so that it can be consumed by systems, in ways more automated than current data storage and association techniques allow, is required in order to achieve any meaningful advancement in the field of information technology (IT).  Whether wiring together web services or setting up ETL jobs to create data marts, too much IT energy is wasted on repeatedly integrating data sources – essentially hand-wiring related information together because the computer cannot yet wire it together autonomously!