JavaOne 2010 Concludes
My last two days at JavaOne 2010 included some interesting sessions as well as some time spent in the pavilion. I'll mention a few of the session topics that I found interesting, along with some of the products that I intend to check out.
I attended a session on creating a web architecture focused on high performance over low-bandwidth connections. The speaker had been tasked with designing a web-based framework for the government of Ethiopia. He discussed the challenges presented by that country's infrastructure; consider network speeds on the order of 5 Kbps between sites. He also had to work with an IT group that, although educated and intelligent, did not have much depth beyond working with an Oracle database's features.
His solution allows developers to create fully functional web applications that keep exchanged payloads under 10 KB. Although I understand the logic of the approach in this case, I'm not sure the technique would be practical in situations without such severe bandwidth and skill-set limitations.
A basic theme of his talk was keeping the data and logic tightly co-located. In his case it is all located in the database (PL/SQL), though he agreed that it could instead all live in the application tier (e.g. with a NoSQL store). I'm not convinced that this is a good approach to creating maintainable, high-volume applications. It could be that the domain of business applications and business verticals in which I often find myself differs from the use cases common among developers promoting the removal of tiers from the stack (whether removing the database server or the mid-tier logic server).
One part of his approach with which I absolutely concur is pushing processing onto the client. Using the client's CPU seems like common sense to me; the challenge is balancing that against security and bandwidth constraints. However, it can be done, and I believe we will continue to find more effective ways to leverage all that computing power.
I also enjoyed a presentation on moving data between a data center and the cloud to perform heavy, intermittent processing. The presenters did a great job of describing their trials and successes in leveraging the cloud for computationally expensive processing of transient data (e.g. they copy the data up each time they run the process rather than pay to store it in the cloud). They also provided a lot of interesting information regarding the options, advantages, and challenges of leveraging the cloud (Amazon EC2 in this case).
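Their copy-up-then-discard pattern is easy to picture. Here is a minimal sketch of what it might look like with the AWS SDK for Java; the presenters did not show code, so the bucket name, key, file, and credentials below are all hypothetical placeholders.

```java
import java.io.File;

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;

/**
 * Sketch: copy transient input data to S3 before a cloud batch run,
 * then remove it afterward so no storage fees accrue between runs.
 * Credentials, bucket, and key names are hypothetical.
 */
public class TransientCloudData {
    public static void main(String[] args) {
        AmazonS3 s3 = new AmazonS3Client(
                new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));

        String bucket = "example-batch-input";   // hypothetical bucket
        String key = "run-2010-09-24/input.dat"; // hypothetical key

        // Upload the data set for this run only
        s3.putObject(bucket, key, new File("input.dat"));

        // ... launch EC2 instances, run the computation, collect results ...

        // Delete the input so nothing is stored between runs
        s3.deleteObject(bucket, key);
    }
}
```

The interesting economics are in those two comments at the end: paying for compute only while it runs, and paying for storage not at all.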
There was a session comparing the use of Domain-Specific Languages (DSLs) with providing libraries (APIs). This is an area that interests me, since I feel it aligns well with leveraging rules engines and empowering business experts. I should have read the abstract rather than letting the title catch my eye, since the presenters' focus was much narrower than I expected.
Thankfully, the presenters' styles kept the presentation fun, and each of them (a DSL proponent and an API/library proponent) made some great points about the strengths and weaknesses of the two approaches. In the end I am still a fan of DSLs as a way to simplify the "programming" or "configuration" of a rules engine within a narrow domain.
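To make the contrast concrete, here is a sketch in Java of the same business rule written against a plain API and against a small internal DSL. This is not the presenters' code; every class and method name is invented for the illustration.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the session's trade-off: one business rule expressed through
// a plain API call and through a small internal DSL. All names here are
// hypothetical, invented for this illustration.
public class RuleStyles {

    // Plain API style: explicit, but the intent is buried in the arguments
    static void apiStyle(RuleEngine engine) {
        engine.addRule(new Rule("gold-discount", "customer.tier == GOLD", 0.10));
    }

    // Internal DSL style: reads closer to the business expert's own words
    static void dslStyle(RuleEngine engine) {
        engine.rule("gold-discount")
              .when("customer.tier == GOLD")
              .thenDiscountPercent(10);
    }

    public static void main(String[] args) {
        RuleEngine engine = new RuleEngine();
        apiStyle(engine);
        dslStyle(engine);
        System.out.println(engine.getRules());
    }
}

// Minimal supporting types so the sketch compiles and runs
class Rule {
    private final String name;
    private final String condition;
    private final double discount;

    Rule(String name, String condition, double discount) {
        this.name = name;
        this.condition = condition;
        this.discount = discount;
    }

    @Override
    public String toString() {
        return name + ": when [" + condition + "] apply discount " + discount;
    }
}

class RuleEngine {
    private final List<Rule> rules = new ArrayList<Rule>();

    void addRule(Rule rule) {
        rules.add(rule);
    }

    List<Rule> getRules() {
        return rules;
    }

    // Entry point for the fluent DSL
    RuleBuilder rule(String name) {
        return new RuleBuilder(this, name);
    }
}

class RuleBuilder {
    private final RuleEngine engine;
    private final String name;
    private String condition;

    RuleBuilder(RuleEngine engine, String name) {
        this.engine = engine;
        this.name = name;
    }

    RuleBuilder when(String condition) {
        this.condition = condition;
        return this;
    }

    void thenDiscountPercent(int percent) {
        engine.addRule(new Rule(name, condition, percent / 100.0));
    }
}
```

The DSL version costs extra plumbing (the builder class), but the rule itself becomes something a business expert can read and perhaps even write; that trade-off was the heart of the debate.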
I attended several other sessions that were well done but not as directly relevant to my current areas of interest or professional responsibilities.
Switching gears to the technology pavilion, I was able to spend a couple of hours checking out the booths. There were several vendors and products that were new to me.
Vaadin is an Ajax web-application development framework for Java. Although I had heard of it, I had never looked into it. At their booth they had a nice demonstration of its features as well as a hard copy of the version 6.4 documentation. After reviewing the book (~400 pages), I will certainly be giving the framework a spin.
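Its programming model keeps the UI code in server-side Java, which is then rendered via Ajax in the browser. I haven't built anything with it yet, but based on the documentation a minimal Vaadin 6 application looks roughly like this (the class name and captions are mine):

```java
import com.vaadin.Application;
import com.vaadin.ui.Button;
import com.vaadin.ui.Label;
import com.vaadin.ui.Window;

/**
 * A minimal Vaadin 6 application: the UI is plain server-side Java,
 * and the framework renders it as Ajax in the browser.
 */
public class HelloVaadin extends Application {
    @Override
    public void init() {
        final Window main = new Window("Hello Vaadin");
        setMainWindow(main);

        main.addComponent(new Label("Plain Java on the server, Ajax in the browser"));

        // Event handling also stays on the server side
        main.addComponent(new Button("Click me", new Button.ClickListener() {
            public void buttonClick(Button.ClickEvent event) {
                main.showNotification("Handled on the server");
            }
        }));
    }
}
```

As I understand it, the application class is then mapped to a URL through Vaadin's servlet in web.xml, so the developer writes no HTML or JavaScript by hand.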
ZeroTurnaround was there from Estonia. Their JRebel tool allows any update to a Java-based web application to be reflected in real time: the deployed runtime within the application server is updated without actually redeploying or restarting anything. This functionality extends to changes in the web.xml file, Spring configuration files, JSPs, and so on.
It was impressive to watch the developer update a configuration file, flip back to the browser, hit refresh, and show the updated behavior or screen element. This certainly seems like a productivity enhancer for developers making the nitty-gritty updates needed to tweak a system.
The last offering that was new to me and seemed interesting was GitHub. Their tagline, Social Coding, says it all. It is a hosted source code control service built around Git, a distributed alternative to Subversion, ClearCase, and other traditional version control tools, and it claims to currently house 1.2 million repositories used by over 400,000 developers. It wasn't clear to me exactly why the service is better than one like SourceForge, but I may take a look and see.
It probably isn't fair to the vendors of products that I already use, or have at least heard of, not to give them some airtime here, so I'll post information about my conversations at their booths in a future entry.
In all, JavaOne 2010 was a very informative and very well organized event. I thank Oracle for doing such an outstanding job with the conference. Please comment if you had some thoughts on favorite and interesting sessions or products you saw at JavaOne 2010.
This article is a follow-up to my first JavaOne 2010 blog entry at: http://monead.com/blog/?p=761
Tags: data, enterprise applications, enterprise systems, Information Systems, Internet, Java, linkedin, programming