JavaOne 2010 Concludes
Saturday, September 25th, 2010

My last two days at JavaOne 2010 included some interesting sessions as well as some time in the pavilion. I’ll mention a few of the session topics that I found interesting, along with some of the products I intend to check out.
I attended a session on designing a web architecture focused on high performance over low-bandwidth connections. The speaker had been tasked with building a web-based framework for the government of Ethiopia. He discussed the challenges presented by that country’s infrastructure – network speeds on the order of 5 Kbps between sites. He also had to work with an IT group that, although educated and intelligent, did not have much depth beyond working with an Oracle database’s features.
His solution lets developers create fully functional web applications that keep exchanged payloads under 10 KB. Although I understand the logic of the approach in this case, I’m not sure the technique would be practical in situations without such severe bandwidth and skill-set limitations.
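He didn’t walk through implementation details, so purely as a point of reference, here is a minimal sketch of one common payload-trimming technique – keeping the response terse and gzip-compressing it when the client supports it. The servlet, the JSON shape, and all names here are my own illustration, not his framework.

```java
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.util.zip.GZIPOutputStream;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Illustrative only: keep the payload terse, then gzip it when the
// client advertises support. Nothing here comes from the talk itself.
public class CompactPayloadServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        resp.setContentType("application/json");

        String accepted = req.getHeader("Accept-Encoding");
        if (accepted != null && accepted.contains("gzip")) {
            resp.setHeader("Content-Encoding", "gzip");
            try (Writer out = new OutputStreamWriter(
                    new GZIPOutputStream(resp.getOutputStream()), "UTF-8")) {
                out.write(buildMinimalJson());
            }
        } else {
            resp.getWriter().write(buildMinimalJson());
        }
    }

    // Short field names and no client-derivable data help stay under a
    // tight payload budget.
    private String buildMinimalJson() {
        return "{\"id\":42,\"n\":\"example\"}";
    }
}
```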
A basic theme of his talk was keeping the data and logic tightly co-located. In his case it is all located in the database (PL/SQL), but he agreed that it could all live in the application tier instead (e.g. NoSQL). I’m not convinced that this is a good approach to creating maintainable high-volume applications. It could be that the domain of business applications and business verticals in which I often find myself differs from the use cases common to developers promoting the removal of tiers from the stack (whether removing the DB server or the mid-tier logic server).
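To make the “logic in the database” style concrete: in that model the Java tier shrinks to little more than a pass-through to stored procedures. A minimal sketch might look like the following – the RENDER_PAGE procedure, the connection details, and the parameter shapes are all hypothetical, invented for illustration.

```java
import java.sql.CallableStatement;
import java.sql.Clob;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Types;

// Illustrative only: the mid-tier does no real work; a hypothetical
// PL/SQL procedure builds the entire response payload.
public class DbResidentLogicClient {

    public static void main(String[] args) throws Exception {
        String url = "jdbc:oracle:thin:@//dbhost:1521/ORCL"; // placeholder
        try (Connection conn = DriverManager.getConnection(url, "app", "secret");
             CallableStatement call = conn.prepareCall("{call RENDER_PAGE(?, ?)}")) {
            call.setString(1, "home");                // page the client asked for
            call.registerOutParameter(2, Types.CLOB); // payload produced in PL/SQL
            call.execute();

            Clob payload = call.getClob(2);
            int preview = (int) Math.min(200, payload.length());
            System.out.println(payload.getSubString(1, preview));
        }
    }
}
```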
One part of his approach with which I absolutely concur is pushing processing onto the client. Using the client’s CPU seems like common sense to me; the work lies in balancing that against security and bandwidth. It can be done, though, and I believe we will continue to find more effective ways to leverage all of that computing power.
I also enjoyed a presentation on moving data between a data center and the cloud to perform heavy, intermittent processing. The presenters did a great job of describing their trials and successes in leveraging the cloud for computationally expensive processing on transient data (e.g. they copy the data up each time they run the process rather than pay to store it there). They also provided a lot of interesting information about the options, advantages, and challenges of leveraging the cloud (Amazon EC2 in this case).
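They didn’t show code, but the “copy up, process, throw away” pattern is easy to sketch. Assuming the input is staged in Amazon S3 for the EC2 workers, something like the following would do – the bucket, key, and file names are placeholders, and this uses the AWS SDK for Java rather than whatever tooling the presenters actually used.

```java
import java.io.File;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

// Illustrative only: stage transient input in S3 for the duration of
// one processing run, then delete it so no storage charges accrue.
public class TransientCloudRun {

    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient(); // credentials from env
        String bucket = "example-compute-staging"; // placeholder bucket
        String key = "runs/input.dat";             // placeholder key

        s3.putObject(bucket, key, new File("input.dat")); // copy the data up

        // ... kick off the EC2-based processing against the staged object here ...

        s3.deleteObject(bucket, key); // transient: delete rather than store
    }
}
```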