Tuesday, November 29, 2011

Applying Domain-Driven Design

One of the common causes of software that falls short of its requirements is a poor understanding of the domain in which it will operate. This miscommunication stems largely from developers and domain experts speaking different languages (to put it simply). Post-mortems of failed projects show that underestimating the domain puts the whole project's understanding at risk. A popular example occurs when you try to apply simple CRUD generators to something like medical instrumentation: your productivity falls and you end up hitting a wall, because the technology is just the enabler, not the end in itself. I learnt this the hard way in a recent project where over-engineering and legacy code forced us to generate Entity Framework configuration files via a self-built generator instead of the inbuilt edmx designer - an example of relying on reverse engineering over the conventional process.

Software designed with its domain in mind, however, is at its core an object-oriented model expressed in terms of that domain. The model is constituted by the domain rather than by data-oriented entities, and it encompasses the domain's wisdom (logic, data, rules, terminology) expressed in a simple and usable manner. One tangible benefit of this approach is the increased longevity of the code, since the domain outlives any implementation of it. Moreover, the model can be modularized into reusable components and mashed up as needs arise.

To start off with existing applications, we can use metadata such as annotations or attributes to 'beautify' our code with domain knowledge without any extra plumbing in the existing application architecture/stack (see the sketch below). Annotations bring another benefit: Java and C# do not allow multiple inheritance, yet a single class may participate in several domain concepts at once, and annotations can express that.
Most domain-oriented tools, by contrast, tend to interface with their data/stack through XML, which might be fine for domain experts but is a pain to maintain (think of all the XML documents for business rules and their instances).
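
As a minimal sketch of the idea (the annotation and its members here are invented for illustration, not taken from any particular framework), a plain Java annotation can carry the experts' vocabulary and rules right on the entity, readable reflectively by any tooling:

import java.lang.annotation.*;

// Invented for illustration: metadata that records the ubiquitous-language
// name and the business rules attached to a domain concept.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface DomainConcept {
    String ubiquitousName();
    String[] businessRules() default {};
}

@DomainConcept(
    ubiquitousName = "Calibration Run",
    businessRules = { "must be signed off by a technician",
                      "readings outside tolerance trigger a recall" })
class CalibrationRun {
    // Existing fields and getters stay exactly as they were; tools can read
    // the metadata via CalibrationRun.class.getAnnotation(DomainConcept.class).
}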

Domain annotations thus provide a starting point for integrating domain knowledge into our code and making it easier to reason about. Other approaches include full-fledged frameworks like Naked Objects that are geared to these tasks and, like Rails, Play or ASP.NET MVC, can quickly generate applications from the domain model.

Tuesday, September 20, 2011

Play Framework Cookbook : Book Review


Play Framework Cookbook

Author : Alexander Reelsen

Play is an upcoming framework from the growing breed of rapid application development tools that favour convention over configuration for web application development.
What sets it apart from the others is how customizable it is on the JVM, while letting you use either Java or Scala as desired. However, owing to Play's late arrival, tutorials and documentation for solving real-world problems with it have been scarce.
Being a cookbook, this book tackles its topics head-on and gives practical insight into building web applications, covering everything from installation to deployment.

The book is divided into 7 chapters and a small appendix. The first chapter covers the basics and is good for those coming without Rails experience, yet it also presents recipes for customizing all parts of the framework - pretty much everything that other technology cookbooks pack into themselves from cover to cover. The middle chapters dig deeper into the framework and its integration with external services. One special thing about this cookbook is that it devotes three chapters to modules, since extensibility via modules is one of Play's main differentiating factors. In line with its practicality, the book devotes its final chapter entirely to deployment and release tasks. This chapter omits the cloud vendors that provide PaaS for Play directly, but as many of those offerings are still unfinalized, they understandably find no mention.

The appendix is a disappointment, as it gives no pointers for further reading. As with any upcoming technology, it is imperative to look out for more resources to keep ourselves up to date with the ecosystem - especially since Play 2.0 should be in beta by the end of this year, and a reader always expects pragmatism from a hands-on cookbook. Also, given the framework's flexibility, various PaaS vendors are providing, or planning to provide, Play support, and these services deserve at least a listing. The author clearly states his reasons for leaving Scala-based Play applications out of the book; hopefully they will make their way into future editions.

Overall, the book lives up to expectations, going beyond the official documentation and the stock examples supplied with the Play distribution, offering insight into each activity it covers and making the recipes flexible enough for practical use.
For anyone learning the Play framework and looking to go beyond the examples to build interesting, practical applications, I'd heartily recommend this book.

Wednesday, August 24, 2011

Up Next : Book review of the Play framework

Today I received a copy of the book 'Play Framework Cookbook' by Alexander Reelsen, courtesy of its publisher, Packt Publishing.

As the name suggests, it covers various recipes for the Play framework, a promising rapid application development framework for the Java/Scala platform.

This is the first instance of good professional documentation for this upcoming framework, and it looks promising. I will be going over the book during the next few days and will post my views about it - stay tuned!

Friday, August 5, 2011

Need Concurrency? LMAX to the rescue!



In praise of LMAX.

Concurrency is a necessary evil in any application. Multicore architectures have made concurrent programming unavoidable for anyone who needs to enhance application performance.
Recently, on Martin Fowler's blog, I came across an interesting architecture, LMAX, used in a financial application to process over 6 million transactions per second on a single thread of execution!
Even if this architecture stays limited to the finance domain, it makes for an excellent approach to dealing with concurrency. Its single-threaded handling of all business logic processing might seem like a potential bottleneck, but that risk is negated by an alternative mechanism for recreating the processor's state - Event Sourcing.
The concept of an in-memory solution for performance is not new, but getting concurrency without the taxing side effects of transactions is indeed interesting.
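
To make this concrete, here is my own minimal sketch of the event-sourcing idea in Java - not LMAX's actual code, just its shape: every input event is journaled before the single-threaded processor applies it in memory, so the processor's state can always be rebuilt by replaying the journal.

import java.util.ArrayList;
import java.util.List;

public class EventSourcingSketch {

    interface Event { void applyTo(Account state); }

    static class Account { long balance; }

    static class Deposit implements Event {
        final long amount;
        Deposit(long amount) { this.amount = amount; }
        public void applyTo(Account state) { state.balance += amount; }
    }

    private final List<Event> journal = new ArrayList<Event>(); // a durable log in a real system
    private Account state = new Account();                      // the in-memory working copy

    // Single thread of execution: no locks, no database transaction.
    void handle(Event e) {
        journal.add(e);   // 1. record the input event
        e.applyTo(state); // 2. apply the business logic in memory
    }

    // Recovery: replay the whole log into a fresh state.
    Account rebuild() {
        Account fresh = new Account();
        for (Event e : journal) e.applyTo(fresh);
        return fresh;
    }
}
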
I'll be studying it some more over the weekend and will update here if necessary.


Saturday, June 25, 2011

The curious case of multithreading SQLite

In a recent project, I embarked upon saving information in a quick and lightweight manner, for which the serverless database SQLite was ideal. It is not only a flexible database supporting dynamic data types, but also an extensible one, used everywhere from embedded web browsers and mobile platforms to Rails-based websites (good riddance, MySQL). The database is so lightweight - under 100 KB in some cases - that calling it a 'DB' almost feels like an overstatement, and it can run as a memory-resident database for performance-conscious folks and applications.
However, its support for transactions and scalability leaves a lot to be desired; the documentation clearly states that these are not among the database's goals.
In my case, getting started was just a matter of finding the JDBC driver and its documentation on the project website. Upon customization, however, normalization became necessary, which in turn required sequential read/write operations. As soon as this was done, the multi-threaded application accessing the database started experiencing concurrency issues around it.
This was resolved using a little-known JNI wrapper I discovered, sqlite4java. It not only solved the concurrency issues but also enabled transactions. The beauty of this approach is that it uses a single connection, as opposed to a pooled or fresh connection per request - something that tends to become a real performance bottleneck as the application scales up.

For such a nice framework, it is a surprise that these changes have not yet permeated to the Java/native JDBC drivers, so users have to cope with sqlite4java's own API for now. This is probably because of its tight integration with the underlying C-based API - quite performant compared to its alternatives in Python and Java - which effectively makes it non-compliant with the JDBC specification.

As the single connection served the entire application, exposing it as a singleton was the easiest approach:


static SQLiteQueue queue = new SQLiteQueue(new File("BiddingsDB"));

static {
    // Start the queue's single worker thread; every job submitted via
    // execute() runs on this one thread against the one connection.
    queue.start();
}
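
With the queue in hand, a value-returning read can be wrapped as a job too. Here is my sketch of one (the biddings table is hypothetical, and I'm relying on my understanding of the sqlite4java API that complete() blocks until the job has run on the queue's thread):

static int countBiddings() {
    // SQLiteJob, SQLiteConnection and SQLiteStatement come from com.almworks.sqlite4java
    return queue.execute(new SQLiteJob<Integer>() {
        @Override
        protected Integer job(SQLiteConnection connection) throws Throwable {
            SQLiteStatement st = connection.prepare("SELECT COUNT(*) FROM biddings");
            try {
                st.step();
                return st.columnInt(0);
            } finally {
                st.dispose(); // statements are disposed on the same queue thread
            }
        }
    }).complete(); // blocks the caller until the job has finished
}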

To provide transaction-like support, the existing operations needed to be wrapped in this queue:

public static void databaseInsert(final List bidders) {
    queue.execute(new SQLiteJob<Integer>() {
        @Override
        protected Integer job(SQLiteConnection connection) throws Throwable {
            // Runs on the queue's single worker thread, so this read/write
            // sequence never contends with other database work.
            insert(bidders, connection);
            return null;
        }
    });
}

The insert method performs the actual reads and writes of a row in a non-blocking manner; this is exactly what used to cause locking problems with SQLite in the past. Integration tests verified the performance gains (from recollection) for a DB insert consisting of a read plus multiple writes per operation, sqlite4java vs. the stock JDBC driver:

1 record : ~200 ms vs. 2 s
100 records : 5 s vs. 10 s
10,000 records : ~100 s vs. Fail

As an afterthought, you can get a plain vanilla solution via scraperwiki.com, which not only gives you free cloud access for your scripts but also exposes their data via SQLite. This aptly highlights the difference in mindsets between Java and scripting developers. There, I was able to get a rudimentary scraper running in a matter of minutes, yet it was of no use to my client due to the restrictions on data gathering by the jobs running there. Hopefully we get to see some mixing of the two worlds in the near future; scripting for the JVM has been making a lot of noise for the past three years, but not much effort has come from either side so far. Oh, and I am really enjoying the rainy weather as I write this post!

Saturday, April 23, 2011

PC Gaming on Linux

Many people consider Linux a serious OS for computing, but it has a fun side that is not widely known, since nobody markets these games. For a moderate gamer, the Linux platform is quite satisfying, especially if you are looking to play games for amusement without going too heavy on your wallet. Linux has also improved considerably on the UI front: we can now have some seriously GPU-intensive graphics through out-of-the-box programs like Compiz.
Cool 3D Desktop with Compiz
Those looking for a quick solution can use http://live.linux-gamers.net, which lets you create a live CD/USB and start gaming from it directly.

In addition to point-and-click games (like card games), you can have some serious fun with the kind of 3D games PC gamers are familiar with. Be it a fast-paced action game like AssaultCube or a racing game like TuxKart, there are options for everyone.
TuxKart  : Funny racing game
AssaultCube : Fast paced action, a la Quake3 style

Thanks to id Software's initiative of providing a Linux binary (and their stand on the DirectX framework), we can install and play a game like DOOM 3 on Linux: http://sites.google.com/site/lgmscripts/scripts/games/doom-3 (which is definitely on my TODO list :)

There are certain caveats to this exercise too. Since my laptop has an Nvidia GeForce 8200MG card, I had to use a non-open-source binary driver, as the canonical open one was not working well on Ubuntu. Still, considering the freedom and the absence of any cost or licensing, it all makes for a good personal entertainment package.

Sunday, March 27, 2011

Domain-Driven Design Using Naked Objects

I just had the chance to read a newly released book, 'Domain-Driven Design Using Naked Objects' by Dan Haywood [http://www.pragprog.com/titles/dhnako], which provides an insight into the world of DDD. It is a book for techies and management people alike. Although Naked Objects is covered as the implementation framework (to demonstrate all aspects of DDD practically), the book serves as an excellent introductory text for those of us who are new to the idea of crafting an enterprise application around its domain. It also deserves a read because of the considerable buzz around this 'domain' stuff in the recent past.

According to the book,
'Domain-driven design is an approach to building application software that focuses on the bit that matters in enterprise applications: the core business domain. Rather than putting all your effort into technical concerns, you work to identify the key concepts that you want the application to handle, you try to figure out how they relate, and you experiment with allocating responsibilities (functionality). Those concepts might be easy to see (Customers, Products, Orders, and so on), but often there are more subtle ones (Payable, ShippingRecipient, and RepeatingOrder) that won’t get spotted the first time around. So, you can use a team that consists of business domain experts and developers, and you work to make sure that each understands the other by using the common ground of the domain itself.'

Becoming a domain expert is the natural next step for any developer: as experience grows, one needs more insight into the business, not just the software. UML does a great job of explaining the object-oriented side, but ponder for a second how it is really going to help a businessman interested in making more profit. The Naked Objects framework (http://www.nakedobjects.org), based on the design pattern of the same name (http://en.wikipedia.org/wiki/naked_objects), is open source (for Java only; the .NET version is commercial) and automatically converts simple beans/components into a user interface (read: multiple applications).
Don't confuse this with mere prototyping: DDD incorporates both the developers and the domain experts, and we are not just creating the UI.
DDD's two central premises, explained:
  • A single ubiquitous language for easing communication between developers and domain experts, instead of the two languages (like code and UML) that are the existing norm
  • Model-driven design that aims to capture the model of the business process - in code, rather than just visually, as was the case earlier

Naked Objects
The Java-based Naked Objects (NO) framework is an evolutionary step forward from Rails (and its other avatars: Grails, Spring Roo, ASP.NET MVC, etc.). It focuses more on the M and V rather than the whole of MVC, and yields much more domain-specific applications, in turn resulting in flexibility for all.
A typical NO application, generated through a Maven archetype, consists of multiple sub-projects: the core domain, fixture, service, command-line and webapp projects. The coolest thing is that NO automatically displays the domain objects in an object-oriented UI that is more flexible than anything an IDE offers.
NO also challenges the common frontend-middleware-backend convention and instead applies the Hexagonal architecture (http://alistair.cockburn.us/Hexagonal+architecture), which keeps the bigger picture in mind. Development in this framework is POJO-centric and heavily annotation-based, which should be pretty regular stuff for any JEE developer. Also, from my initial evaluation, the code generated during development is of maintainable quality - virtually essential for maintenance and scaling in any enterprise application.
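
To give a flavour of that POJO-centric style, here is a small sketch of a domain object. The package and annotation names are recalled from my reading of the book and should be treated as illustrative rather than authoritative:

// Names recalled from the book/framework docs; treat them as illustrative.
import org.nakedobjects.applib.annotation.MemberOrder;

public class Customer {

    // Properties become editable fields in the auto-generated UI.
    private String name;
    @MemberOrder(sequence = "1")
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    // Public methods become user-invokable actions (rendered as buttons).
    @MemberOrder(sequence = "2")
    public void placeOrder(String productCode) { /* domain logic lives here */ }

    // By convention, title() supplies the object's label in the UI.
    public String title() { return name; }
}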

Hence this book, and its field of study, is highly recommended for any enterprise developer, team, manager or domain expert and, as repeatedly mentioned, becomes ever more important as one gets more years of experience under one's belt. I am continuing my exploration, and if it proves really useful for me, I will post some exercises here.

Wednesday, March 2, 2011

10 Years of Ruby!

Let me recollect the not-so-recent past. It was 2008 and the recession was at its peak. Back then I was largely a self-taught developer with a working knowledge of Struts and EJB (the Java heavyweights of the time). After unsuccessfully hunting for a job, I finally decided to pursue my masters in computer applications (out of necessity, or so it seemed back then).
During this frustrating period I heard about Ruby on Rails, which was touted as the next-gen platform/language. Like many others, I was rather scared of it, as the name sounded so unfamiliar, so I didn't pursue it. In the summer of 2009, however, while idling around netbeans.org looking for something interesting to do, I came across the '5-minute Weblog' exercise on Rails and decided to have a go.
Believe me, it was one of the turning points in my career, and I spent more than an hour figuring out exactly how I had been able to generate a web application just like that, with practically no heavyweight tools.


As things turned out, I was not the only one who lauded this entirely new approach. In a recent magazine interview (http://www.pragprog.com/magazines/2010-12/chad-fowler-on-ruby), Chad Fowler, a Ruby authority and prominent figure at Ruby conferences, echoed similar views.
It was interesting to observe in this interview how Ruby has matured as a language and a platform (performance-wise, and in being lightweight enough for mobile devices).

My current job deals with Java; however, Ruby and Scala have kept my other side of programming (hacking and open-sourcing) buzzing with activity, with a healthy mix of curiosity and satisfaction.

Sunday, February 20, 2011

Increasing usage of Functional Programming in driving scalable architecture

One of the most exciting technologies I've been pursuing lately is nothing new; rather, like old wine in a new bottle, it provides an alternative way to solve emerging performance problems. I am referring to Functional Programming, which has been growing in importance over the past few years, driven by the huge data-processing applications now used in enterprise environments. I've been watching this otherwise academic paradigm rise steadily and become increasingly practical.

Features

Functional programming provides various features that set it apart as a distinct programming paradigm (http://en.wikipedia.org/wiki/Functional_programming). I'll explain the ones relevant to this post's context.
Higher-Order Functions : FP revolves around functions! Think of a function as a small version of a class in OOP. If you've worked with, say, anonymous or inner classes in the past, you've unknowingly been doing FP in OOP - and, needless to say, it was difficult to learn and maintain. Generics attack some of the same problems as first-class functions, but the resulting implementation leaves a lot to be desired.
Recursion : Again, a handy feature that provides a lot of bang for a small amount of buck - or code! If you know how to use your functions, you can achieve a lot without writing boilerplate or plumbing code in your algorithmic implementations.
Immutability : Now this is what I am talking about! We are all familiar with multi-threaded programming, yet its tools are hardly ever used, because multi-threading involves sharing resources, which is never pleasant. In other programming paradigms, we are familiar with the following construct:
   int a = 0;   // initialization: memory is allocated and set at this line
   a = 1;       // the memory address pointed to by a gets 'updated'
The problem lies with the second statement: we don't know when, and more importantly from which thread, it will be invoked. But recall from mathematics the notation we used there:
   let a = 0    // initialize a
   a = 5        // bind a to something new, ignoring the previous state
As the value of a is immutable, we can safely point as many threads at it as we need. Since the application is thread-safe, the same code can be executed concurrently or in parallel by 2 or 200,000 invocations without adverse effects.
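
Here is a minimal Java illustration of the payoff (my own example). Note how the anonymous class is exactly the 'FP in OOP' mentioned above, and how the immutable object needs no locks:

import java.util.concurrent.*;

public final class ImmutabilityDemo {

    // Immutable point: all fields final, no setters; an "update" returns a new object.
    static final class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
        Point moved(int dx, int dy) { return new Point(x + dx, y + dy); }
    }

    public static void main(String[] args) throws Exception {
        final Point origin = new Point(0, 0); // like 'let a = 0': fixed once created
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 200; i++) {
            pool.submit(new Runnable() { // anonymous class: FP smuggled into OOP
                public void run() {
                    // Every thread sees the same consistent state; no locking needed.
                    Point p = origin.moved(1, 1); // yields a new object; origin is untouched
                    assert p.x == 1 && origin.x == 0;
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println("origin is still (" + origin.x + "," + origin.y + ")");
    }
}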

Applicability in meeting the challenges in scaling and performance


Today's applications need to utilize the underlying hardware fully. Moore's law no longer holds for raw clock speed, but it is still applicable to the overall increase in processing capability through hyper-threaded and multi-core hardware. The amount of information generated, and the need to process it, has grown too, leading to parallel computation - and this is where FP comes in handy. It's true that infrastructure can be abstracted away from the application (as with cloud platforms), but handling it ourselves gives our solutions greater flexibility and scalability.
One feature of the 'newer' breed of hybrid FP languages like F# or Scala that I haven't discussed yet is the ease with which they let you construct Domain-Specific Languages (DSLs). From the business perspective, DSLs are fast becoming key to bridging the technology-business divide while keeping applications agile and reusable at the same time.
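
As a toy illustration of an internal DSL in plain Java (all names invented for the example), a fluent builder can make a business rule read close to how a domain expert would state it:

public class OrderRules {
    private double total;
    private double discount;

    static OrderRules forOrderOf(double total) {
        OrderRules r = new OrderRules();
        r.total = total;
        return r;
    }

    OrderRules applyDiscountOf(double percent) {
        discount = total * percent / 100;
        return this; // returning 'this' is what makes the calls chain fluently
    }

    double payable() { return total - discount; }

    public static void main(String[] args) {
        // Reads almost like the business statement it encodes:
        double due = OrderRules.forOrderOf(500.0).applyDiscountOf(10).payable();
        System.out.println(due); // 450.0
    }
}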

My exposure these days is to Scala, a hybrid FP language that runs on the JVM and interoperates with Java. It is attracting the same kind of curiosity from Java developers right now that Ruby (largely because of Rails) did a few years back. This language aims squarely at scalability - the name itself stands for 'scalable language' - within the JVM, making it an ideal candidate for solving middleware performance and scalability issues. The fact that it replaced Rails for message-queue processing at Twitter speaks for itself. I am also working on a research paper on using Scala (and FP in general) to process massive amounts of data in distributed/cloud environments; I will post more information here as it gets finalized.

Saturday, January 29, 2011

My initial foray into professional software development – Experiences as a Developer

It's been a fortnight since I started working full-time at a professional software development firm, and through this post I am sharing my experiences. The period has brought a lot of revelations, and many of my fears about working in an IT company have, needless to say, been allayed.
I started working at a small local software company that performs all the development for a UK-based firm selling products that convert legacy mainframe systems (IBM's i-Series) into modern, Java EE-based ones. Since the business is solely focused on software development, I enjoy several advantages that would not have been possible otherwise, some of which are:
No managerial bullshit about communication, teamwork & motivation (and whatever else the HR people can put in).
A simple product-oriented workflow with flexible timelines.
A quiet workplace on the outskirts of the city (which is indeed a quaint place).

My only grouse so far (if I really must proclaim one) has been the long work hours (almost 10 a day), which leave me no time for outings and socializing on weekdays. That is fine by me, though, as I am growing as a developer in leaps and bounds - and besides, I am not much of a social fellow, and the other developers are all busy with their own work anyway.

The technologies I am working with are among my favorites: Enterprise Java (JSF, Hibernate, Spring and JPA). Not being proficient in JSF, I am finally learning that framework. I've blogged against JSF in the past and had refrained from using it ever since 2007, but workplace requirements forced the change - and it is not as bad as I had imagined.

The first day on the job was a huge eye-opener: I was handed a finished application to explore, with instructions to document my findings. Upon seeing this massive piece of JEE mastery, despair set in quickly and I began to panic. Fortunately, the open source development I had done during my college years proved quite handy (I just built the whole thing and unit-tested it). By the end of the first day I had a rough idea of the enterprise application: more than 100 JSF beans wired into a similar number of service classes, DAOs and other layers, coupled with a complete in-house API stretching back more than 10 years. To add to my misery, the database was a remote DB2 instance running on the AS/400 platform, sluggish at the best of times, with no interactive tooling at my disposal.

The subsequent days, surprisingly, eased the initial pain, as my boss informed me that the application had in fact been auto-generated by an in-house tool based on FreeMarker/Velocity. What seemed impossible one day became reasonable the next and was accomplished on the third. I attribute this success to the long hours I am putting in at my workstation instead of shying away from the problem, as I would have done earlier.

Before ending, however, I'll sum up an excellent blog entry that came my way as a tweet a while back:
Ingredients for a perfect technology workplace (read as IT)
Excellent people (Your boss & co-workers)
Excellent projects (That really stimulate you)
Excellent workplace (Or something similar, I can't recollect)