Thursday, July 17

Project Vital Sign Charting Spreadsheet

Recently I read the article "Project Vital Signs" by Stelios Pantazopoulos in The ThoughtWorks Anthology. In it the author proposes several types of charts that an Agile team can produce, usually maintained by the Team Lead or Iteration Manager, as Information Radiators to improve communication among team members as well as with stakeholders.

After reading the article, I suddenly recalled a conversation I had a while back with a PM who was relatively new to the Agile landscape. She asked me a question whose answer I thought was obvious at the time: how can you find out whether an Agile project is in trouble, or on schedule and under budget? My answer was that if you attend every kickoff and retrospective meeting, you can pretty much tell from the story board. She left with a puzzled look on her face; apparently what I thought was obvious was not obvious at all to some folks on the team. If this is the case for a PM who works with the developers in the trenches pretty much every day, you can imagine the disconnect and difficulty a less technical senior manager faces when trying to find out the status of an Agile project. One of the contributing factors* to Scrum's rapid adoption rate in larger corporations is the Burn-Down chart it produces, which clearly communicates project status to anyone who would like to know.

But thanks to Stelios' article, we can now generate several very useful charts for any Agile project, for the developers as well as anyone interested in the project status, including senior managers. I am planning to use some of these charts in my next project, and I have created a spreadsheet template based on the suggestions in the article. I have uploaded this template; please feel free to modify and use it in your project, and let me know if it turns out to be useful for you.

* Other factors are the longer iterations (less agility), a nice title for the PM (Scrum Master) plus a certification program, and of course, as always, better marketing ;-)

Wednesday, July 16

Cygwin and Maven problem

I ran into a pretty nasty problem while running Maven under Cygwin yesterday. Why Cygwin? Because all workstations at my current client's site are Windows based. The problem happened after I replaced the Windows CMD shell with Cygwin Bash by adding an AutoRun key under 'HKEY_CURRENT_USER\Software\Microsoft\Command Processor' in the registry. After that, Maven stopped working, and it took me a while to pinpoint the cause, since all other Java applications ran just fine, including Eclipse, Groovy, and Tomcat.
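For reference, the replacement was done with an AutoRun value along these lines (the Bash path here is an assumption; adjust it to your own Cygwin install):

```
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Command Processor]
"AutoRun"="C:\\cygwin\\bin\\bash.exe --login -i"
```

My best guess at why Maven in particular broke is that mvn on Windows is a batch script, so it runs under cmd.exe itself, and the AutoRun command fires for that nested cmd instance too.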

Finally I gave up on replacing the Windows CMD with Bash. In the end, I decided to run all my consoles using an open source program that lets you have multiple tabs of Cygwin console running on your Windows machine, plus some eye candy. This setup has worked out pretty well for me so far.

Tuesday, July 15


Recently I had a few discussions with different developers regarding server architecture, and to my surprise few really understood what SEDA is and is not, so I decided to jot down some of my thoughts here in the hope of clearing things up a bit.

SEDA - Staged Event-Driven Architecture - is widely regarded as the de facto standard for implementing scalable servers. SEDA was first introduced in 2001 by Matt Welsh, David Culler, and Eric Brewer; see the original paper. The common misunderstanding about SEDA is that many developers believe "SEDA is a high performance architecture". It is not; in my experience, implementing the SEDA model usually means sacrificing 10-20% of raw performance. The main problem SEDA addresses is graceful degradation of server scalability, not performance. Graceful degradation (also known as well conditioning) means that when your server experiences an extreme burst, for example 100x or more the average traffic volume, it will certainly not be able to handle the whole burst; but instead of becoming unresponsive or simply crashing, ideally the server's performance should degrade gracefully, for instance by maintaining the quality of service for existing clients while rejecting all new clients with a user-friendly message. That is where SEDA comes into the picture. Here is a comparison of common server architectures and the problems they face while handling this kind of burst.

1. Thread-based Concurrency (the vanilla one-thread-per-client model)

This model does not scale very well* and will not be able to handle the burst; it eventually becomes completely unresponsive or crashes the server due to resource exhaustion.

* When I use the word 'scale' here I mean scaling to tens of thousands of sockets or more. This simple-minded model can actually scale pretty well on a modern operating system (Linux kernel 2.6+ and Windows NT 4+) and has shown superior performance with a relatively small number of threads (a few thousand), so if your server is never expected to handle tens or even hundreds of thousands of sockets, this is actually a pretty good architecture.
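To make model #1 concrete, here is a minimal thread-per-client echo server sketch in Java (the class name and the 'echo: ' reply format are mine, purely for illustration); every accepted connection gets its own dedicated thread, which is exactly the resource that runs out under a big burst:

```java
import java.io.*;
import java.net.*;

public class ThreadPerClientServer {

    // Start an echo server that dedicates one thread to every client.
    static ServerSocket start() throws IOException {
        ServerSocket server = new ServerSocket(0); // ephemeral port
        Thread acceptor = new Thread(() -> {
            while (!server.isClosed()) {
                try {
                    Socket client = server.accept();
                    // one brand-new thread per connection: fine for a few
                    // thousand clients, fatal for a 100x burst
                    new Thread(() -> handle(client)).start();
                } catch (IOException e) {
                    return; // server socket closed, stop accepting
                }
            }
        });
        acceptor.setDaemon(true);
        acceptor.start();
        return server;
    }

    static void handle(Socket client) {
        try (BufferedReader in = new BufferedReader(
                 new InputStreamReader(client.getInputStream()));
             PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
            String line;
            while ((line = in.readLine()) != null) out.println("echo: " + line);
        } catch (IOException ignored) { }
    }

    // Round-trip one message as a client, returning the server's reply.
    static String roundTrip(int port, String msg) throws IOException {
        try (Socket s = new Socket("127.0.0.1", port);
             PrintWriter out = new PrintWriter(s.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                 new InputStreamReader(s.getInputStream()))) {
            out.println(msg);
            return in.readLine();
        }
    }

    public static void main(String[] args) throws IOException {
        ServerSocket server = start();
        System.out.println(roundTrip(server.getLocalPort(), "hello"));
        server.close();
    }
}
```

Each client costs a full thread stack plus scheduler overhead, which is why the model falls over long before the network does.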

2. Bounded Thread Pool

To solve the overcommit problem of model #1, thread pool implementations were introduced and are now widely used. Since the pool has a configured maximum number of threads, the server can no longer create an unbounded number of them, which avoids the resource exhaustion problem of model #1. But this model can introduce a great deal of unfairness during saturation: once all the threads in the pool are busy, all further requests queue up, so the quality of service degrades rapidly as soon as the server starts reaching the maximum pool size. This degradation is especially fatal for a stateless request-and-response protocol such as HTTP.
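A toy illustration of that saturation point, using Java's ThreadPoolExecutor (the pool size, queue size, and task duration are arbitrary numbers I picked for the demo): with 2 threads and a 4-slot queue, a burst of 20 near-simultaneous requests leaves 14 of them rejected outright. Here the excess is refused immediately via AbortPolicy; with the more common unbounded queue, those 14 would instead pile up and every later client would wait behind them, which is exactly the unfairness described above.

```java
import java.util.concurrent.*;

public class BoundedPoolDemo {

    // Fire n near-simultaneous requests at a pool of 2 threads backed by a
    // 4-slot queue; returns {accepted, rejected}.
    static int[] burst(int n) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 2, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(4),           // bounded request queue
                new ThreadPoolExecutor.AbortPolicy()); // excess requests fail fast
        int accepted = 0, rejected = 0;
        for (int i = 0; i < n; i++) {
            try {
                pool.execute(BoundedPoolDemo::slowTask);
                accepted++;
            } catch (RejectedExecutionException e) {
                rejected++;   // pool and queue are both full
            }
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return new int[] { accepted, rejected };
    }

    // Each request takes ~100 ms, far longer than the submission loop.
    static void slowTask() {
        try { Thread.sleep(100); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }

    public static void main(String[] args) throws InterruptedException {
        int[] r = burst(20);
        // 2 run at once, 4 wait in the queue, the other 14 are turned away
        System.out.println("accepted=" + r[0] + " rejected=" + r[1]);
    }
}
```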

3. Event Driven Concurrency (Async Non-Blocking IO)

An event-driven server design relies on non-blocking IO and processes each task as a Finite State Machine (FSM). A thread only works on a task when it receives an event from the scheduler informing it that a certain operation, a read or a write, can be performed without blocking. This kind of design is usually implemented with a single thread. Although it adds some programming complexity, this model scales fairly well to even millions of tasks while maintaining consistent throughput. But although massively more scalable, it still does not address the fundamental problem during a burst: once the saturation point is reached, task processing latency increases exponentially, so this model simply postpones the problem instead of solving it.
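A minimal single-threaded event loop using java.nio, for the flavor of it (I use a Pipe as a stand-in for a network socket so the sketch stays self-contained; a real server would register SocketChannels the same way): the thread registers read interest with a Selector and only touches the channel when the OS reports data is ready.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.*;

public class EventLoopDemo {

    // One pass of an event loop: wait for a readiness event, then perform
    // the non-blocking read that the event promised would succeed.
    public static String runOnce(String msg) throws IOException {
        Selector selector = Selector.open();
        Pipe pipe = Pipe.open();                    // stands in for a socket
        pipe.source().configureBlocking(false);
        pipe.source().register(selector, SelectionKey.OP_READ);

        pipe.sink().write(ByteBuffer.wrap(msg.getBytes())); // "client" sends

        StringBuilder received = new StringBuilder();
        selector.select(1000);                      // block until an event arrives
        for (SelectionKey key : selector.selectedKeys()) {
            if (key.isReadable()) {                 // event: read won't block now
                ByteBuffer buf = ByteBuffer.allocate(64);
                ((ReadableByteChannel) key.channel()).read(buf);
                buf.flip();
                while (buf.hasRemaining()) received.append((char) buf.get());
            }
        }
        selector.selectedKeys().clear();
        selector.close();
        pipe.sink().close();
        pipe.source().close();
        return received.toString();
    }

    public static void main(String[] args) throws IOException {
        System.out.println(runOnce("ping"));
    }
}
```

A production loop would run select() forever and drive each task's FSM from the events; the point here is just that the single thread never blocks on any one client.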

4. Staged Event Driven Architecture (SEDA)

To address the problem of the straight event-driven model, SEDA introduces a new concept: the Stage. Instead of processing each task as one monolithic procedure, SEDA breaks the processing into multiple stages, and each stage gets its own dedicated scheduler, event queue, and thread pool. The main benefit of this architecture is that the multi-stage design gives you multiple response-measuring points at which to implement request shedding. For example, a SEDA-based web server can implement shedding logic at the second stage, where a dynamic page (JSP/PHP/ASP) would normally be executed; while experiencing a burst, events for the second stage can be rerouted to an alternative queue that returns simple static but user-friendly content signifying that the server is overloaded, thereby providing friendly feedback while protecting the server from resource exhaustion at the same time. Of course SEDA also provides additional benefits such as dynamic resource allocation and easier code modularity, but the biggest benefit is no doubt the graceful degradation capability.
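Here is a deliberately simplified two-stage sketch of the shedding idea (the stage sizes, 100 ms task durations, and the parse/render split are all invented for the demo): each stage owns a bounded queue and a small pool, and when a stage's queue is full the request is shed, where a real server would return the friendly "overloaded" page, instead of being allowed to pile up.

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class SedaSketch {

    // A stage couples its own bounded event queue with its own thread pool.
    static class Stage {
        final ThreadPoolExecutor pool;
        Stage(int threads, int queueSize) {
            pool = new ThreadPoolExecutor(threads, threads, 0L, TimeUnit.MILLISECONDS,
                    new ArrayBlockingQueue<>(queueSize));
        }
        // Accept the event, or report that it must be shed.
        boolean offer(Runnable event) {
            try { pool.execute(event); return true; }
            catch (RejectedExecutionException e) { return false; }
        }
        void drain() throws InterruptedException {
            pool.shutdown();
            pool.awaitTermination(10, TimeUnit.SECONDS);
        }
    }

    // Push a burst of n requests through two stages (parse -> render).
    // Returns {served, shed}.
    static int[] run(int n) throws InterruptedException {
        Stage parse = new Stage(2, 4);
        Stage render = new Stage(2, 4);
        AtomicInteger served = new AtomicInteger();
        AtomicInteger shed = new AtomicInteger();

        for (int i = 0; i < n; i++) {
            boolean accepted = parse.offer(() -> {
                work();                                  // stage 1: parse the request
                boolean ok = render.offer(() -> {        // hand off to stage 2
                    work();                              // stage 2: render the page
                    served.incrementAndGet();
                });
                if (!ok) shed.incrementAndGet();         // stage 2 saturated
            });
            if (!accepted) shed.incrementAndGet();       // stage 1 saturated: shed early
        }
        parse.drain();   // once stage 1 drains, all stage-2 submissions are in
        render.drain();
        return new int[] { served.get(), shed.get() };
    }

    static void work() {
        try { Thread.sleep(100); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }

    public static void main(String[] args) throws InterruptedException {
        int[] r = run(20);   // burst of 20: 6 make it through, 14 are shed
        System.out.println("served=" + r[0] + " shed=" + r[1]);
    }
}
```

Because each queue is a measuring point, the decision to shed is made per stage, which is what lets a real SEDA server keep serving accepted requests while politely turning the burst away.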

Some final notes on SEDA:

In my experience, I have found that the SEDA model actually behaves best when not too many stages are implemented, usually 2-4.

Interestingly enough, the Enterprise Service Bus (ESB) architecture resembles the SEDA model at a much larger and higher level, and because of that resemblance, ESB architectures have also shown excellent massive concurrency and graceful degradation capabilities. ESB is a good architecture of choice for a well-conditioned, massively concurrent enterprise integration system, if you do it right of course ;-)

Thursday, July 10

Set up Felix OSGi container with Maven and Spring in 20 mins

Recently I had a chance to try out Apache Felix, an open source implementation of the OSGi R4 Service Platform, and found there is not a lot of documentation on how to set up a development environment for OSGi. That's why I decided to record some of my findings and learning experience here, hoping to shed some light on the issue.

My goal when I started this exercise was, first, to use Maven 2 as the build management tool, so I could set up an OSGi project just like any other Java project and integrate easily with any continuous integration tool out there. Second, I wanted to set up Spring as the micro container to manage all the wiring and all the neat aspect-oriented programming stuff. At the beginning I did not know exactly how well Felix and Spring would mix, but the rough idea was to use OSGi for service-level dynamic module management and Spring for the lower-level wiring. OK, enough intro, let's do some coding.

a. Download the Felix binary from Apache Felix

b. Start Felix by running 'java -jar bin/felix.jar' from the install directory, and type any name for the profile name.
Note: running 'java -jar felix.jar' directly inside the bin folder will not work

c. Typing 'ps' in the Felix shell will show you a list of the bundles that are already loaded. Now type 'shutdown' to stop Felix.

d. After some research and trial and error, I found the Maven spring-osgi-bundle-archetype to be the best starting point I could find for my little project. Type 'mvn archetype:generate' and pick spring-osgi-bundle-archetype (number 32 in my list; the number may differ in your version).
Note: I am using Maven 2.0.9

e. Running 'mvn clean install' will already produce a valid OSGi bundle, without writing any code. Try it out.

f. Restart Felix, and in the shell type 'install file:path-to-your-bundle-file' to install the bundle you just created. Yes, it's that simple; use 'ps' to check it out. Now you can start or stop the bundle.

g. Let's implement the Activator tutorial from Felix with our newly set up Maven project. See the code sample and explanation here. Now if you do another 'mvn clean install', run 'update #bundle-number' in Felix, and expect to see something, you won't. Why? Because we haven't configured the Activator class as a bundle activator. To do that, you first need to remove the auto-generated Felix plug-in version number 1.0.0 from your pom.xml, since the old 1.0.0 plug-in does not support this configuration. After removing the version number, you need to add the Bundle-Activator instruction under the plug-in configuration.
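For the record, the Bundle-Activator instruction in the maven-bundle-plugin configuration looks roughly like this; the activator class name below is just a placeholder for whatever your archetype generated:

```xml
<plugin>
  <groupId>org.apache.felix</groupId>
  <artifactId>maven-bundle-plugin</artifactId>
  <extensions>true</extensions>
  <configuration>
    <instructions>
      <!-- placeholder: fully qualified name of your activator class -->
      <Bundle-Activator>com.example.osgi.Activator</Bundle-Activator>
    </instructions>
  </configuration>
</plugin>
```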

h. Now run 'mvn clean install' again, then type 'update #bundle-number' in the Felix shell. From now on, whenever you start or stop the bundle, you will see the log message printed on the screen.

As you have probably noticed, the Maven project already has Spring configured for you, so you are pretty much all set at this point to start developing an OSGi application. Hopefully this setup did not take you more than 20 minutes, unless you have a super slow internet connection ;-) Last but not least, you can also integrate Felix inside your Eclipse IDE for debugging and profiling purposes; see for more details.

Have fun with OSGi and feel free to let me know your experience trying this setup out.

Wednesday, July 9

Interview Phantom Read

I have been conducting both technical and management interviews for quite a few years now, and occasionally have had to sit through a few of these interviews myself. Just a few weeks ago, while I was in the meeting room conducting an interview with three other interviewers, one interviewer asked the question, "If you encounter ..... this type of scenario, what would you do?" and I could not help but start thinking that this type of question is useless at best and usually misleading. The reason is simple: since the scenario in the question is hypothetical, the interviewee can fabricate an answer without worrying about any of the constraints that exist in reality. I am not saying everybody will lie under these circumstances, but the problem is that you can't verify whether the answer is a lie or not, since the whole thing is fabricated. In most cases, I have found that the interviewee will give you a perfect answer, the solution they would like to perform if they were working in an ideal world; I call this kind of answer a Phantom Read. If you buy into this kind of answer, you will probably end up hiring the person the interviewee would like to be in an ideal world, but not the actual person sitting in the room, in other words, the Phantom.

So what is a good question then? A good question should always be based on actual experience; sometimes a mere description of what they did can be the best answer you will get. Usually you can comfortably lead into this kind of question by simply asking about past project experience, and then asking, "As you mentioned ....., could you also tell us what you did when ..... happened?" A follow-up question like "If you got to do this all over again, what would you do differently to improve ....." can provide further insight into your candidate's thinking process and their capacity to learn from their successes and failures.