Saturday, October 24, 2015

Book Review: I Am Malala

I Am Malala: The Girl Who Stood Up for Education and Was Shot by the Taliban by Malala Yousafzai
My rating: 4 of 5 stars

For a 15-year-old to have such insight, drive and grit is astonishing; Malala simply goes about her business and does not give up. Having faced hardships ranging from the environment in the Swat valley to being shot, Malala is a symbol of courage, and I have gained a lot of respect for this Nobel Prize laureate.

Regarding the book, I started reading it to gain some real insight into that part of the world, and I am glad I started with this masterpiece. The book begins with a bit of the history of Pakistan. Malala then goes on to talk about her childhood, friends and family. Her way of penning down what she feels about her parents, and how they in turn see things, is commendable. It's quite inspiring to see Malala and her father fighting for the fundamental human right of girls' education while knowing a threat is looming all the time. You see how neither of them is afraid, come what may, but when it comes to the other, they do get protective. Amidst the life story, Malala presents an amazing account of a father-daughter relationship.

Malala has a lot of respect for her culture and religion, yet at the same time she is not willing to be fooled by the impositions of gender inequality. Reading this book is a reminder of how much we take for granted. I am sure I will appreciate my freedom a tad bit more now. I hope her dream of returning to the Swat valley, and the vision she carries of her motherland, come true. The world needs more Malalas. I look forward to reading more about her life as she carries on the fight she has taken up.

View all my reviews

Wednesday, October 21, 2015

Monitoring Java based Applications: Metrics+Graphite+Grafana

As developers, when we write a piece of code we try our best to choose the right algorithms, tune the data structures to match the situation at hand, and ship software that just works out of the box. Unfortunately, life is not ideal, and some hiccups are bound to hit us however smooth the ride. Some nasty bugs will inevitably show up on the production server once the software has been in use for a while. The best one can do is monitor the software in the production environment and anticipate problems before they hit users and hamper their experience.

This post will talk about such a stack for monitoring Java-based applications, involving:
  1. Dropwizard Metrics (instrumentation)
  2. Graphite (time-series storage)
  3. Grafana (dashboards)

We will use Dropwizard Metrics to instrument our Java code and push the data to Graphite. Graphite will store our time-series data, meaning values such as latency, throughput and counts corresponding to, say, our APIs. Though Graphite can render graphs on top of this time-series data on demand, on its own it doesn't provide the best visual experience or dashboarding, so we will use Grafana to build and view our dashboards. At a high level, the pipeline is: application code (Metrics) → Graphite → Grafana.

Here is the documentation for Dropwizard Metrics listing its various capabilities. The documentation is not that great, in my opinion, but workable. The good news is that there is also a Spring library providing annotations you can use directly to enable instrumentation: just add an annotation on top of a method and that's it. Here is an example:

@Metered(absolute = true, name = "RuntimeError")
public ErrorMessage handleThrowable(HttpServletRequest request, Throwable t) {
    // handle the error and build the ErrorMessage response
}
As you can see, the @Metered annotation is responsible for instrumenting the method that follows it, handleThrowable(...). A metered metric measures the rate of events over time, here the number of errors per unit of time. As simple as that! The annotation is all you need to add to instrument a method.
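If your code is not Spring-managed, the same meter can be driven directly through the core Dropwizard Metrics API. A minimal sketch (the class and metric names here are just placeholders, not from my codebase):

```java
import com.codahale.metrics.Meter;
import com.codahale.metrics.MetricRegistry;

public class ErrorMetrics {
    // one registry per application; all metrics hang off it
    private static final MetricRegistry registry = new MetricRegistry();
    private static final Meter errors = registry.meter("RuntimeError");

    public static void markError() {
        // records one event; 1/5/15-minute rates are derived automatically
        errors.mark();
    }
}
```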

Next, we need to tell Metrics where Graphite resides so data can be sent to it. Here is how you configure this. The prefixedWith() attribute defines the prefix under which Graphite files the metrics. Here is a screenshot from my Graphite dashboard showing the tree structure that we just set up.
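In case the linked configuration is unavailable, the reporter setup looks roughly like this using Dropwizard Metrics' graphite module (the hostname and prefix are placeholders for your own values):

```java
import java.net.InetSocketAddress;
import java.util.concurrent.TimeUnit;

import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.graphite.Graphite;
import com.codahale.metrics.graphite.GraphiteReporter;

public class GraphiteSetup {
    public static GraphiteReporter start(MetricRegistry registry) {
        // 2003 is Graphite's default plaintext protocol port
        Graphite graphite = new Graphite(new InetSocketAddress("graphite.example.com", 2003));
        GraphiteReporter reporter = GraphiteReporter.forRegistry(registry)
                .prefixedWith("myapp.server1")            // becomes the tree prefix in Graphite
                .convertRatesTo(TimeUnit.SECONDS)
                .convertDurationsTo(TimeUnit.MILLISECONDS)
                .build(graphite);
        reporter.start(1, TimeUnit.MINUTES);              // push metrics every minute
        return reporter;
    }
}
```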


On the right of the pane shown above, you can see the graph for the metric you select. The visual features in Graphite are rather limited, so we will forward the Graphite data to Grafana. Here is a video introducing the Grafana interface.

Make your dashboards the way you wish to see your graphs and keep monitoring to stay ahead of the bugs!

Tuesday, October 20, 2015

Log Archiving with Logstash

A crash, a panic attack, and we rush to the logs hoping they come to the rescue. Alas, there is so much content in these logs that finding the relevant parts is itself a herculean task. It is thus a good idea to think upfront and brace yourself to handle the panic situation better: keep all the logs (application logs, database logs, etc.) nicely organized in a tree structure, all in one place. More technically, it is a good idea to archive your logs and have a way to make sense of the information hidden in them. I recently faced such a situation and ended up building an ELK-based stack.

I started with syslog-ng. Syslog has been around for a long time, so it gave me a sense of stability, with a good community around it. Unfortunately, syslog is also a bit dated. The next thing we considered was Logstash. Logstash, along with Elasticsearch and Kibana, forms a powerful stack, popularly known as the ELK stack.

For this blog post, we will limit our scope and stick to the Logstash part of the ELK stack. We will build an archive that collects logs from different web and database servers and stores them all in one place. The intelligence and visual analysis are left to Elasticsearch and Kibana respectively, and are hence not part of this post. The following is the very high-level architecture I came up with:

archiving architecture.jpg
We are going to work with two pieces of software here:
  1. logstash-forwarder
  2. logstash
We will install logstash-forwarder on the clients (the applications that generate logs). This is an agent that ships logs to a central archive server; note that logstash-forwarder needs to be installed on EVERY client. We will then install logstash on the archiving server. It receives logs from every client and archives them in a format that you can define. Here is what I came up with for my archive structure.
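To give a flavour of the client side, a logstash-forwarder config is a small JSON file. A sketch, with hypothetical paths and hostname (your certificate locations and log paths will differ):

```
{
  "network": {
    "servers": [ "archive.example.com:5043" ],
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt",
    "timeout": 15
  },
  "files": [
    {
      "paths": [ "/var/log/myapp/*.log" ],
      "fields": { "type": "applog" }
    }
  ]
}
```

The `type` field travels with each event, which lets the server sort logs by source later.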

I like to archive by time and then by log source, as depicted in the picture above. Now that we have sorted out the architecture and the archive layout, we need to configure the clients and the server. DigitalOcean has done a wonderful job of explaining the installation and configuration, so I will leave you with this article. You can skip the parts involving Elasticsearch, Kibana and nginx.
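On the server side, the time-then-source layout described above can be expressed in the Logstash config with a lumberjack input (the protocol logstash-forwarder speaks) and a file output. A sketch with hypothetical paths; exact plugin syntax depends on your Logstash version:

```
input {
  lumberjack {
    port            => 5043
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key         => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

output {
  file {
    # date- and source-based tree: /archive/<year>/<month>/<day>/<host>/<type>.log
    path => "/archive/%{+YYYY}/%{+MM}/%{+dd}/%{host}/%{type}.log"
  }
}
```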

Once archiving is in place, make sure you have a way to periodically clean logs from the application servers, if you keep the last 1-2 days of logs there too. For this, I set up a cron job that, in my case, cleans up logs older than a week.
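Such a cron job can be as simple as a find over the log directory. A sketch; the path and schedule are just examples, not my actual setup:

```
# m h dom mon dow   command
# every day at 02:00, delete application logs older than 7 days
0 2 * * *   find /var/log/myapp -name '*.log' -mtime +7 -delete
```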