
Participating In A 24 Hour Hackathon

Just returned from a 24 hour hackathon: sleepy, red-eyed, tired, exhausted, and yet writing this post. You know why? Because I skipped it the last time, and the time before, thinking I would do it the next day and that sleep was more important, but I never did. Not going to make the same mistake again. So here I am.

For those who are unaware of what a hackathon is: it is an event where dreamy-eyed people enter and leave with red eyes, a single-night sprint where people come together to build something that they believe will make them billionaires or, as the Silicon Valley series mentions time and again, 'will make the world a better place'! Jokes aside, a hackathon is an event / competition where teams or individuals build software / hardware in a single sprint of 24 hours. Hackathons have a theme, ranging from generic ones like improving the community to specific ones like solving a particular delivery problem the company faces. Some hackathons are closed, conducted only for the members of an organization; some are open to all. Some hackathons focus on the profitability of an idea and its implementation, with teams winning sponsorships from investors; some focus only on the ideas and imagination of the participants. All in all, a hackathon is a developer's Disneyland!

Do you feel like you have ideas, but no time to develop them, try them out or run them by other people? A hackathon is the place for you to build your dream concept into reality. Are you an amazing developer who can punch in code and get things working in no time? A hackathon is the place for you to show your skills. Do you like to interact with people, share ideas and learn how people feel about them? A hackathon is the place for you to validate your product. Do you have a concept, but are looking for skilled brains to develop it? A hackathon is the place for you to spot skills and recruit them. Are you a nerd, an introvert who loves to code (the stereotypical software developer)? Walk right in, there are many like you in there. Are you a night-owl who believes that sleep is for 'cats'? You will fit right into a hackathon. Imagine a place which provides electricity, wifi (I have your attention now, don't I?), food, seating space and all else you need, and leaves you undisturbed for 24 hours with the freedom to build your dream into reality: now that is a hackathon. (Put like this, it sounds better than Disneyland!)

Now let us say you wish to go to a hackathon and 'make the world a better place'. You need to have a plan, like with everything else. There are things that you should and should not do. There are things that you should and should not carry. I have, over time, built a list of items I carry to a hackathon, and like other ideas, validated it against others during the hackathon. So what we have here is a list of items people in general carry; not every item will apply to you, though. It is like a camping list, but for geeks.

Preparing for the Hackathon

  • Prepare for your idea: Think about it, elaborate it, plan it. This is also a test of your agility; all your skills in iterative, agile development are going to be tested. Know that a 24 hour hackathon puts 3 working days' worth of time in your hands; it is a lot, so plan how to utilize it best.
  • Choose the right team: Long hackathons tend to be team games. Choose your team wisely. You should have compatible, targeted skills that overlap only slightly, and equal passion. You probably do not need a marketing executive or the so-called 'product owners' on your team unless the idea is from their domain. It is not a conference; you do not get a booth there. And passion, yes: you do not want your team members jumping into sleeping bags at the chime of 10 only to get up at 6 the next morning.
  • Identify what open source projects can help you and know how to use them. Play around with them. Maybe send a pull request for missing features. But know what is going to help you get it done faster and plan for it.
  • Set up your machines. You do not want to be downloading a database server or creating cloud service provider accounts at the hackathon.
  • Set up productivity tools suitable for your idea. For example, get accustomed to using a clipboard manager; write and keep handy scripts to automate simple tasks like starting the db, launching the db shell, clearing tables, running generators for various frameworks and so on (a sketch of such helpers follows this list).
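
As an illustration, here is a minimal sketch of the kind of helper script I mean, assuming a MySQL container named mysql-dev; the names, the DB_PW variable and the database are placeholders to adapt to your own stack:

#!/usr/bin/env bash
# hack.sh: tiny wrappers for tasks you repeat a hundred times during a hackathon.
# Assumes a MySQL container named "mysql-dev"; adjust names to your setup.
case "$1" in
  db-start) docker start mysql-dev ;;                     # bring the database up
  db-stop)  docker stop mysql-dev ;;                      # and down again
  db-shell) docker exec -it mysql-dev mysql -uroot -p ;;  # drop into a SQL shell
  db-reset) docker exec -i mysql-dev mysql -uroot -p"$DB_PW" -e "DROP DATABASE app; CREATE DATABASE app;" ;;  # wipe the slate
  *) echo "usage: $0 {db-start|db-stop|db-shell|db-reset}" ;;
esac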

Packing for the Hackathon

  • Laptop: Of course! Humans are yet to build a computer one can use to develop software in thin air, unless that is what your idea is for the hackathon.
  • Laptop charger: You will be surprised how many people forget this. Your machine may have juice for one work day, but that is not enough for a hackathon; if you think of it, it is actually 3 work days in there. And you don't want to be making 'connections' by asking people for charging cables.
  • Phone & charger: You are going to interact with a whole lot of people, so do carry your phone to note down numbers; not everyone brings business cards, it is not a conference. A hackathon is for thinkers and doers, not talkers, yet heavy use drains phone batteries, so do carry your charger. Some hackathon venues provide charging stations; check beforehand to confirm.
  • Peripheral devices: If you prefer to use an external mouse, a drawing pad, a VR headset or whatever else, carry them. Pack the devices you need to build your idea, like a Raspberry Pi, a hovercraft kit or whatever. Keep pen-drives handy. You can check or request whether the organizers provide an external screen; it would be too huge to carry anyway. It is all developers and geeks there, so I would not blame you if you do not trust the security of the wifi. In that case, carry your own portable hotspot.
  • Identity proof: You have registered online, but the organizers need to know who you are before you can enter.
  • Toiletries: Do I need to explain? Just don’t stink, you do not want to be remembered for that.
  • Medicines: If you are on medication, do not forget to carry it. If you get acidity from staying up late, carry antacids. You get headaches? Carry a mild painkiller. Have allergies? Carry an anti-allergic. After all, it is a competition; you want to be at your best throughout.

Wearing for the Hackathon

  • Wear something casual and comfortable: If suits are your thing, so be it, but remember you are going to be scratching your head over a lot of things in the next few sleepless hours; be comfortable. You do not want to be scratching other parts of your body due to uncomfortable clothes. You do not need to look pretty/handsome; make-up and hair-spray are not required, your personal comfort is.
  • If you are representing a startup, wear its colors! Hackathons are good for creating awareness and hype.
  • It is okay to wear your lucky accessories, but limit it to that, avoid the temptation to wear superman under your clothes.

Hacking at the Hackathon

  • Not literally. Do not hack others’ devices, you can get banned from the premises or worse.
  • Divide your hours like they were days. Four hours in there represent half a day of work. Have regular discussions.
  • Divide your tasks and identify interfaces where your tasks meet.
  • Use version control systems extensively. If it is a hardware product, keep taking photos from all angles. At a hackathon, I prefer to change the way I commit: usually I commit often but push after cleaning up; at a hackathon I commit and push extremely frequently, on every unit completion. It would not be an exaggeration to say: make the version control system your undo log (see the sketch after this list). This also helps prepare for the unfortunate event of your machine deciding to take a nap while you are banging the keys.
  • Ban headphones on the team. Unless there is interaction, there is no team.
  • Plan your long breaks, like lunch, dinners and snacks to be in sync with your discussions.
  • Take frequent breaks apart from the discussions, individual or team breaks, but get up and walk around. I drink a lot of water, that forces me to take frequent breaks and helps avoid health issues as well. It is also a good idea in your day to day work.
  • Do not sit through it; walk around, jump around, interact, stay active and awake.
  • If you can, take naps in between. Just make sure you allow your partners to pour a water bottle on your head to wake you up, just in case. You do not want to miss all the fun by sleeping through.
  • There tend to be side events every few hours in long hackathons, participate in them, get to know people.
  • Meet people. You will find a whole lot of them are working on something amazing. You might end up meeting your next employer, co-founder, your living idol, or even your soulmate, if you are looking for that. You never know. All those coming to the hackathon tend to be there for their passion.
  • Have fun. Win or lose, unless you have fun at the event, it is pointless. Have whatever fun means to you. No one forces you to take part in any of the events or to talk to people; if you just wish to code, so be it. But enjoy the 24 hours, you do not get platforms like this every day.
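
To illustrate the 'undo log' style of committing mentioned above, a minimal sketch; the remote, branch and commit message are placeholders:

git add -A                                            # stage everything, polished or not
git commit -m "wip: parser handles quoted fields"     # commit on every unit completion
git push origin main                                  # push immediately; the remote is your undo log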

After the Hackathon

  • Wrap up everything, pack the items and belongings, do not forget your chargers and peripherals.
  • Have one final meeting, plan out what you would like to do with the idea and the code / product built so far.
  • Divide the tasks for the future and decide timelines. If you leave here without a plan, and if you have not won the prize, it is highly likely that the idea will never be pursued further.
  • Know that what you are feeling, the mild body ache, the sleepy red eyes, is similar to jet-lag; treat it as such, by sleeping only at your usual time. Avoid taking an untimely nap in the day; get your routine back on track as soon as possible.
  • Write a blog. 😉

If there is something that should be added to this list to make it more useful, please suggest it. See you at a hackathon some day.

Docker As Application Registry

Docker is great and solves a lot of problems with deployments. It taught applications to share an OS the way VMs taught OSes to share hardware! Beyond production, I have found that docker can work great as an application registry in a local development environment.
By applications I mean software that you install on your OS, launch with shortcuts, and that continues to live and retain state until you uninstall it; not exactly what containers are designed for, but something they can work as. Think snap or flatpak, but with docker, and for servers as well, not just UI apps.

One advantage this has over using standard installers (like apt-get) is that you are at complete liberty to start and stop the background processes, like mysql. If you installed mysql this way, you do not need to go and disable its autostart; it just does not matter! Similarly, for your SonarQube server, you do not need to install it as a daemon, nor do you need to remember where you downloaded it to be able to restart it. Another advantage is that most such applications have official docker images; it is the intended way to use them now!
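
For example, with the mysql container created later in this post, the whole 'application' lifecycle is two commands:

docker stop mysql-server    # the 'app' goes away, its state retained in its volumes
docker start mysql-server   # and it is back, exactly where it left off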

One disadvantage of this method is that you always need to address the apps by their IPs; you cannot bind them on the host network. But in my view, a dedicated IP is actually better: it emulates a production scenario more closely and does not clutter your local machine's ports.
So for apps, what you need is containers that live long, can be identified by name, start and stop easily, and have a dedicated, static address to reach them at. Most of these things are easy, except for a static IP. But once you create a network, you are set. That is it, it is that easy! Create a virtual network, and start your containers with a name and a static IP in that network. Simple!

To create a network (the subnet values in these examples are mine; pick any private range you like):

docker network create -d bridge --subnet="" --gateway="" --ip-range="" permanet

Now any app you need, just specify this network and a static IP of your choice; like this:

docker run --name mysql-server --network="permanet" --ip="" -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql/mysql-server:5.7

More complex containers can be created like this:

docker run -it --name gocd-server --network="permanet" --ip="" -v /yourhome/docker-volumes/gocd/godata:/godata -v /yourhome/docker-volumes/gocd/home:/home/go gocd/gocd-server

I keep a dedicated directory in my home for docker volumes, so I can back it up and use them as is when I change machines or OS. I also have a script where I add all the containers I need, so it is just a matter of copying the volume directory and running the script to create identical setups. Then even your .desktop files work as is!
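
A minimal sketch of such a setup script, under the same network and volume-directory assumptions as above (the SonarQube line, including its image tag and data path, is illustrative):

#!/usr/bin/env bash
# recreate-apps.sh: recreate my 'installed' containers on a fresh machine.
# Assumes ~/docker-volumes was copied over from the old machine.
docker network create -d bridge --subnet="" --gateway="" --ip-range="" permanet 2>/dev/null || true
docker run --name mysql-server --network="permanet" --ip="" -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql/mysql-server:5.7
docker run --name sonarqube --network="permanet" --ip="" -v ~/docker-volumes/sonarqube/data:/opt/sonarqube/data -d sonarqube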

Here is an interesting setup script for Jenkins (gist); it externalises all the data directories from Jenkins, including the plugins and users, and mounts your local m2 repository inside Jenkins to avoid downloading the libs again (assuming the official jenkins/jenkins:lts image):

docker run -it --name jenkins --network="permanet" --ip="" \
-v /yourhome/.m2:/var/jenkins_home/.m2 \
-v /yourhome/docker-volumes/jenkins/workspace:/var/jenkins_home/workspace \
-v /yourhome/docker-volumes/jenkins/jobs:/var/jenkins_home/jobs \
-v /yourhome/docker-volumes/jenkins/plugins:/var/jenkins_home/plugins \
-v /yourhome/docker-volumes/jenkins/users:/var/jenkins_home/users \
jenkins/jenkins:lts

You can create as many such networks as you wish for logical separation of groups of such apps; in my case this is the third network (hence the 172.30 subnet), since the first two were taken up by some compose scripts.

A list of a few such containers I use: mysql (different versions), SonarQube, hystrix-dashboard, zipkin, swagger-ui, a redis-cluster for local use, gocd-server, Jenkins, portainer, postgres, pgadmin etc. I even have a couple of Windows programs running on Wine in such containers; we shall talk about that some day.

Jetbrains messed up our license: Jetbrains still rocks!

It is story time today. We are a small startup; by small I mean like 4 people on the team, and we were established barely a couple of months ago. It was time to start development, and like all good Java developers who depend on their IDE for their lives, we did too. Too soon to go off-topic, but I sometimes wonder how large a program I could write without an IDE.

There is no better IDE for Java or Javascript than Intellij Idea from Jetbrains, and like all developers who know this, we went ahead and bought the Idea license. Lucky for us, being a startup, we were eligible for a 50% off offer. Jetbrains’s sales team was kind enough to approve it and we did get the license without much of a problem. My CTO asked me to get it set-up for myself, it was easy. There was a link in the email, I clicked, logged in and I got the license added in my Jetbrains account. It was almost smooth, except for a short to and fro on email to get the offer to reflect on the checkout page. All said and done, we loved the experience. Who would not love to get Idea at 50% off!??

So, done with the story? Why would I write it up if it was all so smooth? Read on..

A week went by without a problem, and all of a sudden one morning my Idea closed on me, complaining that I had no license! This was a shocker: we had bought it, I had seen it in my account and it had been working for a week! I wondered if my CTO could have accidentally reallocated it, but of course he has better things to do than poke around in the Jetbrains account configuration. I logged in to my account, and it turned out there was actually no license!

Since the CTO and I work in different time-zones, it was almost end of the day when we could chat and I could request him to check the matter. (A day saved by sublime-text.) Like I thought, he told me he had better things to do but we decided to have a look at the account anyway. Since it was a single link click for me to setup the account, he had not even created a Jetbrains account till then. We created the account and logged in, and were shocked to see some 31 licenses in our name. Something was certainly wrong.

Clicking through a couple of menus revealed that we were actually seeing the licenses/account of a different company! And our CTO had become an admin for them. We had complete control over all the licenses, ours as well as theirs, and we could even decide who becomes an admin. We could remove their admin. While it was all fascinating (the devil!), we needed to know what had happened to our license, and we found it. This company's admin had revoked my license and allocated it to someone on their team! This company had a name similar to ours (ours is one word, theirs is two); it was obvious that the person creating our license mistook us for them and, instead of creating a new customer account, amended theirs.

What do we do now? I contemplated our options and risks. Contacting Jetbrains' sales and support teams and asking them to fix this was the best and most obvious option, but I had no idea how long it would take. What would I do till then, locked out of my own license, with Idea shutting down spontaneously? Could they even track it? Would they do it? Or would they ask us to buy a new one? There were a lot of unknowns, even though I knew Jetbrains would not leave us in the lurch. What if we wrote an email to this company's admin, explained what had happened and requested them to free up our license in good faith? But then, they had had their chance: they chose to re-allocate a license they had not paid for to someone on their team, and their admin would of course know this! They had their chance to see where the license came from and why it was allocated to someone outside, but they just revoked and used it; could we trust them to act in good faith? And what if they just kicked our admin out? We would have no visibility into what was happening. Scary!

We quickly wrote an email to Jetbrains, basically responding on the previous mail chain, explaining this and requesting a resolution. We also attached a couple of screenshots showing what exactly had gone wrong: one showing the list of unrelated licenses and one showing our license being reallocated to someone else. We also tweeted at Jetbrains to help us out; a good fellow working with Jetbrains responded and asked us to write to the support email as well. We forwarded the email to the support address and waited.

It was not long before we heard back from the sales team, some 4 hours or so. They quickly separated our company account, moved our license, assigned our admin and responded with a new link to claim the license. We could now see a single unallocated license in our account and allocate it. What was great was that they also looped in the second company's admin, wrote to them explaining the situation and offered them a discount on one license. In my view, it was a good gesture to amend a mistake.

Mistakes happen, how you fix them is what decides if you retain your customers. And Jetbrains you certainly have retained us.

Creating a Coding Interview Problem

At work, I have been responsible for conducting the coding tests during the interview process, and have been doing that for a few years now. Over time I have made some mistakes and learnt some things, and this post is a summary of what one should consider when creating a coding test.
Coding tests are generally the first round of an interview process; conducted either online or on campus, the coding round is considered the first gate a candidate has to cross. How you conduct it has some implications. For example, if you asked the candidate to submit code online, you may want to verify that the code was written by the candidate, that they understand it and that they did it in the stipulated time; you can do that with an additional pair-programming round on campus. If you conduct the test on campus there is little room for such doubt, but then you need to arrange things like a dedicated desk or meeting room for the test, a machine, and maybe food and coffee. But there are some considerations integral to coding tests themselves, and these are the ones we shall be looking at today.
Before we begin, let me call out that there are those who believe an interview is not the best process for understanding the suitability of a candidate, and I would not disagree. But I believe that, done right, coding tests are the best way to understand a person's approach to problem solving, their expertise and their command of programming. How to go on from there is your choice.
We shall see the points I have learnt to consider when creating a test problem, and we shall also discuss the thoughts and reasoning behind them.

What to consider

Skills: What are you looking for

First, identify the skill set you need and the practices that matter to you for the role you are looking to fill. People consider knowledge of the C language a basic expectation in computer science, but if a developer is not expected to work in C, it is pointless to ask a question with C in mind. Similarly, the questions most attractive to an interviewer, on shortest-path algorithms or sorting and searching algorithms, seem pointless. If there were ever a need to use them, I would not expect any employee to implement them anyway; why expect a candidate to implement them in an interview? All we need is that they understand how those algorithms work. In an interview, we need to target the skills that matter to us. Also, it is better to skip any frameworks that you might use; frameworks can be taught and learnt, while teaching problem-solving approaches takes longer. I am also of the opinion that the programming language used to solve the problem should be the interviewee's choice, but that is not always feasible, since we should have enough skill in that language to review the code they submit!

Prioritize: What is the value of each practice/skill to you

Prioritize the skills into these categories: 'must have', 'should have', 'good to have' and 'cool to have'. Model the problem in a way that targets the 'must have' skills and probably touches on some 'should have' skills. Skip anything under the 'cool' category. This is not to say that we should hire a person who can do only the job we require of them at the time of hiring; it is to say that the person should be able to do at least that much. You have the rest of the interview process to assess the person's ability to learn or apply creativity; the coding test is the gating criterion.

Time: How much time can you allot for the test

This matters a little in an online test, but is a huge consideration in an on-campus test. For on-campus tests we need to provision machines, food and a meeting room or desk. If it is an interview drive, we need to plan for as many machines as there are candidates, or divide the candidates into batches, with the gap between batches equal to the duration of the test. If it is an online test, how much time to give the candidate to respond, and whether the candidate can get a weekend to do so, are considerations as well. Either way, how much time we demand from the candidate is a question, and we need to decide on a problem that takes a predictable amount of time to solve.
It is also important to account for the pressure the candidate may be under during the test. This matters even more for on-campus tests; it can throw your time calculations off considerably. I try to make the problem statement very clear and as definitive as possible. Being explicit about what is expected and what is not helps, because an unclear question leaves room for random implementations or shortcuts where I least expect them, and that makes it difficult to evaluate the solutions on equal grounds. On the other hand, unclear questions leave room for creativity, but I have seen people miss the point with these.

Branding: Showcase how you are

An interview is a window: a window for the company into the candidate's capabilities, and a window for the candidate into the company's culture. Just as the company needs to like the candidate, the candidate needs to like the company. And the problem statement is the first impression of the company! It is important to make the problem 'fun' to work on. I prefer a casual style when describing the problem and try to make it enjoyable to read and to solve. It should have the feel of a fun place. You should choose a style that best describes your organization. As a side note, nerd jokes or references to Hitchhiker's Guide to the Galaxy or Star Wars do not always work. 😉

Creating the problem statement

Over time, I established a process for creating the problem statement, enabling a predictable estimation of time and solution. With practice you can choose to skip some of the steps, but to begin with, here is the process:
  1. Once the problem is identified, create the write-up describing it. Instructions, guidelines and rules about time, how to submit the code, languages allowed etc should be mentioned at the top.
  2. Once the problem statement is formulated, ask a peer to read and explain it to you. Ensure it means what you expect it to mean.
  3. Solve the problem. Record the time.
  4. Ask a peer to solve the problem and record the time taken. The peer here should be from the target experience range and skill set. Ask them about their experience of solving it.
  5. Anything you learn during these discussions, or while solving the problem, convert them into instructions and add them to the top.
  6. Fix the scope of the problem to fit the time.
  7. Solve again, when you are in a different mindset and record the time.
This process, although tedious, sets the expectations right. Ponder on how good your solution was in that time and how acceptable your mistakes are to yourself. Consider that you already knew the problem and had (knowingly or unknowingly) designed a solution in your mind. That should help set expectations from an interviewee 'under pressure, under a time limit, likely less experienced than yourself, groomed in a different culture with different practices than yours, who was just presented the question.' Sounds difficult, right? This is not to say that the evaluation should be lenient; it is only to identify what is acceptable.
While evaluating, remember that you are not looking for candidates who solve the question the same way you do. In other words, you are not looking for candidates who think just like you. The more different, the better; it is always surprising how many different ways you find the problem solved.
That’s all from me, all the best for finding the right candidates!

Yet another packager for node

There are so many packaging systems for node already, or maybe not as many, so here I am presenting another way to package your applications into a self-extracting executable that has no dependencies. Ah well, a few dependencies, like the processor architecture and maybe the Linux operating system, but that is all.

What is it?

It is a modified shell script originally used to create self-extracting and self-installing applications for Linux platforms. What it does is create a tarball which includes your code, the modules it depends on and the specific node binary it uses, and append it to a script with the command to execute your code. It is essentially a binary merge of two files, the shell script and the tar. This is not something new; people have used such a system in the past to deliver applications for Linux. Every time you see an obscenely large '.sh' file (for being that, a shell file) that can install or execute an application without requiring any other files, know that this is the packaging system in use. This script is merely an adaptation of it for delivering node.js programs. And to give credit where it is due, it is pulled and compiled from a few sources.
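
The core of the trick, as a minimal sketch: the directory names node-bin/ and mymodule/ follow the packaging example below, while the entry file and the details of the real script are assumptions:

#!/bin/sh
# stub.sh: everything after the __ARCHIVE__ marker below is a gzipped tarball
# holding the node binary, node_modules and the app code, appended by a binary merge.
ARCHIVE_LINE=$(awk '/^__ARCHIVE__/ {print NR + 1; exit 0}' "$0")
WORKDIR=$(mktemp -d)
tail -n +"$ARCHIVE_LINE" "$0" | tar xz -C "$WORKDIR"              # self-extract
exec "$WORKDIR/node-bin/node" "$WORKDIR/mymodule/index.js" "$@"   # hand over to node
__ARCHIVE__
(binary tar data is appended here by the packager)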

What all can it do?

I have been hoping you would ask that, it is interesting:
  1. Creates a single file that starts your code when executed.
  2. Does so without requiring even node or node_modules installed on the target system.
  3. No knowledge of any framework required; develop your code just as you normally would.
  4. Allows you to name the process it starts. Well, it at least helps you do so.
  5. Allows you to have environment specific overrides for any configuration you might want.

What can it not do?

  1. It needs to be bundled for the target platform, but that is expected, is it not?
  2. Does not work well if the module has binary/native dependencies, when things like node-gyp or build-essential come into the picture.
  3. Cannot make you fly (but it can make you look smart!)

Where is it? How do I use it?

Here. It is a simple command. To package, run:
./ -s node-bin/ -n selfExeSample -b node-bin/node -m mymodule/ -o dist/
And to run the package, just execute the generated file (dist/selfExeSample in the example above).
That easy. The repository also has a sample project to try it out.

Where should I use it?

Well, how can I comment on that, it would be for you to decide! But I can tell you how we use it. The company I work for is primarily a Java shop. Our system is quite distributed, composed of many services (I dare not say microservices, it is easy to start flame wars these days) that talk to each other. But ever since we realized the power of node, especially in the quick new developments that we do, we have leveraged it. We have a lot of code in the form of monitoring and mock servers, automation and code generation tools and fault injection systems built in node. These systems are delivered, they do their job and are removed when no longer required. This is where the script comes in: a no-dependency delivery of a tool wherever we need it. Instead of requiring node installed on all servers, we bundle our tool with this script and deliver it to the servers we need it on; when the job is done it disappears without a trace. Well, almost without a trace, it's not some stealth tool anyway.

Opinionless Comparison of Spring And Guice as DI frameworks

Recently I had to delve into the Play framework for a particular microservice at work. Now it is not exactly new, nor is Guice, nor DI, but coming from the Spring world it was still a big shift in approach. There is a lot of documentation comparing Spring with Guice, stating which is better, why and how. In general these articles discuss specific points where the two frameworks differ in their approaches and which approach seems better to the author. I am not sure these articles really help someone trying to take a dip in the other framework. We know the differing opinions, as they are stated by the authors of the respective frameworks in their own documentation; another person (the article's author) reiterating them with an incomplete comparison does not sound helpful. What would work much better is a direct mapping of features, without the author's opinion (didn't that sound like an opinion?). That should help someone getting into Spring from the Guice world or vice versa.

Now let me warn you: since these are different frameworks for the same purpose, DI (Dependency Injection), they exist for their differences. Hence, there cannot be a one-to-one mapping of features/differences between these frameworks. What we can get instead is a mapping of similar features, and that is what we will have. If nothing else, the comparison below should help someone find the right documentation for what they are trying to do, instead of wondering what to look for.

Another point, we are here discussing Spring and Guice only on their dependency injection approaches and not as web frameworks, AOP, JPA abilities, their ecosystem or any other features they provide. That is for another time maybe, but not today.

The mapping below reads: the Spring way first, then the closest Guice equivalent.
  • Spring: application level @Configuration. Guice: extend AbstractModule, which comes closest to it; a module defines a part of your application, and multiple modules can depend on each other in an application (unless your service is too small).
  • Spring: classpath scanning (@ComponentScan). Guice: there is no classpath scanning. (keep reading…)
  • Spring: declaring beans (@Component / @Bean). Guice: @Singleton with bind(), with or without .to(), in a Module.
  • Spring: @Scope(""), with singleton (the default), prototype, request, session and global-session. Guice: the default is unscoped, similar to prototype in Spring; @Singleton, @SessionScoped, @RequestScoped and custom scopes exist, and eager/lazy instantiation differs between production and development stages.
  • Spring: @Autowired, @Inject. Guice: @Inject (from the javax or the guice package).
  • Spring: @Qualifier / @Named. Guice: Names.named, annotatedWith() and @BindingAnnotation.
  • Spring: a @Bean method with an @Autowired field in it. Guice: @Provides, or implement Provider<T>.
  • Spring: constructor injection. Guice: explicit constructor binding with .toConstructor(A.class.getConstructor()).
  • Spring: @Value for properties. Guice: @Named with Names.bindProperties() in your module.
  • Spring: injecting static fields can be achieved with @Autowired on a non-static setter method. Guice: for static fields, use .requestStaticInjection() in your Module.
  • Spring: ApplicationContext (BeanFactory to be precise). Guice: Injector.
  • Spring: @Autowired with context.getBean(Clazz, Object…). Guice: @AssistedInject, which allows using parameters along with injected beans to instantiate objects; also Provider<T> with FactoryProvider / FactoryModuleBuilder.
  • Spring: @PostConstruct, @PreDestroy. Guice: no support for lifecycle events. (extensions)

Let’s also see a few more points which would not fit well in a tabular form:
  • One can add more capabilities to Guice with plugins and there are a few actively maintained like Governator from Netflix. Spring can be extended using BeanPostProcessor or BeanFactoryPostProcessor in your application, but I was unable to find a plugin for extending Spring’s core DI abilities.
  • Unlike Spring, wiring in Guice (called binding) is plain Java, so Guice gets compile-time verification of any wiring we do. Spring depends on metadata in annotations, which is not checked during compilation, so it lacks this feature and errors surface as runtime exceptions.
  • Classpath scanning can be achieved in Guice by extending it. (Some plugins provide this, but Governator, for one, has deprecated it.)
  • Lack of classpath scanning in Guice, most likely, considerably reduces the application startup time in comparison to Spring.
  • In Guice an interface can declare its default implementation class (is it odd, Spring people?) with the @ImplementedBy annotation, which can be overridden by .bind() if found in a module. Similarly, the interface can declare the provider class which generates the instance: @ProvidedBy.
  • I know I said we are not going to discuss any other abilities, but this one is a little interesting; Guice has built-in support for AOP, in Spring we need an additional dependency.
  • Not a difference, but a point to note: both frameworks have similar injection types, Constructor, method and field.

I have tried to be as opinionless as possible when writing the above piece; although there are a few things that I find important to note.
  • Guice is very much a non-magical (in the words of Guice authors) dependency injection framework, you can literally see DI happen, with the code that you write and can read.
  • Thankfully, Guice has no beans.. NO BEANS! How many beans do we have to remember and disambiguate before it is too much? Javabeans, Enterprise Javabeans, Spring Beans, Coffee Beans, Mr. Bean and I might still have missed a few others!
  • Guice still feels like Java, you see; it does believe in extending classes. Spring nowadays seems to believe only in annotations, so much so that a few folks I asked around can't even remember what the 'extends' keyword stands for! 😉

So which one is better? Now, that was not the question we were hoping to answer!

Using Docker and a Private Registry with VPN On Windows

Wasn’t that a very specific title? Docker has very good documentation, and reading that alone is enough for most of the straightforward tasks we might want to do. But as always, some practical tasks are not straightforward; hence this blog. What we are going to see here today is how to set up docker toolbox on a Windows machine, make it work even when VPN is connected, make it talk to a private, insecure docker registry (that is why the VPN), configure it so it can run docker compose, and see how we can make this config a one-time activity. That’s quite a mouthful, but yes, this is what we are going to do. All ready? Let us begin then.

Install Docker Toolbox

Go and download the docker toolbox and install it. That should create a shortcut called “Docker Quickstart Terminal”. Run it. That should show you an error about virtualization.

Enable Virtualization

Restart your machine, enter the BIOS settings and enable virtualization. It may be under advanced settings. On my laptop, it is under advanced settings -> device configurations and is named “Virtualization Technology (VTx)”. Whatever the name, enable it.
Docker requires a Linux kernel, and since Windows machines lack one (of course!), docker toolbox runs a lightweight Linux distro called boot2docker in VirtualBox; hence the virtualization setting.

A Handy Tip

This tutorial will require you to copy and paste quite a few shell commands; it is better we make that easy. Exit the quickstart terminal. Right click the shortcut, click properties -> options, enable ‘Quick Edit’ mode and save. It might ask for permission. This should enable pasting just by right clicking the mouse; to copy, just select the text with the mouse. While we are at it, also consider increasing the buffer and window size to suit your taste.

Start Up the VM

Make sure you are not connected to VPN and use the Quickstart Terminal shortcut again. This time it should proceed to validate whether the boot2docker image is the latest (or pull the latest image), then create a VM, get an IP, set up some ssh keys, and finally the whale should appear in the terminal. Run the following commands to get the hang of docker running on Windows:
docker -v
docker version
docker run hello-world
docker images
docker ps -a
(And do read the output of hello-world, it describes how docker works). 

The Disappointment

Feeling happy? Now for a little disappointment: connect VPN and try again. Errors, errors everywhere. Disconnect VPN. What happened: docker is running in a VirtualBox VM on your machine, which gets an IP in a local range (normally in the 192.168.99.x range), and you are talking to it over ssh. Once VPN is up, it sets new routes and sends the 192.168.* range traffic out over VPN, so your commands never reach the VM running docker. The most popular solution is setting up a port forwarding, documented on many blogs and GitHub issues. Let’s just do that.

A new Beginning

Ensure you are not on VPN and remove the default VM; not strictly necessary, but it reduces confusion. So, in the quickstart terminal:
docker-machine rm default

And confirm. We are now going to create a new VM, let us call it ‘custom’. So type in:

docker-machine create -d virtualbox custom
eval "$(docker-machine env custom)"

It might take a couple of minutes; it is almost the same process as the first time. What we did is create a VM named custom and set up the environment to talk to this VM instead of the default. Mark this step, because if anything goes wrong in the following steps, this is the one you should come back to to start over. Just be sure to use a new name; docker currently does not allow reusing names for VMs, so next time you may not be able to create a VM called custom. A new name should work just fine.

Battling With VPN

Now we shall create a port forwarding on the virtual machine, binding the default docker port (2376) on localhost ( to forward to this VM, whatever its IP may be.
docker-machine stop custom
"/c/Program Files/Oracle/VirtualBox/VBoxManage.exe" modifyvm "custom" --natpf1 "docker,tcp,,2376,,2376"
docker-machine start custom
docker ps -a
If you changed the location of the VirtualBox installation, use the appropriate path to VBoxManage. Assuming it was successful, the last command should show you a table with all containers. You can do this through the UI as well: open VirtualBox, stop the VM, open settings -> network -> NAT adapter -> advanced -> Port forwarding. Click add rule and use the same values as above (the commas separate columns). If the command was successful, you should see the rule listed at the same location. Also, this is the place to add an entry if you need any port exposed from a docker container while VPN is enabled; for example, your application’s tomcat port.
We are not done yet, a few more commands:
export DOCKER_HOST="tcp://localhost:2376"
alias docker="docker --tlsverify=false"
Kudos to this smart guy for that alias. In other posts, you might find the IP of the VM (which does not work), the public IP of your machine, or even the loopback IP ( being used, which might work, but I would advise against that. Use ‘localhost’ instead; this and the TLS setting have to do with running docker-compose.
Now enable VPN and enjoy docker. This is where your journey ends if you are not using a private registry; but if you are, then continue.

Configuring Private Insecure Registry

Ensure that VPN is down, and ssh into the docker-machine. We want to enable it to talk to an insecure registry. A private docker registry does not need a name, but docker images in a non-docker-hub registry must be tagged with the URL of the registry prefixed to the usual repository name. They say it is for transparency; it helps identify where an image originates from. Hence, it is advisable to have a host-name even if your registry is private and has a static IP. That way, even if you change the IP of the registry for whatever reason, you do not have to update all the images/tags/compose ymls, shell scripts and whatever else is using them. Let us say our registry is hosted at registry.internal.example.com (a placeholder hostname; substitute your own), on port 5000, and being insecure, of course, is accessible only over VPN.
This step is intentionally manual, to avoid risks of breaking something else:
docker-machine ssh custom
sudo vi /var/lib/boot2docker/profile
In the EXTRA_ARGS, before the closing quote, add this line (using our example registry host):
--insecure-registry registry.internal.example.com:5000
(I would ensure a blank line before the quote, as there already was.) Save the file and exit vi (:wq). We now need to restart the docker daemon for the changes to take effect:

sudo /etc/init.d/docker stop
Ensure the service is down: sudo /etc/init.d/docker status
sudo /etc/init.d/docker start
Ensure the service is up: sudo /etc/init.d/docker status
Exit the VM by typing exit in the terminal. (BTW, there is a restart command too.)

Using the registry

Now let us try pushing and pulling from this registry. In the quickstart terminal: 
docker tag hello-world registry.internal.example.com:5000/hello-world
docker push registry.internal.example.com:5000/hello-world
docker rmi registry.internal.example.com:5000/hello-world
docker run registry.internal.example.com:5000/hello-world
What we did: tagged an image with the registry, pushed it to the private registry, removed the local copy and ran the image by pulling it from this registry.

Docker Compose

The next step is to get docker compose up and running with this setup. Actually, we are already ready; everything needed to run docker-compose was taken care of in the previous steps, most importantly the docker-host configuration. You see, the TLS certs only allow the docker-machine IP and localhost to be used even when we disable verification, but we have already taken that into account, and we have already configured our private registry. All set. Just connect VPN, navigate to the directory with your docker-compose.yml file and hit: docker-compose up. You should see the images in the compose file getting pulled and executed.

Starting the quickstart terminal second time

When you restart the quickstart terminal you might find that it recreates the ‘default’ VM and configures the environment to use it. That is okay; it does not bother us. What does bother us is that none of the docker commands work with VPN again. Keep reading…

Consecutive starts of quickstart terminal

Well, we have to reconfigure the terminal every time to use our VM of choice. Here is how to do it:
Always make sure that you start the terminal while VPN is down (starting with VPN up has never worked for me), and then run these commands:
eval "$(docker-machine env custom)"
export DOCKER_HOST="tcp://localhost:2376"
alias docker="docker --tlsverify=false"
Yes, every time you start the terminal. There is a way to avoid this, read on. 

One Time Setup: For The Brave Among Us

From this point on, you are entering undocumented territory and are on your own. If something breaks, do not come looking for me. 🙂 And before making any modifications, take a backup.
If you notice, the shortcut points to a shell script (start.sh in my install). We are going to modify this script to auto-configure our environment every time it is called. Navigate to the docker installation directory (the directory the quickstart shortcut points to) and, after creating a backup, open the script in a text editor.
Change 1: On line number 10, which looks like: VM=${DOCKER_MACHINE_NAME-default}
change that line to: VM=custom (‘custom’ here is the name of our VM). This saves you from typing the eval line every time.
Change 2: On line 66/67, in the “Setting Env” step, after the existing eval command, add the following lines:
export DOCKER_HOST="tcp://localhost:2376"
alias docker="docker --tlsverify=false"
These handle the rest of the config. That is all; save and exit the file and we are ready to roll. This may break when an update to the docker toolbox is installed that overwrites the file, it may not work if the script changes in the future, and it may break things I am not aware of; hence, only for the brave. Besides, I do not use a Windows machine daily, so you guys would be the first to know if it starts breaking ;). Let me know and we will figure it out.

Redis Cluster: Fact Sheet (Not Just Issues)

Redis and Redis clustering work very differently from other data stores and data store clusters. The differences are not always obvious and may come up as realizations down the line while using Redis, as happened in our case. We are using a Redis cluster, with which, fortunately, we have not faced many issues so far. But that does not mean we will not, and we need to be prepared.

Recently we were working on getting a Redis cluster up and working with docker compose, and were enlightened about some of the differences, which later led to some disillusionment for me. I thought there should be a ‘document of facts’ on Redis and Redis cluster that people (and I) can refer to. So I decided to create one, enjoy:

  1. Redis is great as a single server.
  2. In a Redis cluster, all your masters behave as if they are simultaneously active (not sure if they all are masters at the same time technically, but they behave as such).
  3. Every master in a cluster knows every other master/node in the cluster.
  4. There is no single master looking over the orchestration job.
  5. The masters, during clustering (sharding) agree upon the division of load: who shall have which hash slots.
  6. Each master speaks only for itself. If you ask for a key, and the hash-slot for it happens to be on the master you asked, it will return the value. Otherwise it returns a ‘redirection’ to the master that holds the slot for this key (see the sketch after this list).
  7. It is then the client’s job to resend the request to this new master based on the redirection.
  8. Clients try to sync up with master for which hash-slots lie with which master in order to speed up the retrieval.
  9. Every master knows the other masters by IP, and IP only. It is not possible to use a hostname.
  10. The knowledge about the other nodes in the cluster is stored in a file called nodes.conf. Although the extension gives the impression of a user-modifiable configuration file, it is not a file for humans to modify.
  11. Every master must know the other masters by their actual, routable IPs; it is not possible to use a loopback (like Doing that ends in a max-redirection error. How it works is: when a client asks for a key and the server responds with a redirection, the ‘smart’ client is expected to follow the redirection and get the value from the other node. But the ‘dumb’ server responds with the only IP it knows the other node by, which is the loopback on the Redis server. And the ‘smart’ client (Jedis) is not smart enough to understand that the loopback actually belongs to the node, and apparently starts looking for a Redis node on its own host! Whatever… just avoid doing that.
  12. When two nodes meet to form a cluster, one of them has to forgo its data. Either one must be empty.
  13. Replicas do not live within masters or any other nodes, for that matter. Unlike what we know about clustering in services like Elasticsearch or Kafka, replicas in Redis are independent nodes. So if you want a replication factor of 2 and have 3 masters, you effectively need 3 * 2 + 3 = 9 nodes in the cluster.
  14. If a master drops off, it is not possible to bring it back into the cluster with data. An implication of point 12.
  15. If you need to perform updates to any of the nodes/servers, take points 12 and 14 into consideration. Take out the master, upgrade, flush and reconnect as a slave; that is how it works.
  16. Converting a single server to a cluster is not supported officially. There is one blog by a smart person showing a workaround for such a migration. The inverse, cluster to single server, should be equally painful.
  17. Redis / Redis clustering is not officially supported on Windows. There are unofficial ways to achieve something of the sort, like MSOpenTech’s Redis implementation, which now also supports clusters.
  18. The Java client, Jedis, has two different classes: one for connecting to a single standalone server (JedisClient) and another for connecting to a cluster (JedisClusterClient). So if you decide to use the cluster in production, you cannot choose to use a single server during development. The implication is unnecessary load on your laptops. It can be managed by using environment-aware wiring. We worked around it by creating a jar with a class that, on post-construct, just replaces the cluster-client reference of our internal cache utility class with a single-server jedis-client. Just placing this jar on the classpath during development solves it for us.
  19. Running Redis cluster in docker has its own pain points, on that later. (A different fact sheet for docker soon.)
  20. Extending point 11: if you have two network interfaces on the nodes, and two isolated networks for two services that use this Redis cluster, how will that work out? Such a setup is common in docker compose, where we isolate services into different networks. We will need to see how Redis behaves in such a setup.
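
As an illustration of points 6 and 7, here is roughly what the redirection looks like from redis-cli (the -c flag makes the client follow redirections; the IPs, ports, slot number and key here are made up):

$ redis-cli -p 7000 get user:42
(error) MOVED 8321
$ redis-cli -c -p 7000 get user:42
-> Redirected to slot [8321] located at
"some-value"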

Although it was not the intention, while reading what I wrote I realized that the points above do look like a rant. In spite of them, Redis is a solid, fast cache store and I love it for that. These are merely a few nuisances and related implications which we learnt about and experienced in our use of a Redis cluster. Please use them only as points to ponder when designing your application. Also, these nuisances reflect the state of Redis and Redis cluster at the time of writing, which will change in time to come.

Better Ways Of Storing Product Knowledge

So, the Brain-Format is not that good. Which format is? To answer that, let’s first discuss the ideal attributes of product knowledge and of the place we would keep that knowledge in, the repository. We shall start with the basic expectations from the documentation itself, and later discuss the expectations from the repository.
But before we begin, I would like to make a point: on my previous post I got feedback that I probably should not use the term ‘knowledge’, as it is too heavy a term for simple ‘information’. Well, I disagree. I believe knowledge, in simple terms, is information in a usable format, which includes the insights drawn from the information; those insights, of course, are not part of the information itself. It is processed information, and that is what differentiates it. This difference also highlights the importance of this information, and that importance happens to be the goal behind writing down these thoughts.
Now that it is clear, shall we begin?
The first and foremost point is that product knowledge is best treated like the product code itself. Is that too much to ask? Consider this: we need the product knowledge to always be relevant. For it to be relevant it needs to be updated; it should reflect the latest changes and enhancements done to the product; in effect, it is highly likely to be modified every time the code is modified. Hence, is it wrong to expect the same flexibility from the documentation that we have come to expect from the code? Why should we not apply the same quality guidelines? In general terms, should it not be as maintainable as the code itself?
So, the first list is of attributes of the knowledge storage format:
    1. Easy to create: It applies to new documentation, and new additions to existing documentation. Whatever the format, it should not require huge assembly or lot of people or say, multiple approvals.
    2. Easy to maintain: This attribute is rather an abstract one, and many points below shall touch on this in greater detail. (Clean Code, anyone?)
    3. Easy to extend: Extend, in the context of documentation, means it should be possible to combine documents to bring related information together, without duplication. It could be through a link to the information, but best would be the ability to embed.
    4. Easy to use: What is the use of the documentation? It should be easy to read/watch/listen/touch/smell/taste etc. (Well, maybe not touch or smell, or taste..)
    5. Should be DRY: This directly relates to the ‘extend’ requirement, it should be possible to have a single authoritative representation of the knowledge.
    6. Presentable: But of course, we want to use it don’t we? We need to like it!
There are many more analogies we can draw, but I think these are enough to convey the point that it should be built with almost the same principles as the code. Now we take on the documentation repository, and also discuss some non-functional requirements that apply to the documentation but not necessarily to code:
  1. Access Control: Does it need to be discussed? Of course we need access control, and multiple levels of control: Access to read, write/edit, to delete, and the access to grant access should all be controllable. Even better if we could integrate with the corporate account management system and also set roles.
  2. Record History: For the same reason as code, we need a way to undo (and also blame people) any changes done to documents, including restoring deleted content.
  3. Portable: Yes, portable. The knowledge is not only for developers; it is also for the marketing members of the team, the business analysts and the management. We cannot expect that these guys, whose job is to go out and meet people, will always have access to the internet and VPN. That makes it a non-functional requirement that the knowledge be portable, in full or at least in part. I imagine some companies having a problem with this, but those who use distributed version control systems like Git should not really worry; they are trusting their teams with the working code, and knowledge is not going to cause any new special problems.
  4. Lightweight: It should be light on resources. Resources of all sorts, be it storage, network, computing power, but most importantly on the (arguably) costliest resource on the team: ‘user time’.
  5. Searchable: It should be possible to search within the repository by various categories, tags and of course the content.
  6. Shareable: Shareable by either exporting or by providing a reference pointing to the exact content, like a URL.
  7. Encourage Contribution: This is likely the most neglected but probably the most important requirement. If, after all this, the repository does not appeal to people, it is going stale real soon.
Phew..! The list is far from complete, but I think I have made my point, so now we’re off to the next task: looking for a format and a repository that fit all these criteria! Till then, coke anyone?

Cinnamon Crashed, would you like to restart?

I have been a fan of the Cinnamon DE for years. I like the way it looks, and it stays out of my way when I am not admiring it and am actually doing something useful! But it is somewhat buggy.

This is a quick post about Cinnamon crashes; basically, a new reason for it to crash. I was faced with a common issue: Cinnamon crashed with a popup suggesting I restart it, which, when I clicked yes, resulted in another crash and another popup.

Google searches turned up many solutions, starting with updating Cinnamon, resetting the config by deleting the .cinnamon and .local/share/cinnamon directories, and verifying that the correct video driver is in use. There was nothing obvious in the syslog or the xsession errors. Nothing helped.

Tired, I reinstalled Mint, but the issue persisted. This was rather peculiar. I mount my home partition separately, and that of course survives installs and OSes. This was the first hint at the problem: the issue was the configuration of something, not necessarily of Cinnamon. So I created a new user, tried to log in with it, and voila, Cinnamon worked without a crash. So the issue was certainly in the config of my regular user.

I decided to go about removing related config folders, and the first ones I chose were the gtk-3.0, gtk-2.0 and cinnamon-session directories inside the .config directory. And to my luck, Cinnamon has been working just fine since then.
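
If you want to try the same, a safer sketch is to move the directories aside rather than delete them, then log out and back in:

mv ~/.config/gtk-3.0 ~/.config/gtk-3.0.bak                      # GTK 3 settings
mv ~/.config/gtk-2.0 ~/.config/gtk-2.0.bak                      # GTK 2 settings
mv ~/.config/cinnamon-session ~/.config/cinnamon-session.bak    # session state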

Probably, I should spend some time checking what exactly in these configs was the issue. But at least I now know one more reason why this error might occur, and one more way to fix it. And now, you do too..!