Yet another packager for node

There are so many packaging systems for node already, or maybe not that many, so here I am presenting another way to package your applications into a self-extracting executable that has no dependencies. Ah well, a few dependencies, like the processor architecture, and maybe the Linux operating system, but that is all.

What is it?

It is a modified shell script, originally used to create self-extracting and self-installing applications for Linux platforms. What it does is create a tarball which includes your code, the modules it depends on and the specific node binary it uses, and append it to a script with the command to execute your code. It is essentially a binary merge of the files: the shell script and the tar. This is not something new; people have used such a system in the past to deliver applications for Linux. Every time you see an obscenely large '.sh' file (for being that, a shell file) that can install or execute an application without requiring any other files, know that this is the packaging system being used. This script is merely an adaptation of it for delivering node.js programs. And to give credit where credit is due, it is pulled and compiled from a few sources.

What all can it do?

  I have been hoping you would ask that; it is interesting:
  1. Creates a single file that starts your code when executed.
  2. Does so without requiring even node or node_modules installed on the target system.
  3. No knowledge of any framework required; develop your code just as you normally would.
  4. Allows you to name the process it starts. Well, it at least helps you to do so.
  5. Allows you to have environment-specific overrides for any configuration you might want.

What can it not do?

  1. It needs to be bundled for the target platform, but this is expected, is it not?
  2. Does not work well if a module has binary/native dependencies, when things like node-gyp or build-essential come into the picture.
  3. Cannot make you fly (but it can make you look smart!)

Where is it? How do I use it?

Here. It is a simple command. To package, run:
./selfXpackager.sh -s node-bin/launcher.sh -n selfExeSample -b node-bin/node -m mymodule/ -o dist/
And to run the package:
../dist/selfExeSample_launcher.sh
That easy. The repository also has a sample project to try it out.

Where should I use it?

Well, how can I comment on that? It would be for you to decide! But I can tell you how we use it. The company I work for is primarily a Java shop. Our system is quite distributed, composed of many services (I dare not say microservices, it is easy to start flame wars these days) that talk to each other. But ever since we realized the power of node, especially in the quick new developments that we do, we have leveraged it. We have much code in the form of monitoring and mock servers, automation and code generation tools and fault injection systems built in node. These systems are delivered, they do their job and are removed when no longer required. This is where the script comes in: a no-dependency delivery of a tool wherever we need it. Instead of requiring node installed on all servers, we bundle our tool with this script and deliver it to the servers we need it on; when the job is done they disappear without a trace. Well, almost without a trace, it's not some stealth tool anyway.
Opinionless Comparison of Spring And Guice as DI frameworks

Recently I had to delve into the Play framework for a particular microservice at work. Now it is not exactly new, nor is Guice, nor DI, but coming from the Spring world it was still a big shift in approach. There is a lot of documentation comparing Spring with Guice, stating which is better, why and how. In general these articles discuss specific points where the two frameworks differ in their approaches and which approach seems better to the author. I am not sure these articles really help someone trying to take a dip in the other framework. We know the differing opinions, as they are stated by the authors of the respective frameworks in their own documentation as well; another person (the article's author) reiterating them with an incomplete comparison of the frameworks does not sound helpful. What would work much better is a direct mapping of features, without the author's opinion (didn't this sound like an opinion?). That should help someone getting into Spring from the Guice world or vice versa.


Now let me warn you: since these are different frameworks for the same purpose, DI (Dependency Injection), they exist for their differences. Hence, there cannot be a one-to-one mapping of features/differences in these frameworks. What we can get instead is a mapping of similar features, and that is what we will have. If nothing else, the comparison below should help someone find the right documentation for what they are trying to do, instead of wondering what to look for.


Another point: we are here discussing Spring and Guice only on their dependency injection approaches, and not as web frameworks, nor on their AOP or JPA abilities, their ecosystems or any other features they provide. That is for another time maybe, but not today.


| Spring | Guice |
| --- | --- |
| Application-level @Configuration | Extending AbstractModule comes closest. A module defines a part of your application; multiple modules can depend on each other in an application (unless your service is too small). |
| @ComponentScan | There is no classpath scanning in Guice (keep reading…). |
| @Component | @Singleton with bind(), with or without .to(), in a Module. |
| @Scope(""): singleton (default), prototype, request, session, global-session | Default is unscoped, similar to prototype in Spring; @Singleton, @SessionScoped, @RequestScoped, custom. Eager/lazy singletons differ between production and development. |
| @Autowired, @Inject | @Inject (from the javax or the guice package) |
| @Qualifier(<name>) | @Qualifier / @Named with Names.named; annotatedWith and @BindingAnnotation |
| @Bean | @Provides, or implement Provider<T> |
| @Bean with an @Autowired field in it | Explicit constructor binding: .toConstructor(A.class.getConstructor()) |
| @Value | @Named with Names.bindProperties() in your module |
| Injecting static fields: @Autowired on a non-static setter method | For static fields, use .requestStaticInjection() in your Module |
| ApplicationContext (BeanFactory, to be precise) | Injector |
| @Autowired with context.getBean(Clazz, Object…) | @AssistedInject: allows using parameters along with injected beans to instantiate objects. |
| @Lookup | Provider<T> with FactoryProvider; FactoryModuleBuilder |
| @PostConstruct, @PreDestroy | No support for lifecycle events (extensions exist). |
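
To make the mapping concrete, here is a minimal sketch of the same wiring expressed in both frameworks. It is illustrative only: PaymentService, CardPaymentService and Gateway are hypothetical names, not from any real project.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import com.google.inject.AbstractModule;
import com.google.inject.Provides;
import com.google.inject.Singleton;

// Spring: an application-level configuration with @Bean factory methods.
@Configuration
class AppConfig {
    @Bean
    PaymentService paymentService() {
        return new CardPaymentService();
    }
}
// Bootstrapped with:
// ApplicationContext ctx = new AnnotationConfigApplicationContext(AppConfig.class);
// PaymentService payments = ctx.getBean(PaymentService.class);

// Guice: the closest equivalent is a Module with bind() and @Provides methods.
class AppModule extends AbstractModule {
    @Override
    protected void configure() {
        // Roughly @Component with singleton scope.
        bind(PaymentService.class).to(CardPaymentService.class).in(Singleton.class);
    }

    @Provides // roughly @Bean
    Gateway gateway(PaymentService payments) {
        return new Gateway(payments);
    }
}
// Bootstrapped with:
// Injector injector = Guice.createInjector(new AppModule());
// PaymentService payments = injector.getInstance(PaymentService.class);

Note how the Guice side is plain Java; this is the compile-time verification point discussed below.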


Let’s also see a few more points which would not fit well in a tabular form:
  • One can add more capabilities to Guice with plugins, and there are a few actively maintained ones, like Governator from Netflix. Spring can be extended using a BeanPostProcessor or BeanFactoryPostProcessor in your application, but I was unable to find a plugin for extending Spring's core DI abilities.
  • Unlike Spring, wiring in Guice (called binding) is plain Java, and so Guice has compile-time verification of any wiring we do. Spring depends on metadata in annotations, which is not checked during compilation, so it does not have this feature and exceptions show up at runtime.
  • Classpath scanning can be achieved in Guice by extending it. (Some plugins provide this, but governator for one has deprecated it.)
  • Lack of classpath scanning in Guice, most likely, considerably reduces the application startup time in comparison to Spring.
  • In Guice an interface can declare its default implementation class (is it odd, Spring people?) with the @ImplementedBy annotation, which can be overridden by .bind() if found in a module. Similarly, the interface can declare the configuration class which generates the instance: @ProvidedBy. (See the sketch after this list.)
  • I know I said we are not going to discuss any other abilities, but this one is a little interesting: Guice has built-in support for AOP; in Spring we need an additional dependency.
  • Not a difference, but a point to note: both frameworks have similar injection types: constructor, method and field.
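
Since the @ImplementedBy / @ProvidedBy point above tends to surprise Spring folks, here is a tiny sketch of it; Notifier, SmsNotifier and EmailNotifier are made-up names for illustration:

import com.google.inject.ImplementedBy;

// The interface itself declares its default implementation...
@ImplementedBy(SmsNotifier.class)
interface Notifier {
    void send(String message);
}

class SmsNotifier implements Notifier {
    public void send(String message) { /* send an SMS */ }
}

// ...and any module can still override that default explicitly:
// bind(Notifier.class).to(EmailNotifier.class);

The annotation is only a fallback; an explicit bind() in a module wins.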


I have tried to be as opinionless as possible while writing the above piece, although there are a few things that I find important to note.
  • Guice is very much a non-magical (in the words of the Guice authors) dependency injection framework; you can literally see DI happen, in code that you write and can read.
  • Thankfully, Guice has no beans… NO BEANS! How many beans do we have to remember and disambiguate before it is too much? JavaBeans, Enterprise JavaBeans, Spring beans, coffee beans, Mr. Bean, and I might still have missed a few!
  • Guice still feels like Java; you see, it does believe in extending classes. Spring nowadays seems to believe only in annotations, so much so that a few folks I asked around can't even remember what the 'extends' keyword stands for! 😉

So which one is better? Now, that was not the question we were hoping to answer!

Using Docker and a Private Registry with VPN On Windows

Wasn't that a very specific title? Docker has very good documentation, and reading that alone is enough for most of the straightforward tasks we might want to do. But as always, some practical tasks are not straightforward, hence this blog. What we are going to see here today is how to set up Docker Toolbox on a Windows machine, make it work even when VPN is connected, make it talk to a private, insecure docker registry (that is why the VPN), configure it so it can run docker-compose, and see how we can make this configuration a one-time activity. That's quite a mouthful, but yes, this is what we are going to do. All ready? Let us begin then.

Install Docker Toolbox

Go and download the Docker Toolbox and install it. That should create a shortcut called "Docker Quickstart Terminal". Run it. That should show you an error about virtualization.

Enable Virtualization

Restart your machine, enter the BIOS settings and enable virtualization. It may be under advanced settings. On this laptop, it is under Advanced Settings -> Device Configurations and is named "Virtualization Technology (VTx)". Whatever the name, enable it.
Docker requires a Linux kernel, and since Windows machines lack it (of course!), Docker Toolbox runs a lightweight Linux distro called boot2docker in a VirtualBox VM, hence the virtualization setting.

A Handy Tip

This tutorial will require you to copy and paste quite a few shell commands, so it is better we make that easy. Exit the quickstart terminal. Right-click the shortcut, click Properties -> Options, enable 'Quick Edit' mode and save. It might ask for permission. This should enable pasting just by right-clicking the mouse; to copy, just select the text with the mouse. While we are at it, also consider increasing the buffer and window size to suit your taste.

Start Up the VM

Make sure you are not connected to VPN and use the Quickstart Terminal shortcut again. This time it should proceed to check whether the boot2docker image is the latest (or pull the latest image), then create a VM, get an IP, set up some SSH keys, and finally the whale should appear with a terminal. Run the following commands to get a hang of docker running on Windows:
docker -v
docker version
docker run hello-world
docker images
docker ps -a
(And do read the output of hello-world, it describes how docker works.)

The Disappointment

Feeling happy? Now for a little disappointment: connect VPN and try again. Errors, errors everywhere. Disconnect VPN. What happened: docker is running in a VirtualBox VM on your machine, which gets an IP in a local range (normally 192.168.99.100), and you are talking to it over SSH. Once VPN is up, it sets new routes and sends the 192.168.* range traffic out over VPN, and your commands never reach the VM running docker. The most popular solution is setting up a port forwarding, documented on many blogs/GitHub issues. Let's just do that.

A New Beginning

Ensure you are not on VPN and remove the default VM; not necessary, but it reduces confusion. So in the quickstart terminal:
docker-machine rm default

And confirm. We are now going to create a new VM, let us call it ‘custom’. So type in:

docker-machine create -d virtualbox custom
eval "$(docker-machine env custom)"

It might take a couple of minutes; it is almost the same process as the first time. What we did is create a VM named custom and set up the environment to talk to this VM instead of the default. Mark this step, because if anything goes wrong in the following steps, this is the one you should come back to to start over. Just be sure to use a new name; docker currently does not allow reusing names for VMs, so next time you may not be able to create a VM called custom. A new name should work just fine.

Battling With VPN

Now we shall create a port-forwarding rule on the virtual machine, binding the default docker port (2376) on localhost/127.0.0.1 to forward to this VM, whatever its IP.
docker-machine stop custom
"/c/Program Files/Oracle/VirtualBox/VBoxManage.exe" modifyvm "custom" --natpf1 "docker,tcp,,2376,,2376"
docker-machine start custom
docker ps -a
If you changed the location of the VirtualBox installation, use the appropriate path to VBoxManage. Assuming it was successful, the last command should show you a table with all containers. You can use the UI to do this as well: open VirtualBox, stop the VM, open Settings -> Network -> NAT adapter -> Advanced -> Port Forwarding. Click add rule and use the same values as above (the commas separate columns). If the command was successful, you should see the rule listed at the same location. Also, this is the place to add an entry if you need any port exposed from a docker container while VPN is enabled; for example your application's tomcat port.
We are not done yet, a few more commands:
export DOCKER_HOST="tcp://localhost:2376"
export DOCKER_TLS_VERIFY="0"
alias docker="docker --tlsverify=false"
Kudos to this smart guy for that alias. In other posts you might find the IP of the VM (which does not work), the public IP of your machine, or even the loopback IP (127.0.0.1) being used, which might work, but I would advise against that. Use 'localhost' instead; this and the TLS setting have to do with running docker-compose.

Now enable VPN and enjoy docker. This is where your journey ends if you are not using a private registry; if you are, then continue.

Configuring Private Insecure Registry

Ensure that VPN is down, and SSH into the docker machine. We want to enable it to talk to an insecure registry. A private docker registry does not need a name, but docker images in a non-docker-hub registry must be tagged with the URL of the registry prefixed to the usual repository name. They say it is for transparency: it helps identify where an image originates from. Hence, it is advisable to have a host name even if your registry is private and has a static IP. That way, even if you change the IP of the registry for whatever reason, you do not have to update all the images/tags/compose ymls, shell scripts and whatever else uses them. Let us say our registry is hosted at dockerregistry.example.com, on port 5000, and being insecure, of course, is accessible only over VPN.
This step is intentionally manual, to avoid risks of breaking something else:
docker-machine ssh custom
sudo vi /var/lib/boot2docker/profile
In the EXTRA_ARGS, before the closing quote, add this line: --insecure-registry=dockerregistry.example.com:5000 
(I would ensure a blank line before the quote, as there already was.) Save the file and exit vi (:wq). We now need to restart the docker daemon for the changes to take effect:

sudo /etc/init.d/docker stop
Ensure the service is down: sudo /etc/init.d/docker status
sudo /etc/init.d/docker start
Ensure the service is up: sudo /etc/init.d/docker status
Exit the VM by typing exit in the terminal. (BTW, there is a restart command too.)

Using the registry

Now let us try pushing and pulling from this registry. In the quickstart terminal: 
docker tag hello-world dockerregistry.example.com:5000/hello-world
docker push dockerregistry.example.com:5000/hello-world
docker rmi dockerregistry.example.com:5000/hello-world
docker run dockerregistry.example.com:5000/hello-world
What we did is tag an image with the registry, push it to the private registry, remove the local copy and run the image by pulling it from the registry.

Docker Compose

The next step is to get docker-compose up and running with this setup. Actually, we are already ready; everything that we need to run docker-compose was taken care of in the previous steps, most importantly the DOCKER_HOST configuration. You see, the TLS certs only allow the docker-machine IP and localhost to be used even when we disable verification, but we have already taken that into account, and we have already configured our private registry. All set. Just connect VPN, navigate to the directory with your docker-compose.yml file and hit: docker-compose up. You should see the images in the compose file getting pulled and executed.

Starting the quickstart terminal a second time

When you restart the quickstart terminal, you might find that it recreates the 'default' VM and configures the environment to use it. That is okay; it does not bother us. What does bother us is that none of the docker commands work with VPN again. Please keep reading…

Consecutive starts of the quickstart terminal

Well, we have to reconfigure the terminal every time to use our VM of choice. Here is how to do it:
Always make sure that you start the terminal while VPN is down. Starting with VPN up has never worked for me. Then run these commands:
eval "$(docker-machine env custom)"
export DOCKER_HOST="tcp://localhost:2376"
export DOCKER_TLS_VERIFY="0"
alias docker="docker --tlsverify=false"
Yes, every time you start the terminal. There is a way to avoid this; read on.

One Time Setup: For The Brave Among Us

From this point on, you are entering undocumented territory and are on your own. If something breaks, do not come looking for me. 🙂 And before making any modifications, take a backup.
If you notice, the shortcut points to a shell script called 'start.sh'. We are going to modify this script to auto-configure our environment every time it is called. Navigate to the Docker installation directory (the directory the quickstart shortcut points to) and open the start.sh file (after creating a backup) in a text editor.
Change 1: Line number 10 looks like: VM=${DOCKER_MACHINE_NAME-default}
Change that line to: VM=custom. 'custom' here is the name of our VM. This saves you from typing the eval line every time.
Change 2: On line 66/67, in the "Setting Env" step, after the existing eval command add the following lines (the same configuration we have been typing by hand):
export DOCKER_HOST="tcp://localhost:2376"
export DOCKER_TLS_VERIFY="0"
alias docker="docker --tlsverify=false"
These handle the rest of the config. That is all; save the file and we are ready to roll. This may break when an update to the Docker Toolbox is installed which overwrites the file, it may not work if the script changes in the future, and it may break things I am not aware of, hence: only for the brave. Besides, I do not use a Windows machine daily, so you guys would be the first to know if it starts breaking ;). Let me know and we will figure it out.
Redis Cluster: Fact Sheet (Not Just Issues)

Redis and Redis clustering work very differently from other data stores and data store clusters. The differences are not always obvious and may come up as realizations down the line while using Redis, as happened in our case. We are using a Redis cluster with which, fortunately, we have not faced many issues so far. But that does not mean we never will, and we need to be prepared.


Recently we were working on getting a Redis cluster up and working with docker-compose and were enlightened to some of these differences, which later led to disillusionment for me. I thought that there should be a 'document of facts' on Redis and the Redis cluster which people/myself can refer to. So I decided to create one. Enjoy:


  1. Redis is great as a single server.
  2. In a Redis cluster, all your masters behave as if they are simultaneously active (I am not sure if they technically are all masters at the same time, but they behave as such).
  3. Every master in a cluster knows every other master/node in the cluster.
  4. There is no single master looking over the orchestration job.
  5. The masters, during clustering (sharding), agree upon the division of load: who shall have which hash slots.
  6. Each master speaks only for itself. If you ask for a key, and the hash slot for it happens to be on the master you asked, it will return a value. Otherwise it returns a 'redirection' to the master that has the slot for this key.
  7. It is then the client’s job to resend the request to this new master based on the redirection.
  8. Clients try to keep track of which hash slots lie with which master, in order to speed up retrieval.
  9. Every master knows every other master/node by IP and IP only. It is not possible to use a hostname.
  10. The knowledge about the other nodes in the cluster is stored in a file called nodes.conf. Although the extension gives the impression of a user-modifiable configuration file, it is not a file for humans to modify.
  11. Every master must know every other master by an actual public IP; it is not possible to use a loopback (like 127.0.0.1). If you do that, it ends up in a max-redirection error. How it works is: when a client asks for a key and the server responds with a redirection, the 'smart' client is expected to follow this redirection and get the value from the other node. Now the 'dumb' server responds with the only IP it knows the other node by, which is the loopback on the Redis server. But the 'smart' client (Jedis) is not smart enough to understand that the loopback actually belongs to the node, and apparently starts looking for a Redis node on its own host! Whatever… just avoid doing that.
  12. When two nodes meet to form a cluster, one of them has to forgo its data. Either one must be empty.
  13. Replicas are not within the master or any other node, for that matter. Unlike what we know about clustering in services like Elasticsearch or Kafka, replicas in Redis are independent nodes. So if you want a replication factor of 2 and have 3 masters, you effectively need 3 * 2 + 3 = 9 nodes in the cluster.
  14. If a master drops off, it is not possible to bring it back into the cluster with data. An implication of point 12.
  15. If you need to perform any updates to any of the nodes/servers, take points 12 and 14 into consideration. Take out the master, upgrade, flush and reconnect as a slave; that is how it works.
  16. Converting a single server to a cluster is not supported officially. There is one blog by a smart person showing a workaround for such a migration. The inverse, cluster to single server, shall be equally painful.
  17. Redis / Redis clustering is not officially supported on Windows. There are unofficial ways to achieve something of the sort, like MSOpenTech's Redis implementation, which now also supports clusters.
  18. The Java client, Jedis, has two different classes: one for connecting to a single standalone server (Jedis) and another for connecting to a cluster (JedisCluster). So if you decide to use the cluster in production, you cannot simply use a single server during development; the implication is unnecessary load on your laptops. It can be managed by using environment-aware wiring. We worked around it by creating a jar with a class that, on post-construct, just replaces the cluster-client reference of our internal cache utility class with a single-server Jedis client. Just placing this jar on the classpath during development solves it for us. (A minimal sketch of such wiring follows this list.)
  19. Running a Redis cluster in docker has its own pain points; on that later. (A different fact sheet for docker soon.)
  20. Extending point 11: if you have two network interfaces on the nodes, and two isolated networks for two services that use this Redis cluster, how will that work out? Such a setup is expected in docker-compose, where we isolate services into different networks. We will need to see how Redis behaves in such a setup.
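
As promised in point 18, here is a minimal sketch of environment-aware wiring with Jedis. The CacheClient facade and the host names are hypothetical, and our real utility class differs; this only shows the shape of the workaround:

import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisCluster;

// Hypothetical facade: a JedisCluster in production, a single Jedis in development.
class CacheClient {
    private final JedisCluster cluster; // non-null when clustered
    private final Jedis single;         // non-null in development

    CacheClient(boolean clustered) {
        if (clustered) {
            this.cluster = new JedisCluster(new HostAndPort("redis-node-1", 6379));
            this.single = null;
        } else {
            this.cluster = null;
            this.single = new Jedis("localhost", 6379);
        }
    }

    String get(String key) {
        // JedisCluster follows the redirections from points 6 and 7 internally.
        return cluster != null ? cluster.get(key) : single.get(key);
    }
}

Wire up a CacheClient(false) during development and a CacheClient(true) in production, and the calling code never knows the difference.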

Although it was not the intention, while reading what I wrote I realized that the points above do look like a rant. In spite of these, Redis is a solid, fast cache store and I love it for that. These are merely a few nuisances and related implications which we learnt about and experienced in our use of a Redis cluster. Please use them only as points to ponder when designing your application. Also, these nuisances are based on the state of Redis and the Redis cluster at the time of writing, which will change in time to come.
Better Ways Of Storing Product Knowledge

So, the Brain-Format is not that good. Which is? To answer that, let's first discuss the ideal attributes of product knowledge and of the place we would keep that knowledge in, the repository. We shall start with the basic expectations from the documentation itself, and later discuss the expectations from the repository.
But before we begin, I would like to make a point: on my previous post I got feedback that I probably should not use the term 'knowledge', as it is too heavy a term for simple 'information'. Well, I disagree. I believe knowledge, in simple terms, is information in a usable format, which includes the insights drawn from the information, insights which of course are not part of the information itself. It is processed information, and that is what differentiates it. This difference also highlights the importance of this information, and that importance happens to be the goal behind writing down these thoughts.
Now that it is clear, shall we begin?
The first and foremost point is that product knowledge is better treated like the product code itself. Is that too much to ask? Consider this: we need the product knowledge to always be relevant. For it to be relevant it needs to be updated; it should reflect the latest changes and enhancements done to the product; in effect, it is highly likely that it will be modified every time the code is modified. Hence, is it wrong to expect the same flexibility from the documentation that we have come to expect from the code? Why should we not apply the same quality guidelines? In general terms, should it not be as maintainable as the code itself?
So, the first list is of attributes of the knowledge storage format:
    1. Easy to create: This applies to new documentation, and to new additions to existing documentation. Whatever the format, it should not require huge assembly, or a lot of people, or, say, multiple approvals.
    2. Easy to maintain: This attribute is rather an abstract one, and many points below touch on it in greater detail. (Clean Code, anyone?)
    3. Easy to extend: Extend, in the context of documentation, means that it should be possible to combine documents to bring related information together without duplication. It could be through a link to the information, but best would be the ability to embed.
    4. Easy to use: What is the use of the documentation? It should be easy to read/watch/listen/touch/smell/taste etc. (Well, maybe not touch or smell, or taste..)
    5. Should be DRY: This directly relates to the 'extend' requirement; it should be possible to have a single authoritative representation of the knowledge.
    6. Presentable: But of course, we want to use it, don't we? We need to like it!
There are many more analogies we can draw, but I think these are enough to convey the point that documentation should be built with almost the same principles as the code. Now we take on the documentation repository, and also discuss some non-functional requirements that apply to the documentation but not necessarily to code:
  1. Access Control: Does it need to be discussed? Of course we need access control, and multiple levels of it: access to read, to write/edit, to delete, and the access to grant access should all be controllable. Even better if we could integrate with the corporate account management system and also set roles.
  2. Record History: For the same reason as with code, we need a way to undo (and also blame people for) any changes done to documents, including restoring deleted content.
  3. Portable: Yes, portable. The knowledge is not only for developers; it is also for the marketing members of the team, the business analysts and the management. We cannot expect that these guys, whose job is to go out and meet people, will always have access to the internet and VPN. That makes it a non-functional requirement that the knowledge be portable, in full or at least in part. I imagine some companies having a problem with this, but those who use distributed version control systems like Git should not really worry; they already trust their teams with the working code, and knowledge is not going to cause any new special problems.
  4. Lightweight: It should be light on resources. Resources of all sorts, be it storage, network or computing power, but most importantly the (arguably) costliest resource on the team: 'user time'.
  5. Searchable: It should be possible to search within the repository by various categories, tags and of course the content.
  6. Shareable: Shareable by either exporting or by providing a reference pointing to the exact content, like a URL.
  7. Encourage Contribution: This is likely the most neglected but probably the most important requirement. If, after being all this, the repository does not appeal to people, it is going stale real soon.
Phew..! The list is still far from complete, but I think I have made my point, so now we're off to the next task: looking for a format and a repository that fit all these criteria! Till then, coke anyone?
Cinnamon Crashed, would you like to restart?

I have been a fan of the Cinnamon DE for years. I like the way it looks, and it stays out of my way when I am not admiring it and actually doing something useful! But it is somewhat buggy.

This is a quick post about Cinnamon crashes, basically a new reason for it to crash. I was faced with the common issue of Cinnamon crashing, with a popup suggesting I restart Cinnamon, which, when I clicked yes, resulted in another crash and another popup.

Google searches turned up many solutions, starting with updating Cinnamon, resetting the config by deleting the .cinnamon and .local/share/cinnamon directories, and verifying that the correct video driver is in use. There was nothing obvious in the syslog or the xsession errors. Nothing helped.

Tired, I reinstalled Mint, but the issue persisted. This was rather peculiar. I mount my home partition separately, and that of course survives installs and OSes. This was the first hint at the problem: the issue was the configuration of something, not necessarily of Cinnamon. So I created a new user and tried to log in with that user, and voila, Cinnamon worked without a crash. So certainly the issue was the config for my regular user.

I decided to go about removing related config folders, and the first ones I chose were the gtk-3.0, gtk-2.0 and cinnamon-session directories inside the .config directory. And to my luck, Cinnamon has been working just fine since.

Probably I should spend some time checking what exactly in these configs was the issue. But at least I now know one more reason why this error might occur and one more way to fix it. And now, you do too..!

Worst Place To Keep Your Product Documentation: Human Brains

One thing I can say from my experience with the products I have worked on is this: documentation of a product is nearly as important as the code itself, and a comparable amount of effort should go into keeping it usable. Of course, it won't earn you money, and it won't create new competition if it leaks either. But does that make the documentation any less important?
What it can do, though, is save you money and time. It saves time when a new member joins, it saves time when a change is needed, whether to functionality or to technology, and it saves time when requirements are conveyed to a vendor or details to a potential client. It takes the burden off your head, because you no longer have to remember things, except for remembering to document what you know. It is important to secure it, for if it leaks, your competitors can learn from your design, and that can potentially create new competitors. More importantly, it can create immediate threats, because knowing your architecture makes it easier to attack your products. (The open source case is different.)


Despite all these reasons, there is a reluctance to maintain product documentation in document form! Products rely on team members to remember the technical details and functional flows, they rely on members to convey this knowledge to every new member, they rely on existing members to recollect it when the time comes, and on members with all this knowledge to stay with them forever! It can't work, it has been seen to not work, and yet, we insist!


If it is not yet clear, I haven’t been a great fan of this strategy. Here is why:
  1. No one can remember it all, and even if someone does, no one can always recollect it at the right time.
  2. When it comes to conveying the knowledge, people tend to convey only what is required for the current task at hand, not the full picture. I am not saying that is wrong, because at the time that is the only knowledge required. But this process requires a long time for the new member to get the full picture and, hence, to become more productive. You can conduct sessions, but again we are expecting people to remember and recollect what was told to them only once (or twice)!
  3. Then there is the problem of members being reluctant to convey, having vested interests in not transferring the knowledge. The argument can be made that building a cooperative team solves this, but we know how hard it is to build an always-cooperating, motivated team; let us accept it.
  4. The unavailability of 'the person who knows the answer' is hard to argue with. People can be unavailable for multiple reasons: away from the office, involved in a different task, travelling for business, unwell, on leave, or simply not reachable at the moment of crisis.
  5. And again, people switch jobs; we can't expect every experienced employee to work with us forever. And even if they did, refer to point 4.


And even after all this, people can get hit by a car, or by a flowerpot on the head, and get amnesia! My point is: can we leave the stability of our product to such things? There are better ways to handle this knowledge. Better, proven ways.


I think at this point it should be clear that we are talking about knowledge of the product. This includes not only the requirement documents, high- and low-level technical design documents, user stories, acceptance criteria and issues raised in the agile tracking systems, but also the insights, the gists, the summaries, the diagrams, the communications, MOMs, presentations and sessions, in searchable, easily retrievable and referable form. (Too late in a post to define the core subject, but better late than never!)


I would not want to get into the details of how to store documentation outside of human brains right away, but understanding that we need to is the first step in that direction.

A Guess-The-Color-Code Game: ColorCode

This post was written ~6 years ago; a lot of things in the world of browsers have changed since, and the game may no longer work or look correct.

A game with an unknown name!
That is what it was when I started working on it…
I first played this game as a board game, on a very, very old board, with no reference to its name. It was to be played by two players.
The other day, my younger sister's friends came home; they brought with them a board game to play. It belonged to her mom (that old). No one knew the name, but it was just plain fun to play.

We decided to implement it in code. And here it is… a digital version of the game.
But what to name it? First we thought of naming it colour-seq, but some amount of Googling revealed that there already exist many digital versions of this game, and they are called ColorCode. So that's what it is! 🙂

Game objective:
It's simple: arrive at the same sequence of colours as the computer has in mind.

Game rules:

  • The computer decides a sequence of colours (in its mind!)
  • The player puts the pegs in the holes and tries to arrive at the same sequence.
  • The computer will indicate on every entry how many of the colours the player missed and how many positions were incorrect.
  • You can choose from 3 difficulty levels:
    • Easy: 4 colours, 4 positions (no chance to choose a wrong colour), 10 attempts.
    • Medium: 5 colours, 4 positions, 10 attempts.
    • Hard: 6 colours, 4 positions, 10 attempts.
  • You can also select whether you want numbers displayed on the pegs along with the colours, to identify them.

How to play:

  • Once you choose the options, you will see the game board.
  • Drag and drop the coloured pegs from the right-hand-side floating bar onto the central part of the board with 4 holes (not really holes, but, you get it, right?)
  • You can rearrange the colour sequence.
  • Once you are satisfied, press the submit button.
  • The computer will match the code you entered with the one it has in mind and will answer in a colour code.
  • On the left part, where you see the holes arranged in two columns:
    • If a position turns orange, there is a colour in the wrong position.
    • If a position turns black, there is a wrong colour.
    • By counting the number of black squares, you know how many colours were wrong (i.e. you selected a colour the computer did not; choose another).
    • By counting the number of orange squares, you know how many colours were in the wrong position (i.e. the colour is correct, but not in the correct position; rearrange).
  • Starting with a guess for the first attempt, it is all logic from there on… (A sketch of this feedback rule follows this list.)
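
For the curious, the feedback rule above boils down to a small routine. The game itself is JavaScript; this Java sketch (with made-up names) only shows the logic:

// Returns {black, orange}: black = wrong colours, orange = right colour in the wrong hole.
static int[] feedback(int[] secret, int[] guess) {
    int black = 0, orange = 0;
    boolean[] used = new boolean[secret.length];
    // First pass: pegs in the correct hole consume their slot and earn no square.
    for (int i = 0; i < guess.length; i++) {
        if (guess[i] == secret[i]) used[i] = true;
    }
    // Second pass: the rest are either misplaced (orange) or plain wrong (black).
    for (int i = 0; i < guess.length; i++) {
        if (guess[i] == secret[i]) continue;
        boolean found = false;
        for (int j = 0; j < secret.length; j++) {
            if (!used[j] && secret[j] == guess[i]) { used[j] = true; found = true; break; }
        }
        if (found) orange++; else black++;
    }
    return new int[]{black, orange};
}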
Known issues:
  • The floating bar has a little glitch; it floats a little weirdly sometimes.

Future enhancements

  • Forgot to mention: I have not tested the code on Internet Explorer at all. I will do the testing and required fixing once I find a Windows machine. Till then, please use Firefox or Chrome.
  • I plan to make it more customisable: allow for choosing colours, columns and attempts manually.
  • Make them circles.
  • Remove the hiding plate above to reveal the code in the computer's mind.
  • What if computer never told you the colour was wrong..? Thinking… 😀

Well, a note:
I know it's all JavaScript, and all the geeks will simply turn on their Firebug to check what the colour sequence is. But please don't, otherwise there is no fun! And if you just cannot convince your conscience, let me tell you I have taken care not to spoil the fun for you. The colour code the computer remembers is actually a salted, hashed number and not the simple colour name you might make a guess at. The salt is randomly generated on every new game. Just to help you take a firm stand against your disobedient mind! (Oh I know, it's still not difficult to crack, but worth the effort! 🙂)

So what are you waiting for? Hit the ‘Start the game’ button!

If you find more glitches and bugs, please note them in the comments. And feel free to tell me you enjoyed it! 😀

Just another object-oriented approach to jQuery plugins

It has been almost a year since I started working primarily in JavaScript. During this time I have written three jQuery plugins and loads of other scripts. This is the story of how my approach to writing jQuery plugins has evolved.

I was working on my first plugin, which was supposed to be a large one (in LOC), and went through the plugin authoring guide on the jQuery site. In the beginning it was great: a few exposed methods, well-organized code, private functions, everything looked pretty. Soon the code reached some 1000 lines and it started becoming messy for me. To clarify, I am basically a Java developer. I accept that coding practices will obviously differ in every language, but for me, an object-oriented approach to code seems much more understandable and tidier than the 'functions everywhere' thing!

I began searching for object-oriented approaches people take to writing jQuery plugins. There are many, but mostly at the cost of some other flexibility. Some approaches allow only one public method, claiming that using only one namespace is a must, but more control was needed in the calendar plugin I was working on. Some allow complete public access to the options object, but additional control was needed: there were one-time calculations based on options that were necessary for the required functionality. Making the options object public won't give me that control, will it? Apart from that, I could not understand the requirement of making it fully public. Pardon me, this statement is not to question those who follow these approaches, but this is what I thought.

A better approach was needed, where all the flexibility of the 'functions everywhere' style is retained and a little more organization is achieved in the code. So there emerged a merger. A merger that:

  • Allows multiple methods to be made available.
  • Claims only one namespace.
  • Does not make options simply public.
  • Keeps a context, maintains a state.
  • And follows every other requirement mentioned as a guideline while writing plugins by the jQuery authors.
That merger has now evolved into a simple, precise plugin template! All you need is a case-sensitive replace-all and you are ready with a working, organized plugin, set with the basic features and ready for more…
Here’s the code:
/**
 * Plugin comments
 */
(function($, undefined){
    var MyPlugin = function(element, options){

        /*
         * *************************** Variables ***************************
         */
        var defaults = {
            defaultValue : '2'
        }; // default options

        /*
         * *************************** Plugin Functions ***************************
         */

        /*
         * Initializes plugin.
         */
        function initialize(options){
            extendOptions(options);
            sl.log("Got Options- initialize: ");
            sl.log(options);
        }

        /*
         * Updates plugin.
         */
        function update(options){
            sl.log("Got Options- update: ");
            sl.log(options);
        }

        /*
         * Destroys plugin changes.
         */
        function destroy(options){
            // Remove all added classes.
            // Remove all bound methods.

            // Remove plugin data
            element.removeData('myplugin');
        }

        /*
         * Updates plugin options after plugin has been initialized.
         */
        function setOptions(options){
            extendOptions(options);
        }

        // expose plugin functions
        this.initialize = initialize;
        this.update = update;
        this.destroy = destroy;
        this.setOptions = setOptions;

        /*
         * *************************** Utility Methods ***************************
         */
        /*
         * Extend the default options using the passed options.
         */
        function extendOptions(options){
            if (options) {
                $.extend(true, defaults, options);
            }
        }
    };

    var mP = $.myPlugin = {version: "0.01"};
    $.fn.myPlugin = function(options){
        var args = arguments; // full argument array passed to the plugin.

        // Available methods in plugin
        var pMethods = {
            init : function(options){
                // Do nothing if the plugin data already exists
                if (this.data('myplugin')) return;
                // Initialize the plugin
                var myplugin = new MyPlugin(this, options);
                // Add plugin data to the element
                this.data('myplugin', myplugin);
                myplugin.initialize(options);
            },
            update : function(options){
                // Get the plugin data
                var myplugin = this.data('myplugin');
                if (!myplugin) return; // do nothing if plugin is not instantiated.

                myplugin.update(options);
            },
            destroy : function(options){
                // Get the plugin data
                var myplugin = this.data('myplugin');
                if (!myplugin) return; // do nothing if plugin is not instantiated.

                // destroy data and revert all plugin changes.
                myplugin.destroy(options);
            },
            setOptions : function(options){
                // Get the plugin data
                var myplugin = this.data('myplugin');
                if (!myplugin) return; // do nothing if plugin is not instantiated.

                // Update the plugin options
                myplugin.setOptions(options);
            }
        };

        // For each element, check and invoke the appropriate method, passing the options object
        return this.each(function(i, tElement){
            var element = $(tElement);

            if (pMethods[options]){
                pMethods[options].call(element, args[1]);
            } else if (typeof options === 'object' || !options){
                pMethods['init'].call(element, args[0]);
            } else {
                $.error('Method ' + options + ' does not exist in jQuery.myplugin');
            }
        });
    };
})(jQuery);
Now what you need to get going is the replace-all; this is what you replace:

MyPlugin : PluginName
myPlugin : Plugin jQuery Method Name/pluginName
myplugin : Data Name/Variable Name
mP       : pN
pMethods : pluginNameMethods
defaults : defaultsObjectName

That’s it!
You are ready with a working plugin!
Oh yes, that sl there in the code is actually the SmartLogger. Read about it here.
This is a quick post and I plan to update it with more explanation of the code, so do visit again!

Let me know how you find it in the comments.

Habit-Firebug Saver: SmartLogger

How many web developers do not depend on Firebug or Chrome's console… Just wondering…

BTW, it's plain fun to work with Firebug; it makes life a lot easier. It's a different matter altogether that the other browser that you have to develop for does not have a powerful enough tool. (Name deliberately avoided to avoid the imminent flame-war!) Yes, the current versions have quite powerful debug and development tools, but (hopefully few) developers working on products still have to consider some 10-year-old versions (namely 6, 6.5 and 7). Ah, the pain… Anyway, we are not discussing that.

What we are talking about are the issues we face when testing our changes to a thousand-line JavaScript codebase on multiple browsers, especially once we are accustomed to the ease of Firebug. 🙂

I used to spend much of my time commenting out my console.log() statements before I could dare to open the page in IE. Well, fear not, those days have passed! The pain drove me to write a logger object that can not only sense the presence of the console object but do much more than that, like the ability to assert, selective logging and more…

I call it the SmartLogger.

// Global Logger Object, for use during development; a configurable logger.
var SmartLogger = function(options) {

    var sl = {}; // Logger Object

    // Accepting passed params.
    options = options || {};
    sl.enableLogger = options.enableLogger !== undefined ? options.enableLogger : true;
    sl.enableAssert = options.enableAssert !== undefined ? options.enableAssert : true;
    sl.loggerOutput = options.loggerOutput !== undefined ? options.loggerOutput : undefined; // 'console', 'alert', undefined
    sl.selectiveEnable = options.selectiveEnable !== undefined ? options.selectiveEnable : '';
    sl.selectiveDisable = options.selectiveDisable !== undefined ? options.selectiveDisable : '';

    // Logger properties
    sl.name = "SmartLogger";
    sl.whoami = function(){ return "SmartLogger_" + sl.enableLogger + "_" + sl.enableAssert + "_" + sl.loggerOutput + "_" + sl.selectiveEnable + "_" + sl.selectiveDisable; };
    sl.version = '0.7';

    // Checks if console object is defined. Checked only at the time of instantiation.
    var hasConsole = (typeof console === "object");

    // Checks if logging should be done to console.
    function logToConsole(){
        if (sl.loggerOutput){
            if (sl.loggerOutput === 'console') return true;
        } else {
            if (hasConsole) return true;
        }
        return false;
    }

    // Handles the logging intelligence
    function handleLogging(logMethod, logString, strId){
        if (!sLog(strId)) { return; }
        // Decides whether to log, and logs or alerts appropriately.
        if (sl.enableLogger){
            if (logToConsole()){
                if (hasConsole) console[logMethod](logString);
            } else {
                alert(logString);
            }
        }
    }

    // Handles the selective logging functionality
    function sLog(strId){
        var allowLog = true;
        if (sl.selectiveEnable) {
            allowLog = strId === sl.selectiveEnable;
        } else if (sl.selectiveDisable) {
            allowLog = !(strId === sl.selectiveDisable);
        }

        return allowLog;
    }

    // Returns a formatted object structure with current values to complete depth.
    function printString(obj, name, str, strEnd){
        var stringified;
        name = name ? name : "Object";
        str = str ? str : "";
        strEnd = strEnd ? strEnd : "";
        stringified = str + name + " : {\n";
        for (var a in obj){
            if (typeof obj[a] === 'object'){
                stringified += printString(obj[a], a, "\t", ",");
            } else {
                stringified += str + "\t" + a + " : " + obj[a] + ",\n";
            }
        }
        stringified += str + "}" + strEnd + "\n";
        return stringified;
    }

    // Exposed methods of the object
    // log a string to console/alert
    sl.log = function(str, strId){
        handleLogging('log', str, strId);
    };

    // debug-log a string to console/alert
    sl.debug = function(str, strId){
        handleLogging('debug', str, strId);
    };

    // write an information string to console/alert
    sl.info = function(str, strId){
        handleLogging('info', str, strId);
    };

    // throw an error string to console/alert
    sl.error = function(str, strId){
        handleLogging('error', str, strId);
    };

    // Assert an assumption
    sl.assert = function(str, strId){
        if (sl.enableAssert){
            if (!str){
                handleLogging('error', 'Assumption failed!', strId);
                debugger;
            } else {
                handleLogging('log', 'Assumption: true', strId);
            }
        }
    };

    // Logs the formatted object structure with current values to console/alert
    sl.stringToConsole = function(obj, str){
        sl.log(printString(obj, str));
    };

    return sl;
};

var sl = new SmartLogger();

Features:

  • Multiple logging profiles can be maintained at the same time with different properties.
var sl = new SmartLogger();
var sl2 = new SmartLogger({selectiveEnable: 'block1'});
  • Properties can be set at the time of instantiation or even later.
var sl = new SmartLogger();
sl.loggerOutput = 'console';
var sl2 = new SmartLogger({loggerOutput: 'console'});
  • name, version number and whoami to identify the logger with a string of its current properties.
var sl = new SmartLogger();
sl.name // SmartLogger.
sl.version // 0.7
sl.whoami() // Returns a string of its properties with the name of the object in a specific sequence:
// "SmartLogger_" + enableLogger + "_" + enableAssert + "_" + loggerOutput + "_" + selectiveEnable + "_" + selectiveDisable;
// Example: SmartLogger_true_true_console__b
// We will see what these properties are in some time..
  • Enable or disable logging altogether: enableLogger controls if the statements should ever be logged.
var sl = new SmartLogger();
sl.log('gets logged');
sl.enableLogger = false;
sl.log('never gets logged');
  • Intelligently decides where the logging statements should go…
sl.loggerOutput = undefined; //default
/* Decides based on presence of 'console' object.
If console is present statements will be logged to console,
else like in case of IE, will be 'alerted' to the user.
Now at times this can get messy, with loads of log statements alerting on our face..
But wait, we have ways to handle that.*/

sl.loggerOutput = 'console';
// Plain instruction, no intelligence, all statements will always go to console.
// If console is not present statements will just be eaten-up.

sl.loggerOutput = 'alert';
// Another plain instruction, all statements will always be alerted.
// Will not bother to check if console exists or not.
  • Log formatted objects to the console. Now you won't need that much with Firebug, but to see the entire contents of an object, well formatted, you can just say stringToConsole.
// Just a sample object with unknown properties.
var obj = {prop1: 'value',functProp:function(){return "this is a function that returns me!";}, propObj:{prop2:'value2'}};
sl.stringToConsole(obj); // You say this.

// On console or in the alert prompt, you get this
Object : {
	prop1 : value,
	functProp : function () {
		return "this is a function that returns me!";
	},
	propObj : {
		prop2 : value2,
	},
}
  • Assert your assumptions. It checks that the assumption is true; if yes, it logs so. If the assumption fails, it will write out an error on the console and invoke the debugger, so you can check in the stack exactly where the assumption failed and why.
sl.assert(Obj.str === Obj2.str);
sl.assert(1==1); // logs 'Assumption: true' to console.
sl.assert(1==2); // logs the error 'Assumption failed!' and invokes the debugger at the assert line in SmartLogger.

// Now you can go and check the stack and watch panels to see the values and the call stack.
  • Has a wrapper for 4 of the logging APIs from Firebug, and adding new ones is not much of a task. What it already has:
    • log
    • debug
    • info
    • error
  • Has the ability of selective logging.
Now this thing is a life saver. The properties selectiveEnable and selectiveDisable control which statements to log. While these are not mandatory inputs to the wrappers, I suggest you always set them. They are logging contexts that can be used to selectively enable logs for only part of the code, the code that currently interests you…
// Suppose we were working on a defect number 101 and now we are developing a functionality for
// automating welcome messages to users and are asked to urgently fix defect 203.
// Ah, complex scenario, but it will only help understand the purpose.

// When we were working on defect 101, we had logging statements like:
sl.log("reached here"); // worst way to log: who knows wheres here! but just an example.

// Now we are working on the functionality and
// we would not want those 10 logging statements added while we were working on the defect.
// We can remove them or simply enable the 'selective logger'!
sl.selectiveEnable = 'welcomer';
sl.log("fetching message", "welcomer");
// And voila, only the 'welcomer' messages will be logged.

// Now we get the next urgent defect.
sl.selectiveEnable = 'defect203';
sl.log("value in Obj1.str"+Obje1.str, "defect203");
// We get only the defect203 logs!

// Now some of our new changes depend on the changes we made in defect101, but we can't get the logs from those...
// What do we do? If someone did not enable the selective logger and removed the statements, please add them back (:p),
// and remove the statements for the 'welcomer' functionality. Or simply disable the 'welcomer' messages..!
sl.selectiveEnable = '';
sl.selectiveDisable = 'welcomer';
sl.log("value in Obj1.str: " + Obj1.str, "defect203");
sl.log("value in Obj2.varInt1: " + Obj2.varInt1, "defect101");
// Ha ha! Log statements for 'welcomer' gone and we get the rest!
While using the SmartLogger, I suggest you always pass the string identifier, so that you can control the logs at any point later.

What you can expect next in SmartLogger:

  • Use assert from firebug itself.
  • Check that the function exists in the logger before calling it.
  • Make the selective logger take arrays.
  • In stringToConsole, handle functions too, to remove that glitch with the missing closing bracket.

Let me know in the comments what you think about the SmartLogger, whether you would like any additions to its behaviour, and if you find any defects.