Saturday, October 19, 2013

Creating a simple TCP server in Java and using it from Python – to use the Stanford POS tagger

As advanced network protocols flood the market, we tend to forget how easy it is to create a simple server and client when we want to integrate two different platforms – Java and Python in this case. The standard TCP socket library is available on practically every platform, so connecting to a simple TCP server, or actually creating one, takes just a couple of lines even in C. If we need to, we can build more complex protocols and use more complex document formats, but it's important to remember that we don't need them all the time.

POS tagging

For one of my projects, I needed to use the Stanford POS tagger to parse a large text corpus. Even though there are Python POS taggers, my favourite one is by far the Java-based Stanford implementation. Usually I use it directly from Java, but in this case the input file was a bit tricky to parse and Python handled it very well, so I wanted to do just the POS tagging in Java and everything else in Python.

My first thought was to create a file-based interaction between them, but that wasn't as responsive as I wanted it to be – this type of batch processing wasn't too appealing. Then for a moment I considered more advanced techniques like Apache Thrift or Google Protobuf, but why would I need them if all I want is to send a sentence over the wire and receive the POS-tagged version of it?

Creating the Java server

The server part seemed to be the trickiest one, as once in a while I received exceptions I wasn't really foreseeing. As I didn't really care about those, putting the server routine in a simple try-catch solved all the issues.
This is the simplified version of the server code:

// Only listen on localhost, no remote connection allowed.
ServerSocket serverSocket =
    new ServerSocket(10007, 0, InetAddress.getByName("localhost"));

while (true) { // We never really terminate it. CTRL+C is enough.
    try {
        System.out.println("Waiting for connection.....");
        Socket clientSocket = serverSocket.accept();
        System.out.println("Waiting for input.....");

        // Create IO streams.
        PrintWriter out = new PrintWriter(clientSocket.getOutputStream(), true);
        BufferedReader in =
            new BufferedReader(new InputStreamReader(clientSocket.getInputStream()));

        String inputLine;
        while ((inputLine = in.readLine()) != null) {
            String tagged = tagger.tagString(inputLine); // POS tag the input.
            out.println(tagged); // Send back the tagged output.
        }

        out.close(); // The client decided to close the connection, cleanup.
    } catch (Exception ex) {
        System.out.println("Client has disconnected.");
    }
}
// serverSocket.close(); // We never reach this.

There are a couple of interesting things I've found. One is that the Java server gets upset and throws an exception if the client decides to disappear instead of cleanly closing the socket, so a try-catch was required. The other thing is that both reading and writing have to be buffered, otherwise the performance would be really poor. The PrintWriter is buffered, so it writes the whole line to the wire at once instead of byte-by-byte.

In my case the input and output was standard English text, but for Unicode some kind of encoding might be required.
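Since the wire carries bytes, a minimal sketch of what that encoding could look like on the Python side (the helper names are mine, not part of the original setup):

```python
# -*- coding: utf-8 -*-
# Hypothetical helpers: encode outgoing sentences as UTF-8 and decode
# the tagged reply, since the socket only carries bytes.

def encode_line(sentence):
    # Append the newline delimiter the server's readLine() expects.
    return (sentence + u"\n").encode("utf-8")

def decode_line(raw):
    return raw.decode("utf-8").rstrip(u"\n")
```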

Connecting from Python

The Python client was unexpectedly easy; not counting the import, it was three lines altogether:

import telnetlib

HOST = "localhost"
PORT = 10007
tn = telnetlib.Telnet(HOST, PORT)

tn.write(sentence + "\n")
response = tn.read_until("\n")
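For anything beyond a quick experiment, the same client can also be written with the plain socket module and wrapped into a helper. This is just a sketch, and tag_sentence is my own name, not part of the original script:

```python
import socket

def tag_sentence(sentence, host="localhost", port=10007):
    # Connect, send one newline-terminated sentence, read the tagged reply.
    s = socket.create_connection((host, port))
    try:
        s.sendall((sentence + "\n").encode("utf-8"))
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
            if data.endswith(b"\n"):  # the server replies line by line
                break
        return b"".join(chunks).decode("utf-8").strip()
    finally:
        s.close()
```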

Performance and issues

The text corpus I was parsing was quite big (10+ GB), so I quickly realised it wouldn't be extremely fast to parse all the text. I was using an old Linux server (512 MB RAM, Core2Duo) to parse the data overnight. The total process took around 20 hours, with an average of ~100 sentences tagged a second.

The only issue I was facing was that the Stanford POS tagger once in a while ran out of heap memory, so I had to increase the initial heap size – 500 MB seemed to be enough (-mx500m).
Otherwise the process was surprisingly stable and performed well even on that really old machine. The POS tagger did not leak memory at all, so I wouldn't hesitate to run the same setup again next time.

Wednesday, September 4, 2013

Drawing a circle and calculating the sine function without trigonometry, powers, or roots

The formula of a circle is quite straightforward (r = sqrt(x^2 + y^2)), but it's not trivial how to draw a circle or calculate the trigonometric functions without advanced math. However, an interesting finding from 1972 makes it really easy.

Minsky discovered (by mistake) that the following loop will draw an almost perfect circle on the screen:

    x = x - epsilon * y
    y = y + epsilon * x # NOTE: the x is the new x value here

It turns out that epsilon in the equation is practically the rotation angle in radians, so the above loop will gradually rotate the x and y coordinates around a circle.

Calculating sine

If we can draw this circle, we can easily estimate sine values: the current angle is basically the sum of the epsilons so far, and for it we have a height (y), which just needs to be divided by the radius to get the actual sine of the angle.

The smaller the steps (epsilon) are, the more accurate the formula will be. However, because it's not a perfect circle, it can never be a very accurate estimation. If epsilon is large, the algorithm will draw a visible ellipse instead of a circle (slightly tilted left).
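The whole estimation can be sketched in a few lines of Python (my own toy version, not the demo's code): epsilon is the per-step rotation, so after k steps the angle is roughly k*epsilon and the sine is y divided by the radius.

```python
import math

def minsky_sin(angle, epsilon=0.001, radius=1.0):
    # Rotate (x, y) by epsilon per step until we reach the target angle.
    x, y = radius, 0.0
    for _ in range(int(angle / epsilon)):
        x = x - epsilon * y
        y = y + epsilon * x  # note: uses the freshly updated x
    return y / radius  # normalise the height to get the sine

approx = minsky_sin(math.pi / 2)  # close to sin(90 degrees) = 1
```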

The code in JavaScript

I've used a simple canvas object to draw on, so the core logic of the drawing looks like this:

for (var i = 0; i < steps; i++) {
    // Note: in this code epsilon is the reciprocal of the rotation angle,
    // so dividing by it equals multiplying by the angle above.
    x -= y / epsilon;
    y += x / epsilon; // Uses the freshly updated x value.

    // Draws circle.
    context.fillRect(middleX + x, middleY + y, 1, 1);

    var deg = i / 2 / epsilon / Math.PI * 360; // Angle in degrees.
    var rad = i / epsilon;                     // Angle in radians.
    var sinVal = y / 100;                      // Normalising by the radius (100).

    // Draws sine wave.
    context.fillRect(sinX + deg, sinY + y, 1, 1);

    if (i % parseInt(steps / sinMarkerCount) == 0) {
        var text = "Sin(" + parseInt(deg) + ")=" + sinVal.toFixed(2);
        // Draws a couple of sine values on the sine wave.
        context.fillText(text, sinX + deg, sinY + y);
    }
}

And the result is this:

To grab the full source, go here. Or have a look at the live demo here

Monday, September 2, 2013

Dutch national flag problem - performance in Python and C (two pivot partition, three way partition)

One of the typical interview questions is the three way partitioning, also known as the Dutch national flag problem: given an array with three different values, sort it in a way that all values are grouped together (like a three colored flag) in linear time without extra memory. The problem was first described by Edsger Dijkstra.

As it's a typical sorting problem, any sorting algorithm would do, but they would need n*log(n) time. There are two good linear-time solutions: do a counting sort, which requires two passes over the array; or place the elements in their correct locations using three pointers.

The latter uses a reader, a lower, and an upper pointer to do the sorting. The reader and lower pointers start at one end of the array, the upper pointer starts at the other end. For each value at the reader, the algorithm goes like this:

- If it's a small value, swap it with the lower pointer, then step both the lower and the reader pointer one up
- If it's a middle value, do nothing with it and step the reader pointer one up
- If it's a larger value, swap it with the upper pointer and step the upper pointer one down (the reader stays, because the swapped-in value hasn't been examined yet)

After the reader passes the upper pointer, we are done: all smaller values have been swapped to the lower end of the array and all larger values pushed to the upper end, so the middle values necessarily occupy the middle of the array.
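The three steps above can be sketched in Python for the plain flag case (values 0, 1, 2); this is my own illustration, not code from the post:

```python
def dutch_flag_sort(arr):
    reader, lower, upper = 0, 0, len(arr) - 1
    while reader <= upper:
        if arr[reader] == 0:                       # small value
            arr[reader], arr[lower] = arr[lower], arr[reader]
            lower += 1
            reader += 1
        elif arr[reader] == 2:                     # large value
            arr[reader], arr[upper] = arr[upper], arr[reader]
            upper -= 1                             # swapped-in value is unexamined
        else:                                      # middle value
            reader += 1
    return arr
```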

Two pivot partitioning

Java 7 introduced an idea into its traditional quicksort implementation: it uses the above method to create three partitions instead of two. The new partitioning phase is still linear time but provides three subproblems to solve instead of the two of the traditional quicksort. In practice this little modification is usually faster than the two-partition quicksort.

When we don't have predefined values like in the flag problem above, we can simply take two random values (a, b, with a <= b) from the array and partition it like this:

  < a  |  a..b  |  > b

I've created two simple implementations in Python and C to compare them:

Python implementation:

def partition(array):
  if array is None or len(array) == 0:
    raise TypeError

  small = array[0]
  large = array[len(array) - 1]

  if small > large:
    small, large = large, small

  wr = len(array) - 1
  wl = 0
  reader = 0

  while reader <= wr:
    v = array[reader]

    if v < small:
      array[wl], array[reader] = v, array[wl]
      wl += 1
      reader += 1
    elif v > large:
      array[wr], array[reader] = v, array[wr]
      wr -= 1
    else:
      reader += 1

  return (wl, wr)
C implementation:

int partition(int array[], int n, int* left, int* right) {
  if (!array || !n)
    return 0;

  int small = array[0];
  int large = array[n-1];
  if (small > large) {
    int tmp = large;
    large = small;
    small = tmp;
  }

  int wr = n - 1;
  int wl = 0;

  for (int reader = 0; reader <= wr; ) {
    int v = array[reader];

    if (v < small) {
      array[reader] = array[wl];
      array[wl] = v;
      wl++;
      reader++;
    } else if (v > large) {
      array[reader] = array[wr];
      array[wr] = v;
      wr--;
    } else {
      reader++;
    }
  }

  *left = wl;
  *right = wr;
  return 1;
}
Runner code for Python:
start = time.time()

for j in range(1000 * 1000):
  arr = [0] * 10
  for i in range(10):
    # much slower, ~20 sec        
    # arr[i] = random.randint(0,10)
    arr[i] = int(random.random()*10) 
  l, r = partition(arr)

end = time.time()
Runner code for C:

int n = 10;
int arr[n];

int start = clock();
for (int j = 0; j < 1000 * 1000; j++) {
    for (int i = 0; i < n; i++)
        arr[i] = rand() % n;
    int l, r;
    partition(arr, n, &l, &r);
}
int end = clock();
cout << ((float)end - start) / CLOCKS_PER_SEC;
// Compiled with
// gcc -lstdc++ -std=c++11

Both implementations are readable, but Python seems much less "noisy" and easier to read, so I really wanted it to be as good and fast as the C implementation. To my disappointment, the Python implementation is about 50 times slower than its C counterpart (and even slower if randint() is used instead of random()). This is a bit surprising, as only very basic array arithmetic is involved, so the slowdown feels somewhat unjustified.

Performance of the Python and C implementation of the two pivot partition, ran 10^6 times on a random integer array with 10 elements.

Sunday, August 18, 2013

Introsort algorithm in .NET 4.5 – internal implementation details

Even though introspective sort was invented in 1997 (by David Musser) and was adopted by many frameworks not too long after, Microsoft decided to add it to the .NET platform only in 2012. However, it's worth having a look at the implementation – it's quite smartly written.


In practice the fastest sorting algorithm is quicksort, but it's got a couple of edge cases where it performs really poorly – in O(n^2) time instead of the typical O(n*log(n)) runtime. Even though a smart pivot selection makes it practically impossible to hit the O(n^2) runtime by accident, “evil” input can still force the sorting into it, making the running time much longer than expected.

Musser had two ideas to speed up the typical quicksort algorithm:

  • If the algorithm on any sub-array runs longer than expected (recursion is deeper than log(array size)) then let’s switch to heapsort which is guaranteed to finish in O(n*log(n)) time
  • If the number of elements is small enough don’t use quicksort, simply do an insertion sort which is faster on small data sets 

These two optimizations allow introsort to outperform the “dumb” quicksort implementation on most arrays (even ~200 times on artificial “evil” arrays consisting of 100,000 elements).

Microsoft implementation

Interestingly, Microsoft decided not to use the fully optimized introsort in every case: the framework checks the current runtime version, which has to be at least .NET 4.5 desktop (full). This means that for some reason on mobile devices and older platforms the framework will use a “limited depth quicksort” but will not optimize the sorting of small arrays. My guess is that Microsoft wanted to show a huge performance gain with 4.5, but from an engineering perspective it’s a bit hard to sell.

Let’s have a look at the pseudo code of the fully optimized implementation:

        if (length <= 16)
            if (length == 1)
                already sorted, return
            if (length == 2)
                SwapIfGreater left, right then return
            if (length == 3)
                SwapIfGreater left, middle
                SwapIfGreater left, right
                SwapIfGreater middle, right
                then return
            InsertionSort subarray, return
        if (depthLimit == 0)
            HeapSort subarray, return
        depthLimit = depthLimit - 1
        pivot = QuickSortPartition(subarray)
        recurse IntroSort(right of pivot)
        Set subarray to left of pivot and start again

There are a couple of interesting parts to look at. The first is the way the algorithm sorts the 2- and 3-element arrays. The 2-element case is pretty trivial (swap if not in order), but the 3-element case is a bit more hidden. The first line makes sure that [0] <= [1] holds. The second makes sure [0] <= [2] is true, while the third makes sure [1] <= [2] holds as well, ultimately sorting the array.
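That three-comparison network is easy to verify exhaustively; a quick Python sketch (mirroring the pseudo code, not the actual .NET source):

```python
from itertools import permutations

def swap_if_greater(arr, i, j):
    if arr[i] > arr[j]:
        arr[i], arr[j] = arr[j], arr[i]

def sort3(arr):
    # left-middle, left-right, middle-right: same order as the pseudo code.
    swap_if_greater(arr, 0, 1)
    swap_if_greater(arr, 0, 2)
    swap_if_greater(arr, 1, 2)
    return arr

all_sorted = all(sort3(list(p)) == sorted(p) for p in permutations([1, 2, 3]))
```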

The other key thing is the way the recursion is implemented. Unlike the textbook implementations, which use recurse(left), recurse(right), this one uses a single recursion. The reason is that instead of recursing into the left side, it lets the upper part of the code run on it again, which also makes sure the sub-array is still larger than 16 elements. Please note that this is not a tail recursion, because there is still executable code after the recursive call, so the compiler cannot optimize it.


Finally, it’s really worth having a look at the heapsort implementation, as it’s beautifully written:

    HeapSort:
        for middle down to 1
            DownHeap element

        for end down to 1
            Swap first element with current
            DownHeap first element

    DownHeap(element at elementPos):
        while elementPos has children
            int childPos = elementPos * 2
            if arr[childPos + 1] > arr[childPos]
                childPos = childPos + 1
            if arr[childPos] <= element
                break
            arr[elementPos] = arr[childPos]
            elementPos = childPos
        arr[elementPos] = element

The DownHeap is a standard heap-correcting implementation, pushing the newly inserted item down as long as one of its children is larger than itself – as a side effect, it builds a max heap, meaning the first element in the array will be the largest element and the children of any element are always smaller than or equal to it ([0] >= [1], [0] >= [2], but there is no known relation between [1] and [2]).

The HeapSort implementation itself is very concise: the first loop, from the middle down to the first element, heapifies the array, meaning the array will store a max heap where the two children of an element can be found at positions arr[pos*2] and arr[pos*2+1], with the properties described above.

The second loop relies on this property: it keeps looping from the end, swapping the current item with the first one in the array. As we know it's a max heap, arr[0] will always be the largest element. Whatever is put into arr[0] from the last position will be shifted down to its proper position by the DownHeap call, maintaining a max heap all the way through.

It means that the arr[0..pos] subarray will always be a max heap while arr[pos+1..end] will be a sorted subarray containing the largest elements.
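The same two loops translate almost directly to Python (my own sketch; note it uses 0-based indexing, so the children sit at 2*i+1 and 2*i+2 instead of the 1-based 2*i and 2*i+1 above):

```python
def down_heap(arr, pos, end):
    # Sift arr[pos] down until neither child is larger (max heap property).
    val = arr[pos]
    while True:
        child = 2 * pos + 1
        if child > end:
            break
        if child + 1 <= end and arr[child + 1] > arr[child]:
            child += 1                  # pick the larger child
        if arr[child] <= val:
            break
        arr[pos] = arr[child]           # pull the larger child up
        pos = child
    arr[pos] = val

def heap_sort(arr):
    n = len(arr)
    for i in range(n // 2 - 1, -1, -1):      # heapify: middle down to root
        down_heap(arr, i, n - 1)
    for end in range(n - 1, 0, -1):          # move max to the end, re-heap
        arr[0], arr[end] = arr[end], arr[0]
        down_heap(arr, 0, end - 1)
    return arr
```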


Since 2012 Microsoft has slightly changed the default sorting algorithm on the .NET platform, which might give better performance even on average arrays. However, the biggest change concerns artificially crafted “evil” arrays that previously could have caused O(n^2) running time: the new implementation is guaranteed to finish in O(n*log(n)) time. To have a look at the actual implementation, you can use ILSpy.

Thursday, August 15, 2013

Async-Await in .NET is not for performance – it’s for readability

While I see LINQ on the .NET platform as one of the best language features, if not the best, I think the async-await idea is more smoke and mirrors than anything else. Actually, it's even worse: as the underlying operations are not trivial, in some cases we are better off with linear code.

Idea based on desktop apps

The whole async-await idea mostly comes from the cumbersomeness of desktop GUI applications and UI threads: only a single thread can modify the UI, so when a long running operation finishes, we need to ask the UI thread to display the result for us – the background operation itself cannot update the UI. It's not a language problem; it's a Windows/framework problem. However, Microsoft decided to “solve” it with a language feature, because:

  • Concurrent programming is a hot topic and Microsoft wanted to “be there”
  • C# language evolves fast so adding a new feature is quite easy

While the latter is a great thing (hey Java, can you hear me?), adding a language feature because of marketing reasons is just plain wrong.

Myth #1: Await will make the code run faster

Async-await is a unique language feature because it's implemented more as a code generator than anything else. Microsoft decided to support the async-await keywords with generated classes and a huge, somewhat undocumented framework that just “solves your problems”. The underlying implementation is so hidden that we tend to believe it will just make the code faster somehow.

It will not. It will reorganize your code so you don't have to write callbacks to complete the rest of the function when a long running operation completes. The rewriting of your linear code to emulate those callbacks is massive – it's worth having a look at it using ILSpy. Anyhow, the Task<> that is created by an async function was there before, nothing new about it: it will be scheduled to run on the ThreadPool, which, again, has been there since 1.0. However, in certain cases (like awaiting in a loop) the execution will be essentially sequential. No parallelism, no speedup.
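The same pitfall exists in Python's asyncio and is easy to demonstrate (a toy sketch, not .NET code): awaiting tasks one by one in a loop serializes them, while gathering them first lets them overlap.

```python
import asyncio
import time

async def work():
    await asyncio.sleep(0.1)  # stand-in for a long running operation

async def awaited_in_loop(n):
    for _ in range(n):
        await work()          # each await waits for the previous one

async def gathered(n):
    await asyncio.gather(*[work() for _ in range(n)])

def timed(coro):
    start = time.time()
    asyncio.run(coro)
    return time.time() - start

sequential_secs = timed(awaited_in_loop(5))  # roughly 5 * 0.1s
concurrent_secs = timed(gathered(5))         # roughly 0.1s
```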

On the other hand, while it can be really helpful in a desktop application, it makes little or no sense in a server environment – like ASP.NET. Unfortunately VS2013 will generate sample ASP.NET projects with async-await all over. It is wrong! Why?

  • Servers are optimized to handle lot of parallel requests – they will manage the threads for you, no need for extra "optimizations"
  • A single request should never take too long – use queues if that’s the case and refresh the client using Javascript instead

Unfortunately Microsoft decided to return Task<> in the new ASP.NET API calls, forcing us to either call .Wait() or to use async-await. As a typical HTTP request takes around 100-300 ms, the whole idea of asynchronously serving requests that the user is synchronously waiting on is somewhat flawed. Remember, the user is actually waiting for your HTTP response.

Myth #2: Creating a thread is expensive

The biggest argument against creating a new thread for your long running task is that thread creation is expensive. But how expensive exactly? I did two quick measurements:

  • Start 1000 threads and wait (Thread.Sleep) in all for 1 second – all threads completed in 1136ms, meaning that the overhead for creating, starting, and stopping a single thread was around 0.13ms
  • Start 1000 threads and do not do any work in them – all threads completed in 95ms, meaning that the overhead was 0.095ms per thread

Let’s say 0.1 ms per thread is the overhead; creating even 1000 threads is not too stressful for any modern machine – including my average spec laptop. Let’s say you create a brand new thread for every request (a bad idea) and a request takes 100 ms to complete. That’s 0.1% overhead. I wouldn’t really rush to optimize that away.
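The same back-of-envelope measurement is easy to reproduce in Python (a sketch; the absolute numbers will differ from the .NET ones, the point is the order of magnitude):

```python
import threading
import time

def thread_overhead(count=1000):
    # Start `count` no-op threads and measure total create/start/join time.
    start = time.time()
    threads = [threading.Thread(target=lambda: None) for _ in range(count)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return (time.time() - start) / count  # per-thread overhead in seconds

per_thread = thread_overhead(100)  # typically a fraction of a millisecond
```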

However, if you have more than 1000 requests per second (which is more than 86 million requests per day), you really should use some kind of queuing to limit the number of concurrent threads – but all server environments like ASP.NET do that for you, don’t worry.

Myth #3: Concurrent programming can be easy

In an average environment concurrent programming is pretty much unnecessary – there is no real need to squeeze every computing bit out of the CPU, so a standard sequential program will do just fine. For the rare cases when we need to update the UI of a desktop app or do a little background processing, async-await will be more than enough.

However, for any high performance computing projects forget about the features offered by any language or framework. You will need to understand work scheduling, synchronization, hardware bottlenecks, data layout patterns in memory, and in some cases even SIMD instructions and branch prediction. No IDE, programming language or framework will understand that for you. Even in concurrent languages like Google Go you have to understand synchronization quite well to efficiently use your resources.

As currently we operate with a few CPU cores on a large, common, shared memory, concurrent programming is far from trivial – my guess is that until we change our hardware architecture it always will be.


Out of curiosity I measured how long it would take to calculate the square root of a large amount of numbers (1000 times of 0, 1 million, 3 million numbers) divided into 1000 work units using different approaches (source codes below):

  • Single thread – do all calculations on a single thread
  • Lot of threads – try to start 1000 threads to do it (actually only around 8 run at a time because the CPU is too busy running the other calculations – a surprisingly good self-limiting side effect)
  • Lot of async – async done in a loop (which will essentially run sequentially)
  • Parallel.For – using the built in .NET parallel library
  • Task WhenAll – using the WhenAll() synchronizer and simply creating and running the tasks
  • ThreadPool – Using the default QueueUserWorkItem on the default thread pool

A couple of things to take away from the chart:

  • Parallel.For is slightly faster than other solutions
  • Misusing async-await will yield sequential performance, without any compiler warning
  • Creating a lot of threads to do the work is only slightly slower than the best performing solution (4% slower in this case).
So is async-await bad?

I think it is crucial for everyone to understand the internals of async-await before jumping in, because really surprising side effects can and will pop up – there are just too many questions about the weird behavior of async-await on Stack Overflow. However, the idea is very useful in situations where callback code would severely undermine readability, especially in desktop apps. Anywhere else? Not so sure.

Code snippets

private static void SingleThread()
{
    Stopwatch sw = Stopwatch.StartNew();
    for (int i = 0; i < n; i++)
        Sq();
    Console.WriteLine(n + " single thread finished in " + sw.ElapsedMilliseconds);
}

public static void ParallelForeach()
{
    var sw = Stopwatch.StartNew();

    Parallel.For(0, n, i => Sq());

    Console.WriteLine(n + " parallels foreach finished in " + sw.ElapsedMilliseconds);
}

public static async void LotofAsync()
{
    var sw = Stopwatch.StartNew();

    for (int i = 0; i < n; i++)
        await Task.Factory.StartNew(
        () => Sq()); // Awaited one by one: runs essentially sequentially.

    Console.WriteLine(n + " awaits started and finished in " + sw.ElapsedMilliseconds);
}

public static void TaskWhenAll()
{
    var sw = Stopwatch.StartNew();
    var tasks = new List<Task>();

    for (int i = 0; i < n; i++)
    {
        var t = Task.Factory.StartNew(
        () => Sq());
        tasks.Add(t);
    }

    Task.WhenAll(tasks).Wait();

    Console.WriteLine(n + " tasks WhenAll finished in " + sw.ElapsedMilliseconds);
}

public static void Threadpool()
{
    var sw = Stopwatch.StartNew();

    int total = n;
    var re = new ManualResetEvent(false);

    for (int i = 0; i < n; i++)
        ThreadPool.QueueUserWorkItem(new WaitCallback(o =>
        {
            Sq();
            if (Interlocked.Decrement(ref total) == 0)
                re.Set(); // ThreadPool does not support waiting on tasks by itself.
        }));

    re.WaitOne();

    Console.WriteLine(n + " thread pool items finished in " + sw.ElapsedMilliseconds);
}

public static void LotofThreads()
{
    var sw = Stopwatch.StartNew();

    var threads = new List<Thread>();

    for (int i = 0; i < n; i++)
    {
        var t = new Thread(new ThreadStart(() => Sq()));
        t.Start();
        threads.Add(t);
    }

    threads.ForEach(t => t.Join());

    Console.WriteLine(n + " threads started and finished in " + sw.ElapsedMilliseconds);
}

private static double Sq()
{
    double sum = 0;
    for (int j = 0; j < 1 * 1000 * 1000; j++)
        sum += Math.Sqrt(j);
    return sum;
}

Tuesday, August 13, 2013

Damn cool machines – stirling engine, ram pump, and valveless pulsejet engine

Once in a while it is worth looking around what kind of software or in this case hardware solutions were invented that did not catch enough attention. This post is about three hardware designs that amaze me.

Stirling engine

The Otto engine is by far the most successful engine these days, but it requires a special kind of fuel like gasoline or gas to operate. In some cases such fuel is not available but a cheap heat source can be accessed, like burning solid materials or focusing solar energy with mirrors.

In this case a Stirling engine can convert heat into mechanical movement in an easy way. The idea is based on the rules of thermodynamics: the temperature of a gas is always proportional to its pressure multiplied by its volume (the ideal gas law). This basically means that if we heat up the gas, it expands or its pressure has to rise. Similarly, if we decrease the pressure, the volume or the temperature has to drop (or both move together).
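As a concrete illustration of that relation (PV = nRT; the numbers below are mine, purely illustrative): doubling the temperature of a fixed amount of gas at fixed volume exactly doubles its pressure.

```python
R = 8.314  # gas constant, J/(mol*K)

def pressure(n_moles, temp_kelvin, volume_m3):
    # Ideal gas law rearranged for pressure: P = nRT / V.
    return n_moles * R * temp_kelvin / volume_m3

cold = pressure(1.0, 300.0, 0.001)  # 1 mol in 1 liter at 300 K
hot = pressure(1.0, 600.0, 0.001)   # same gas heated to 600 K
ratio = hot / cold                  # pressure doubles with temperature
```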

The Stirling engine is basically a two-cylinder/piston engine that moves heat energy from the hot source to a cooling side using the above relation.
The phases of the engine are:
1 – Heat up the gas in the first cylinder; it expands, pushing the piston away and forcing the gas to flow into the second cylinder
2 – The piston in the second cylinder moves away while the gas cools down, which reduces the pressure
3 – Both pistons in the first and second cylinders move back to their initial positions with the cool gas
4 – The heat source heats the gas again, increasing the pressure and restarting the cycle
(note – there are one-cylinder, two-piston designs following the same logic)
The oscillation between the two cylinders is quite fast, but the torque is usually quite low. To increase the torque, multiple cylinders can be added in parallel.

Ram water pump

Electric pumps are extremely efficient at moving water, but they need electricity, which in some cases is not available – like in remote areas or developing countries. The idea behind the ram water pump is the same thing that “kicks” your water hose when you are watering or washing your car and suddenly release the handle of the sprinkler gun. The reason the hose jumps is that the water moving in the hose has to stop when you block it – with a sharp deceleration. The quicker it needs to stop, the bigger the pressure will be in the hose for a very short period of time.

The ram pump is basically a long tube that allows the water to gain speed in it – flowing in from a slow but not limited source like a river. This tube has a self-closing valve at its end that closes as the speed of the outflowing water increases, ultimately raising the pressure in the tube. This pressure is now high enough to open a second, one-way valve at the end of the tube, which allows a small amount of high-pressure water to flow into the “output” of the pump. This can push the water up quite high, but the higher it needs to go, the smaller the outflow will be: the pressure that can be generated with this method is limited (usually by the speed of the water in the tube, the rigidity of the tube, and the speed of the closing valve).

The advantage of the ram pump is that it is extremely simple to install and operate; the drawback is that the output flow is not constant and fairly limited. It is best suited to lifting water from a river to a higher area like a water tower, which can then provide constant flow with constant pressure.

Valveless pulse jet

There are basically two types of jet engines: continuous combustion and non-continuous combustion engines. Most aircraft use the former, as its operation is smooth and very controlled, with practically constant throttle and no vibration. However, one of the most interesting non-continuous engines is the valveless pulse jet engine, which does not have any moving parts, making it extremely simple and essentially maintenance free.

The idea is quite simple: if we could somehow create a combustion chamber that pushes out most of the exhaust gases after igniting the fuel, it would suck in fresh air as the gases start to cool down, preparing for the next cycle. It turns out that it's quite easy to create such a shape! To help the forward thrust, both the intake (smaller) and exhaust (larger) pipes usually face the same direction, as some of the exhaust still exits through the intake because there are no valves in the system.

The result is a high frequency resonance consisting of tiny explosions in the chamber, which generates almost constant thrust. On the downside, the system needs an external source of air flow to start, gets very hot, and the series of tiny explosions makes it extremely noisy.

Monday, August 12, 2013

Hosted Git (GitLab) in 5 minutes on Azure Virtual Machine

Microsoft finally realized they are not the only software vendor out there, so they started to support other suppliers on their cloud platform, Windows Azure. Apart from the stock images from Microsoft, BitNami publishes their own high quality virtual machines as well – mostly based on some kind of Linux system.

One of their images is a completely set up and ready-to-use Git source control solution using GitLab. This can really save a lot of time, as the installation guide seems to be a little longer than anyone would prefer spending on setting up Git.

To install the image, simply browse the VM Depot, select the GitLab image, and create a new virtual machine from it. Microsoft went so far with supporting Linux systems that we can even enter our own username and password for the box during setup – no need to use any default logins on the fresh machine.

GitLab isn’t particularly resource hungry, so we are perfectly fine with an extra small (A0) instance to host our git/wiki/bug tracking/code review solution. The whole setup takes around 15 minutes, which is mostly automated – we don’t need to spend more than a couple of minutes setting up the image. The monthly cost is around $20 for the XS instance, which is pretty cheap for an on-demand solution. However, if we need a build server as well, a beefier machine would probably be more suitable.

Wednesday, July 31, 2013

Languages and databases - a step back a day

I've started to learn the Go language recently and, to be frank, I'm horrified. Horrified by what the future of programming looks like – and the NoSQL movement is just making it worse.

Languages - a step back

If you haven't read any Fortran code yet, do it by all means. It's a language that was invented in 1957, when 'computers' were practically non-existent. However, if you look at it more than 50 years later, it's still readable and not really ugly – especially compared to some C++ code...

Then we got Python, Ruby, Java, C# and all sorts of modern languages. In fact, we have a new language born almost every day (have you heard about Rust, for instance?). The only issue with them is that the programming model hasn't really changed in the past 50 years. We write text files, line by line, instructing the computer what to do step after step. Unfortunately, the processors didn't really change much either.

And the horrifying part? Google comes along, creates a new language (Go) and makes a big hype around it. And they are really good at making hype around things. As a result, people think that this is the future, that finally this is the solution to our problems! Did you say you wanted a nice IDE to work with? Sure, you can use Vim or Emacs! Or wanted a great debugger? GDB is the perfect solution! How about inheritance? Event handlers? Lambda expressions? Language integrated queries? Generics? Annotations? You don't need them, right?

How about a language that is not a stripped down version of other great languages? Go is great, it's readable, it compiles to native code and usually the code is faster than Java. Hurray! Oh, your site has like 10,000 impressions a day? That's less than 1 request per second if distributed over 8 hours, or 3 requests a second if it all arrives in a one-hour spike! What framework cannot handle that? Even if you have 100,000 impressions in an hour, it's still just 30 requests per second. That's practically zero for any kind of hardware and software framework today, yet it would already put you among the bigger sites out there.
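The back-of-the-envelope traffic figures above are easy to check; a quick sketch using the numbers quoted in the text:

```python
# Sanity-check the request-rate arithmetic quoted above.

def requests_per_second(impressions, seconds):
    """Average request rate if `impressions` arrive evenly over `seconds`."""
    return impressions / float(seconds)

# 10,000 impressions spread over an 8-hour day:
print(requests_per_second(10000, 8 * 3600))   # well under 1 request/second
# The same 10,000 squeezed into a one-hour spike:
print(requests_per_second(10000, 3600))       # roughly 3 requests/second
# 100,000 impressions in a single hour:
print(requests_per_second(100000, 3600))      # roughly 30 requests/second
```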

But no, let's invent a language that fits our specific purpose and shove it down everyone else's throat. But wait a sec, why not actually evolve a language? Not strip it down, but make it THE next big thing? How about a language where I don't have to write boilerplate ever again? Where validating a field from front-end to back-end does not make me write the same code again and again? Where I don't have to think about threads and mutexes when writing distributed code? How about never writing a for loop again? Or never bothering with connection strings again? And forgetting about beginning and ending transactions? Because these are the actual problems developers are working on every day.

NoSQL - the NoSolution

And this brings us to the next big issue, the NoSQL movement. Just because a handful of companies deal with data volumes that cannot be handled by relational databases, it does not mean everyone does. But everyone seems to think it's the best storage idea ever. Before you jump in, have a look at what functional complexities these companies actually struggle with! A wall post? Uploading a photo? Synchronizing your plain-text notes? Now check what complexities an ERP system deals with.

But again, when several parties criticized Google for this non-relational approach, they arrogantly responded: "it works for us". Sure it does, because you can invest thousands of hours/weeks/years of engineering time in creating a simple web application. Or just in adding a new field to an existing one. Or creating a new report. Oh, creating a new report needs a new data structure? Well, it works for you, not for me!

How about inventing the next big thing in databases? Where I don't need to map my fields to columns, where I could compile my code against the database to eliminate 99% of useless integration tests? To have the ability to create reports with ease? To never bother with change scripts again?

We think we got it right - so we are stuck

The really horrifying part of all this is that the 'average Joe' developer does not know anything about all this. He's happy copy-pasting code from Stackoverflow, day in, day out. Meanwhile the 'top of the game' developers are just mesmerized by all the latest-and-greatest ideas from companies that do very different things from what they do.

However, if we get stuck and keep thinking that the way forward is a dumbed down language and database that is as smart as a text file, we won't see much progress any time soon. Actually, we haven't seen that much since '57 either...

Given that you don't have to just follow other people's ideas: do you think it's all going in the right direction? I do not.

Saturday, July 27, 2013

The next big thing on the web – custom HTML elements

If you are an HTML component vendor, or have ever wanted to use someone else's HTML module, you have already faced the same issues: components may not work together, and even if they do, they put a lot of noise into your pure HTML structure. This is going to change very soon.

Large component vendors like Kendo or Telerik are creating amazing HTML components that are, in theory, just plug and play: whatever framework you are working with, it should be really easy to use these enhanced modules. However, even when they work as intended, they change and litter the HTML DOM, so any kind of debugging can be a bit of a pain.

Noisy HTML

The problem is that even if we start with a nice and clean HTML code that looks like this:

<select id="size">
  <option>S - 6 3/4"</option>
  <option>M - 7 1/4"</option>
  <option>L - 7 1/8"</option>
  <option>XL - 7 5/8"</option>
</select>

the frameworks will turn this into a number of different divs and spans so they can apply the nice visuals to it. However, this creates unexpected nodes and structures in your HTML, which may or may not display correctly. The above is replaced with this at runtime (some attributes are removed for readability):

<span class="k-widget k-dropdown k-header" unselectable="on" …>
  <span unselectable="on" class="k-dropdown-wrap k-state-default">
    <span unselectable="on" class="k-input">S - 6 3/4"</span>
    <span unselectable="on" class="k-select">
      <span unselectable="on" class="k-icon k-i-arrow-s">select</span>
    </span>
  </span>
</span>
<select id="size" data-role="dropdownlist" style="display: none;">
  <option value="S - 6 3/4&quot;" selected="selected">S - 6 3/4"</option>
  <option value="M - 7 1/4&quot;">M - 7 1/4"</option>
  <option value="L - 7 1/8&quot;">L - 7 1/8"</option>
  <option value="XL - 7 5/8&quot;">XL - 7 5/8"</option>
</select>

Custom elements

Even though the custom HTML element definition is still just a W3C draft and basically none of the browsers support it natively (Chrome Canary and Firefox have partial support), it's still by far the most interesting upcoming HTML feature I've seen in a while.

With custom elements, you can define a custom tag and add any HTML or Javascript rendering within it – the browser will interpret that locally, but it won't be visible from the outside. The only requirement is that the tag name must contain a dash, so it will never interfere with upcoming standard HTML tags (e.g. <hello> is not a valid custom tag but <adams-hello> is).
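The dash rule is easy to encode; here is a simplified sketch of the check in Python (the actual draft constrains the character set further, so treat this as an approximation of the dash requirement only):

```python
import re

# Simplified version of the draft's naming rule: a custom tag name is only
# valid if it contains a dash. (The real spec restricts the allowed
# characters more tightly; this only approximates the dash requirement.)
CUSTOM_TAG = re.compile(r'^[a-z][a-z0-9]*-[a-z0-9-]+$')

def is_valid_custom_tag(name):
    return CUSTOM_TAG.match(name) is not None

print(is_valid_custom_tag('hello'))        # False - no dash
print(is_valid_custom_tag('adams-hello'))  # True
```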

Defining custom elements

To define a custom tag, it has to be declared in the following format:

<element name="adams-hello">
  ...
</element>

The rendering is a bit tricky though: just because we have some internal content, it will not be visible from the outside. In the above case “...” will not be rendered when someone uses the <adams-hello></adams-hello> tag. This can be a bit confusing at first, but it's for the greater good: any internal structure can be pre-stored and used when necessary, and there is no need to fiddle with what is and what's not visible: anything that we have not explicitly rendered with Javascript is not visible.

Displaying time

To continue with our simple extension above, let's add some simple rendering to it to display the current time:

<element name="adams-hello">
  <section>
    Hey, the time is:
    <time id='currentTime'></time>
    <footer>It will not be visible</footer>
  </section>
  <script>
    var el;
    function tick(){
      el.innerHTML = (new Date()).toString();
    }
    if (this !== window) { // Runs when loaded as an element definition.
      var section = this.querySelector('section');
      this.register({
        prototype: {
          readyCallback: function() {
            var root = this.createShadowRoot();
            root.innerHTML = section.innerHTML;
            el = root.querySelector("#currentTime");
            setInterval(tick, 1000);
          }
        }
      });
    }
  </script>
</element>
The code is pretty straightforward; however, there are a couple of things that are good to know about:
- We need to register for the readyCallback to do any kind of rendering. Without that, the element won't display anything.
- If we want to hide the internal content of the DOM from debuggers, we can create a shadow root. This is not widely supported yet, so the root element can simply be this; in that case the internal structure will appear in the DOM. (Note: even if we use a shadow root, the browser knows about it and can display it to the developer if we enable that option - it's not for security, more for easier debugging.)
Without shadow root - Chrome will display internal structure

With shadow root - nice and clean markup

You can grab the full source code from here. Remember to run the samples on a local web server (like python -m SimpleHTTPServer 8080), otherwise the HTML import won't work.

How can I use it today?

As basically none of the browsers support it yet, the only solution is to use some kind of polyfill framework to “bring the browser up to speed”. The only reasonable implementation I've seen is Google's Polymer Project. The framework is pretty well written, so even Internet Explorer will be able to work with most of the features mentioned above (note: for me, the shadow root did not work). Give it a go and be prepared: this is a really exciting feature in HTML!

Friday, July 26, 2013

Simulating crowds and animal swarms

If you've ever looked at an ant farm, I'm sure you were surprised by how organised they are. The truth is that they all follow a very simple logic which “magically” adds up to a large-scale organised colony. How do they do that?

Swarm behaviour is not really observable on one entity, but it is very visible on a large scale. Interestingly, it works with us humans as well: we don't consciously plan how we board a train or plane, yet a simulated environment can predict very well how we will move and how efficient the crowd movement will be.


Studying crowd movement is not new by any means: Craig Reynolds came up with three simple rules back in 1986 that describe large swarm movement quite accurately. He named the simulated entities “boids”, after birds (supposedly the way “birds” sounds in a New York accent).

The three rules are:
- separation: the entities prefer to keep some distance from each other
- alignment: try to go the same way as the group or the local flock-mates
- cohesion: try to go closer to the centre of the crowd
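The three rules map almost directly onto vector arithmetic. Here is a toy Python sketch for stepping a single boid (all names and thresholds are mine, not from the original source, and alignment is omitted since this version only tracks positions, not headings):

```python
import math

def step_boid(boid, flock, speed=2.0):
    """Combine cohesion and separation into one movement vector (toy version)."""
    # Cohesion: steer towards the centre of the flock.
    cx = sum(b[0] for b in flock) / len(flock) - boid[0]
    cy = sum(b[1] for b in flock) / len(flock) - boid[1]
    # Separation: steer away from flock-mates that are too close.
    sx = sy = 0.0
    for bx, by in flock:
        if (bx, by) != boid and abs(bx - boid[0]) + abs(by - boid[1]) < 5:
            sx += boid[0] - bx
            sy += boid[1] - by
    # (Alignment would average the neighbours' headings - omitted here.)
    x, y = cx + sx, cy + sy
    length = math.hypot(x, y)
    if length > 0:  # normalise to a constant speed
        x, y = x * speed / length, y * speed / length
    return boid[0] + x, boid[1] + y

flock = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
print(step_boid((0.0, 0.0), flock))  # moves towards the flock centre
```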


In my case I just wanted to write up something quickly to validate how much fine-tuning would be required. To my surprise, the first version was already very reasonable! The simulated crowd moved in a very natural pattern, nothing like robots. For instance, when directed to a remote point, the head of the group got a bit ahead of the rest, just like you would expect from a larger human group.
Boids are just having rest and staying together with some random movement.
Boids are chasing a target point – the rest of the group is trying to catch up with the first ones.
The code was written in Javascript and I haven't really used any optimisation - for a couple of hundred members even the O(n^2) algorithm did just fine. The movement was calculated every 100 ms, and to smooth it out I just used CSS transitions.

.boid { /* class name assumed; applied to each boid's div */
        position: absolute;
        height: 10px;
        width: 10px;
        transition: margin 0.1s;
}

The main loop looks like this:

function StepBoids(boids, destination, blocks){
        for(var i = 0, b; b = boids[i]; i++){
                var dCenter = StepBoidCenter(b, boids); // Get a vector to center.
                var dDest = StepBoidDestination(b, destination); // Get a vector to destination.
                var dAway = StepBoidAway(b, boids); // Get a vector away from others.
                var x = dAway.x + dCenter.x + dDest.x;
                var y = dAway.y + dCenter.y + dDest.y;
                var length = Math.sqrt(x*x + y*y);
                if(length > 0){
                        x = x * Boid.speed / length; // Normalise speed.
                        y = y * Boid.speed / length;
                }

                var blocked = false;
                for(var j = 0, block; block = blocks[j]; j++){
                        if(block.inBlock(b.x + x, b.y + y)){
                                blocked = true; // Simple 'wall' detection.
                        }
                }
                if(!blocked){
                        b.x += x;
                        b.y += y;
                }
        }
}
Navigating around an object

As the algorithm didn't have any routing defined, my expectation was that any object in their way would simply block them and every member would get stuck. However, something much more interesting happened:

Boids moving around an object and finding their way to the destination

Of course! The first couple of boids get stuck, but the rest stay away from them, eventually pushing each other around the obstacle. Again, there is no code to avoid obstacles - it's just a side effect of the group behaviour.

Further steps – why train boarding is so slow

To put this to more practical use, you can simulate how much the boids slow down if a few of them are standing around the train doors while the rest are trying to get out. Seeing that, maybe next time we will want to give a bit more space to the ones getting off the train, so we can all get on much faster.

You can grab the full source code from here.

Sunday, July 14, 2013

Ubuntu loses network connection after do-release-upgrade - Ignoring unknown interface eth0=eth0

Over the weekend I decided to upgrade one of my Linux servers, running an Ubuntu desktop. The Ubuntu do-release-upgrade tool is pretty reliable; even over SSH the whole process completed without any errors (try that with Windows 2003 to 2008). The only issue was that after reboot the machine disappeared from the network, and I had to get to the console to have a look at what was going on. It turned out my eth0 interface wasn't configured, so it didn't get an IP address:

> ifup eth0
 * Reconfiguring network interfaces... Ignoring unknown interface eth0=eth0.

The fix was pretty easy; I just had to edit the /etc/network/interfaces file and try again:

auto lo
iface lo inet loopback

auto eth0
# For DHCP only:
# iface eth0 inet dhcp

# Static IP assignment (addresses below are example values)
iface eth0 inet static
address 192.168.1.10
netmask 255.255.255.0
gateway 192.168.1.1
# Google's DNS - fast and reliable
dns-nameservers 8.8.8.8
ifup eth0 was much friendlier this time, the box was back on the net again.

Tuesday, June 25, 2013

Static constructors and sealed classes - how to make your code untestable

Almost all programming languages make it possible to put application logic in a static constructor. However, it's usually up to the platform to decide when to execute the code we put there - typically when the environment first runs a piece of code that references the class.
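C# aside, the timing issue is easy to demonstrate in any language with class-level initialization. A minimal Python sketch (all names are mine) showing that the "constructor-like" code runs when the class definition is executed, not when the first instance is built:

```python
events = []

def expensive_setup():
    # Imagine this opening a network connection or a cloud table.
    events.append("setup ran")
    return "connection"

class EagerDataSource:
    # Runs as soon as the class definition is executed (usually at import
    # time) - the caller has no control over when that happens.
    connection = expensive_setup()

events.append("creating first instance")
ds = EagerDataSource()
print(events)  # setup already ran before anyone asked for an instance
```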

Recently I was reading through the Windows Azure Training Kit (written by Microsoft) and this is a piece of code from the kit:

public class GuestBookDataSource
{
    private static CloudStorageAccount storageAccount;

    static GuestBookDataSource()
    {
        // Connection string loaded from the role configuration.
        storageAccount = CloudStorageAccount.Parse(
            RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));
        CloudTableClient cloudTableClient = storageAccount.CreateCloudTableClient();
        CloudTable table = cloudTableClient.GetTableReference("GuestBookEntry");
        table.CreateIfNotExists();
    }
}
This code will open or create a cloud storage table before anything accesses the class. As the code runs only once, it seems like an efficient way to initialize the environment.

There are multiple issues with this approach

- We don't know when the code actually runs. Depending on where we reference GuestBookDataSource for the first time, the code executes at a different stage of our application. As it actually connects to another machine to open the table, it will incur some delay - we just don't know when. Predictability is really important, so a simple Init() would be easier to understand and debug.

- A constructor - or in this case, referencing a class - really should not throw random network or file exceptions. This is not something we expect when we write var gds = new GuestBookDataSource(); that line should always execute instantly, without delay and without throwing exceptions.

- It's completely untestable. The implementation is tightly bound to CloudStorageAccount and we cannot mock it or inject a dummy implementation. In a unit test, it will call the actual cloud implementation no matter what. Moreover, we cannot reset its state, as a static constructor is called exactly once per process. This means that depending on the order we run our unit tests, they may or may not fail.

So what would be a better solution?

Unfortunately, CloudStorageAccount does not implement a suitable interface, so we need to wrap it in our own. Something like this would work:

public interface IStorageAccount
{
    void Init(string connectionString);
    ICloudTable GetTable(string tableName);
}

public interface ICloudTable
{
    TableResult Execute(TableOperation operation,
        TableRequestOptions requestOptions = null,
        OperationContext operationContext = null);

    // Add other methods you actually need.
}

public class StorageAccount : IStorageAccount
{
    CloudStorageAccount storageAccount;

    public void Init(string connectionString)
    {
        storageAccount = CloudStorageAccount.Parse(connectionString);
    }

    public ICloudTable GetTable(string tableName)
    {
        CloudTableClient cloudTableClient = storageAccount.CreateCloudTableClient();
        CloudTable table = cloudTableClient.GetTableReference(tableName);
        return new Table(table);
    }
}

public class Table : ICloudTable
{
    private CloudTable table;

    public Table(CloudTable table)
    {
        this.table = table;
    }

    public TableResult Execute(TableOperation operation,
        TableRequestOptions requestOptions = null,
        OperationContext operationContext = null)
    {
        return this.table.Execute(operation, requestOptions, operationContext);
    }
}

public class GuestBookDataSource_Testable
{
    private static IStorageAccount storageAccount;
    private static ICloudTable table;

    // Probably called with some kind of dependency injection framework.
    public GuestBookDataSource_Testable(IStorageAccount storageAccount)
    {
        if (GuestBookDataSource_Testable.storageAccount == null)
            GuestBookDataSource_Testable.storageAccount = storageAccount;
    }

    public void Init()
    {
        if (GuestBookDataSource_Testable.table == null)
            GuestBookDataSource_Testable.table = storageAccount.GetTable("GuestBookEntry");
    }

    public void UnInit()
    {
        GuestBookDataSource_Testable.storageAccount = null;
        GuestBookDataSource_Testable.table = null;
    }
}

But wait, this code is much longer!?

Yes, unfortunately it is. The main reason is that Microsoft decided not to program against interfaces and make all Cloud* classes sealed. It means that we cannot inherit from them to "mock" them nor can we replace them with their interfaces. The code above is long, because we had to create wrapper interfaces and implementations to fix what Microsoft did not want to. As a result, the above implementation is testable as we can pass in any dummy implementation for the constructor of GuestBookDataSource_Testable.
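The pattern itself is not C#-specific. Here is a compact Python sketch of the same idea, with hand-rolled fakes standing in for the cloud classes (all names are mine, loosely mirroring the C# code above):

```python
class FakeTable:
    """Test double: records operations instead of talking to the network."""
    def __init__(self):
        self.executed = []

    def execute(self, operation):
        self.executed.append(operation)
        return "ok"

class FakeStorageAccount:
    """Test double for the storage account wrapper."""
    def get_table(self, table_name):
        return FakeTable()

class GuestBookDataSource:
    def __init__(self, storage_account):
        # The dependency is injected, so tests can pass in a fake.
        self.storage_account = storage_account
        self.table = None

    def init(self):
        if self.table is None:
            self.table = self.storage_account.get_table("GuestBookEntry")

    def add_entry(self, entry):
        return self.table.execute(("insert", entry))

# A unit test never touches the network:
ds = GuestBookDataSource(FakeStorageAccount())
ds.init()
print(ds.add_entry("hello"))  # the fake just records the operation
print(ds.table.executed)
```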

So if Microsoft doesn't do it, why should I?

There are pros and cons, of course. The code gets longer, which statistically means more bugs. On the other hand, you can unit test your code to make sure changes don't break other parts of the project. Keep in mind, though, that there are great pieces of software out there without any kind of automated testing. Unit testing is not a hard requirement - it helps you maintain a generally higher quality of code, but in some cases there may be other, much more important factors. One perfect example is timing: if you don't ship the code by the end of next week, your competition will crush you and you are out of business. Well, do whatever you need to ship, even if it means you don't unit test everything. Still, even in that case, you really should not connect to another machine from a static constructor.

Saturday, June 22, 2013

One of my projects uses a self-hosted executable accepting incoming connections on port 80. The package has an embedded Jetty, so I didn't need to deal with any low-level TCP details. That is the good part. The bad part is that the hosting environment (Ubuntu Linux), for security reasons, by default does not allow a non-root user to use any TCP port below 1024 - so the first free-for-all port is 1024.
One easy solution would be to set up Nginx and forward the traffic it receives on port 80 to my self hosted app on a different port. Quite easy to set up but why waste a network hop when I didn't really need any feature Nginx could offer?
Installing authbind

Luckily, I'm not the only one with this kind of issue; in fact, authbind was released back in 1998. To install it, I simply ran apt-get install authbind and that was about it. The configuration is a bit trickier: we need to create a file per port in the /etc/authbind/byport directory, owned by the user who wants to open that port. So in my case I needed a file called 80, owned by the user allowed to bind to that port.

Convincing Java to play nice with authbind

After creating the file we can test right away whether we have permission to open the port, simply using netcat: authbind nc -l 80 should not throw an error. So far, so good. Unfortunately, by default Java will try IPv6, so it won't be able to use this port (authbind is IPv4 only). The easiest solution is to force Java to use the IPv4 stack instead:
authbind --deep /usr/bin/java -Djava.net.preferIPv4Stack=true -jar mypack.jar
(The --deep option allows any child process to inherit the permission for the port)

Another solution - iptables

Some people dislike setting up iptables on production machines, as it can cause really funky errors and, if done remotely, it's quite easy to lock ourselves out of the machine. However, iptables can forward traffic from any port to any other port, simply using:

iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080

Again, it's a bit of a hard-wired solution, so setting up Nginx to forward the traffic is probably the more user-friendly setup.

Monday, June 3, 2013

Are we developers the romance writers of the science world?

As developers, we tend to do repetitive, sometimes tedious labour every day. We stand at a scrum/kanban board every morning pretending the problems we are facing are interesting and hard, and that we are the only solution to them. Maybe we think we are the smartest in our team. Maybe everyone in that team thinks he is the smartest.

Anyhow, the job we deliver as everyday developers relies on roughly the same technologies. Like it or not, most of our work is based on someone else's work, someone else's idea or methodology. It's rare that we, as everyday developers, can add something brand new that wasn't there before.

And no, an HTML5 progress bar is not all that new. And no, creating an iPhone app that uses Facebook login is not new either. But if we are not really in the invention part of the game, surely the people at the big fancy IT companies are, right?
What do developers at Google/Amazon/eBay/Microsoft/Adobe/Facebook do?

Without a doubt these companies invent the most interesting technologies and the most reliable platforms. Does it mean that if you worked for them you would actually create something new? Something that is not based on someone else's idea decades ago?

Well, Google invented BigTable, Facebook invented Cassandra. These two technologies were new, interesting and not a copy-paste of the first result from StackOverflow. So, if you joined them you would finally create something new, right?

Probably wrong. Google employs tens of thousands of engineers and their job is to churn out code. Code that relies on the inventions of other people, like BigTable. Code that will be just another app on Android or iPhone. The fact is, most of those engineers are not working on the groundbreaking new stuff like self-driving cars or the Glass project. Most of them are churning out code 8-10 hours a day, just like you do.
Romance writing

There is an interesting trend in writing - just like in software 'engineering': the more you churn out, the more profitable you are. There is no time to research, write and rewrite a book for years; no one will pay for it. You are willing to fork out, say, $5 for a digital book on Amazon. Most probably you won't even read it, so that's not a big loss.

But paying $50 for a book? Doesn't happen anymore. With software, it's very similar. Even if you are a startup, your goal is to churn out a lot of code, create a somewhat working prototype and get a lot of money for it. Didn't work out? Try another one.

And if you are a bank? Churn out some code, bear the politics, support the 40 year old systems. There is no time or money for innovation and research in an everyday developer's life. Wake up, commute 1-2 hours, spend 8-10 at work, another 1-2 hours on the way home, pay the bills and restart. Another day, another dozen lines of Javascript - or another romance book.
Famous developers and famous writers - marketing
The other day I was watching a video on Channel 9. The guy (an MVP) was trying to demo how to write testable code using dependency injection. His Visual Studio had so many "productivity enhancer" plugins installed that every file looked like a Christmas tree. But he could not write code. He was actually struggling to create the demo - not more than 15 lines.

I'm sure you had those moments when you were watching someone doing something and you already knew why it would not work - this video was like that. For all the 50 minutes. But hey, the guy is on Channel 9, he is an MVP, he is the CTO at a somewhat good sounding company. He is famous, right?

It's somewhat similar to the Fifty Shades of Grey effect. Take an average romance book, give it insanely good marketing, and everyone will think it's actually a good book. Even though there was nothing new about the topic, style or content. It was one in a dozen; marketing made it famous.

Daniel Kahneman in his "Thinking Fast and Slow" book got it perfectly right: "People tend to assess the relative importance of issues by the ease with which they are retrieved from memory—and this is largely determined by the extent of coverage in the media". The guy is famous because once he managed to get into Channel 9 and that's it. Nothing to do with skills or innovation. He is just around all the time. Similarly, this very average fifty book is just all around so we think it's important.
Are you a rockstar? We don't need you     

The other day I was chatting with a friend, and he told me that the - really cool - company he works for is no longer looking for top-end developers. And it suddenly all made sense. The tools and environments are getting easier every day, while the problems and their size stay roughly the same. You need people who won't get bored writing the millionth jQuery popup, and who won't give you that strange look when you ask them to fix the spelling mistakes on the site. The business problems most companies have don't need software-engineering superheroes to solve them. You don't need Average Joe yet, but there's no need for prima donnas either.
What's next - research
I've been watching university lectures recently and have seen two exceptional ones - one from Stanford and the other from MIT. After years of "new jQuery xx version is out" type of "news", I was finally learning something again. I almost never envy anyone, but interestingly these lecturers got under my skin.

They seemed to be passionate about what they were doing. They enjoyed the research their university was doing around that topic. They couldn't care less about the newest GCC or Java 1.8 because for them inventing something new was way above fancy tools or frameworks. So, do you want to do something that's not the everyday developer's job? Don't ignore research or even university research next time. Maybe I won't...
Agree / Disagree? Leave a comment!

Tuesday, May 28, 2013

Using Vim on Mac OS - basic settings

In every programmer's life comes a moment when they need to deal with Vim somehow - some people just need to close it once accidentally opened, some need to start using it for work. For a long while I could get away without using it, but then I had to start editing Octave scripts, and a simple notepad did not seem too useful for that.

Ancient roots

Vim was released 21 years ago, so most of us could argue that it is an ancient editor - an editor from an era when even Java hadn't been invented yet (it was released in 1995). This reason alone could invalidate its usage, as so many things have changed since then. Well, things may not be that different. We are still using some kind of text editor to write source code, and the problems around code writing are roughly the same: quickly edit and reformat specially formatted text.

At first look, Vim is an editor that does not even support the mouse, where even deleting a line is troublesome - not to mention exiting the editor. However (given that we are willing to sacrifice a couple of hours to learn the basics), Vim is a very capable and customizable editor. Most of the fundamental features that help code writing are there: autocompletion, auto-indentation, jumping between functions, syntax highlighting and so on. It's true that the default settings hide the interesting features, so let's have a look at how to make Vim not suck.

How to make Vim not suck

To change the inconvenient defaults, create a file called .vimrc in your home directory and add these settings to it:

:syntax on " Highlight the keywords based on filetype
:set ruler " Display the current line number in the status bar
:set autoindent " New lines will keep the previous indentation
filetype indent on " Enable file type based indentation
filetype plugin on " Enable file type based plugins
:set ignorecase " When searching (/) will ignore the case of the input
:set smartcase " Don't ignore case when the term contains any uppercase
:set paste " To ignore auto-indent when pasting from the clipboard
:set hlsearch " Highlight search results. To clear use :noh
:set incsearch " Search incrementally, as you type
:map <c-w> :tabclose<cr> " CTRL+W will close the current tab
:map <c-t> :tabnew<cr> " CTRL+T opens a new tab
:nnoremap <leader>s :w<cr> " \s will save the current file
:map <tab> :tabNext<cr> " Jump to next tab
:nnoremap <c-n> :set invrelativenumber<cr> " relative line #s
:set backspace=indent,eol,start " fix BS, del
:nnoremap <s-tab> :tabprevious<cr> " Jump to previous tab
:set laststatus=2 " Always display status line

For basic code editing these settings are sufficient. However, for more serious coding it makes sense to enable code autocompletion.

Introducing neocomplcache

Vim supports autocompletion by default, but it requires pressing funny key combinations like <CTRL+X> <CTRL+O> to list the suggestions (this brings up the OmniComplete popup). This can be a bit inconvenient, so adding a plugin that automatically opens the completion popup as you type is much easier. There are a lot of plugins that help with this; one of them is neocomplcache.
To install, add this line to .vimrc:

let g:neocomplcache_enable_at_startup = 1

And download the neocomplcache source from here.
Make a directory under home called .vim and copy the content of the downloaded source into it. To verify that the plugin is loaded after startup, check the list of loaded scripts with the :scriptnames command.

If everything is set up properly, when editing a code or text file the purple autocomplete popup should show up whenever it finds a chunk of a word that was mentioned before in this or another currently opened file.

Alternatively, there is another plugin called YouCompleteMe, which is a bit heavier but very popular. It's worth giving it a try.

Running external programs on current file

Vim may fail you when you want fancy code formatting, so in some cases the easiest solution is to rely on external code formatters (note: if you are lucky, someone has wrapped these as Vim plugins as well). To pipe the current file into a program's standard input and replace the buffer with the program's standard output, simply create a mapping for a specific file type like this:

autocmd FileType javascript nnoremap <buffer> <leader>ff :%!~/temp/js-beautify/js-beautify -t -i <cr>

When pressing \ff in a Javascript file, it will invoke js-beautify via stdin (-i) with tab formatting (-t) and replace the buffer content with the result. Importantly, it operates on the (not yet saved) buffer, so if the changes look bad we don't need to save them.
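Any program that reads standard input and writes standard output can be plugged into such a :%! mapping. A tiny hypothetical Python filter to illustrate the contract (the js-beautify call above works the same way; the filter logic here is just an example of mine):

```python
def normalise(text):
    """Trim trailing whitespace and collapse runs of blank lines."""
    lines = [line.rstrip() for line in text.splitlines()]
    out, prev_blank = [], False
    for line in lines:
        if line == "":
            if not prev_blank:  # keep at most one blank line in a row
                out.append(line)
            prev_blank = True
        else:
            out.append(line)
            prev_blank = False
    return "\n".join(out) + "\n"

# As a Vim filter the script would simply do:
#   sys.stdout.write(normalise(sys.stdin.read()))
print(normalise("function f(){}   \n\n\n\nf();\n"))
```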

To store the position before the format and restore it, use:
:let lastpos=getpos('.')<cr>:call setpos('.',lastpos)<cr>


Code folding

It was quite a pleasant discovery when I noticed that Vim supports code folding too. Like any other decent IDE, it is able to collapse larger chunks of code (like methods and classes), making it easier to navigate around the code. To enable the default folding for Javascript, simply add this to .vimrc:

let javaScript_fold=1

Have a look at the cheat sheet to learn how to open/close the folds.

Learning new tricks

With Vim the learning process is pretty much never finished; we just get to a point that is enough for us. However, once in a while it's worth revisiting how others use Vim to make sure we are not missing something obvious. Of course, a Vim cheat sheet can come in handy too. Love it or hate it? Give it a go - there is always room to turn back!


Speed up HTML creation

It's quite typical that we need to create a complex HTML tree structure when editing a document, but it can be time consuming. Creating the following takes an effort:

<footer id="pageFooter">
      <a href='/contact'>Contact us</a>
      <a href='/legal'>Copyright</a>
</footer>

However, there is a very handy plugin that can generate this code from a short abbreviation, along the lines of:

footer#pageFooter>a[href='/contact']+a[href='/legal']

When pressing CTRL+Y then , (comma) it will expand to the above - without the textual content, of course. Have a look, it might save you some time - Zen Coding for Vim.
In case the auto-indentation seems to be incorrect, use the Javascript indenter plugin.

Fixing HTML5 syntax highlight

Vim by default does not recognize the new HTML5 tags like footer or video. The easy fix is to add the following to the .vim/after/syntax/html.vim file:

" HTML 5 tags
syn keyword htmlTagName contained article aside audio bb canvas command datagrid
syn keyword htmlTagName contained datalist details dialog embed figure footer
syn keyword htmlTagName contained header hgroup keygen mark meter nav output
syn keyword htmlTagName contained progress time ruby rt rp section time video

" HTML 5 arguments
syn keyword htmlArg contained autofocus placeholder min max step
syn keyword htmlArg contained contenteditable contextmenu draggable hidden item
syn keyword htmlArg contained itemprop list subject spellcheck
" this doesn't work because default syntax file alredy define a 'data' attribute
syn match   htmlArg "\<\(data-[\-a-zA-Z0-9_]\+\)=" contained