Ultra-fast HashTable (Dictionary) with direct indexing
One of the most common uses of hash tables is to count or group occurrences of different elements, something like this: dict["key"]++. Hash tables are supposed to be excellent at this task, and the operation is considered constant time (O(1)), yet generic dictionary implementations do an unnecessary step: they hash the key twice. A direct access hash table avoids the double hashing and outperforms the generic implementations.
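For reference, this is the shape of the counting code in question (a minimal sketch, with an illustrative variable name):

    Dictionary<string, int> dict = new Dictionary<string, int>();
    // ... for each incoming key w (assumed to be present already):
    dict[w]++;   // compiles to get_Item(w) followed by set_Item(w, ...), so w is hashed twice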
If we look at the disassembled code of the example above, this is what happens in the background (simplified .NET ILDASM snippet):
get_Item(!0)     // retrieve the current value using the key (hashing happens)
ldc.i4.1         // load the constant 1
add              // increment the value
set_Item(!0, !1) // store the new value with the original key (hashing happens again)
Generally, hashing is fairly quick. However, if the key is a string, for instance, the hash function has to process every character of the string to compute the hash value, which is an integer. The CPU is really good at working with integers and logical values, but it is by no means designed to process strings, so this operation is slow compared to native CPU-type operations.
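To see why, consider a simplified illustration of a string hash (not the actual String.GetHashCode implementation, just the general shape): the whole string has to be scanned, so the cost grows with the key length.

    // Illustrative only - the real String.GetHashCode() is different, but it also
    // has to visit every character to produce the integer hash value.
    static int SimpleStringHash(string s)
    {
        unchecked
        {
            int hash = 17;
            foreach (char c in s)
                hash = hash * 31 + c;   // one multiply-add per character
            return hash;
        }
    }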
The direct access dictionary
It is fairly easy to create a dictionary that supports direct writing of its values once we know their locations. The get_Item function has to find the location anyway, so only a small modification is needed: when the location is found, return it rather than using it internally to store or retrieve the data.
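Conceptually, such a dictionary only has to expose the slot index that get_Item finds internally. Below is a minimal sketch of this idea; it is my own simplification (fixed capacity, no resizing or removal), not the actual implementation from the framework, but the three methods match the API used in the example further down.

    public class CustomDictionary<TKey, TValue>
    {
        private struct Entry
        {
            public int HashCode;   // cached hash of the key
            public int Next;       // index of the next entry in the same bucket, -1 if none
            public TKey Key;
            public TValue Value;
        }

        private readonly int[] buckets;    // bucket -> index of first entry, -1 if empty
        private readonly Entry[] entries;
        private int count;

        public CustomDictionary(int capacity = 1024)
        {
            buckets = new int[capacity];
            for (int i = 0; i < buckets.Length; i++) buckets[i] = -1;
            entries = new Entry[capacity];
        }

        // Finds the slot belonging to 'key', creating it with a default value if the
        // key is new, and returns the slot index - the key is hashed only once per update.
        public int InitOrGetPosition(TKey key)
        {
            int hash = key.GetHashCode() & 0x7FFFFFFF;
            int bucket = hash % buckets.Length;

            for (int i = buckets[bucket]; i >= 0; i = entries[i].Next)
                if (entries[i].HashCode == hash && entries[i].Key.Equals(key))
                    return i;                              // key already present

            int index = count++;                           // sketch: no resizing
            entries[index] = new Entry { HashCode = hash, Next = buckets[bucket], Key = key, Value = default(TValue) };
            buckets[bucket] = index;
            return index;
        }

        public TValue GetAtPosition(int pos) { return entries[pos].Value; }

        public void StoreAtPosition(int pos, TValue value) { entries[pos].Value = value; }
    }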
The following test code can be rewritten with my custom dictionary:
Dictionary<string, int> dictOriginal = new Dictionary<string, int>();
…
if (!dictOriginal.ContainsKey(w))
    dictOriginal[w] = 1;
else
    dictOriginal[w]++;
…
To this:
CustomDictionary<string, int> dictCustom = new CustomDictionary<string, int>();
…
int pos = dictCustom.InitOrGetPosition(w);
int curr = dictCustom.GetAtPosition(pos);
dictCustom.StoreAtPosition(pos, ++curr);
…
The syntax looks much cleaner (no branches!), and hopefully we also get a shorter running time, since the hash of each key is reused instead of being computed twice.
Originally, I expected to get performance gains only with strings, because hashing an integer does not take long: an integer's hash code is the integer itself. The results were a bit different, though:
For the test, I chose 20,000 random items (strings of length 3-10, or integers) as keys and kept counting their occurrences in the dictionaries over 1,000 iterations. The results were quite shocking: the custom direct access dictionary with string keys was more than twice as fast as the default .NET 4.0 Dictionary. With integer keys I still got a whopping 40 percent performance gain, which comes mainly from reducing the number of indirections when accessing the data in the hash table.
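For reference, the string-key measurement loop was shaped roughly like this; it is a reconstruction from the description above, and the key generation, capacity and timing details are my assumptions rather than the original benchmark code:

    using System;
    using System.Diagnostics;

    class CountingBenchmark
    {
        static void Main()
        {
            // 20,000 random string keys of length 3-10 (assumed generation scheme)
            var rand = new Random(42);
            var keys = new string[20000];
            for (int i = 0; i < keys.Length; i++)
            {
                var chars = new char[rand.Next(3, 11)];
                for (int j = 0; j < chars.Length; j++)
                    chars[j] = (char)('a' + rand.Next(26));
                keys[i] = new string(chars);
            }

            var dict = new CustomDictionary<string, int>(65536);  // large enough for all distinct keys

            var sw = Stopwatch.StartNew();
            for (int iter = 0; iter < 1000; iter++)
            {
                foreach (string w in keys)
                {
                    int pos = dict.InitOrGetPosition(w);
                    dict.StoreAtPosition(pos, dict.GetAtPosition(pos) + 1);
                }
            }
            sw.Stop();

            Console.WriteLine("1,000 iterations over 20,000 keys: {0} ms", sw.ElapsedMilliseconds);
        }
    }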
Source code
The custom dictionary is open source and part of the MapReduce.NET framework that I've created. If you want to grab a copy, head to the framework's site and browse the source (MapReduce.NET\Collections\CustomDictionary.cs).
Concurrency
It is important to note that this implementation is by no means thread-safe, since we are accessing the internals of the hash table. Adding external synchronization can solve the problem, but be aware that synchronization is slow (read my article about it).
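For example, a coarse-grained external lock around the whole read-modify-write sequence would work; this is only a sketch, and the lock largely eats the performance gain described above:

    static class ThreadSafeCounting
    {
        private static readonly object Sync = new object();

        // The whole position-based update has to stay inside one lock; otherwise
        // another thread could modify or grow the table between the calls.
        public static void Count(CustomDictionary<string, int> dict, string w)
        {
            lock (Sync)
            {
                int pos = dict.InitOrGetPosition(w);
                dict.StoreAtPosition(pos, dict.GetAtPosition(pos) + 1);
            }
        }
    }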
Hello
Nice article.
But what is the license of your code?
I did not find any license text on Google code.
Thank you
Hi Chris, please feel free to use it as you want. However, I'd appreciate it if you could put a reference to me / my blog in a comment where you use it.
Nice blog. I am thinking of designing a thread safe version of the same. But I would definitely put you in the reference of the same. So you are essentially indexing the hashes. Hmmm nice approach.