Buckets, Capacity, and Load Factor


The hash function maps a potentially infinite number of keys onto a finite number of buckets. What strategy does the C++ runtime follow, and how can you tailor it to your needs? That is what this article is all about.

 

To keep the big picture in view, let me stress one point explicitly: this article is about std::unordered_set, but the observations also hold for the other three unordered associative containers in C++11. If you want a closer look at them, read my previous post Hash tables.

Rehashing

The hash function decides which bucket a key goes to. Because the hash function reduces a potentially infinite number of keys to a finite number of buckets, different keys can end up in the same bucket. This event is called a collision. The keys in each bucket are typically stored in a singly linked list. With this knowledge you can easily reason about how fast access to a key in an unordered associative container is: applying the hash function is a constant-time operation; searching for the key in the singly linked list is a linear operation. Therefore, the goal of the hash function is to produce few collisions.
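
You can observe this distribution directly through the bucket interface. The following minimal sketch is my own illustration with an arbitrary handful of keys: hash.bucket(key) tells you into which bucket a key fell, and hash.bucket_size(i) tells you how many keys share bucket i.

// buckets.cpp

#include <cstddef>
#include <iostream>
#include <unordered_set>

int main(){

  std::unordered_set<int> hash{1, 2, 3, 42, 43, 100, 101, 1001};

  // which bucket holds the key 42?
  std::cout << "hash.bucket(42): " << hash.bucket(42) << std::endl;

  // how many keys ended up in each bucket?
  for (std::size_t i = 0; i < hash.bucket_count(); ++i){
    std::cout << "bucket " << i << ": " << hash.bucket_size(i) << " keys" << std::endl;
  }

}

Every bucket that holds more than one key is a collision; the longer its list, the longer the linear part of the lookup.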

The number of buckets is called the capacity, and the average number of elements per bucket is the load factor. By default, the C++ runtime creates new buckets if the load factor goes beyond 1. This process is called rehashing, and you can trigger it explicitly by setting the capacity of the unordered associative container to a higher value. Once more, the new terms in a nutshell:

  • Capacity: Number of buckets
  • Load factor: Average number of elements (keys) per bucket
  • Rehashing: Creation of new buckets

You can read and adjust these characteristics.
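
Before the complete program below, a minimal sketch (my own, with arbitrary keys) makes the automatic rehashing visible: it inserts keys one by one and reports whenever the bucket count changes, which happens as soon as an insertion would push the load factor above max_load_factor(). The concrete numbers depend on your standard library implementation.

// watchRehash.cpp

#include <iostream>
#include <unordered_set>

int main(){

  std::unordered_set<int> hash;

  auto buckets = hash.bucket_count();
  std::cout << "start: " << buckets << " buckets" << std::endl;

  for (int key = 0; key < 100; ++key){
    hash.insert(key);
    // a change of the bucket count means a rehashing took place
    if (hash.bucket_count() != buckets){
      buckets = hash.bucket_count();
      std::cout << hash.size() << " keys -> " << buckets
                << " buckets, load factor " << hash.load_factor() << std::endl;
    }
  }

}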

 

// rehash.cpp

#include <iostream>
#include <random>
#include <unordered_set>

void getInfo(const std::unordered_set<int>& hash){
  std::cout << "hash.bucket_count(): " << hash.bucket_count() << std::endl;
  std::cout << "hash.load_factor(): " << hash.load_factor() << std::endl;
}

void fillHash(std::unordered_set<int>& h,int n){
  std::random_device seed;
  // Mersenne Twister generator
  std::mt19937 engine(seed());
  // get random numbers 0 - 1000
  std::uniform_int_distribution<> uniformDist(0,1000);

  for ( int i=1; i<= n; ++i){
    h.insert(uniformDist(engine));
  }
}

int main(){

  std::cout << std::endl;

  std::unordered_set<int> hash;
  std::cout << "hash.max_load_factor(): " << hash.max_load_factor() << std::endl;

  std::cout << std::endl;

  getInfo(hash);

  std::cout << std::endl;

  // make sure the key 500 is in the hash table
  hash.insert(500);
  // get the bucket of 500
  std::cout << "hash.bucket(500): " << hash.bucket(500) << std::endl;

  std::cout << std::endl;

  // add 100 elements
  fillHash(hash,100);
  getInfo(hash);

  std::cout << std::endl;

  // at least 500 buckets
  std::cout << "hash.rehash(500): " << std::endl;
  hash.rehash(500);

  std::cout << std::endl;

  getInfo(hash);

  std::cout << std::endl;

  // get the bucket of 500
  std::cout << "hash.bucket(500): " << hash.bucket(500) << std::endl;

  std::cout << std::endl;

}

 

First to the helper functions getInfo and fillHash. The function getInfo displays, for a given std::unordered_set, its number of buckets and its load factor. The function fillHash fills the unordered associative container with n randomly generated integers.

It is interesting to compare the execution of the program on Linux and Windows. On Linux I used GCC, on Windows the cl.exe compiler. I compiled the program without optimization.

 

[Screenshot: output of rehash.cpp on Linux]

First, I query the maximum load factor of the empty container via hash.max_load_factor(). If the std::unordered_set exceeds its maximum load factor, a rehashing takes place. The maximum load factor is 1. GCC initially starts with 11 buckets, Windows with 8. Of course, the load factor is 0. If I put the key 500 into the hash table, it goes to bucket 5 on Linux and to bucket 6 on Windows. The call fillHash(hash,100) then adds 100 keys. Afterwards Linux has 97 buckets and Windows 512. Linux has fewer than 100 buckets because a few of the random keys were identical. The load factor on Linux is now close to 1, on Windows about 0.2. Therefore, I can put a lot more elements into the Windows hash table before a rehashing becomes necessary. I trigger the rehashing on Linux by requesting at least 500 buckets with hash.rehash(500). Here you see the difference: on Linux I get new buckets and the keys must be redistributed; on Windows nothing happens, because 512 is already bigger than 500. At the end, the key 500 sits in a different bucket on Linux, where the rehashing actually took place.

 

[Screenshot: output of rehash.cpp on Windows]
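
As mentioned above, a few of the 100 random keys are identical, and a std::unordered_set silently ignores an insert of a key it already contains; that is why fewer than 100 new elements arrive. A small sketch of my own shows how to detect this via the bool in the return value of insert:

// duplicates.cpp

#include <iostream>
#include <unordered_set>

int main(){

  std::unordered_set<int> hash;

  auto res1 = hash.insert(500);   // new key: inserted
  auto res2 = hash.insert(500);   // duplicate: rejected

  std::cout << std::boolalpha;
  std::cout << "first insert:  " << res1.second << std::endl;   // true
  std::cout << "second insert: " << res2.second << std::endl;   // false
  std::cout << "hash.size(): " << hash.size() << std::endl;     // 1

}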

It's difficult to draw conclusions from the observed behaviour on Linux and Windows. In the unoptimized case it seems that Linux optimizes for memory consumption and Windows for performance. Still, I want to give one piece of advice for the usage of unordered associative containers.

My rule of thumb

If you know how large your unordered associative container will become, start with a reasonable number of buckets; that way you spare yourself a lot of expensive rehashings, since each rehashing involves memory allocation and the redistribution of all keys. The question is: what is reasonable? My rule of thumb is to start with a bucket count roughly equal to the number of keys, so that your load factor stays close to 1.
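
Here is a minimal sketch of that rule of thumb, assuming you expect roughly 100000 keys; the number is only an example. You can either hand the desired bucket count to the constructor or call reserve, which picks a bucket count large enough to hold the given number of elements without exceeding max_load_factor().

// presize.cpp

#include <cstddef>
#include <unordered_set>

int main(){

  // expected number of keys (assumption for this example)
  const std::size_t expected = 100000;

  // variant 1: pass the bucket count to the constructor
  std::unordered_set<int> hash1(expected);

  // variant 2: reserve room for the expected number of elements
  std::unordered_set<int> hash2;
  hash2.reserve(expected);

  // ... fill hash1 and hash2; as long as they hold at most about
  // 'expected' unique keys, no rehashing is necessary

}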

I'm looking forward to a discussion about my rule of thumb. If possible, I will add the results to this post.

What's next?

POD stands for Plain Old Data. PODs are data types that have a C standard layout; therefore, you can manipulate them directly with the C functions memcpy, memmove, memcmp, or memset. With C++11, even instances of classes can be PODs. In the next post I will describe which requirements a class must fulfil for that.

Comments   

#1 sergio_nsk, 2017-04-15:
On my Ubuntu 16.04, g++ 5.4.0 initially starts the unordered_set with 1 bucket only. The initial element 500 goes to that only bucket; hash.bucket(500) returns 0.
After inserting 100 random elements (96 or more unique), hash.bucket_count() returns 199.
After inserting 100 random elements (95 or fewer unique), hash.bucket_count() returns 97.
After rehashing for at least 500 buckets, hash.bucket_count() returns 503 and the element 500 is in bucket 500.

On my CentOS 7 with g++ 4.8.5 the results are like yours.