Blocking and Non-Blocking Algorithms


Blocking, non-blocking, lock-free, and wait-free: each of these terms describes a key characteristic of an algorithm executed in a concurrent environment. Reasoning about the runtime behaviour of your program therefore often means putting your algorithm into the right bucket. So this post is about buckets.

 

An algorithm falls into one of two buckets: blocking or non-blocking.

 


Let's first talk about blocking.

Blocking

Intuitively, it is quite clear what blocking means for an algorithm. But concurrency is not about intuition; it is about precise terms. The easiest way to define blocking is to define it with the help of non-blocking.

  • Non-blocking: An algorithm is called non-blocking if failure or suspension of any thread cannot cause failure or suspension of another thread. (Java Concurrency in Practice)

There is no word about locking in this definition. That's right: non-blocking is a broader term.

Blocking a program is quite easy. The typical way is to use more than one mutex and lock them in a different order. Add some bad timing and you have a deadlock. But there are many more ways to produce blocking behaviour.

Each time you have to wait for a resource, a block is possible.

Here are a few examples of synchronising access to a resource:

  • A condition variable with wait.
  • A future with wait or get.

Even the join call of a thread can be used to block a thread.

// deadlockWait.cpp

#include <iostream>
#include <mutex>
#include <string>
#include <thread>

std::mutex coutMutex;

int main(){

  std::thread t([]{
    std::cout << "Still waiting ..." << std::endl;            // 2
    std::lock_guard<std::mutex> lockGuard(coutMutex);         // 3
    std::cout << "child: " << std::this_thread::get_id() << std::endl;}
  );

  {

    std::lock_guard<std::mutex> lockGuard(coutMutex);          // 1
    std::cout << "creator: " << std::this_thread::get_id() << std::endl;

    t.join();                                                  // 5

  }                                                            // 4

}

 

The program blocks immediately.

[Screenshot: output of deadlockWait.cpp]

What is happening? The creator thread locks the mutex in (1). Now the child thread executes (2). To acquire the mutex in expression (3), the creator thread first has to unlock it. But the creator thread only unlocks the mutex when the lockGuard from (1) goes out of scope in (4), and that point is never reached: in (5) the creator thread waits for the child thread, which in turn first has to lock the mutex coutMutex. Deadlock.

Let's have a look at the non-blocking algorithms.

Non-blocking


The main categories of non-blocking algorithms are lock-free and wait-free. Each wait-free algorithm is lock-free, and each lock-free algorithm is non-blocking; but non-blocking and lock-free are not the same. There is an additional guarantee, called obstruction-free, which I will ignore in this post because it is not so relevant.

Non-blocking algorithms are typically implemented with CAS instructions. CAS stands for compare and swap. CAS is called compare_exchange_strong or compare_exchange_weak in C++.

In this post, I will only refer to the strong version. For more information, read my previous post The Atomic Boolean. The key idea of both operations is that a call atomicValue.compare_exchange_strong(expected, desired) obeys the following rules in an atomic fashion:

  • If the atomic comparison of atomicValue with expected returns true, atomicValue will be set to desired in the same atomic operation.
  • If the comparison returns false, expected will be set to the current value of atomicValue.

Let's now have a closer look at lock-free versus wait-free.

First, the definitions of lock-free and wait-free. Both definitions are quite similar, so it makes sense to define them together.

  • Lock-free: A non-blocking algorithm is lock-free if there is guaranteed system-wide progress.
  • Wait-free: A non-blocking algorithm is wait-free if there is guaranteed per-thread progress.

Lock-free

 

// fetch_mult.cpp

#include <atomic>
#include <iostream>

template <typename T>
T fetch_mult(std::atomic<T>& shared, T mult){                          // 1
  T oldValue = shared.load();                                          // 2
  while (!shared.compare_exchange_strong(oldValue, oldValue * mult));  // 3
  return oldValue;
}

int main(){
  std::atomic<int> myInt{5};
  std::cout << myInt << std::endl;          
  fetch_mult(myInt,5);
  std::cout << myInt << std::endl;         
}

 

The algorithm fetch_mult (1) multiplies a std::atomic<T> shared by mult. The key observation is that there is a small time window between the reading of the old value T oldValue = shared.load() (2) and the comparison with the new value (3). Therefore, another thread can always kick in and change oldValue. If you reason about such a bad interleaving of threads, you see that there can be no per-thread progress guarantee.

Therefore, the algorithm is lock-free, but not wait-free.

Here is the output of the program.

[Screenshot: the program displays 5 and then 25.]

 

While a lock-free algorithm guarantees system-wide progress, a wait-free algorithm guarantees per-thread progress.

Wait-free

If you reason about the lock-free algorithm in the last example, you will see that a compare_exchange_strong call involves synchronisation. First you read the old value, and then you publish the new value if the initial condition still holds. If it does not hold, you try once more by putting the call in a while loop. Therefore, compare_exchange_strong behaves like an atomic transaction.

The key part of the next program needs no synchronisation.

 

// relaxed.cpp

#include <vector>
#include <iostream>
#include <thread>
#include <atomic>
 
std::atomic<int> cnt = {0};
 
void add(){                                           // 1
    for (int n = 0; n < 1000; ++n) {
        cnt.fetch_add(1, std::memory_order_relaxed);  // 2
    }
}
 
int main()
{
    std::vector<std::thread> v;
    for (int n = 0; n < 10; ++n) {
        v.emplace_back(add);
    }
    for (auto& t : v) {
        t.join();
    }
    std::cout << "Final counter value is " << cnt << '\n';
}

 

Have a closer look at the function add (1). There is no synchronisation involved in expression (2); the value 1 is just added to the atomic cnt.

And here is the output of the program. We always get 10000, because 10 threads increment the value 1000 times.

[Screenshot: Final counter value is 10000]

For simplicity reasons, I ignored a few other guarantees in this post, such as starvation-free as a subset of blocking, or wait-free bounded as a subset of wait-free. You can read the details at the blog Concurrency Freaks.

What's next?

In the next post, I will write about a curiosity: the so-called ABA problem, which is a kind of false positive for CAS instructions. That means that although it seems the old value of a CAS instruction is still the same, it has changed in the meantime.

 

 
