C++ Core Guidelines: Sharing Data between Threads


If you want to have fun with threads, share mutable data between them. To avoid data races and, therefore, undefined behaviour, you have to think about the synchronisation of your threads.



The three rules in this post may be quite obvious for the experienced multithreading developer, but they are crucial for the novice in the multithreading domain. Here they are:

  • CP.20: Use RAII, never plain lock()/unlock()
  • CP.21: Use std::lock() or std::scoped_lock to acquire multiple mutexes
  • CP.22: Never call unknown code while holding a lock (e.g., a callback)

 Let's start with the most obvious rule.

CP.20: Use RAII, never plain lock()/unlock()

No naked mutex! Always put your mutex in a lock. The lock will automatically release (unlock) the mutex when it goes out of scope. RAII stands for Resource Acquisition Is Initialization and means that you bind the lifetime of a resource to the lifetime of a local variable. C++ automatically manages the lifetime of locals.

std::lock_guard, std::unique_lock, std::shared_lock (C++14), and std::scoped_lock (C++17) implement this pattern, but so do the smart pointers std::unique_ptr and std::shared_ptr. My previous post Garbage Collection - No Thanks explains the details of RAII.

What does this mean for your multithreading code?

std::mutex mtx;

void do_stuff(){
    mtx.lock();
    // ... do stuff ... (1)
    mtx.unlock();
}

It doesn't matter if an exception occurs in (1) or you simply forget to unlock the mtx; in both cases, you will get a deadlock if another thread wants to acquire (lock) the std::mutex mtx. The rescue is quite obvious.

std::mutex mtx;

void do_stuff(){
    std::lock_guard<std::mutex> lck {mtx};
    // ... do stuff ...
}                 // (1)


Put the mutex into a lock, and the mutex will automatically be unlocked at (1) because lck goes out of scope.

CP.21: Use std::lock() or std::scoped_lock to acquire multiple mutexes

If a thread needs more than one mutex, you have to be extremely careful to always lock the mutexes in the same order. If not, a bad interleaving of threads may cause a deadlock. The following program causes a deadlock.

// lockGuardDeadlock.cpp

#include <iostream>
#include <chrono>
#include <mutex>
#include <thread>

struct CriticalData{
  std::mutex mut;
};

void deadLock(CriticalData& a, CriticalData& b){

  std::lock_guard<std::mutex> guard1(a.mut);         // (2)
  std::cout << "Thread: " << std::this_thread::get_id() << std::endl;

  std::this_thread::sleep_for(std::chrono::milliseconds(1));

  std::lock_guard<std::mutex> guard2(b.mut);         // (2)
  std::cout << "Thread: " << std::this_thread::get_id() << std::endl;
  // do something with a and b (critical region)        (3)
}

int main(){

  std::cout << std::endl;

  CriticalData c1;
  CriticalData c2;

  std::thread t1([&]{deadLock(c1, c2);});            // (1)
  std::thread t2([&]{deadLock(c2, c1);});            // (1)

  t1.join();
  t2.join();

  std::cout << std::endl;

}

Threads t1 and t2 need two resources of type CriticalData to perform their job (3). CriticalData has its own mutex mut to synchronise access. Unfortunately, both threads invoke the function deadLock with the arguments c1 and c2 in a different order (1). Now we have a race condition. If thread t1 locks the first mutex a.mut but not the second one b.mut, because in the meantime thread t2 has locked the second one, we get a deadlock (2).


The easiest way to solve the deadlock is to lock both mutexes atomically.

With C++11, you can use a std::unique_lock together with std::lock. With std::unique_lock, you can defer the locking of its mutex. The function std::lock, which can lock an arbitrary number of mutexes in an atomic way, finally does the locking.

void deadLock(CriticalData& a, CriticalData& b){
    std::unique_lock<std::mutex> guard1(a.mut, std::defer_lock);
    std::unique_lock<std::mutex> guard2(b.mut, std::defer_lock);
    std::lock(guard1, guard2);
    // do something with a and b (critical region)
}


With C++17, a std::scoped_lock can lock an arbitrary number of mutexes in one atomic operation.

void deadLock(CriticalData& a, CriticalData& b){
    std::scoped_lock scoLock(a.mut, b.mut);
    // do something with a and b (critical region)
}


CP.22: Never call unknown code while holding a lock (e.g., a callback)

Why is this code snippet really bad?

std::mutex m;
{
    std::lock_guard<std::mutex> lockGuard(m);
    sharedVariable = unknownFunction();
}


I can only speculate about the unknownFunction. If unknownFunction

  • tries to lock the mutex m, that is undefined behaviour. Most of the time, you will get a deadlock.
  • starts a new thread that tries to lock the mutex m, you will get a deadlock.
  • locks another mutex m2, you may get a deadlock because you hold the two mutexes m and m2 at the same time. Another thread may lock the same mutexes in a different order.
  • does not directly or indirectly try to lock the mutex m, all seems to be fine. "Seems", because your coworker can modify the function, or the function may be dynamically linked and you get a different version. All bets are off as to what may happen.
  • works as expected, you may still have a performance problem because you don't know how long the function unknownFunction takes. What was meant to be a multithreaded program may become a single-threaded one.

To solve these issues, use a local variable:

std::mutex m;
auto tempVar = unknownFunction();
{
    std::lock_guard<std::mutex> lockGuard(m);
    sharedVariable = tempVar;
}


This additional indirection solves all issues. tempVar is a local variable and can, therefore, not be the victim of a data race. This means that you can invoke unknownFunction without any synchronisation mechanism. Additionally, the time for holding the lock is reduced to its bare minimum: assigning the value of tempVar to sharedVariable.

What's next?

If you don't call join or detach on your created thread child, the child's destructor will call std::terminate, which by default calls std::abort. To overcome this issue, the guidelines support library has a gsl::joining_thread, which calls join at the end of its scope. I will have a closer look at gsl::joining_thread in my next post.


Thanks a lot to my Patreon Supporters: Eric Pederson, Paul Baxter,  Sai Raghavendra Prasad Poosa, Meeting C++, Matt Braun, Avi Lachmish, Adrian Muntea, and Roman Postanciuc.



Get your e-book at leanpub:

The C++ Standard Library


Concurrency With Modern C++


Get Both as one Bundle

With C++11, C++14, and C++17, we got a lot of new C++ libraries. In addition, the existing ones have been greatly improved. The key idea of my book is to give you the necessary information about the current C++ libraries in about 200 pages.

C++11 is the first C++ standard that deals with concurrency. The story goes on with C++17 and will continue with C++20.

I'll give you a detailed insight into current and upcoming concurrency in C++. This insight includes the theory and a lot of practice with more than 100 source files.


Get my books "The C++ Standard Library" (including C++17) and "Concurrency with Modern C++" in a bundle.

In sum, you get more than 550 pages full of modern C++ and more than 100 source files presenting concurrency in practice.

