Transactional memory is based on the idea of a transaction from database theory. Transactional memory should make the handling of threads a lot easier, for two reasons: data races and deadlocks disappear, and transactions are composable.
A transaction is an action that has the properties Atomicity, Consistency, Isolation, and Durability (ACID). Except for durability, all properties hold for transactional memory in C++; therefore, only three short questions remain.
ACI(D)
What do atomicity, consistency, and isolation mean for an atomic block consisting of a few statements?
atomic{
  statement1;
  statement2;
  statement3;
}
- Atomicity: Either all or no statement of the block is performed.
- Consistency: The system is always in a consistent state. All transactions build a total order.
- Isolation: Each transaction runs in total isolation from the other transactions.
How are these properties guaranteed? A transaction remembers its initial state. Then the transaction is performed without synchronisation. If a conflict happens during its execution, the transaction is interrupted and rolled back to its initial state. This rollback causes the transaction to be executed once more. If the initial state of the transaction still holds at the end of the transaction, the transaction is committed.
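The same optimistic remember-validate-retry idea can be sketched with a plain compare-and-swap loop. The following snippet is not transactional memory; it only mimics the pattern with std::atomic, and the names are my own.

// optimisticIncrement.cpp (illustrative sketch, not transactional memory)
#include <atomic>

std::atomic<int> value{0};

void optimisticIncrement(){
  int expected = value.load();                              // remember the initial state
  int desired = expected + 1;                               // do the work speculatively
  while (!value.compare_exchange_weak(expected, desired)){  // commit only if the initial state still holds
    desired = expected + 1;                                 // otherwise redo the work with the new state
  }
}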
A transaction is a kind of speculative action that is only committed if the initial state still holds. In contrast to a mutex, it is an optimistic approach: the transaction is performed without synchronisation and is only published if no conflict with its initial state has happened. A mutex is a pessimistic approach. At first, the mutex ensures that no other thread can enter the critical section. A thread enters the critical section only if it is the exclusive owner of the mutex; hence, all other threads are blocked.
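For comparison, here is a minimal sketch of the pessimistic, mutex-based counterpart; the names are again my own.

// pessimisticIncrement.cpp (illustrative sketch)
#include <mutex>

int value = 0;
std::mutex valueMutex;

void pessimisticIncrement(){
  std::lock_guard<std::mutex> guard(valueMutex);  // block all other threads first
  ++value;                                        // then do the work exclusively
}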
C++ supports transactional memory in two flavours: synchronised blocks and atomic blocks.
Transactional Memory
Up to now, I have only written about transactions in general. Now, I will write more specifically about synchronised blocks and atomic blocks, which can be nested in each other. To be precise, synchronised blocks are not atomic blocks because they can execute transaction-unsafe code. Transaction-unsafe code is, for example, output to the console, which cannot be undone. This is the reason why synchronised blocks are often called relaxed.
Synchronised Blocks
Synchronised blocks behave as if they were protected by a global lock. This means all synchronised blocks obey a total order; therefore, all changes made in a synchronised block are visible in the next synchronised block. There is a synchronizes-with relation between the synchronised blocks. Because synchronised blocks behave as if protected by one global lock, they cannot cause a deadlock. While a classical lock protects a memory area from specific threads, the global lock of a synchronised block protects against all threads. That is the reason why the following program is well-defined:
// synchronized.cpp

#include <iostream>
#include <vector>
#include <thread>

int i = 0;

void increment(){
  synchronized{
    std::cout << ++i << " ,";
  }
}

int main(){

  std::cout << std::endl;

  std::vector<std::thread> vecSyn(10);

  for(auto& thr: vecSyn)
    thr = std::thread([]{ for(int n = 0; n < 10; ++n) increment(); });

  for(auto& thr: vecSyn) thr.join();

  std::cout << "\n\n";

}
Although the variable i is a global variable and the operations in the synchronised block are transaction-unsafe, the program is well-defined. The access to i and to std::cout happens in a total order. That is due to the synchronised block.
The output of the program is not so thrilling: the values of i are written in increasing order, separated by a comma. I show it only for the sake of completeness.

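To get a feeling for the as-if-by-a-global-lock semantics, here is my approximation of the same program with one global std::mutex instead of synchronised blocks. It is only a sketch of the semantics, not a drop-in replacement.

// globalLock.cpp (sketch: one global mutex approximating synchronised blocks)
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

int i = 0;
std::mutex globalMutex;    // stands in for the one global lock

void increment(){
  std::lock_guard<std::mutex> guard(globalMutex);
  std::cout << ++i << " ,";
}

int main(){
  std::vector<std::thread> vecSyn(10);
  for(auto& thr: vecSyn)
    thr = std::thread([]{ for(int n = 0; n < 10; ++n) increment(); });
  for(auto& thr: vecSyn) thr.join();
  std::cout << "\n\n";
}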
What about data races? You can have them with synchronised blocks. Only a small modification is necessary.
// nonsynchronized.cpp

#include <chrono>
#include <iostream>
#include <vector>
#include <thread>

using namespace std::chrono_literals;

int i = 0;

void increment(){
  synchronized{
    std::cout << ++i << " ,";
    std::this_thread::sleep_for(1ns);
  }
}

int main(){

  std::cout << std::endl;

  std::vector<std::thread> vecSyn(10);
  std::vector<std::thread> vecUnsyn(10);

  for(auto& thr: vecSyn)
    thr = std::thread([]{ for(int n = 0; n < 10; ++n) increment(); });

  for(auto& thr: vecUnsyn)
    thr = std::thread([]{ for(int n = 0; n < 10; ++n) std::cout << ++i << " ,"; });

  for(auto& thr: vecSyn) thr.join();
  for(auto& thr: vecUnsyn) thr.join();

  std::cout << "\n\n";

}
To make the data race observable, I let the synchronised block sleep for a nanosecond after the increment. At the same time, I launch ten additional threads (vecUnsyn) that increment the global variable i and write to std::cout without using a synchronised block. The output shows the issue.

I put red circles around the issues in the output. These are the spots at which std::cout is used by at least two threads at the same time. The C++11 standard guarantees that the characters are written atomically, so this is only an optical issue. What is worse is that the variable i is written by at least two threads at the same time. This is a data race; therefore, the program has undefined behaviour. If you look carefully at the output, you can see that 103 is written twice.
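One way to repair the unsynchronised part, sketched below, is to make i a std::atomic<int>; the increments are then free of data races, although the console output can still interleave. This is my suggestion, not part of the original program.

// atomicCounter.cpp (sketch: removing the data race on i with std::atomic)
#include <atomic>
#include <iostream>
#include <thread>
#include <vector>

std::atomic<int> i{0};   // atomic increments instead of a plain int

int main(){
  std::vector<std::thread> vecUnsyn(10);
  for(auto& thr: vecUnsyn)    // no data race on i; the output may still interleave
    thr = std::thread([]{ for(int n = 0; n < 10; ++n) std::cout << ++i << " ,"; });
  for(auto& thr: vecUnsyn) thr.join();
  std::cout << "\n\n";
}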
The total order of synchronised blocks also holds for atomic blocks.
Atomic Blocks
You can execute transaction-unsafe code in a synchronised block but not in an atomic block. Atomic blocks are available in three forms: atomic_noexcept, atomic_commit, and atomic_cancel. The three suffixes _noexcept, _commit, and _cancel define how an atomic block handles an exception.
- atomic_noexcept: If an exception is thrown, std::abort is called and the program aborts.
- atomic_cancel: In the default case, std::abort is called as well. That does not hold if a transaction-safe exception is thrown that ends the transaction. In this case, the transaction is cancelled, rolled back to its initial state, and the exception is thrown.
- atomic_commit: If an exception is thrown, the transaction is committed normally.
Transaction-safe exceptions: std::bad_alloc, std::bad_array_length, std::bad_array_new_length, std::bad_cast, std::bad_typeid, std::bad_exception, std::exception, and all exceptions that are derived from them are transaction-safe.
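As a rough sketch of the syntax, an atomic block could look as follows. To my knowledge, the keywords of the Transactional Memory TS are not implemented by the mainstream compilers yet, so treat this as illustrative pseudocode rather than something you can compile today.

// sketch only: needs a compiler that implements the Transactional Memory TS
int account = 100;

void deposit(int amount){
  atomic_noexcept{        // only transaction-safe code allowed; an exception calls std::abort
    account += amount;    // executed atomically and in isolation
  }
}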
transaction_safe versus transaction_unsafe Code
You can declare a function as transaction_safe or attach the transaction_unsafe attribute to it.
int transactionSafeFunction() transaction_safe;
[[transaction_unsafe]] int transactionUnsafeFunction();
transaction_safe is part of the type of a function. But what does transaction_safe mean? According to the proposal N4265, a function is transaction_safe if it has a transaction_safe definition. This holds true if none of the following properties applies to its definition:
- It has a volatile parameter or a volatile variable.
- It contains transaction-unsafe statements.
- It uses, in its body, a constructor or destructor of a class that has a volatile non-static member.
Of course, this definition of transaction_safe is not sufficient because it relies on the term transaction-unsafe. You can read the proposal N4265 to find out what transaction-unsafe means.
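To illustrate the distinction, here is a small sketch that matches the declarations above; the keywords come from the Transactional Memory TS, so, as before, the example is illustrative rather than compilable on mainstream compilers.

// sketch only: Transactional Memory TS syntax
#include <iostream>

int counter = 0;

int increment() transaction_safe {             // transaction-safe: only touches ordinary memory
  return ++counter;
}

[[transaction_unsafe]] void log(int value){    // transaction-unsafe: console output
  std::cout << "counter: " << value << '\n';   // cannot be rolled back
}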
What's next?
The next post is about the fork-join paradigm. To be specific, it's about task blocks.
Comments
If you have a synchronised block, it will be mapped to the transactional memory facilities of the CPU. There are currently two such facilities: one in Intel CPUs since Skylake and one in IBM POWER CPUs. Both implementations maintain a read- and write-set for each thread in a core in the L1 cache, and if another thread writes to either of these, the transaction is stopped and rolled back on all cores that participate in this transaction. Another reason could be a system call, such as when your cout calls the kernel for the console output. Since the L1 cache is rather small, it is very likely that either your heavyweight cout or the system call will roll back the output and cause all participating threads to execute the transaction in exclusive mode. So the solution would be to keep the increment in the synchronised block and to do the cout output outside.