Parallel Algorithms of the Standard Template Library


The idea is quite simple. The Standard Template Library has more than 100 algorithms for searching, counting, and manipulating ranges and their elements. With C++17, 69 of them are overloaded and a few new ones are added. The overloaded and new algorithms can be invoked with a so-called execution policy. By using the execution policy, you can specify whether the algorithm should run sequentially, in parallel, or in parallel and vectorized.

 

A first example

Vectorization stands for the SIMD (Single Instruction, Multiple Data) extensions of the instruction set of a modern processor. SIMD enables your processor to execute one operation in parallel on several data elements.

Which overloaded variant of an algorithm is used can be chosen by the policy tag. How does that work?

std::vector<int> v = ...

// standard sequential sort
std::sort(v.begin(), v.end());

// sequential execution
std::sort(std::execution::seq, v.begin(), v.end());

// permitting parallel execution
std::sort(std::execution::par, v.begin(), v.end());

// permitting parallel and vectorized execution
std::sort(std::execution::par_unseq, v.begin(), v.end());

 

The example shows that you can still use the classic variant of std::sort (the call without an execution policy). In contrast, with C++17 you explicitly specify whether the sequential (std::execution::seq), parallel (std::execution::par), or parallel and vectorized (std::execution::par_unseq) version is used.
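
For completeness, here is a minimal, self-contained sketch of the parallel sort. The execution policies live in the header <execution>; depending on your toolchain, you may additionally have to link a parallel backend (GCC's implementation, for example, relies on Intel TBB):

#include <algorithm>
#include <execution>
#include <iostream>
#include <vector>

int main(){
  std::vector<int> v{7, 3, 9, 1, 4, 8, 2, 6};

  // permitting parallel execution of the sort
  std::sort(std::execution::par, v.begin(), v.end());

  for (int i: v) std::cout << i << " ";   // 1 2 3 4 6 7 8 9
  std::cout << std::endl;
}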

You have to keep two points in mind.

Two special points

On the one hand, an algorithm will not necessarily be executed in parallel and vectorized just because you use the execution policy std::execution::par_unseq. On the other hand, you as the user are responsible for using the algorithm correctly.

Parallel and vectorized execution

Whether an algorithm runs in a parallel and vectorized way depends on many factors. It depends on whether the CPU and the operating system support SIMD instructions. Additionally, it's a question of the compiler and the optimisation level that you used to translate your code.

 

const int SIZE = 8;

int vec[] = {1, 2, 3, 4, 5, 6, 7, 8};
int res[SIZE] = {0};

int main(){
  for (int i = 0; i < SIZE; ++i) {
    res[i] = vec[i] + 5;
  }
}

 

The loop body res[i] = vec[i] + 5 is the key line of the small program. Thanks to the Compiler Explorer https://godbolt.org/ , it is quite easy to generate the assembler instructions for clang 3.6 with and without maximum optimisation (-O3).

Without Optimisation

Although my time fiddling with assembler instructions is long gone, it's quite obvious that everything is done sequentially.

[Screenshot: assembler instructions without optimisation, sequential]

With maximum optimisation

By using maximum optimisation, I get instructions that run in parallel on several data elements.

[Screenshot: assembler instructions with -O3, vectorized]

The move operation (movdqa) and the add operation (paddd) use the special registers xmm0 and xmm1. Both registers are so-called SSE registers and are 128 bits wide. SSE stands for Streaming SIMD Extensions.

Hazards of data races and deadlocks

The parallel algorithms do not automatically protect you from data races and deadlocks. An example?

 

int numComp = 0;

std::vector<int> vec = {1, 3, 8, 9, 10};

std::sort(std::execution::par, vec.begin(), vec.end(),
          [&numComp](int fir, int sec){ numComp++; return fir < sec; });
         

 

The small code snippet has a data race: numComp counts the number of comparisons and is therefore modified concurrently in the lambda function. In order for the code to be well-defined, numComp has to be an atomic variable.
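
A minimal fix is to make the counter atomic. The relaxed memory ordering in this sketch is my choice; it is sufficient here because numComp is only a counter and no other data depends on it:

std::atomic<int> numComp{0};

std::vector<int> vec = {1, 3, 8, 9, 10};

std::sort(std::execution::par, vec.begin(), vec.end(),
          [&numComp](int fir, int sec){
            numComp.fetch_add(1, std::memory_order_relaxed);  // well-defined concurrent increment
            return fir < sec;
          });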

Static versus dynamic execution policy

Sorry to say, but the dynamic execution policy did not make it into the C++17 standard. We may have to wait for a later standard.

The creation of a thread is an expensive operation. Therefore, it makes no sense to sort a small container in a parallel (std::execution::par) or parallel and vectorized (std::execution::par_unseq) fashion: the administrative overhead of dealing with the threads outweighs the benefit of parallelisation. It gets even worse if you use a divide-and-conquer algorithm such as quicksort, because the subranges near the leaves of the recursion become very small.

 

template <class ForwardIt>
void quicksort(ForwardIt first, ForwardIt last){
  if (first == last) return;
  auto pivot = *std::next(first, std::distance(first, last) / 2);
  ForwardIt middle1 = std::partition(std::execution::par, first, last,
                      [pivot](const auto& em){ return em < pivot; });
  ForwardIt middle2 = std::partition(std::execution::par, middle1, last,
                      [pivot](const auto& em){ return !(pivot < em); });
  quicksort(first, middle1);
  quicksort(middle2, last);
}

 

Now the issue is that each recursion level spawns new parallel work, so the number of threads quickly becomes way too big for your system, and the subranges become too small to benefit. To solve this issue, a dynamic execution policy was proposed.

std::size_t threshold = ...;  // some value

template <class ForwardIt>
void quicksort(ForwardIt first, ForwardIt last){
  if (first == last) return;
  std::size_t distance = std::distance(first, last);
  auto pivot = *std::next(first, distance / 2);

  // dynamic execution policy as proposed; not part of C++17
  std::execution_policy exec_pol = std::execution::par;
  if (distance < threshold) exec_pol = std::execution::seq;

  ForwardIt middle1 = std::partition(exec_pol, first, last,
                      [pivot](const auto& em){ return em < pivot; });
  ForwardIt middle2 = std::partition(exec_pol, middle1, last,
                      [pivot](const auto& em){ return !(pivot < em); });
  quicksort(first, middle1);
  quicksort(middle2, last);
}

 

The two lines defining and adjusting exec_pol use the dynamic execution policy. By default, quicksort will run in parallel. If the length of the range is smaller than the given threshold, quicksort will run sequentially.
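
Since the dynamic execution policy is not part of C++17 and the standardised policies are distinct types, the same effect needs an explicit dispatch in conforming C++17. The following sketch is my workaround; the threshold value is a tuning parameter you have to determine for your system:

#include <algorithm>
#include <cstddef>
#include <execution>
#include <iterator>

constexpr std::size_t threshold = 10000;  // tuning parameter; determine by measuring

template <class ForwardIt>
void quicksort(ForwardIt first, ForwardIt last){
  if (first == last) return;
  const auto distance = std::distance(first, last);
  const auto pivot = *std::next(first, distance / 2);

  auto lessThanPivot   = [pivot](const auto& em){ return em < pivot; };
  auto notGreaterPivot = [pivot](const auto& em){ return !(pivot < em); };

  ForwardIt middle1, middle2;
  if (static_cast<std::size_t>(distance) < threshold){
    // small range: run sequentially to avoid the threading overhead
    middle1 = std::partition(std::execution::seq, first, last, lessThanPivot);
    middle2 = std::partition(std::execution::seq, middle1, last, notGreaterPivot);
  } else {
    middle1 = std::partition(std::execution::par, first, last, lessThanPivot);
    middle2 = std::partition(std::execution::par, middle1, last, notGreaterPivot);
  }
  quicksort(first, middle1);
  quicksort(middle2, last);
}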

All algorithms

69 of the algorithms of the STL support a parallel or a parallel and vectorized execution. Here they are.

[Table: the 69 STL algorithms that received parallel overloads]

In addition, we get 8 new algorithms.

New algorithms

The new variation of std::for_each and the new algorithms std::for_each_n, std::exclusive_scan, std::inclusive_scan, std::transform_exclusive_scan, and std::transform_inclusive_scan are in the std namespace. The same holds for std::reduce and std::transform_reduce; both live in the header <numeric>.

std::for_each
std::for_each_n
std::exclusive_scan
std::inclusive_scan
std::transform_exclusive_scan
std::transform_inclusive_scan

std::reduce
std::transform_reduce
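
As a small taste of the new algorithms, here is a sketch of std::reduce and std::inclusive_scan; the input values are just example data:

#include <execution>
#include <iostream>
#include <numeric>
#include <vector>

int main(){
  std::vector<int> v{1, 2, 3, 4, 5};

  // reduce: like std::accumulate, but the summation order is unspecified,
  // which is exactly what permits the parallel execution
  int sum = std::reduce(std::execution::par, v.begin(), v.end(), 0);

  // inclusive_scan: running total including the current element
  std::vector<int> scan(v.size());
  std::inclusive_scan(std::execution::par, v.begin(), v.end(), scan.begin());

  std::cout << sum << std::endl;             // 15
  for (int i: scan) std::cout << i << " ";   // 1 3 6 10 15
  std::cout << std::endl;
}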

Let's have a closer look at std::transform_reduce.

transform becomes map

The function map, well known from Haskell, is called std::transform in C++. If that is not a broad hint: when I substitute transform with map in the name std::transform_reduce, I get std::map_reduce. MapReduce is the well-known parallel framework that, in its first phase, maps each value to a new value and, in its second phase, reduces all values to the result.

This two-phase strategy is directly applicable in C++17. Of course, my algorithm will not run on a big, distributed system, but the strategy is the same. In the map phase, I map each word to its length; in the reduce phase, I reduce the lengths of all words to their sum. The result is the sum of the lengths of all words.

 

std::vector<std::string> str{"Only", "for", "testing", "purpose"};

std::size_t result = std::transform_reduce(std::execution::par,
                         str.begin(), str.end(),
                         std::size_t{0},                                     // initial value
                         [](std::size_t a, std::size_t b){ return a + b; },  // reduce
                         [](const std::string& s){ return s.length(); });    // map

std::cout << result << std::endl;      //   21

 

The std::size_t{0} is the initial value for the reduction; its type also determines the type in which the lengths are accumulated.

What's next?

With the next post, I will go three years further into the future. In C++20, we get atomic smart pointers. In accordance with their C++11 pendants, they are called std::atomic_shared_ptr and std::atomic_weak_ptr.


Tags: C++17
