Asynchronous Function Calls


std::async feels like an asynchronous function call. Under the hood, std::async is a task, and one that is extremely easy to use.

std::async

std::async gets a callable as its work package. In this example, that is a function, a function object, or a lambda function.

// async.cpp

#include <future>
#include <iostream>
#include <string>

std::string helloFunction(const std::string& s){
  return "Hello C++11 from " + s + ".";
}

class HelloFunctionObject{
  public:
    std::string operator()(const std::string& s) const {
      return "Hello C++11 from " + s + ".";
    }
};

int main(){

  std::cout << std::endl;

  // future with function
  auto futureFunction= std::async(helloFunction,"function");

  // future with function object
  HelloFunctionObject helloFunctionObject;
  auto futureFunctionObject= std::async(helloFunctionObject,"function object");

  // future with lambda function
  auto futureLambda= std::async([](const std::string& s ){return "Hello C++11 from " + s + ".";},"lambda function");

  std::cout << futureFunction.get() << "\n" 
	    << futureFunctionObject.get() << "\n" 
	    << futureLambda.get() << std::endl;

  std::cout << std::endl;

}

 

The program execution is not so exciting.

(screenshot: output of async.cpp)

std::async gets a function, a function object, and a lambda function as its work package. At the end, each future requests its value with the get call.

And once more, a little more formally: each std::async call creates a data channel between the two endpoints future and promise. The promise immediately starts to execute its work package, but that is only the default behaviour. With the get call, the future requests the result of its work package.

Eager or lazy evaluation

Eager and lazy evaluation are two orthogonal strategies for calculating the result of an expression. With eager evaluation, the expression is evaluated immediately; with lazy evaluation, the expression is evaluated only if needed. Lazy evaluation is often called call-by-need. With lazy evaluation you save time and compute power, because nothing is evaluated on suspicion. An expression can be a mathematical calculation, a function, or a std::async call.

By default, std::async executes its work package immediately. The C++ runtime decides whether the calculation happens in the same or a new thread. With the flag std::launch::async, std::async will run its work package in a new thread. In contrast, the flag std::launch::deferred expresses that std::async runs in the same thread; the execution in this case is lazy. That means the eager evaluation starts immediately, but the lazy evaluation with the policy std::launch::deferred starts only when the future asks for the value with its get call.

The following program shows the different behaviour.

// asyncLazy.cpp

#include <chrono>
#include <future>
#include <iostream>
#include <thread>

int main(){

  std::cout << std::endl;

  auto begin= std::chrono::system_clock::now();

  auto asyncLazy=std::async(std::launch::deferred,[]{ return  std::chrono::system_clock::now();});

  auto asyncEager=std::async( std::launch::async,[]{ return  std::chrono::system_clock::now();});

  std::this_thread::sleep_for(std::chrono::seconds(1));

  auto lazyStart= asyncLazy.get() - begin;
  auto eagerStart= asyncEager.get() - begin;

  auto lazyDuration= std::chrono::duration<double>(lazyStart).count();
  auto eagerDuration=  std::chrono::duration<double>(eagerStart).count();

  std::cout << "asyncLazy evaluated after : " << lazyDuration << " seconds." << std::endl;
  std::cout << "asyncEager evaluated after: " << eagerDuration << " seconds." << std::endl;

  std::cout << std::endl;

}

 

Both std::async calls return the current time point, but the first call is lazy and the second eager. The short sleep of one second makes that obvious: by the call asyncLazy.get(), the result only becomes available after the short nap. That is not true for asyncEager; asyncEager.get() gets the result from the immediately executed work package.

(screenshot: output of asyncLazy.cpp)

A bigger compute job

std::async is quite convenient for putting a bigger compute job on more shoulders. So, in the following program, the calculation of the scalar product is done with four asynchronous function calls.

// dotProductAsync.cpp

#include <chrono>
#include <iostream>
#include <future>
#include <random>
#include <vector>
#include <numeric>

static const int NUM= 100000000;

long long getDotProduct(std::vector<int>& v, std::vector<int>& w){

  auto future1= std::async([&]{return std::inner_product(v.data(), v.data() + v.size()/4, w.data(), 0LL);});
  auto future2= std::async([&]{return std::inner_product(v.data() + v.size()/4, v.data() + v.size()/2, w.data() + v.size()/4, 0LL);});
  auto future3= std::async([&]{return std::inner_product(v.data() + v.size()/2, v.data() + v.size()*3/4, w.data() + v.size()/2, 0LL);});
  auto future4= std::async([&]{return std::inner_product(v.data() + v.size()*3/4, v.data() + v.size(), w.data() + v.size()*3/4, 0LL);});

  return future1.get() + future2.get() + future3.get() + future4.get();
}


int main(){

  std::cout << std::endl;

  // get NUM random numbers from 0 .. 100
  std::random_device seed;

  // generator
  std::mt19937 engine(seed());

  // distribution
  std::uniform_int_distribution<int> dist(0,100);

  // fill the vectors
  std::vector<int> v, w;
  v.reserve(NUM);
  w.reserve(NUM);
  for (int i=0; i< NUM; ++i){
    v.push_back(dist(engine));
    w.push_back(dist(engine));
  }

  // measure the execution time
  std::chrono::system_clock::time_point start = std::chrono::system_clock::now();
  std::cout << "getDotProduct(v,w): " << getDotProduct(v,w) << std::endl;
  std::chrono::duration<double> dur  = std::chrono::system_clock::now() - start;
  std::cout << "Parallel Execution: "<< dur.count() << std::endl;

  std::cout << std::endl;

}

 

The program uses the functionality of the random and the time library; both libraries are part of C++11. The two vectors v and w are created and filled with random numbers in main. Each vector gets one hundred million elements. dist(engine) generates the random numbers, which are uniformly distributed in the range from 0 to 100. The actual calculation of the scalar product takes place in the function getDotProduct. Internally, std::async uses the Standard Template Library algorithm std::inner_product; the return statement sums up the results of the four futures.

It takes about 0.4 seconds to calculate the result on my PC.

(screenshot: output of dotProductAsync.cpp)

But now the question is: how fast is the program if I execute it on one core? A small modification of the function getDotProduct, and we know the truth.


long long getDotProduct(std::vector<int>& v, std::vector<int>& w){
  return std::inner_product(v.begin(), v.end(), w.begin(), 0LL);
}

 

The execution of the program is four times slower.

 

(screenshot: output of the single-threaded dotProduct version)

Optimization

But if I compile the program with the maximum optimization level O3 with my GCC, the performance difference is nearly gone. The parallel execution is only about 10 percent faster.

 

(screenshot: runtime comparison with O3 optimization)

What's next?

In the next post, I will show you how to parallelize a big compute job by using std::packaged_task. (Proofreader: Alexey Elymanov)

 

 

 

 

 

 

Go to Leanpub/cpplibrary "What every professional C++ programmer should know about the C++ standard library". Get your e-book. Support my blog.

 

Tags: async, tasks
