OpenMP Reference Sheet for C/C++
Constructs

<parallelize a for loop by breaking its iterations apart into chunks>
#pragma omp parallel for [shared(vars), private(vars), firstprivate(vars),
lastprivate(vars), default(shared|none), reduction(op:vars), copyin(vars), if(expr),
ordered, schedule(type[,chunkSize])]
<A,B,C such that the total number of iterations is known at the start of the loop>
for(A=C;A<B;A++) {
    <your code here>
    <force ordered execution of part of the code: the iteration A=C is guaranteed to
    execute before the iteration A=C+1>
    #pragma omp ordered
    {
        <your code here>
    }
}

<parallelize sections of code, with each section operating in one thread>
#pragma omp parallel sections [shared(vars), private(vars), firstprivate(vars),
lastprivate(vars), default(shared|none), reduction(op:vars), copyin(vars), if(expr)]
{
    #pragma omp section
    {
        <your code here>
    }
    #pragma omp section
    {
        <your code here>
    }
    ....
}

<grand parallelization region with optional work-sharing constructs defining more
specific splitting of work and variables amongst threads. You may use the work-sharing
constructs without a grand parallelization region, but they will then have no effect
(sometimes useful if you are making OpenMP'able functions but want to leave the
creation of threads to the user of those functions)>
#pragma omp parallel [shared(vars), private(vars), firstprivate(vars), lastprivate(vars),
default(private|shared|none), reduction(op:vars), copyin(vars), if(expr)]
{
    <the work-sharing constructs below can appear in any order, are optional, and can
    be used multiple times. Note that no new threads are created by these constructs;
    they reuse the ones created by the enclosing parallel construct.>

    <your code here (will be executed by all threads)>

    <parallelize a for loop by breaking its iterations apart into chunks>
    #pragma omp for [private(vars), firstprivate(vars), lastprivate(vars),
    reduction(op:vars), ordered, schedule(type[,chunkSize]), nowait]
    <A,B,C such that the total number of iterations is known at the start of the loop>
    for(A=C;A<B;A++) {
        <your code here>
        <force ordered execution of part of the code: the iteration A=C is guaranteed
        to execute before the iteration A=C+1>
        #pragma omp ordered
        {
            <your code here>
        }
    }

    <parallelize sections of code, with each section operating in one thread>
    #pragma omp sections [private(vars), firstprivate(vars), lastprivate(vars),
    reduction(op:vars), nowait]
    {
        #pragma omp section
        {
            <your code here>
        }
        #pragma omp section
        {
            <your code here>
        }
        ....
    }

    <only one thread will execute the following, and NOT necessarily the master thread>
    #pragma omp single
    {
        <your code here (only executed once)>
    }
}

Directives

shared(vars) <share the same variables between all the threads>
private(vars) <each thread gets its own private copy of the variables. Note that these
    copies are not initialized to anything, so assign to them before use (or use
    firstprivate)>
firstprivate(vars) <like private, but each copy is initialized with the master thread's
    value>
lastprivate(vars) <copy the variables from the last iteration (in a for loop) or the
    last section (in sections) back to the master thread's copy, so the value persists
    even after the parallelization ends>
default(private|shared|none) <set the default behavior of variables in the
    parallelization construct. shared is the default setting, so only the private and
    none settings have an effect; none forces the user to specify the behavior of every
    variable. Note that even with shared, the iterator variable in for loops is still
    private by necessity>
reduction(op:vars) <vars are treated as private, and the specified operation (op, which
    can be +, *, -, &, |, ^, && or ||) is performed on the private copies in each
    thread. The master thread's copy (which will persist) is updated with the combined
    final value>


