Dune::CollectiveCommunication< MPI_Comm > Class Template Reference
[Parallel Communication]
Specialization of CollectiveCommunication for MPI.
#include <dune/common/parallel/mpicollectivecommunication.hh>
Public Member Functions

CollectiveCommunication (const MPI_Comm &c=MPI_COMM_WORLD)
    Instantiation using an MPI communicator.
int rank () const
int size () const
template<typename T> T sum (T &in) const
template<typename T> int sum (T *inout, int len) const
template<typename T> T prod (T &in) const
template<typename T> int prod (T *inout, int len) const
template<typename T> T min (T &in) const
template<typename T> int min (T *inout, int len) const
template<typename T> T max (T &in) const
template<typename T> int max (T *inout, int len) const
int barrier () const
template<typename T> int broadcast (T *inout, int len, int root) const
template<typename T> int gather (T *in, T *out, int len, int root) const
template<typename T> int gatherv (T *in, int sendlen, T *out, int *recvlen, int *displ, int root) const
template<typename T> int scatter (T *send, T *recv, int len, int root) const
template<typename T> int scatterv (T *send, int *sendlen, int *displ, T *recv, int recvlen, int root) const
operator MPI_Comm () const
template<typename T, typename T1> int allgather (T *sbuf, int count, T1 *rbuf) const
template<typename T> int allgatherv (T *in, int sendlen, T *out, int *recvlen, int *displ) const
template<typename BinaryFunction, typename Type> int allreduce (Type *inout, int len) const
template<typename BinaryFunction, typename Type> int allreduce (Type *in, Type *out, int len) const
Detailed Description
template<>
class Dune::CollectiveCommunication< MPI_Comm >
Specialization of CollectiveCommunication for MPI.
Constructor & Destructor Documentation
Instantiation using an MPI communicator.
Member Function Documentation
template<typename T, typename T1>
Gathers data from all tasks and distributes it to all. The block of data sent from the j-th process is received by every process and placed in the j-th block of the buffer rbuf.
- Parameters:
    [in]  sbuf   The buffer with the data to send. Has to be the same for each task.
    [in]  count  The number of elements to send from each process.
    [out] rbuf   The receive buffer for the data. Has to be of size notasks*count, with notasks being the number of tasks in the communicator.
- Returns:
- MPI_SUCCESS (==0) if successful, an MPI error code otherwise
Gathers data of variable length from all tasks and distributes it to all. The block of data sent from the j-th process is received by every process and placed in the j-th block of the buffer out.
- Parameters:
    [in]  in       The send buffer with the data to send.
    [in]  sendlen  The number of elements to send on each task.
    [out] out      The buffer to store the received data in.
    [in]  recvlen  An array with size equal to the number of processes, containing at position i the number of elements to receive from process i, i.e. the number that is passed as the sendlen argument to this function in process i.
    [in]  displ    An array with size equal to the number of processes. Data received from process i will be written starting at out+displ[i].
- Returns:
- MPI_SUCCESS (==0) if successful, an MPI error code otherwise
template<typename BinaryFunction, typename Type>
Apply a binary reduction over all processes to each component of an array and return the result in every process. The template parameter BinaryFunction is the type of the binary function to use for the computation.
- Parameters:
    in   The array to compute on.
    out  The array to store the results in.
    len  The number of components in the array.
- Returns:
- MPI_SUCCESS (==0) if successful, an MPI error code otherwise
template<typename BinaryFunction, typename Type>
Apply a binary reduction over all processes to each component of an array and return the result in every process, overwriting the input. The template parameter BinaryFunction is the type of the binary function to use for the computation.
- Parameters:
    inout  The array to compute on.
    len    The number of components in the array.
- Returns:
- MPI_SUCCESS (==0) if successful, an MPI error code otherwise
Wait until all processes have arrived at this point in the program.
- Returns:
- MPI_SUCCESS (==0) if successful, an MPI error code otherwise
Distribute an array from the process with rank root to all other processes.
- Returns:
- MPI_SUCCESS (==0) if successful, an MPI error code otherwise
Gather arrays on the root task. Each process sends its in array of length len to the root process (including the root itself). In the root process these arrays are stored in rank order in the out array, which must have size len * number of processes.
- Parameters:
    [in]  in    The send buffer with the data to send.
    [out] out   The buffer to store the received data in. Might have length zero on non-root tasks.
    [in]  len   The number of elements to send on each task.
    [in]  root  The root task that gathers the data.
- Returns:
- MPI_SUCCESS (==0) if successful, an MPI error code otherwise
- Note:
- out must have space for P*len elements
Gather arrays of variable size on the root task. Each process sends its in array of length sendlen to the root process (including the root itself). In the root process these arrays are stored in rank order in the out array.
- Parameters:
    [in]  in       The send buffer with the data to be sent.
    [in]  sendlen  The number of elements to send on each task.
    [out] out      The buffer to store the received data in. May have length zero on non-root tasks.
    [in]  recvlen  An array with size equal to the number of processes, containing at position i the number of elements to receive from process i, i.e. the number that is passed as the sendlen argument to this function in process i. May have length zero on non-root tasks.
    [in]  displ    An array with size equal to the number of processes. Data received from process i will be written starting at out+displ[i] on the root process. May have length zero on non-root tasks.
    [in]  root     The root task that gathers the data.
- Returns:
- MPI_SUCCESS (==0) if successful, an MPI error code otherwise
Compute the maximum of the argument over all processes and return the result in every process. Assumes that T has an operator<. (Applies to both max overloads.)
Compute the minimum of the argument over all processes and return the result in every process. Assumes that T has an operator<. (Applies to both min overloads.)
Compute the product of the argument over all processes and return the result in every process. Assumes that T has an operator*. (Applies to both prod overloads.)
Return the rank of this process; it lies between 0 and size()-1.
Scatter an array from a root to all other tasks. The root process sends the elements with index k*len to (k+1)*len-1 of its array to task k, which stores them at index 0 to len-1.
- Parameters:
    [in]  send  The array to scatter. Might have length zero on non-root tasks.
    [out] recv  The buffer to store the received data in. Upon completion, task k holds the k-th block of the root's send buffer.
    [in]  len   The number of elements in the recv buffer.
    [in]  root  The root task that scatters the data.
- Returns:
- MPI_SUCCESS (==0) if successful, an MPI error code otherwise
- Note:
- send must have space for P*len elements on the root task
Scatter arrays of variable length from a root to all other tasks. The root process sends the elements with index displ[k] to displ[k]+sendlen[k]-1 of its array to task k, which stores them at index 0 to recvlen-1.
- Parameters:
    [in]  send     The array to scatter. May have length zero on non-root tasks.
    [in]  sendlen  An array with size equal to the number of processes, containing at position i the number of elements to scatter to process i, i.e. the number that is passed as the recvlen argument to this function in process i.
    [in]  displ    An array with size equal to the number of processes. Data scattered to process i will be read starting at send+displ[i] on the root process.
    [out] recv     The buffer to store the received data in. Upon completion, each task holds its block of the root's send buffer.
    [in]  recvlen  The number of elements in the recv buffer.
    [in]  root     The root task that scatters the data.
- Returns:
- MPI_SUCCESS (==0) if successful, an MPI error code otherwise
Return the number of processes in the communicator; it is greater than 0.
Compute the sum of the argument over all processes and return the result in every process. Assumes that T has an operator+. (Applies to both sum overloads.)
The documentation for this class was generated from the following file: dune/common/parallel/mpicollectivecommunication.hh