Dune::CollectiveCommunication< MPI_Comm > Class Template Reference
[Parallel Communication]

Specialization of CollectiveCommunication for MPI. More...

#include <dune/common/parallel/mpicollectivecommunication.hh>

List of all members.

Public Member Functions

 CollectiveCommunication (const MPI_Comm &c=MPI_COMM_WORLD)
 Instantiation using an MPI communicator.
int rank () const
int size () const
template<typename T >
T sum (T &in) const
template<typename T >
int sum (T *inout, int len) const
template<typename T >
T prod (T &in) const
template<typename T >
int prod (T *inout, int len) const
template<typename T >
T min (T &in) const
template<typename T >
int min (T *inout, int len) const
template<typename T >
T max (T &in) const
template<typename T >
int max (T *inout, int len) const
int barrier () const
template<typename T >
int broadcast (T *inout, int len, int root) const
template<typename T >
int gather (T *in, T *out, int len, int root) const
template<typename T >
int gatherv (T *in, int sendlen, T *out, int *recvlen, int *displ, int root) const
template<typename T >
int scatter (T *send, T *recv, int len, int root) const
template<typename T >
int scatterv (T *send, int *sendlen, int *displ, T *recv, int recvlen, int root) const
 operator MPI_Comm () const
template<typename T , typename T1 >
int allgather (T *sbuf, int count, T1 *rbuf) const
template<typename T >
int allgatherv (T *in, int sendlen, T *out, int *recvlen, int *displ) const
template<typename BinaryFunction , typename Type >
int allreduce (Type *inout, int len) const
template<typename BinaryFunction , typename Type >
int allreduce (Type *in, Type *out, int len) const

Detailed Description

template<>
class Dune::CollectiveCommunication< MPI_Comm >

Specialization of CollectiveCommunication for MPI.


Constructor & Destructor Documentation

Dune::CollectiveCommunication< MPI_Comm >::CollectiveCommunication ( const MPI_Comm &  c = MPI_COMM_WORLD  )  [inline]

Instantiation using an MPI communicator.
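
A minimal usage sketch, assuming plain MPI_Init/MPI_Finalize for startup and shutdown; all variable names are illustrative.

#include <iostream>
#include <mpi.h>
#include <dune/common/parallel/mpicollectivecommunication.hh>

int main(int argc, char** argv)
{
  MPI_Init(&argc, &argv);
  {
    // Wrap the default communicator; MPI_COMM_WORLD is also the default argument.
    Dune::CollectiveCommunication<MPI_Comm> cc(MPI_COMM_WORLD);

    double local = cc.rank() + 1.0;   // a process-local value
    double total = cc.sum(local);     // global sum, available on every process

    if (cc.rank() == 0)
      std::cout << "processes: " << cc.size() << ", sum: " << total << std::endl;

    cc.barrier();                     // wait for all processes before shutting down
  }
  MPI_Finalize();
  return 0;
}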


Member Function Documentation

template<typename T , typename T1 >
int Dune::CollectiveCommunication< MPI_Comm >::allgather ( T *  sbuf,
int  count,
T1 *  rbuf 
) const [inline]

Gathers data from all tasks and distributes it to all. The block of data sent from the jth process is received by every process and placed in the jth block of the buffer rbuf.

Parameters:
[in] sbuf The send buffer with this task's data. Has to have the same size on every task.
[in] count The number of elements sent by each process.
[out] rbuf The receive buffer for the data. Has to be of size notasks*count, with notasks being the number of tasks in the communicator.
Returns:
MPI_SUCCESS (==0) if successful, an MPI error code otherwise
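
A hedged sketch of allgather usage: every rank contributes one value and afterwards holds the contributions of all ranks. It assumes MPI_Init has already been called; the function name is illustrative.

#include <vector>
#include <mpi.h>
#include <dune/common/parallel/mpicollectivecommunication.hh>

// Each process contributes its rank; afterwards all[j] == j on every process.
std::vector<int> gatherRanksEverywhere(const Dune::CollectiveCommunication<MPI_Comm>& cc)
{
  int myValue = cc.rank();                 // count == 1 element per process
  std::vector<int> all(cc.size());         // receive buffer of size notasks*count
  cc.allgather(&myValue, 1, all.data());   // returns MPI_SUCCESS (0) on success
  return all;
}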

template<typename T >
int Dune::CollectiveCommunication< MPI_Comm >::allgatherv ( T *  in,
int  sendlen,
T *  out,
int *  recvlen,
int *  displ 
) const [inline]

Gathers data of variable length from all tasks and distributes it to all. The block of data sent from the jth process is received by every process and placed in the jth block of the buffer out.

Parameters:
[in] in The send buffer with the data to send.
[in] sendlen The number of elements to send on each task.
[out] out The buffer to store the received data in.
[in] recvlen An array with size equal to the number of processes containing the number of elements to receive from process i at position i, i.e. the number that is passed as sendlen argument to this function in process i.
[in] displ An array with size equal to the number of processes. Data received from process i will be written starting at out+displ[i].
Returns:
MPI_SUCCESS (==0) if successful, an MPI error code otherwise
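
A hedged sketch of allgatherv: rank r contributes r+1 values; the per-rank counts are first exchanged with allgather so every process can size its output buffer and compute the displacements. Function and variable names are illustrative; MPI_Init is assumed to have been called.

#include <vector>
#include <mpi.h>
#include <dune/common/parallel/mpicollectivecommunication.hh>

std::vector<int> gatherVariableEverywhere(const Dune::CollectiveCommunication<MPI_Comm>& cc)
{
  int sendlen = cc.rank() + 1;               // this rank's contribution size
  std::vector<int> in(sendlen, cc.rank());   // the data to contribute

  // Every process needs all counts to build recvlen and displ.
  std::vector<int> recvlen(cc.size());
  cc.allgather(&sendlen, 1, recvlen.data());

  std::vector<int> displ(cc.size(), 0);
  for (int i = 1; i < cc.size(); ++i)
    displ[i] = displ[i-1] + recvlen[i-1];    // block i starts at out+displ[i]

  std::vector<int> out(displ.back() + recvlen.back());
  cc.allgatherv(in.data(), sendlen, out.data(), recvlen.data(), displ.data());
  return out;
}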

template<typename BinaryFunction , typename Type >
int Dune::CollectiveCommunication< MPI_Comm >::allreduce ( Type *  in,
Type *  out,
int  len 
) const [inline]

Compute a global reduction over all processes, applied to each component of an array, and return the result in every process. The template parameter BinaryFunction is the type of the binary function to use for the computation.

Parameters:
in The array to compute on.
out The array to store the results in.
len The number of components in the array
Returns:
MPI_SUCCESS (==0) if successful, an MPI error code otherwise
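
A hedged sketch of the out-of-place overload; choosing std::plus<double> as BinaryFunction is an assumption about which functors the wrapper maps onto MPI operations. MPI_Init is assumed to have been called.

#include <functional>
#include <vector>
#include <mpi.h>
#include <dune/common/parallel/mpicollectivecommunication.hh>

void sumComponentwise(const Dune::CollectiveCommunication<MPI_Comm>& cc)
{
  std::vector<double> in{1.0, 2.0, 3.0};
  std::vector<double> out(in.size());
  // Component-wise reduction: out[i] == sum over all ranks of in[i], on every rank.
  cc.allreduce<std::plus<double> >(in.data(), out.data(), static_cast<int>(in.size()));
}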

template<typename BinaryFunction , typename Type >
int Dune::CollectiveCommunication< MPI_Comm >::allreduce ( Type *  inout,
int  len 
) const [inline]

Compute a global reduction over all processes, applied to each component of an array, and return the result in every process. The template parameter BinaryFunction is the type of the binary function to use for the computation.

Parameters:
inout The array to compute on.
len The number of components in the array
Returns:
MPI_SUCCESS (==0) if successful, an MPI error code otherwise
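
The in-place overload overwrites the input with the result; the sketch below mirrors the previous one and carries the same assumption about std::plus<double>.

#include <functional>
#include <mpi.h>
#include <dune/common/parallel/mpicollectivecommunication.hh>

void sumInPlace(const Dune::CollectiveCommunication<MPI_Comm>& cc)
{
  double buf[2] = { 1.0, 2.0 };
  cc.allreduce<std::plus<double> >(buf, 2);   // buf[i] now holds the global sum of buf[i]
}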

int Dune::CollectiveCommunication< MPI_Comm >::barrier (  )  const [inline]

Wait until all processes have arrived at this point in the program.

Returns:
MPI_SUCCESS (==0) if successful, an MPI error code otherwise

template<typename T >
int Dune::CollectiveCommunication< MPI_Comm >::broadcast ( T *  inout,
int  len,
int  root 
) const [inline]

Distribute an array from the process with rank root to all other processes.

Returns:
MPI_SUCCESS (==0) if successful, an MPI error code otherwise
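
A hedged sketch: rank 0 fills an array that every other rank then receives. MPI_Init is assumed; names are illustrative.

#include <mpi.h>
#include <dune/common/parallel/mpicollectivecommunication.hh>

void broadcastParameters(const Dune::CollectiveCommunication<MPI_Comm>& cc)
{
  double params[3] = { 0.0, 0.0, 0.0 };
  if (cc.rank() == 0) {        // only the root's values matter before the call
    params[0] = 1.0; params[1] = 2.0; params[2] = 3.0;
  }
  cc.broadcast(params, 3, 0);  // afterwards all ranks hold rank 0's values
}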

template<typename T >
int Dune::CollectiveCommunication< MPI_Comm >::gather ( T *  in,
T *  out,
int  len,
int  root 
) const [inline]

Gather arrays on root task. Each process sends its in array of length len to the root process (including the root itself). In the root process these arrays are stored in rank order in the out array which must have size len * number of processes.

Parameters:
[in] in The send buffer with the data to send.
[out] out The buffer to store the received data in. Might have length zero on non-root tasks.
[in] len The number of elements to send on each task.
[in] root The root task that gathers the data.
Returns:
MPI_SUCCESS (==0) if successful, an MPI error code otherwise

Note:
out must have space for P*len elements
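
A hedged sketch of gather: each rank sends one value, and only the root sizes its output buffer. MPI_Init is assumed; names are illustrative.

#include <vector>
#include <mpi.h>
#include <dune/common/parallel/mpicollectivecommunication.hh>

void gatherRanks(const Dune::CollectiveCommunication<MPI_Comm>& cc)
{
  const int root = 0;
  int myValue = cc.rank();
  std::vector<int> out;
  if (cc.rank() == root)
    out.resize(cc.size());                  // space for P*len elements on the root only
  cc.gather(&myValue, out.data(), 1, root); // on the root, out[k] == k afterwards
}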
template<typename T >
int Dune::CollectiveCommunication< MPI_Comm >::gatherv ( T *  in,
int  sendlen,
T *  out,
int *  recvlen,
int *  displ,
int  root 
) const [inline]

Gather arrays of variable size on root task. Each process sends its in array of length sendlen to the root process (including the root itself). In the root process these arrays are stored in rank order in the out array.

Parameters:
[in] in The send buffer with the data to be sent
[in] sendlen The number of elements to send on each task
[out] out The buffer to store the received data in. May have length zero on non-root tasks.
[in] recvlen An array with size equal to the number of processes containing the number of elements to receive from process i at position i, i.e. the number that is passed as sendlen argument to this function in process i. May have length zero on non-root tasks.
[in] displ An array with size equal to the number of processes. Data received from process i will be written starting at out+displ[i] on the root process. May have length zero on non-root tasks.
[in] root The root task that gathers the data.
Returns:
MPI_SUCCESS (==0) if successful, an MPI error code otherwise
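
A hedged sketch of gatherv: rank r sends r+1 values; the root first gathers the counts, then builds the displacements and output buffer. MPI_Init is assumed; names are illustrative.

#include <vector>
#include <mpi.h>
#include <dune/common/parallel/mpicollectivecommunication.hh>

void gatherVariableOnRoot(const Dune::CollectiveCommunication<MPI_Comm>& cc)
{
  const int root = 0;
  int sendlen = cc.rank() + 1;
  std::vector<int> in(sendlen, cc.rank());

  std::vector<int> recvlen, displ, out;           // may stay empty on non-root tasks
  if (cc.rank() == root)
    recvlen.resize(cc.size());
  cc.gather(&sendlen, recvlen.data(), 1, root);   // root learns each task's count

  if (cc.rank() == root) {
    displ.assign(cc.size(), 0);
    for (int i = 1; i < cc.size(); ++i)
      displ[i] = displ[i-1] + recvlen[i-1];
    out.resize(displ.back() + recvlen.back());
  }
  cc.gatherv(in.data(), sendlen, out.data(), recvlen.data(), displ.data(), root);
}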

template<typename T >
int Dune::CollectiveCommunication< MPI_Comm >::max ( T *  inout,
int  len 
) const [inline]

Compute the maximum of the argument over all processes and return the result in every process. Assumes that T has an operator<.

template<typename T >
T Dune::CollectiveCommunication< MPI_Comm >::max ( T &  in  )  const [inline]

Compute the maximum of the argument over all processes and return the result in every process. Assumes that T has an operator<.
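
A hedged sketch covering both max overloads; min, sum and prod follow the same pattern with operator<, operator+ and operator*, respectively. MPI_Init is assumed.

#include <mpi.h>
#include <dune/common/parallel/mpicollectivecommunication.hh>

void globalMaxima(const Dune::CollectiveCommunication<MPI_Comm>& cc)
{
  double local = static_cast<double>(cc.rank());
  double globalMax = cc.max(local);    // scalar overload: returns the maximum over all ranks
  (void)globalMax;

  double vals[2] = { 1.0 * cc.rank(), 2.0 * cc.rank() };
  cc.max(vals, 2);                     // array overload: component-wise maximum, in place
}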

template<typename T >
int Dune::CollectiveCommunication< MPI_Comm >::min ( T *  inout,
int  len 
) const [inline]

Compute the minimum of the argument over all processes and return the result in every process. Assumes that T has an operator<.

template<typename T >
T Dune::CollectiveCommunication< MPI_Comm >::min ( T &  in  )  const [inline]

Compute the minimum of the argument over all processes and return the result in every process. Assumes that T has an operator<.

Dune::CollectiveCommunication< MPI_Comm >::operator MPI_Comm (  )  const [inline]
template<typename T >
int Dune::CollectiveCommunication< MPI_Comm >::prod ( T *  inout,
int  len 
) const [inline]

Compute the product of the argument over all processes and return the result in every process. Assumes that T has an operator*.

template<typename T >
T Dune::CollectiveCommunication< MPI_Comm >::prod ( T &  in  )  const [inline]

Compute the product of the argument over all processes and return the result in every process. Assumes that T has an operator*.

int Dune::CollectiveCommunication< MPI_Comm >::rank (  )  const [inline]

Return the rank of this process; it is between 0 and size()-1.

template<typename T >
int Dune::CollectiveCommunication< MPI_Comm >::scatter ( T *  send,
T *  recv,
int  len,
int  root 
) const [inline]

Scatter an array from the root to all other tasks. The root process sends the elements with index k*len to (k+1)*len-1 in its array to task k, which stores them at index 0 to len-1.

Parameters:
[in] send The array to scatter. Might have length zero on non-root tasks.
[out] recv The buffer to store the received data in. Upon completion of the method each task k holds the elements k*len to (k+1)*len-1 of the root's send buffer.
[in] len The number of elements in the recv buffer.
[in] root The root task that scatters the data.
Returns:
MPI_SUCCESS (==0) if successful, an MPI error code otherwise

Note:
send must have space for P*len elements on the root task
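
A hedged sketch of scatter: the root prepares P*len elements and every task, including the root, receives its own block of len elements. MPI_Init is assumed; names are illustrative.

#include <vector>
#include <mpi.h>
#include <dune/common/parallel/mpicollectivecommunication.hh>

void scatterBlocks(const Dune::CollectiveCommunication<MPI_Comm>& cc)
{
  const int root = 0;
  const int len  = 2;

  std::vector<int> send;                      // may stay empty on non-root tasks
  if (cc.rank() == root) {
    send.resize(cc.size() * len);
    for (std::size_t i = 0; i < send.size(); ++i)
      send[i] = static_cast<int>(i);
  }

  std::vector<int> recv(len);
  cc.scatter(send.data(), recv.data(), len, root);  // task k receives elements k*len .. (k+1)*len-1
}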
template<typename T >
int Dune::CollectiveCommunication< MPI_Comm >::scatterv ( T *  send,
int *  sendlen,
int *  displ,
T *  recv,
int  recvlen,
int  root 
) const [inline]

Scatter arrays of variable length from a root to all other tasks. The root process sends sendlen[k] elements, starting at send+displ[k], to task k, which stores them at index 0 to recvlen-1.

Parameters:
[in] send The array to scatter. May have length zero on non-root tasks.
[in] sendlen An array with size equal to the number of processes containing the number of elements to scatter to process i at position i, i.e. the number that is passed as recvlen argument to this function in process i.
[in] displ An array with size equal to the number of processes. Data scattered to process i will be read starting at send+displ[i] on the root process.
[out] recv The buffer to store the received data in. Upon completion of the method each task holds its own portion of the root's send buffer.
[in] recvlen The number of elements in the recv buffer.
[in] root The root task that scatters the data.
Returns:
MPI_SUCCESS (==0) if successful, an MPI error code otherwise
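
A hedged sketch of scatterv: the root sends r+1 elements to rank r; sendlen and displ only need to be meaningful on the root. MPI_Init is assumed; names are illustrative.

#include <vector>
#include <mpi.h>
#include <dune/common/parallel/mpicollectivecommunication.hh>

void scatterVariable(const Dune::CollectiveCommunication<MPI_Comm>& cc)
{
  const int root    = 0;
  const int recvlen = cc.rank() + 1;          // matches sendlen[rank] on the root

  std::vector<int> send, sendlen, displ;      // may stay empty on non-root tasks
  if (cc.rank() == root) {
    sendlen.resize(cc.size());
    displ.assign(cc.size(), 0);
    for (int i = 0; i < cc.size(); ++i)
      sendlen[i] = i + 1;
    for (int i = 1; i < cc.size(); ++i)
      displ[i] = displ[i-1] + sendlen[i-1];
    send.assign(displ.back() + sendlen.back(), 42);  // the data to distribute
  }

  std::vector<int> recv(recvlen);
  cc.scatterv(send.data(), sendlen.data(), displ.data(), recv.data(), recvlen, root);
}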

int Dune::CollectiveCommunication< MPI_Comm >::size (  )  const [inline]

Return the number of processes in the communicator; it is greater than 0.

template<typename T >
int Dune::CollectiveCommunication< MPI_Comm >::sum ( T *  inout,
int  len 
) const [inline]

Compute the sum of the argument over all processes and return the result in every process. Assumes that T has an operator+.

template<typename T >
T Dune::CollectiveCommunication< MPI_Comm >::sum ( T &  in  )  const [inline]

Compute the sum of the argument over all processes and return the result in every process. Assumes that T has an operator+.


The documentation for this class was generated from the following file:
dune/common/parallel/mpicollectivecommunication.hh
