ScalES-PPM
This document shows how to use the partitioning functionality provided by ScalES-PPM.
The partitioning modules provide the following functionality:
Multiple data structures are available to represent a partition, addressing the different needs of the available algorithms.
To decompose a data structure, one first needs to describe the topology of the data structure in question. ScalES-PPM provides several basic data types to make this step as simple as possible:
n-dimensional rectilinear structures, e.g. regular grids, use a rank-1 array of n extents to describe the enumeration of indices in each dimension,
i.e. to describe a 2-dimensional grid with indices 1..m in the x-direction and indices 1..n in the y-direction, the corresponding data structure would be declared as:
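The declaration itself is missing from this copy; the following minimal sketch uses the extent type from ppm_extents, matching the fuller uniform_partition example later in this document:

```fortran
USE ppm_extents, ONLY: extent
...
! one extent per dimension: extent(1, m) covers indices 1..m in x,
! extent(1, n) covers indices 1..n in y
TYPE(extent) :: grid(2)
INTEGER :: m, n
...
grid = (/ extent(1, m), extent(1, n) /)
```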
For graph-structured data, the module ppm_distributed contains a distributed graph data structure, graph_csr_dist_i4. The module also contains routines to construct such a graph from a rectilinear that has been partitioned into one rectilinear per process.
For repartitioning, the input is already a partition. The following types therefore serve as both input and output arguments to some routines.
Given p parts over a set of n elements, representations of a partition differ in their requirements: some are succinct but rigid, others flexible but more redundant. To bridge this gap, the following types are provided by ScalES-PPM:

- A two-table representation: start(1:p+1) tabulates the sub-arrays of the second table, elements, which lists the elements sorted by partition, i.e. elements(start(x):start(x+1)-1) lists the elements of part x.
- set_i4, whose component elem lists the elements of one (sub-)set. A vector of set_i4 can accordingly describe a partition.
- An assignment representation with component part_range = [1,p] and component array assigned, which tabulates for each element elem(i) the part partition(i) to which elem(i) is assigned.
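As a small worked example of the two-table representation (the concrete values are chosen purely for illustration): with p = 2 parts over the elements 1..4, the partition {1,3} | {2,4} is stored as follows:

```fortran
! two-table representation of the partition {1,3} | {2,4}
INTEGER :: start(3), elements(4)
start = (/ 1, 3, 5 /)        ! part x occupies elements(start(x):start(x+1)-1)
elements = (/ 1, 3, 2, 4 /)  ! elements sorted by part
! part 1: elements(1:2) = [1, 3]
! part 2: elements(3:4) = [2, 4]
```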
Extending the example from above, an m × n grid can be partitioned into k × l evenly sized parts by calling the uniform_partition function of the module ppm_uniform_partition:

```fortran
USE ppm_extents, ONLY: extent
USE ppm_rectilinear, ONLY: lidx2rlcoord
USE ppm_uniform_partition, ONLY: uniform_partition
...
TYPE(extent) :: grid(2), part(2), deco(2)
INTEGER :: m, n, k, l, comm_rank, ierror
...
CALL mpi_comm_rank(mpi_comm_world, comm_rank, ierror)
deco = (/ extent(1, k), extent(1, l) /)
grid = (/ extent(1, m), extent(1, n) /)
part = uniform_partition(grid, deco, lidx2rlcoord(deco, comm_rank + 1))
```

In the above example, the linear MPI rank is mapped to a Cartesian coordinate with the help of the utility module ppm_rectilinear.
Because the axes are divided separately, the partition thus obtained forms a block partition.
The library provides convenience wrappers for both MeTiS and ParMeTiS. The following example code shows how such a wrapper is invoked on a distributed graph.
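The original example code is not preserved in this copy. The following is only a rough sketch: the module name ppm_graph_partition and the routine name graph_partition_metis, along with its argument list, are assumptions for illustration and not the verified ScalES-PPM interface:

```fortran
USE ppm_distributed, ONLY: graph_csr_dist_i4
! hypothetical wrapper module/routine, for illustration only
USE ppm_graph_partition, ONLY: graph_partition_metis
...
TYPE(graph_csr_dist_i4) :: graph      ! distributed CSR graph built earlier
INTEGER, ALLOCATABLE :: part(:)       ! resulting part number per local node
INTEGER :: num_parts
...
! hand the graph to the MeTiS wrapper, requesting num_parts parts
CALL graph_partition_metis(graph, num_parts, part)
```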
If your data has no inherent connectivity, i.e. the problem is embarrassingly parallel, graphs or grids are unsuitable models for partitioning. In this case, partitioning by weight only is probably the most sensible solution. The library provides a greedy method to partition such data sets.
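The greedy strategy itself can be sketched in a few lines of plain Fortran (this is an illustration of the idea, not the library's interface): each element, taken in order of decreasing weight, is assigned to the part with the smallest accumulated weight so far.

```fortran
! Sketch of greedy partitioning by weight; illustration only, not the ScalES-PPM API.
SUBROUTINE greedy_partition(weight, num_parts, partition)
  REAL, INTENT(in) :: weight(:)          ! per-element weights, assumed sorted descending
  INTEGER, INTENT(in) :: num_parts
  INTEGER, INTENT(out) :: partition(SIZE(weight))
  REAL :: load(num_parts)
  INTEGER :: i, lightest
  load = 0.0
  DO i = 1, SIZE(weight)
    lightest = MINLOC(load, dim=1)       ! part with the smallest accumulated weight
    partition(i) = lightest
    load(lightest) = load(lightest) + weight(i)
  END DO
END SUBROUTINE greedy_partition
```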
For an already partitioned set, the library offers a routine based on swapping elements, such that memory reallocation can be avoided. The example is for the multi-process variant, which has MPI collective call semantics.
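The example itself is missing from this copy. As a hedged sketch only: the module name ppm_set_repartition and the exact argument order of repartition_swap_mp are assumptions; the array names part and new_part and the routine name are taken from the surrounding text:

```fortran
! Hedged sketch; module name and argument order are assumptions.
USE ppm_set_repartition, ONLY: repartition_swap_mp
...
INTEGER :: part(num_local_elems)            ! current part of each local element
INTEGER :: new_part(num_local_elems)        ! improved assignment, filled by the call
DOUBLE PRECISION :: weight(num_local_elems) ! per-element weights
...
! collective over all participating processes: elements are swapped between
! parts so that data movement avoids memory reallocation
CALL repartition_swap_mp(part, new_part, weight, mpi_comm_world)
```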
Based on the part and new_part arrays of the above example, the application can then reorganize the data decomposition. If a certain amount of imbalance among the partitions is tolerable, it can be beneficial to add an efficiency_threshold argument to the call to repartition_swap_mp.
The project underlying this report was funded by the German Federal Ministry of Education and Research (BMBF) under grant number 01IH08004E. Responsibility for the content of this publication lies with the author.