DOLFINx 0.5.1
DOLFINx C++ interface

MPI support functionality.
Classes

class Comm
    Duplicates an MPI communicator and manages the lifetime of the duplicated communicator.

struct dependent_false
Enumerations

enum class tag : int { consensus_pcx, consensus_pex }
    MPI communication tags.
Functions

int rank(MPI_Comm comm)
    Return process rank for the communicator.

int size(MPI_Comm comm)
    Return size of the group (number of processes) associated with the communicator.

constexpr std::array<std::int64_t, 2> local_range(int rank, std::int64_t N, int size)
    Return local range for the calling process, partitioning the global [0, N - 1] range across all ranks into partitions of almost equal size.

constexpr int index_owner(int size, std::size_t index, std::size_t N)
    Return which rank owns index in global range [0, N - 1] (inverse of MPI::local_range).

std::vector<int> compute_graph_edges_pcx(MPI_Comm comm, const std::span<const int>& edges)
    Determine incoming graph edges using the PCX consensus algorithm.

std::vector<int> compute_graph_edges_nbx(MPI_Comm comm, const std::span<const int>& edges)
    Determine incoming graph edges using the NBX consensus algorithm.

template<typename T>
std::pair<std::vector<std::int32_t>, std::vector<T>> distribute_to_postoffice(MPI_Comm comm, const std::span<const T>& x, std::array<std::int64_t, 2> shape, std::int64_t rank_offset)
    Distribute row data to 'post office' ranks.

template<typename T>
std::vector<T> distribute_from_postoffice(MPI_Comm comm, const std::span<const std::int64_t>& indices, const std::span<const T>& x, std::array<std::int64_t, 2> shape, std::int64_t rank_offset)
    Distribute rows of a rectangular data array from post office ranks to ranks where they are required.

template<typename T>
std::vector<T> distribute_data(MPI_Comm comm, const std::span<const std::int64_t>& indices, const std::span<const T>& x, int shape1)
    Distribute rows of a rectangular data array to ranks where they are required (scalable version).

template<typename T>
constexpr MPI_Datatype mpi_type()
    MPI type.
Detailed Description

MPI support functionality.

Function Documentation
compute_graph_edges_nbx()

std::vector<int> compute_graph_edges_nbx(MPI_Comm comm, const std::span<const int>& edges)
Determine incoming graph edges using the NBX consensus algorithm.
Given a list of outgoing edges (destination ranks) from this rank, this function returns the incoming edges (source ranks) to this rank.
Parameters
    [in] comm   MPI communicator
    [in] edges  Edges (ranks) from this rank (the caller).
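A minimal sketch of how this function might be called (the include path dolfinx/common/MPI.h and the ring pattern are our assumptions, not part of this page): each rank declares a single outgoing edge to the next rank, and the NBX algorithm recovers the matching incoming edge from the previous rank.

    // Minimal sketch: each rank declares one outgoing edge to the next
    // rank in a ring; NBX discovers the matching incoming edge.
    #include <dolfinx/common/MPI.h>
    #include <mpi.h>
    #include <span>
    #include <vector>

    int main(int argc, char* argv[])
    {
      MPI_Init(&argc, &argv);
      const int rank = dolfinx::MPI::rank(MPI_COMM_WORLD);
      const int size = dolfinx::MPI::size(MPI_COMM_WORLD);

      // Outgoing edge: this rank sends to the next rank (cyclic)
      std::vector<int> out_edges = {(rank + 1) % size};

      // Incoming edges found by consensus; expected result here:
      // {(rank - 1 + size) % size}
      std::vector<int> in_edges = dolfinx::MPI::compute_graph_edges_nbx(
          MPI_COMM_WORLD, std::span<const int>(out_edges));

      MPI_Finalize();
      return 0;
    }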
compute_graph_edges_pcx()

std::vector<int> compute_graph_edges_pcx(MPI_Comm comm, const std::span<const int>& edges)
Determine incoming graph edges using the PCX consensus algorithm.
Given a list of outgoing edges (destination ranks) from this rank, this function returns the incoming edges (source ranks) to this rank.
Parameters
    [in] comm   MPI communicator
    [in] edges  Edges (ranks) from this rank (the caller).
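The PCX variant exposes the same interface as the NBX variant. As a sketch of how the result might be consumed (out_edges as in the previous example), the discovered source ranks can be paired with the known destination ranks to create a standard MPI-3 distributed-graph communicator for neighbourhood collectives:

    // Fragment: out_edges as in the previous sketch.
    // MPI_Dist_graph_create_adjacent is plain MPI-3, not DOLFINx.
    std::vector<int> in_edges = dolfinx::MPI::compute_graph_edges_pcx(
        MPI_COMM_WORLD, std::span<const int>(out_edges));

    MPI_Comm neigh_comm;
    MPI_Dist_graph_create_adjacent(
        MPI_COMM_WORLD, static_cast<int>(in_edges.size()),
        in_edges.data(), MPI_UNWEIGHTED,
        static_cast<int>(out_edges.size()), out_edges.data(),
        MPI_UNWEIGHTED, MPI_INFO_NULL, false, &neigh_comm);

    // ... neighbourhood collectives (e.g. MPI_Neighbor_alltoallv) ...
    MPI_Comm_free(&neigh_comm);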
distribute_data()

template<typename T>
std::vector<T> distribute_data(MPI_Comm comm, const std::span<const std::int64_t>& indices, const std::span<const T>& x, int shape1)
Distribute rows of a rectangular data array to ranks where they are required (scalable version).
This function determines local neighbourhoods for communication, and then uses MPI neighbourhood collectives to exchange data. It is scalable if the neighbourhoods are relatively small, i.e. each process communicates with a modest number of other processes.
Parameters
    [in] comm     The MPI communicator
    [in] indices  Global indices of the data (row indices) required by the calling process
    [in] x        Data (2D array, row-major) on the calling process which may be distributed (by row). The global index for the [0, ..., n) local rows is assumed to be the local index plus the offset for this rank.
    [in] shape1   The number of columns of the data array x.
Returns
    The data associated with each index in indices (row-major storage)

Precondition
    shape1 > 0
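A sketch of fetching rows by global index, assuming each rank stores the contiguous block of rows given by MPI::local_range; the sizes and element type are illustrative:

    // Fragment (assumes MPI_Init has been called)
    const MPI_Comm comm = MPI_COMM_WORLD;
    const std::int64_t N = 100; // global number of rows (illustrative)
    const int shape1 = 3;       // number of columns
    const int rank = dolfinx::MPI::rank(comm);
    const int size = dolfinx::MPI::size(comm);

    // Locally stored rows: the contiguous block [range[0], range[1])
    const std::array<std::int64_t, 2> range
        = dolfinx::MPI::local_range(rank, N, size);
    std::vector<double> x((range[1] - range[0]) * shape1);
    // ... fill x (row-major) ...

    // Global row indices required on this rank (owned by any rank)
    const std::vector<std::int64_t> indices = {0, N / 2, N - 1};

    // rows is row-major with indices.size() rows and shape1 columns
    const std::vector<double> rows = dolfinx::MPI::distribute_data<double>(
        comm, std::span<const std::int64_t>(indices),
        std::span<const double>(x), shape1);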
distribute_from_postoffice()

template<typename T>
std::vector<T> distribute_from_postoffice(MPI_Comm comm, const std::span<const std::int64_t>& indices, const std::span<const T>& x, std::array<std::int64_t, 2> shape, std::int64_t rank_offset)
Distribute rows of a rectangular data array from post office ranks to ranks where they are required.
This function determines local neighbourhoods for communication, and then uses MPI neighbourhood collectives to exchange data. It is scalable if the neighbourhoods are relatively small, i.e. each process communicates with a modest number of other processes.
Parameters
    [in] comm         The MPI communicator
    [in] indices      Global indices of the data (row indices) required by the calling process
    [in] x            Data (2D array, row-major) on the calling process which may be distributed (by row). The global index for the [0, ..., n) local rows is assumed to be the local index plus the offset for this rank.
    [in] shape        The global shape of x
    [in] rank_offset  The rank offset such that the global index of local row i in x is rank_offset + i. It is usually computed using MPI_Exscan.
Returns
    The data associated with each index in indices (row-major storage)

Precondition
    shape[1] > 0
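A sketch of the rank_offset computation with MPI_Exscan, as suggested above; the shapes and the local_range block layout are illustrative:

    // Fragment (assumes MPI_Init has been called)
    const MPI_Comm comm = MPI_COMM_WORLD;
    const std::array<std::int64_t, 2> shape = {100, 3}; // global shape
    const int rank = dolfinx::MPI::rank(comm);
    const int size = dolfinx::MPI::size(comm);

    // Local block of rows (here chosen to match MPI::local_range)
    const std::array<std::int64_t, 2> range
        = dolfinx::MPI::local_range(rank, shape[0], size);
    std::int64_t n_local = range[1] - range[0];
    std::vector<double> x(n_local * shape[1]); // local rows, row-major
    // ... fill x ...

    // Global index of local row i is rank_offset + i. Initialise to 0:
    // MPI_Exscan leaves the receive buffer on rank 0 undefined.
    std::int64_t rank_offset = 0;
    MPI_Exscan(&n_local, &rank_offset, 1, MPI_INT64_T, MPI_SUM, comm);

    const std::vector<std::int64_t> indices = {5, 42}; // rows needed here
    const std::vector<double> rows
        = dolfinx::MPI::distribute_from_postoffice<double>(
            comm, std::span<const std::int64_t>(indices),
            std::span<const double>(x), shape, rank_offset);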
distribute_to_postoffice()

template<typename T>
std::pair<std::vector<std::int32_t>, std::vector<T>> distribute_to_postoffice(MPI_Comm comm, const std::span<const T>& x, std::array<std::int64_t, 2> shape, std::int64_t rank_offset)
Distribute row data to 'post office' ranks.
This function takes row-wise data that is distributed across processes. Data is not duplicated across ranks. The global index of a row is its local row position plus the offset for the calling process. The post office rank for a row is determined by applying MPI::index_owner to the global index, and the row is then sent to the post office rank. The function returns the row data for which the calling process is the post office.
Parameters
    [in] comm         MPI communicator
    [in] x            Data to distribute (2D, row-major layout)
    [in] shape        The global shape of x
    [in] rank_offset  The rank offset such that the global index of local row i in x is rank_offset + i. It is usually computed using MPI_Exscan.
Returns
    Local row indices and data from x, i.e. rows for which the calling process is the post office
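A sketch that reuses x, shape and rank_offset from the previous example; our reading of the returned indices as positions relative to the caller's local range is an assumption:

    // Rows are routed to the rank chosen by MPI::index_owner; the pair
    // returned holds (0) row indices and (1) the row data received.
    auto [post_indices, post_data]
        = dolfinx::MPI::distribute_to_postoffice<double>(
            comm, std::span<const double>(x), shape, rank_offset);

    // post_data is row-major: post_indices.size() rows of shape[1]
    // columns. We assume post_indices[j] is the position of row j
    // relative to this rank's local range.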
index_owner()

constexpr int index_owner(int size, std::size_t index, std::size_t N)

Return which rank owns index in global range [0, N - 1] (inverse of MPI::local_range).

Parameters
    [in] size   Number of MPI ranks
    [in] index  The index to determine owning rank
    [in] N      Total number of indices
local_range()

constexpr std::array<std::int64_t, 2> local_range(int rank, std::int64_t N, int size)

Return local range for the calling process, partitioning the global [0, N - 1] range across all ranks into partitions of almost equal size.
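A small worked example of the almost-equal partition and of index_owner inverting local_range; both functions are constexpr and make no MPI calls, so this runs without initialising MPI (the include path is assumed):

    #include <dolfinx/common/MPI.h>
    #include <array>
    #include <cassert>
    #include <cstddef>
    #include <cstdint>

    int main()
    {
      constexpr std::int64_t N = 10; // global range [0, 10)
      constexpr int size = 3;        // three ranks

      // Almost-equal partition: rank 0 -> [0, 4), rank 1 -> [4, 7),
      // rank 2 -> [7, 10)
      for (int rank = 0; rank < size; ++rank)
      {
        const std::array<std::int64_t, 2> range
            = dolfinx::MPI::local_range(rank, N, size);
        // index_owner inverts local_range: every index in a rank's
        // local range is owned by that rank
        for (std::int64_t i = range[0]; i < range[1]; ++i)
          assert(dolfinx::MPI::index_owner(
                     size, static_cast<std::size_t>(i), N)
                 == rank);
      }
      return 0;
    }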