DOLFINx 0.9.0
DOLFINx C++ interface
MPI support functionality.
Classes

class Comm
    Duplicates an MPI communicator and manages the lifetime of the duplicated communicator.
struct dependent_false

Enumerations

enum class tag : int { consensus_pcx, consensus_pex }
    MPI communication tags.

Functions

int rank(MPI_Comm comm)
    Return the process rank for the communicator.
int size(MPI_Comm comm)
    Return the size of the group (number of processes) associated with the communicator.
void check_error(MPI_Comm comm, int code)
    Check an MPI error code. If the error code is not equal to MPI_SUCCESS, std::abort is called.
constexpr std::array<std::int64_t, 2> local_range(int rank, std::int64_t N, int size)
    Return the local range for the calling process, partitioning the global [0, N - 1] range across all ranks into partitions of almost equal size.
constexpr int index_owner(int size, std::size_t index, std::size_t N)
    Return which rank owns index in the global range [0, N - 1] (inverse of MPI::local_range).
std::vector<int> compute_graph_edges_pcx(MPI_Comm comm, std::span<const int> edges)
    Determine incoming graph edges using the PCX consensus algorithm.
std::vector<int> compute_graph_edges_nbx(MPI_Comm comm, std::span<const int> edges)
    Determine incoming graph edges using the NBX consensus algorithm.
template <typename U>
std::pair<std::vector<std::int32_t>, std::vector<typename std::remove_reference_t<typename U::value_type>>> distribute_to_postoffice(MPI_Comm comm, const U& x, std::array<std::int64_t, 2> shape, std::int64_t rank_offset)
    Distribute row data to 'post office' ranks.
template <typename U>
std::vector<typename std::remove_reference_t<typename U::value_type>> distribute_from_postoffice(MPI_Comm comm, std::span<const std::int64_t> indices, const U& x, std::array<std::int64_t, 2> shape, std::int64_t rank_offset)
    Distribute rows of a rectangular data array from post office ranks to the ranks where they are required.
template <typename U>
std::vector<typename std::remove_reference_t<typename U::value_type>> distribute_data(MPI_Comm comm0, std::span<const std::int64_t> indices, MPI_Comm comm1, const U& x, int shape1)
    Distribute rows of a rectangular data array to the ranks where they are required (scalable version).
template <typename T>
constexpr MPI_Datatype mpi_type()
    MPI type corresponding to T.
MPI support functionality.
void check_error(MPI_Comm comm, int code)
Check an MPI error code. If the error code is not equal to MPI_SUCCESS, std::abort is called.
Parameters:
    [in] comm  MPI communicator.
    [in] code  MPI error code to check.
std::vector<int> compute_graph_edges_nbx(MPI_Comm comm, std::span<const int> edges)
Determine incoming graph edges using the NBX consensus algorithm.
Given a list of outgoing edges (destination ranks) from this rank, this function returns the incoming edges (source ranks) to this rank.
Parameters:
    [in] comm   MPI communicator.
    [in] edges  Edges (ranks) from this rank (the caller).
std::vector<int> compute_graph_edges_pcx(MPI_Comm comm, std::span<const int> edges)
Determine incoming graph edges using the PCX consensus algorithm.
Given a list of outgoing edges (destination ranks) from this rank, this function returns the incoming edges (source ranks) to this rank.
Parameters:
    [in] comm   MPI communicator.
    [in] edges  Edges (ranks) from this rank (the caller).
template <typename U>
std::vector<typename std::remove_reference_t<typename U::value_type>> distribute_data(MPI_Comm comm0, std::span<const std::int64_t> indices, MPI_Comm comm1, const U& x, int shape1)
Distribute rows of a rectangular data array to the ranks where they are required (scalable version).
This function determines local neighborhoods for communication, and then uses MPI neighbourhood collectives to exchange data. It is scalable if the neighborhoods are relatively small, i.e. each process communicates with a modest number of other processes.
Parameters:
    [in] comm0    Communicator to distribute data across.
    [in] indices  Global indices of the data (row indices) required by the calling process.
    [in] comm1    Communicator across which x is distributed. Can be MPI_COMM_NULL on ranks where x is empty.
    [in] x        Data (2D array, row-major) on the calling process to be distributed (by row). The global index for the [0, ..., n) local rows is assumed to be the local index plus the offset for this rank on comm1.
    [in] shape1   Number of columns of the data array x.
Returns: The data associated with each index in indices (row-major storage). Requires shape1 > 0.
template <typename U>
std::vector<typename std::remove_reference_t<typename U::value_type>> distribute_from_postoffice(MPI_Comm comm, std::span<const std::int64_t> indices, const U& x, std::array<std::int64_t, 2> shape, std::int64_t rank_offset)
Distribute rows of a rectangular data array from post office ranks to the ranks where they are required.
This function determines local neighborhoods for communication, and then uses MPI neighbourhood collectives to exchange data. It is scalable if the neighborhoods are relatively small, i.e. each process communicates with a modest number of other processes.
Parameters:
    [in] comm         MPI communicator.
    [in] indices      Global indices of the data (row indices) required by the calling process.
    [in] x            Data (2D array, row-major) on the calling process which may be distributed (by row). The global index for the [0, ..., n) local rows is assumed to be the local index plus the offset for this rank.
    [in] shape        Global shape of x.
    [in] rank_offset  Rank offset such that the global index of local row i in x is rank_offset + i. It is usually computed using MPI_Exscan on comm1 from MPI::distribute_data.
Returns: The data associated with each index in indices (row-major storage). Requires shape[1] > 0.
template <typename U>
std::pair<std::vector<std::int32_t>, std::vector<typename std::remove_reference_t<typename U::value_type>>> distribute_to_postoffice(MPI_Comm comm, const U& x, std::array<std::int64_t, 2> shape, std::int64_t rank_offset)
Distribute row data to 'post office' ranks.
This function takes row-wise data that is distributed across processes. Data is not duplicated across ranks. The global index of a row is its local row position plus the offset for the calling process. The post office rank for a row is determined by applying dolfinx::MPI::index_owner to the global index, and the row is then sent to the post office rank. The function returns the row data for which the caller is the post office.
Parameters:
    [in] comm         MPI communicator.
    [in] x            Data to distribute (2D, row-major layout).
    [in] shape        Global shape of x.
    [in] rank_offset  Rank offset such that the global index of local row i in x is rank_offset + i. It is usually computed using MPI_Exscan.
Returns: Indices and the corresponding rows of x, i.e. rows for which the calling process is the post office.
constexpr int index_owner(int size, std::size_t index, std::size_t N)
Return which rank owns index in the global range [0, N - 1] (inverse of MPI::local_range).
Parameters:
    [in] size   Number of MPI ranks.
    [in] index  The index to determine the owning rank of.
    [in] N      Total number of indices.
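To make the inverse relationship between local_range and index_owner concrete, the sketch below re-implements both with one common "almost equal" split, where the first N % size ranks receive one extra index. That exact formula is an assumption about the documented semantics, not the dolfinx source.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

// Partition [0, N) across `size` ranks into blocks of almost equal size
// (assumed split: the first N % size ranks get one extra index).
constexpr std::array<std::int64_t, 2> local_range(int rank, std::int64_t N,
                                                  int size)
{
  const std::int64_t n = N / size; // base block size
  const std::int64_t r = N % size; // number of ranks with one extra index
  if (rank < r)
    return {rank * (n + 1), rank * (n + 1) + n + 1};
  else
    return {rank * n + r, rank * n + r + n};
}

// Inverse of local_range: which rank's range contains `index`?
constexpr int index_owner(int size, std::size_t index, std::size_t N)
{
  const std::size_t n = N / size;
  const std::size_t r = N % size;
  if (index < r * (n + 1))
    return static_cast<int>(index / (n + 1)); // in an enlarged block
  else
    return static_cast<int>(r + (index - r * (n + 1)) / n);
}
```

For example, N = 10 and size = 3 gives the ranges [0, 4), [4, 7) and [7, 10), and index_owner(3, i, 10) returns the rank whose range contains i.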
constexpr std::array<std::int64_t, 2> local_range(int rank, std::int64_t N, int size)
Return the local range for the calling process, partitioning the global [0, N - 1] range across all ranks into partitions of almost equal size.
int size(MPI_Comm comm)
Return the size of the group (number of processes) associated with the communicator.