DOLFINx 0.10.0.0
DOLFINx C++ interface
dolfinx::MPI Namespace Reference

MPI support functionality. More...

Classes

class  Comm
 Duplicates an MPI communicator and manages the lifetime of the duplicated communicator. More...
 
struct  dependent_false
 
struct  mpi_type_mapping
 MPI Type. More...
 

Enumerations

enum class  tag : int { consensus_pcx = 1200 , consensus_pex = 1201 , consensus_nbx = 1202 }
 MPI communication tags.
 

Functions

int rank (MPI_Comm comm)
 Return process rank for the communicator.
 
int size (MPI_Comm comm)
 Return size of the group (number of processes) associated with the communicator.
 
void check_error (MPI_Comm comm, int code)
 Check MPI error code. If the error code is not equal to MPI_SUCCESS, then std::abort is called.
 
constexpr std::array< std::int64_t, 2 > local_range (int rank, std::int64_t N, int size)
 Return local range for the calling process, partitioning the global [0, N - 1] range across all ranks into partitions of almost equal size.
 
constexpr int index_owner (int size, std::size_t index, std::size_t N)
 Return which rank owns index in global range [0, N - 1] (inverse of MPI::local_range).
 
std::vector< int > compute_graph_edges_pcx (MPI_Comm comm, std::span< const int > edges)
 Determine incoming graph edges using the PCX consensus algorithm.
 
std::vector< int > compute_graph_edges_nbx (MPI_Comm comm, std::span< const int > edges, int tag=static_cast< int >(tag::consensus_nbx))
 Determine incoming graph edges using the NBX consensus algorithm.
 
template<typename U >
std::pair< std::vector< std::int32_t >, std::vector< typename std::remove_reference_t< typename U::value_type > > > distribute_to_postoffice (MPI_Comm comm, const U &x, std::array< std::int64_t, 2 > shape, std::int64_t rank_offset)
 Distribute row data to 'post office' ranks.
 
template<typename U >
std::vector< typename std::remove_reference_t< typename U::value_type > > distribute_from_postoffice (MPI_Comm comm, std::span< const std::int64_t > indices, const U &x, std::array< std::int64_t, 2 > shape, std::int64_t rank_offset)
 Distribute rows of a rectangular data array from post office ranks to ranks where they are required.
 
template<typename U >
std::vector< typename std::remove_reference_t< typename U::value_type > > distribute_data (MPI_Comm comm0, std::span< const std::int64_t > indices, MPI_Comm comm1, const U &x, int shape1)
 Distribute rows of a rectangular data array to ranks where they are required (scalable version).
 

Variables

template<typename T >
MPI_Datatype mpi_t = mpi_type_mapping<T>::type
 Retrieves the MPI data type associated with the provided type.
 

Detailed Description

MPI support functionality.

Function Documentation

◆ check_error()

void check_error ( MPI_Comm comm,
int code )

Check MPI error code. If the error code is not equal to MPI_SUCCESS, then std::abort is called.

Parameters
[in] comm MPI communicator.
[in] code Error code returned by an MPI function call.
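A minimal usage sketch (the header path <dolfinx/common/MPI.h> and the function name example_check are assumptions for illustration; an initialised MPI environment is assumed): the raw return code of an MPI call is forwarded to check_error.

#include <dolfinx/common/MPI.h>
#include <mpi.h>

void example_check(MPI_Comm comm)
{
  int size = 0;
  // Plain MPI call returning an error code
  int code = MPI_Comm_size(comm, &size);
  // Calls std::abort unless code == MPI_SUCCESS
  dolfinx::MPI::check_error(comm, code);
}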

◆ compute_graph_edges_nbx()

std::vector< int > compute_graph_edges_nbx ( MPI_Comm comm,
std::span< const int > edges,
int tag = static_cast<int>(tag::consensus_nbx) )

Determine incoming graph edges using the NBX consensus algorithm.

Given a list of outgoing edges (destination ranks) from this rank, this function returns the incoming edges (source ranks) to this rank.

Note
This function is for sparse communication patterns, i.e. where the number of ranks that communicate with each other is relatively small. It is scalable, i.e. no arrays the size of the communicator are constructed and the communication pattern is sparse. It implements the NBX algorithm presented in https://dx.doi.org/10.1145/1837853.1693476.
The order of the returned ranks is not deterministic.
Collective.
Parameters
[in] comm MPI communicator.
[in] edges Edges (ranks) from this rank (the caller).
[in] tag Tag used in non-blocking MPI calls. A tag can be required when this function is called a second time on some ranks before a previous call has completed on all other ranks.
Returns
Ranks that have defined edges from them to this rank.
Note
An alternative to passing a tag is to ensure that there is an implicit or explicit barrier before and after the call to this function.
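A hedged usage sketch (the header path and the wrapper name find_sources are assumptions for illustration): each rank passes the ranks it sends to and receives back the ranks that send to it.

#include <dolfinx/common/MPI.h>
#include <mpi.h>
#include <vector>

// Given this rank's destination ranks (out-edges), return the source ranks
// (in-edges). Collective over comm; the returned order is not deterministic.
std::vector<int> find_sources(MPI_Comm comm, const std::vector<int>& destinations)
{
  return dolfinx::MPI::compute_graph_edges_nbx(comm, destinations);
}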

◆ compute_graph_edges_pcx()

std::vector< int > compute_graph_edges_pcx ( MPI_Comm comm,
std::span< const int > edges )

Determine incoming graph edges using the PCX consensus algorithm.

Given a list of outgoing edges (destination ranks) from this rank, this function returns the incoming edges (source ranks) to this rank.

Note
This function is for sparse communication patterns, i.e. where the number of ranks that communicate with each other is relatively small. It is not scalable as arrays the size of the communicator are allocated. It implements the PCX algorithm described in https://dx.doi.org/10.1145/1837853.1693476.
For sparse graphs, this function has \(O(p)\) cost, where \(p\) is the number of MPI ranks. It is suitable for modest MPI rank counts.
The order of the returned ranks is not deterministic.
Collective
Parameters
[in] comm MPI communicator.
[in] edges Edges (ranks) from this rank (the caller).
Returns
Ranks that have defined edges from them to this rank.
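A hedged sketch using a ring pattern (the header path, the helper name ring_sources and the concrete pattern are assumptions for illustration): every rank sends to the next rank, so on more than one rank each caller should receive exactly one in-edge, from the previous rank.

#include <dolfinx/common/MPI.h>
#include <mpi.h>
#include <vector>

std::vector<int> ring_sources(MPI_Comm comm)
{
  int rank = dolfinx::MPI::rank(comm);
  int size = dolfinx::MPI::size(comm);
  std::vector<int> out_edges;
  if (size > 1)
    out_edges.push_back((rank + 1) % size); // send to the next rank
  // Expected result on more than one rank: {(rank - 1 + size) % size}
  return dolfinx::MPI::compute_graph_edges_pcx(comm, out_edges);
}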

◆ distribute_data()

template<typename U >
std::vector< typename std::remove_reference_t< typename U::value_type > > distribute_data ( MPI_Comm comm0,
std::span< const std::int64_t > indices,
MPI_Comm comm1,
const U & x,
int shape1 )

Distribute rows of a rectangular data array to ranks where they are required (scalable version).

This function determines local neighborhoods for communication, and then uses MPI neighbourhood collectives to exchange data. It is scalable if the neighborhoods are relatively small, i.e. each process communicates with a modest number of other processes.

Parameters
[in] comm0 Communicator to distribute data across.
[in] indices Global indices of the data (row indices) required by the calling process.
[in] comm1 Communicator across which x is distributed. Can be MPI_COMM_NULL on ranks where x is empty.
[in] x Data (2D array, row-major) on the calling process to be distributed (by row). The global index for the [0, ..., n) local rows is assumed to be the local index plus the offset for this rank on comm1.
[in] shape1 The number of columns of the data array x.
Returns
The data for each index in indices (row-major storage).
Precondition
shape1 > 0
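A hedged sketch (the header path, the two-column shape and the helper name fetch_rows are assumptions for illustration): each rank holds a contiguous block of rows of a global two-column array and requests arbitrary global rows by index.

#include <dolfinx/common/MPI.h>
#include <mpi.h>
#include <cstdint>
#include <vector>

// my_rows is row-major with 2 columns; its global row index is the local
// index plus this rank's offset on the communicator.
std::vector<double> fetch_rows(MPI_Comm comm, const std::vector<std::int64_t>& needed,
                               const std::vector<double>& my_rows)
{
  // comm serves both as the request communicator (comm0) and as the
  // communicator over which my_rows is distributed (comm1); shape1 = 2 columns
  return dolfinx::MPI::distribute_data(comm, needed, comm, my_rows, 2);
}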

◆ distribute_from_postoffice()

template<typename U >
std::vector< typename std::remove_reference_t< typename U::value_type > > distribute_from_postoffice ( MPI_Comm comm,
std::span< const std::int64_t > indices,
const U & x,
std::array< std::int64_t, 2 > shape,
std::int64_t rank_offset )

Distribute rows of a rectangular data array from post office ranks to ranks where they are required.

This function determines local neighborhoods for communication, and then uses MPI neighbourhood collectives to exchange data. It is scalable if the neighborhoods are relatively small, i.e. each process communicates with a modest number of other processes.

Parameters
[in] comm The MPI communicator.
[in] indices Global indices of the data (row indices) required by the calling process.
[in] x Data (2D array, row-major) on the calling process which may be distributed (by row). The global index for the [0, ..., n) local rows is assumed to be the local index plus the offset for this rank.
[in] shape The global shape of x.
[in] rank_offset The rank offset such that the global index of local row i in x is rank_offset + i. It is usually computed using MPI_Exscan on comm1 from MPI::distribute_data.
Returns
The data for each index in indices (row-major storage).
Precondition
shape[1] > 0.
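A hedged sketch (the header path, the helper name gather_rows and the explicit MPI_Exscan step are assumptions consistent with the parameter description above): the rank offset is accumulated from the local row counts, then the requested rows are gathered.

#include <dolfinx/common/MPI.h>
#include <mpi.h>
#include <cstdint>
#include <vector>

std::vector<double> gather_rows(MPI_Comm comm, const std::vector<std::int64_t>& needed,
                                const std::vector<double>& my_rows,
                                std::int64_t global_rows, std::int64_t ncols)
{
  std::int64_t local_rows = my_rows.size() / ncols;
  std::int64_t offset = 0;
  // Global index of local row 0 on this rank
  MPI_Exscan(&local_rows, &offset, 1, MPI_INT64_T, MPI_SUM, comm);
  if (dolfinx::MPI::rank(comm) == 0)
    offset = 0; // MPI_Exscan leaves the receive buffer undefined on rank 0
  return dolfinx::MPI::distribute_from_postoffice(comm, needed, my_rows,
                                                  {global_rows, ncols}, offset);
}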

◆ distribute_to_postoffice()

template<typename U >
std::pair< std::vector< std::int32_t >, std::vector< typename std::remove_reference_t< typename U::value_type > > > distribute_to_postoffice ( MPI_Comm comm,
const U & x,
std::array< std::int64_t, 2 > shape,
std::int64_t rank_offset )

Distribute row data to 'post office' ranks.

This function takes row-wise data that is distributed across processes. Data is not duplicated across ranks. The global index of a row is its local row position plus the offset for the calling process. The post office rank for a row is determined by applying dolfinx::MPI::index_owner to the global index, and the row is then sent to the post office rank. The function returns that row data for which the caller is the post office.

Parameters
[in] comm MPI communicator.
[in] x Data to distribute (2D, row-major layout).
[in] shape The global shape of x.
[in] rank_offset The rank offset such that the global index of local row i in x is rank_offset + i. It is usually computed using MPI_Exscan.
Returns
(0) local indices of my post office data and (1) the data (row-major). It does not include rows that are in x, i.e. rows for which the calling process is the post office.
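A hedged sketch (the header path and the helper name to_postoffice are assumptions for illustration): every local row is forwarded to its post office rank, and the call returns the rows for which this rank is the post office, paired with their local indices.

#include <dolfinx/common/MPI.h>
#include <mpi.h>
#include <array>
#include <cstdint>
#include <utility>
#include <vector>

std::pair<std::vector<std::int32_t>, std::vector<double>>
to_postoffice(MPI_Comm comm, const std::vector<double>& my_rows,
              std::int64_t global_rows, std::int64_t ncols, std::int64_t offset)
{
  // offset is the global index of local row 0 (see distribute_from_postoffice
  // above for how it can be computed with MPI_Exscan)
  return dolfinx::MPI::distribute_to_postoffice(comm, my_rows,
                                                {global_rows, ncols}, offset);
}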

◆ index_owner()

int index_owner ( int size,
std::size_t index,
std::size_t N )
constexpr

Return which rank owns index in global range [0, N - 1] (inverse of MPI::local_range).

Parameters
[in] size Number of MPI ranks.
[in] index The index to determine the owning rank of.
[in] N Total number of indices.
Returns
Rank of the owning process.
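Because both index_owner and local_range are constexpr, the inverse relationship can be checked at compile time. A minimal sketch (assumed header path; N = 10 and 4 ranks are arbitrary illustration values):

#include <dolfinx/common/MPI.h>
#include <cstdint>

// Every index in local_range(r, N, size) must be owned by rank r
constexpr bool owner_consistent()
{
  constexpr int size = 4;
  constexpr std::int64_t N = 10;
  for (int r = 0; r < size; ++r)
  {
    auto range = dolfinx::MPI::local_range(r, N, size);
    for (std::int64_t i = range[0]; i < range[1]; ++i)
      if (dolfinx::MPI::index_owner(size, i, N) != r)
        return false;
  }
  return true;
}
static_assert(owner_consistent());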

◆ local_range()

std::array< std::int64_t, 2 > local_range ( int rank,
std::int64_t N,
int size )
constexpr

Return local range for the calling process, partitioning the global [0, N - 1] range across all ranks into partitions of almost equal size.

Parameters
[in] rank MPI rank of the caller.
[in] N The number of indices to partition across ranks.
[in] size Number of MPI ranks across which to partition N.
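A small sketch (assumed header path and a half-open [first, one-past-last) return convention; the concrete values are illustrative): splitting N = 10 indices over 4 ranks gives rank 0 a contiguous range starting at 0 whose size is 2 or 3.

#include <dolfinx/common/MPI.h>
#include <array>
#include <cstdint>

// Assumed half-open range [first, one-past-last) of indices owned by rank 0
constexpr std::array<std::int64_t, 2> r0 = dolfinx::MPI::local_range(0, 10, 4);
static_assert(r0[0] == 0);
static_assert(r0[1] - r0[0] == 2 || r0[1] - r0[0] == 3); // almost equal partition sizes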

◆ size()

int size ( MPI_Comm comm)

Return size of the group (number of processes) associated with the communicator.
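A trivial usage sketch (assumed header path; the reporting function is illustrative):

#include <dolfinx/common/MPI.h>
#include <mpi.h>
#include <iostream>

void report(MPI_Comm comm)
{
  // rank() and size() wrap the usual MPI rank/size queries for comm
  std::cout << "rank " << dolfinx::MPI::rank(comm) << " of "
            << dolfinx::MPI::size(comm) << std::endl;
}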

Variable Documentation

◆ mpi_t

template<typename T >
MPI_Datatype mpi_t = mpi_type_mapping<T>::type

Retrieves the MPI data type associated with the provided type.

Template Parameters
T C++ type to map.
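A hedged sketch of how the mapping can be used in a raw MPI call (assumed header path; for double the mapping resolves to the MPI type corresponding to double, e.g. MPI_DOUBLE):

#include <dolfinx/common/MPI.h>
#include <mpi.h>

double global_sum(MPI_Comm comm, double local)
{
  double out = 0.0;
  // mpi_t<double> supplies the MPI_Datatype matching the C++ type double
  MPI_Allreduce(&local, &out, 1, dolfinx::MPI::mpi_t<double>, MPI_SUM, comm);
  return out;
}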