DOLFINx
0.4.1
DOLFINx C++ interface
This class represents the distribution of index arrays across processes. An index array is a contiguous collection of N+1 indices [0, 1, ..., N] that are distributed across M processes. On a given process, the IndexMap stores a portion of the index set using local indices [0, 1, ..., n], and a map from the local indices to a unique global index. More...
#include <IndexMap.h>
Public Types | |
enum class | Mode { insert , add } |
Mode for reverse scatter operation. | |
enum class | Direction { reverse , forward } |
Edge directions of neighborhood communicator. | |
Public Member Functions | |
IndexMap (MPI_Comm comm, std::int32_t local_size) | |
Create a non-overlapping index map with local_size indices owned by this process. More... | |
IndexMap (MPI_Comm comm, std::int32_t local_size, const xtl::span< const int > &dest_ranks, const xtl::span< const std::int64_t > &ghosts, const xtl::span< const int > &src_ranks) | |
Create an index map with local_size owned indices on this process. More... | |
IndexMap (const IndexMap &map)=delete | |
IndexMap (IndexMap &&map)=default | |
Move constructor. | |
~IndexMap ()=default | |
Destructor. | |
IndexMap & | operator= (IndexMap &&map)=default |
Move assignment. | |
IndexMap & | operator= (const IndexMap &map)=delete |
std::array< std::int64_t, 2 > | local_range () const noexcept |
Range of indices (global) owned by this process. | |
std::int32_t | num_ghosts () const noexcept |
Number of ghost indices on this process. | |
std::int32_t | size_local () const noexcept |
Number of indices owned by this process. | |
std::int64_t | size_global () const noexcept |
Number of indices across the communicator. | |
const std::vector< std::int64_t > & | ghosts () const noexcept |
Local-to-global map for ghosts (local indexing beyond end of local range) | |
MPI_Comm | comm () const |
Return the MPI communicator used to create the index map. More... | |
MPI_Comm | comm (Direction dir) const |
Return an MPI communicator with attached distributed graph topology information. More... | |
void | local_to_global (const xtl::span< const std::int32_t > &local, const xtl::span< std::int64_t > &global) const |
Compute global indices for array of local indices. More... | |
void | global_to_local (const xtl::span< const std::int64_t > &global, const xtl::span< std::int32_t > &local) const |
Compute local indices for array of global indices. More... | |
std::vector< std::int64_t > | global_indices () const |
Global indices. More... | |
const graph::AdjacencyList< std::int32_t > & | scatter_fwd_indices () const noexcept |
Local (owned) indices shared with neighbor processes, i.e. indices that are ghosts on other processes, grouped by sharing (neighbor) process (destination ranks in the forward communicator and source ranks in the reverse communicator). scatter_fwd_indices().links(p) gives the list of owned indices that need to be sent to neighbourhood rank p during a forward scatter. More... | |
const std::vector< std::int32_t > & | scatter_fwd_ghost_positions () const noexcept |
Position of ghost entries in the receive buffer after a forward scatter, e.g. for a receive buffer b and a set operation, the ghost values should be updated by ghost_value[i] = b[scatter_fwd_ghost_positions[i]] . More... | |
std::vector< int > | ghost_owners () const |
Compute the owner on the neighborhood communicator of each ghost index. More... | |
std::map< std::int32_t, std::set< int > > | compute_shared_indices () const |
std::pair< IndexMap, std::vector< std::int32_t > > | create_submap (const xtl::span< const std::int32_t > &indices) const |
Create a new index map from a subset of indices in this index map. The order of the indices is preserved, with the new map effectively a 'compressed' map. More... | |
template<typename T > | |
void | scatter_fwd_begin (const xtl::span< const T > &send_buffer, MPI_Datatype &data_type, MPI_Request &request, const xtl::span< T > &recv_buffer) const |
Start a non-blocking send of owned data to ranks that ghost the data. The communication is completed by calling IndexMap::scatter_fwd_end. The send and receive buffers should not be changed until after IndexMap::scatter_fwd_end has been called. More... | |
void | scatter_fwd_end (MPI_Request &request) const |
Complete a non-blocking send from the local owner to process ranks that have the index as a ghost. This function completes the communication started by IndexMap::scatter_fwd_begin. More... | |
template<typename T > | |
void | scatter_fwd (const xtl::span< const T > &local_data, xtl::span< T > remote_data, int n) const |
Send n values for each index that is owned to processes that have the index as a ghost. The size of the input array local_data must be the same as n * size_local(). More... | |
template<typename T > | |
void | scatter_rev_begin (const xtl::span< const T > &send_buffer, MPI_Datatype &data_type, MPI_Request &request, const xtl::span< T > &recv_buffer) const |
Start a non-blocking send of ghost values to the owning rank. The non-blocking communication is completed by calling IndexMap::scatter_rev_end. A reverse scatter is the transpose of IndexMap::scatter_fwd_begin. More... | |
void | scatter_rev_end (MPI_Request &request) const |
Complete a non-blocking send of ghost values to the owning rank. This function completes the communication started by IndexMap::scatter_rev_begin. More... | |
template<typename T > | |
void | scatter_rev (xtl::span< T > local_data, const xtl::span< const T > &remote_data, int n, IndexMap::Mode op) const |
Send n values for each ghost index to the owning process. More... | |
This class represents the distribution of index arrays across processes. An index array is a contiguous collection of N+1 indices [0, 1, ..., N] that are distributed across M processes. On a given process, the IndexMap stores a portion of the index set using local indices [0, 1, ..., n], and a map from the local indices to a unique global index.
IndexMap | ( | MPI_Comm | comm, |
std::int32_t | local_size | ||
) |
Create a non-overlapping index map with local_size indices owned by this process.
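A minimal usage sketch (an assumption, not part of the generated documentation; it presumes the installed header path dolfinx/common/IndexMap.h and the namespace dolfinx::common):

#include <array>
#include <cstdint>
#include <iostream>
#include <mpi.h>
#include <dolfinx/common/IndexMap.h>

int main(int argc, char* argv[])
{
  MPI_Init(&argc, &argv);
  {
    // Each rank owns 100 indices and has no ghosts
    dolfinx::common::IndexMap map(MPI_COMM_WORLD, 100);

    // Global range [first, last) owned by this rank
    const std::array<std::int64_t, 2> range = map.local_range();
    std::cout << "owned range: [" << range[0] << ", " << range[1]
              << "), global size: " << map.size_global() << "\n";
  }
  MPI_Finalize();
  return 0;
}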
IndexMap | ( | MPI_Comm | comm, |
std::int32_t | local_size, | ||
const xtl::span< const int > & | dest_ranks, | ||
const xtl::span< const std::int64_t > & | ghosts, | ||
const xtl::span< const int > & | src_ranks | ||
) |
Create an index map with local_size owned indices on this process.
[in] | comm | The MPI communicator |
[in] | local_size | Local size of the IndexMap, i.e. the number of owned entries |
[in] | dest_ranks | Ranks that 'ghost' indices that are owned by the calling rank. I.e., ranks that the caller will send data to when updating ghost values. |
[in] | ghosts | The global indices of ghost entries |
[in] | src_ranks | Owner rank (on global communicator) of each entry in ghosts |
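A hedged sketch of constructing a ghosted map, assuming the same header path and namespace as in the example above; the two-rank layout (10 owned indices per rank, one ghost each) is invented for illustration:

#include <cstdint>
#include <mpi.h>
#include <vector>
#include <xtl/xspan.hpp>
#include <dolfinx/common/IndexMap.h>

dolfinx::common::IndexMap make_ghosted_map(MPI_Comm comm)
{
  int rank = 0, size = 0;
  MPI_Comm_rank(comm, &rank);
  MPI_Comm_size(comm, &size);
  const int other = (rank + 1) % size;

  // Ghost the first index owned by the other rank (global index other * 10)
  std::vector<std::int64_t> ghosts = {std::int64_t(other) * 10};
  std::vector<int> src_ranks = {other};  // owner of each ghost
  std::vector<int> dest_ranks = {other}; // ranks that ghost our data

  dolfinx::common::IndexMap map(
      comm, 10, xtl::span<const int>(dest_ranks.data(), dest_ranks.size()),
      xtl::span<const std::int64_t>(ghosts.data(), ghosts.size()),
      xtl::span<const int>(src_ranks.data(), src_ranks.size()));

  // map.size_local() == 10 and map.num_ghosts() == 1 on each rank
  return map;
}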
MPI_Comm comm | ( | ) | const |
Return the MPI communicator used to create the index map.
MPI_Comm comm | ( | Direction | dir | ) | const |
Return an MPI communicator with attached distributed graph topology information.
[in] | dir | Edge direction of communicator (forward, reverse) |
std::map< std::int32_t, std::set< int > > compute_shared_indices | ( | ) | const |
Compute the map from each local (owned) index to the set of ranks that have the index as a ghost.
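A small sketch of inspecting the result, assuming map is an existing dolfinx::common::IndexMap:

#include <iostream>
#include <dolfinx/common/IndexMap.h>

void print_shared(const dolfinx::common::IndexMap& map)
{
  // Owned index -> set of ranks that ghost it
  const auto shared = map.compute_shared_indices();
  for (const auto& [index, ranks] : shared)
  {
    std::cout << "owned index " << index << " is ghosted on ranks:";
    for (int r : ranks)
      std::cout << " " << r;
    std::cout << "\n";
  }
}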
std::pair< IndexMap, std::vector< std::int32_t > > create_submap | ( | const xtl::span< const std::int32_t > & | indices | ) | const |
Create a new index map from a subset of indices in this index map. The order of the indices is preserved, with the new map effectively a 'compressed' map.
[in] | indices | Local indices in the map that should appear in the new index map. All indices must be owned, i.e. indices must be less than this->size_local() . |
indices must be sorted and contain no duplicates.
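A sketch of building a sub-map from the first half of the owned indices, assuming map is an existing dolfinx::common::IndexMap; the name ghost_map for the second element of the returned pair is illustrative only:

#include <cstdint>
#include <numeric>
#include <vector>
#include <xtl/xspan.hpp>
#include <dolfinx/common/IndexMap.h>

void make_submap(const dolfinx::common::IndexMap& map)
{
  // First half of the owned indices: sorted, duplicate-free and all owned
  std::vector<std::int32_t> indices(map.size_local() / 2);
  std::iota(indices.begin(), indices.end(), 0);

  // The second element of the returned pair relates the new map to the old
  // indices; see the header for its exact meaning
  auto [submap, ghost_map] = map.create_submap(
      xtl::span<const std::int32_t>(indices.data(), indices.size()));
}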
std::vector< int > ghost_owners | ( | ) | const
Compute the owner on the neighborhood communicator of each ghost index.
The neighborhood ranks are the 'source' ranks on the 'reverse' communicator, i.e. the neighborhood source ranks on the communicator returned by IndexMap::comm(IndexMap::Direction::reverse). The source ranks on IndexMap::comm(IndexMap::Direction::reverse) communicator can be used to convert the returned neighbour ranks to the rank indices on the full communicator.
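A sketch of the conversion described above, assuming map is an existing dolfinx::common::IndexMap; it queries the reverse neighbourhood communicator with standard MPI graph-topology calls:

#include <mpi.h>
#include <vector>
#include <dolfinx/common/IndexMap.h>

std::vector<int> ghost_owner_global_ranks(const dolfinx::common::IndexMap& map)
{
  // Neighbourhood communicator for the reverse direction
  MPI_Comm comm = map.comm(dolfinx::common::IndexMap::Direction::reverse);

  // Source ranks of the distributed-graph topology
  int indegree(-1), outdegree(-1), weighted(-1);
  MPI_Dist_graph_neighbors_count(comm, &indegree, &outdegree, &weighted);
  std::vector<int> sources(indegree), destinations(outdegree);
  MPI_Dist_graph_neighbors(comm, indegree, sources.data(), MPI_UNWEIGHTED,
                           outdegree, destinations.data(), MPI_UNWEIGHTED);

  // ghost_owners() gives, for each ghost, a rank on the neighbourhood
  // communicator; map it to a rank on the full communicator
  std::vector<int> owners = map.ghost_owners();
  for (int& r : owners)
    r = sources[r];
  return owners;
}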
std::vector< std::int64_t > global_indices | ( | ) | const |
Global indices.
void global_to_local | ( | const xtl::span< const std::int64_t > & | global, |
const xtl::span< std::int32_t > & | local | ||
) | const |
Compute local indices for array of global indices.
[in] | global | Global indices |
[out] | local | The local index corresponding to each global index in 'global'. Set to -1 if the global index is not on this process. |
void local_to_global | ( | const xtl::span< const std::int32_t > & | local, |
const xtl::span< std::int64_t > & | global | ||
) | const |
Compute global indices for array of local indices.
[in] | local | Local indices |
[out] | global | The global indices |
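A round-trip sketch using local_to_global and global_to_local, assuming map is an existing dolfinx::common::IndexMap with at least three owned indices:

#include <cstdint>
#include <vector>
#include <xtl/xspan.hpp>
#include <dolfinx/common/IndexMap.h>

void round_trip(const dolfinx::common::IndexMap& map)
{
  // Local -> global for the first three owned indices
  std::vector<std::int32_t> local = {0, 1, 2};
  std::vector<std::int64_t> global(local.size());
  map.local_to_global(
      xtl::span<const std::int32_t>(local.data(), local.size()),
      xtl::span<std::int64_t>(global.data(), global.size()));

  // Global -> local; entries are -1 for global indices not on this process
  std::vector<std::int32_t> local_again(global.size());
  map.global_to_local(
      xtl::span<const std::int64_t>(global.data(), global.size()),
      xtl::span<std::int32_t>(local_again.data(), local_again.size()));
}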
template<typename T >
void scatter_fwd | ( | const xtl::span< const T > & | local_data,
xtl::span< T > | remote_data, | ||
int | n | ||
) | const
inline
Send n values for each index that is owned to processes that have the index as a ghost. The size of the input array local_data must be the same as n * size_local().
[in] | local_data | Local data associated with each owned local index to be sent to the processes where the data is ghosted. Size must be n * size_local(). |
[in,out] | remote_data | Ghost data on this process received from the owning process. Size will be n * num_ghosts(). |
[in] | n | Number of data items per index |
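A sketch of a forward scatter with block size n = 1, assuming map is an existing dolfinx::common::IndexMap:

#include <vector>
#include <xtl/xspan.hpp>
#include <dolfinx/common/IndexMap.h>

void update_ghosts(const dolfinx::common::IndexMap& map)
{
  // One value per owned index; ghost entries receive the owner's value
  std::vector<double> owned(map.size_local(), 1.0);
  std::vector<double> ghost_values(map.num_ghosts());
  map.scatter_fwd(xtl::span<const double>(owned.data(), owned.size()),
                  xtl::span<double>(ghost_values.data(), ghost_values.size()),
                  1);
}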
template<typename T >
void scatter_fwd_begin | ( | const xtl::span< const T > & | send_buffer,
MPI_Datatype & | data_type, | ||
MPI_Request & | request, | ||
const xtl::span< T > & | recv_buffer | ||
) | const
inline
Start a non-blocking send of owned data to ranks that ghost the data. The communication is completed by calling IndexMap::scatter_fwd_end. The send and receive buffer should not be changed until after IndexMap::scatter_fwd_end has been called.
[in] | send_buffer | Local data associated with each owned local index to be sent to the processes where the data is ghosted. It must not be changed until after a call to IndexMap::scatter_fwd_end. The order of data in the buffer is given by IndexMap::scatter_fwd_indices. |
data_type | The MPI data type. To send data with a block size use MPI_Type_contiguous with size n | |
request | The MPI request handle for tracking the status of the non-blocking communication | |
recv_buffer | A buffer used for the received data. The position of ghost entries in the buffer is given by IndexMap::scatter_fwd_ghost_positions. The buffer must not be accessed or changed until after a call to IndexMap::scatter_fwd_end. |
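A sketch of the full pack / begin / end / unpack sequence for a forward scatter with n = 1, assuming map is an existing dolfinx::common::IndexMap and that the receive buffer holds one entry per ghost:

#include <cstdint>
#include <mpi.h>
#include <vector>
#include <xtl/xspan.hpp>
#include <dolfinx/common/IndexMap.h>

void forward_scatter(const dolfinx::common::IndexMap& map)
{
  std::vector<double> owned(map.size_local(), 1.0);

  // Pack the send buffer in the order given by scatter_fwd_indices()
  const auto& send_idx = map.scatter_fwd_indices().array();
  std::vector<double> send_buffer(send_idx.size());
  for (std::size_t i = 0; i < send_buffer.size(); ++i)
    send_buffer[i] = owned[send_idx[i]];

  // Receive buffer (assumed to hold num_ghosts() entries for n = 1)
  std::vector<double> recv_buffer(map.num_ghosts());
  MPI_Datatype data_type = MPI_DOUBLE; // block size n = 1
  MPI_Request request;
  map.scatter_fwd_begin(
      xtl::span<const double>(send_buffer.data(), send_buffer.size()),
      data_type, request,
      xtl::span<double>(recv_buffer.data(), recv_buffer.size()));
  // ... other work can overlap the communication here ...
  map.scatter_fwd_end(request);

  // Unpack: ghost i sits at scatter_fwd_ghost_positions()[i] in recv_buffer
  const std::vector<std::int32_t>& pos = map.scatter_fwd_ghost_positions();
  std::vector<double> ghost_values(map.num_ghosts());
  for (std::size_t i = 0; i < ghost_values.size(); ++i)
    ghost_values[i] = recv_buffer[pos[i]];
}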
void scatter_fwd_end | ( | MPI_Request & | request | ) | const
inline
Complete a non-blocking send from the local owner to process ranks that have the index as a ghost. This function completes the communication started by IndexMap::scatter_fwd_begin.
[in] | request | The MPI request handle for tracking the status of the send |
const std::vector< std::int32_t > & scatter_fwd_ghost_positions | ( | ) | const
noexcept
Position of ghost entries in the receive buffer after a forward scatter, e.g. for a receive buffer b and a set operation, the ghost values should be updated by ghost_value[i] = b[scatter_fwd_ghost_positions[i]].
const graph::AdjacencyList< std::int32_t > & scatter_fwd_indices | ( | ) | const
noexcept
Local (owned) indices shared with neighbor processes, i.e. indices that are ghosts on other processes, grouped by sharing (neighbor) process (destination ranks in the forward communicator and source ranks in the reverse communicator). scatter_fwd_indices().links(p) gives the list of owned indices that need to be sent to neighbourhood rank p during a forward scatter.
Entries are ordered such that scatter_fwd_indices.offsets() is the send displacement array for a forward scatter and scatter_fwd_indices.array()[i] is the index of the owned index that should be placed at position i in the send buffer for a forward scatter.
template<typename T >
void scatter_rev | ( | xtl::span< T > | local_data,
const xtl::span< const T > & | remote_data, | ||
int | n, | ||
IndexMap::Mode | op | ||
) | const
inline
Send n values for each ghost index to the owning process.
[in,out] | local_data | Owned data on this process, updated with the values received from the ghosting processes. Size must be n * size_local(). |
[in] | remote_data | Ghost data on this process to be sent to the owning process. Size must be n * num_ghosts(). |
[in] | n | Number of data items per index |
[in] | op | Sum or set received values in local_data |
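A sketch of a reverse scatter with n = 1 that sums ghost contributions into the owned data, assuming map is an existing dolfinx::common::IndexMap:

#include <vector>
#include <xtl/xspan.hpp>
#include <dolfinx/common/IndexMap.h>

void accumulate_ghosts(const dolfinx::common::IndexMap& map)
{
  // Ghost contributions computed on this process
  std::vector<double> ghost_values(map.num_ghosts(), 1.0);

  // Sum the received ghost values into the owned entries
  std::vector<double> owned(map.size_local(), 0.0);
  map.scatter_rev(
      xtl::span<double>(owned.data(), owned.size()),
      xtl::span<const double>(ghost_values.data(), ghost_values.size()), 1,
      dolfinx::common::IndexMap::Mode::add);
}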
template<typename T >
void scatter_rev_begin | ( | const xtl::span< const T > & | send_buffer,
MPI_Datatype & | data_type, | ||
MPI_Request & | request, | ||
const xtl::span< T > & | recv_buffer | ||
) | const
inline
Start a non-blocking send of ghost values to the owning rank. The non-blocking communication is completed by calling IndexMap::scatter_rev_end. A reverse scatter is the transpose of IndexMap::scatter_fwd_begin.
[in] | send_buffer | Send buffer filled with ghost data on this process to be sent to the owning rank. The order of the data is given by IndexMap::scatter_fwd_ghost_positions, with IndexMap::scatter_fwd_ghost_positions()[i] being the index of the ghost data that should be placed in position i of the buffer. |
data_type | The MPI data type. To send data with a block size use MPI_Type_contiguous with size n | |
request | The MPI request handle for tracking the status of the send | |
recv_buffer | A buffer used for the received data. It must not be changed until after a call to IndexMap::scatter_rev_end. The ordering of the data is given by IndexMap::scatter_fwd_indices, with IndexMap::scatter_fwd_indices()[i] being the position in the owned data array that corresponds to position i in the buffer. |
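A sketch of the reverse pack / begin / end / unpack sequence with n = 1, assuming map is an existing dolfinx::common::IndexMap; the send-buffer layout here follows the forward receive layout described for scatter_fwd_ghost_positions, so check the header if the exact ordering matters:

#include <cstdint>
#include <mpi.h>
#include <vector>
#include <xtl/xspan.hpp>
#include <dolfinx/common/IndexMap.h>

void reverse_scatter(const dolfinx::common::IndexMap& map)
{
  std::vector<double> ghost_values(map.num_ghosts(), 1.0);

  // Pack: ghost i goes to position scatter_fwd_ghost_positions()[i]
  const std::vector<std::int32_t>& pos = map.scatter_fwd_ghost_positions();
  std::vector<double> send_buffer(ghost_values.size());
  for (std::size_t i = 0; i < ghost_values.size(); ++i)
    send_buffer[pos[i]] = ghost_values[i];

  // Receive buffer ordered by scatter_fwd_indices()
  const auto& recv_idx = map.scatter_fwd_indices().array();
  std::vector<double> recv_buffer(recv_idx.size());
  MPI_Datatype data_type = MPI_DOUBLE; // block size n = 1
  MPI_Request request;
  map.scatter_rev_begin(
      xtl::span<const double>(send_buffer.data(), send_buffer.size()),
      data_type, request,
      xtl::span<double>(recv_buffer.data(), recv_buffer.size()));
  map.scatter_rev_end(request);

  // Accumulate the received values into the owned data
  std::vector<double> owned(map.size_local(), 0.0);
  for (std::size_t i = 0; i < recv_buffer.size(); ++i)
    owned[recv_idx[i]] += recv_buffer[i];
}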
void scatter_rev_end | ( | MPI_Request & | request | ) | const
inline
Complete a non-blocking send of ghost values to the owning rank. This function completes the communication started by IndexMap::scatter_rev_begin.
[in] | request | The MPI request handle for tracking the status of the send |