DOLFINx  0.4.1
DOLFINx C++ interface
IndexMap Class Reference

This class represents the distribution of an index array across processes. An index array is a contiguous collection of N+1 indices [0, 1, ..., N] that are distributed across M processes. On a given process, the IndexMap stores a portion of the index set using local indices [0, 1, ..., n], and a map from each local index to a unique global index.

#include <IndexMap.h>

Public Types

enum class  Mode { insert , add }
 Mode for reverse scatter operation.
 
enum class  Direction { reverse , forward }
 Edge directions of neighborhood communicator.
 

Public Member Functions

 IndexMap (MPI_Comm comm, std::int32_t local_size)
 Create a non-overlapping index map with local_size indices owned by this process.
 
 IndexMap (MPI_Comm comm, std::int32_t local_size, const xtl::span< const int > &dest_ranks, const xtl::span< const std::int64_t > &ghosts, const xtl::span< const int > &src_ranks)
 Create an index map with local_size owned indices on this process.
 
 IndexMap (const IndexMap &map)=delete
 
 IndexMap (IndexMap &&map)=default
 Move constructor.
 
 ~IndexMap ()=default
 Destructor.
 
IndexMap & operator= (IndexMap &&map)=default
 Move assignment.
 
IndexMap & operator= (const IndexMap &map)=delete
 
std::array< std::int64_t, 2 > local_range () const noexcept
 Range of indices (global) owned by this process.
 
std::int32_t num_ghosts () const noexcept
 Number of ghost indices on this process.
 
std::int32_t size_local () const noexcept
 Number of indices owned by this process.
 
std::int64_t size_global () const noexcept
 Number of indices across the communicator.
 
const std::vector< std::int64_t > & ghosts () const noexcept
 Local-to-global map for ghosts (local indexing beyond end of local range)
 
MPI_Comm comm () const
 Return the MPI communicator used to create the index map.
 
MPI_Comm comm (Direction dir) const
 Return an MPI communicator with attached distributed graph topology information.
 
void local_to_global (const xtl::span< const std::int32_t > &local, const xtl::span< std::int64_t > &global) const
 Compute global indices for array of local indices.
 
void global_to_local (const xtl::span< const std::int64_t > &global, const xtl::span< std::int32_t > &local) const
 Compute local indices for array of global indices.
 
std::vector< std::int64_t > global_indices () const
 Global indices.
 
const graph::AdjacencyList< std::int32_t > & scatter_fwd_indices () const noexcept
 Local (owned) indices shared with neighbor processes, i.e. indices that are ghosts on other processes, grouped by sharing (neighbor) process (destination ranks in the forward communicator and source ranks in the reverse communicator). scatter_fwd_indices().links(p) gives the list of owned indices that need to be sent to neighbourhood rank p during a forward scatter.
 
const std::vector< std::int32_t > & scatter_fwd_ghost_positions () const noexcept
 Position of ghost entries in the receive buffer after a forward scatter, e.g. for a receive buffer b and a set operation, the ghost values should be updated by ghost_value[i] = b[scatter_fwd_ghost_positions[i]].
 
std::vector< int > ghost_owners () const
 Compute the owner on the neighborhood communicator of each ghost index.
 
std::map< std::int32_t, std::set< int > > compute_shared_indices () const
 
std::pair< IndexMap, std::vector< std::int32_t > > create_submap (const xtl::span< const std::int32_t > &indices) const
 Create a new index map from a subset of indices in this index map. The order of the indices is preserved, with the new map effectively a 'compressed' map.
 
template<typename T >
void scatter_fwd_begin (const xtl::span< const T > &send_buffer, MPI_Datatype &data_type, MPI_Request &request, const xtl::span< T > &recv_buffer) const
 Start a non-blocking send of owned data to ranks that ghost the data. The communication is completed by calling IndexMap::scatter_fwd_end. The send and receive buffers should not be changed until after IndexMap::scatter_fwd_end has been called.
 
void scatter_fwd_end (MPI_Request &request) const
 Complete a non-blocking send from the local owner to process ranks that have the index as a ghost. This function completes the communication started by IndexMap::scatter_fwd_begin.
 
template<typename T >
void scatter_fwd (const xtl::span< const T > &local_data, xtl::span< T > remote_data, int n) const
 Send n values for each index that is owned to processes that have the index as a ghost. The size of the input array local_data must be the same as n * size_local().
 
template<typename T >
void scatter_rev_begin (const xtl::span< const T > &send_buffer, MPI_Datatype &data_type, MPI_Request &request, const xtl::span< T > &recv_buffer) const
 Start a non-blocking send of ghost values to the owning rank. The non-blocking communication is completed by calling IndexMap::scatter_rev_end. A reverse scatter is the transpose of IndexMap::scatter_fwd_begin.
 
void scatter_rev_end (MPI_Request &request) const
 Complete a non-blocking send of ghost values to the owning rank. This function completes the communication started by IndexMap::scatter_rev_begin.
 
template<typename T >
void scatter_rev (xtl::span< T > local_data, const xtl::span< const T > &remote_data, int n, IndexMap::Mode op) const
 Send n values for each ghost index to the owning process.
 

Detailed Description

This class represents the distribution of an index array across processes. An index array is a contiguous collection of N+1 indices [0, 1, ..., N] that are distributed across M processes. On a given process, the IndexMap stores a portion of the index set using local indices [0, 1, ..., n], and a map from each local index to a unique global index.
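
A minimal usage sketch of the non-overlapping case (an illustration only; it assumes the DOLFINx 0.4.x header dolfinx/common/IndexMap.h and an initialised MPI environment):

#include <array>
#include <cstdint>
#include <iostream>
#include <mpi.h>
#include <dolfinx/common/IndexMap.h>

int main(int argc, char* argv[])
{
  MPI_Init(&argc, &argv);
  {
    // Each rank owns 10 indices, so the global index set has size
    // 10 * (number of ranks)
    dolfinx::common::IndexMap map(MPI_COMM_WORLD, 10);

    // Owned global index range [first, last) on this rank
    std::array<std::int64_t, 2> range = map.local_range();
    std::cout << "owned range: [" << range[0] << ", " << range[1] << "), "
              << "local size: " << map.size_local() << ", "
              << "global size: " << map.size_global() << std::endl;
  }
  MPI_Finalize();
  return 0;
}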

Constructor & Destructor Documentation

◆ IndexMap() [1/2]

IndexMap ( MPI_Comm  comm,
std::int32_t  local_size 
)

Create a non-overlapping index map with local_size indices owned by this process.

Note
Collective
Parameters
[in] comm The MPI communicator
[in] local_size Local size of the IndexMap, i.e. the number of owned entries

◆ IndexMap() [2/2]

IndexMap ( MPI_Comm  comm,
std::int32_t  local_size,
const xtl::span< const int > &  dest_ranks,
const xtl::span< const std::int64_t > &  ghosts,
const xtl::span< const int > &  src_ranks 
)

Create an index map with local_size owned indices on this process.

Note
Collective
Parameters
[in] comm The MPI communicator
[in] local_size Local size of the IndexMap, i.e. the number of owned entries
[in] dest_ranks Ranks that 'ghost' indices owned by the calling rank, i.e. ranks to which the caller will send data when updating ghost values
[in] ghosts The global indices of ghost entries
[in] src_ranks Owner rank (on the global communicator) of each entry in ghosts
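
The sketch below builds a small ghosted map on exactly two ranks (an assumed layout for illustration; the helper name create_ghosted_map is not part of DOLFINx). Rank 0 owns global indices [0, 5) and ghosts index 5; rank 1 owns [5, 10) and ghosts index 0.

#include <cstdint>
#include <vector>
#include <mpi.h>
#include <xtl/xspan.hpp>
#include <dolfinx/common/IndexMap.h>

// Two-rank example: each rank owns 5 indices and ghosts the first owned
// index of the other rank
dolfinx::common::IndexMap create_ghosted_map(MPI_Comm comm)
{
  int rank = 0;
  MPI_Comm_rank(comm, &rank);
  const int other = (rank == 0) ? 1 : 0;

  // Global indices of ghost entries and, entry-by-entry, their owning rank
  std::vector<std::int64_t> ghosts = {rank == 0 ? std::int64_t(5) : std::int64_t(0)};
  std::vector<int> src_ranks = {other};
  // Ranks that ghost indices owned by this rank
  std::vector<int> dest_ranks = {other};

  return dolfinx::common::IndexMap(
      comm, 5, xtl::span<const int>(dest_ranks.data(), dest_ranks.size()),
      xtl::span<const std::int64_t>(ghosts.data(), ghosts.size()),
      xtl::span<const int>(src_ranks.data(), src_ranks.size()));
}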

Member Function Documentation

◆ comm() [1/2]

MPI_Comm comm ( ) const

Return the MPI communicator used to create the index map.

Returns
Communicator

◆ comm() [2/2]

MPI_Comm comm ( Direction  dir) const

Return an MPI communicator with attached distributed graph topology information.

Parameters
[in] dir Edge direction of communicator (forward, reverse)
Returns
A neighborhood communicator for the specified edge direction
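
The neighbourhood ranks attached to the returned communicator can be recovered with the standard MPI-3 distributed-graph queries. A sketch (illustrative helper name; assumes the communicator carries a distributed-graph topology, as stated above):

#include <vector>
#include <mpi.h>
#include <dolfinx/common/IndexMap.h>

// On the forward communicator, source ranks own indices that this rank
// ghosts, and destination ranks ghost indices owned by this rank
void print_neighbourhood(const dolfinx::common::IndexMap& map)
{
  MPI_Comm comm = map.comm(dolfinx::common::IndexMap::Direction::forward);

  int indegree = 0, outdegree = 0, weighted = 0;
  MPI_Dist_graph_neighbors_count(comm, &indegree, &outdegree, &weighted);

  std::vector<int> sources(indegree), destinations(outdegree);
  MPI_Dist_graph_neighbors(comm, indegree, sources.data(), MPI_UNWEIGHTED,
                           outdegree, destinations.data(), MPI_UNWEIGHTED);
  // sources and destinations now hold the neighbourhood ranks
}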

◆ compute_shared_indices()

std::map< std::int32_t, std::set< int > > compute_shared_indices ( ) const
Todo:
Aim to remove this function? If it's kept, should it work with neighborhood ranks?

Compute a map from each local (owned) index to the set of ranks that have the index as a ghost.

Returns
shared indices
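
A short sketch iterating over the returned map (the helper name is illustrative):

#include <cstdint>
#include <iostream>
#include <map>
#include <set>
#include <dolfinx/common/IndexMap.h>

// For each owned index that is ghosted elsewhere, list the ranks that ghost it
void print_shared(const dolfinx::common::IndexMap& map)
{
  const std::map<std::int32_t, std::set<int>> shared = map.compute_shared_indices();
  for (const auto& [index, ranks] : shared)
  {
    std::cout << "owned index " << index << " is ghosted on ranks:";
    for (int r : ranks)
      std::cout << " " << r;
    std::cout << std::endl;
  }
}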

◆ create_submap()

std::pair< IndexMap, std::vector< std::int32_t > > create_submap ( const xtl::span< const std::int32_t > &  indices) const

Create a new index map from a subset of indices in this index map. The order of the indices is preserved, with the new map effectively a 'compressed' map.

Parameters
[in] indices Local indices in the map that should appear in the new index map. All indices must be owned, i.e. indices must be less than this->size_local().
Precondition
indices must be sorted and contain no duplicates
Returns
The (i) new index map and (ii) a map from the ghost position in the new map to the ghost position in the original (this) map
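
A sketch that extracts the first half of the owned indices into a submap (the helper name first_half_submap is illustrative, not part of DOLFINx):

#include <cstdint>
#include <numeric>
#include <utility>
#include <vector>
#include <xtl/xspan.hpp>
#include <dolfinx/common/IndexMap.h>

// The indices passed to create_submap must be owned, sorted and duplicate-free;
// 0, 1, ..., size_local()/2 - 1 satisfies this by construction
std::pair<dolfinx::common::IndexMap, std::vector<std::int32_t>>
first_half_submap(const dolfinx::common::IndexMap& map)
{
  std::vector<std::int32_t> indices(map.size_local() / 2);
  std::iota(indices.begin(), indices.end(), 0);
  return map.create_submap(
      xtl::span<const std::int32_t>(indices.data(), indices.size()));
}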

◆ ghost_owners()

std::vector< int > ghost_owners ( ) const

Compute the owner on the neighborhood communicator of each ghost index.

The neighborhood ranks are the 'source' ranks on the communicator returned by IndexMap::comm(IndexMap::Direction::reverse). These source ranks can be used to convert the returned neighbour ranks to rank indices on the full communicator.

Returns
The owning rank on the neighborhood communicator of the ith ghost index.
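
A sketch that applies the conversion rule described above to map each ghost to its owner's rank on the full communicator (the helper name is illustrative):

#include <vector>
#include <mpi.h>
#include <dolfinx/common/IndexMap.h>

std::vector<int> ghost_owner_global_ranks(const dolfinx::common::IndexMap& map)
{
  // Source ranks of the 'reverse' neighbourhood communicator
  MPI_Comm comm = map.comm(dolfinx::common::IndexMap::Direction::reverse);
  int indegree = 0, outdegree = 0, weighted = 0;
  MPI_Dist_graph_neighbors_count(comm, &indegree, &outdegree, &weighted);
  std::vector<int> sources(indegree), destinations(outdegree);
  MPI_Dist_graph_neighbors(comm, indegree, sources.data(), MPI_UNWEIGHTED,
                           outdegree, destinations.data(), MPI_UNWEIGHTED);

  // Convert the neighbourhood rank of each ghost's owner to a full-communicator rank
  std::vector<int> owners = map.ghost_owners();
  for (int& r : owners)
    r = sources[r];
  return owners;
}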

◆ global_indices()

std::vector< std::int64_t > global_indices ( ) const

Global indices.

Returns
The global index for all local indices (0, 1, 2, ...) on this process, including ghosts

◆ global_to_local()

void global_to_local ( const xtl::span< const std::int64_t > &  global,
const xtl::span< std::int32_t > &  local 
) const

Compute local indices for array of global indices.

Parameters
[in] global Global indices
[out] local The local index of each corresponding entry in global. Set to -1 if the global index is not found on this process.

◆ local_to_global()

void local_to_global ( const xtl::span< const std::int32_t > &  local,
const xtl::span< std::int64_t > &  global 
) const

Compute global indices for array of local indices.

Parameters
[in] local Local indices
[out] global The global indices
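
A round-trip sketch for the two conversions (illustrative; every owned and ghost index has a global image, and converting back recovers the original local indices):

#include <cassert>
#include <cstdint>
#include <numeric>
#include <vector>
#include <xtl/xspan.hpp>
#include <dolfinx/common/IndexMap.h>

void check_round_trip(const dolfinx::common::IndexMap& map)
{
  // All local indices: owned indices followed by ghosts
  const std::int32_t n = map.size_local() + map.num_ghosts();
  std::vector<std::int32_t> local(n);
  std::iota(local.begin(), local.end(), 0);

  // local -> global
  std::vector<std::int64_t> global(n);
  map.local_to_global(
      xtl::span<const std::int32_t>(local.data(), local.size()),
      xtl::span<std::int64_t>(global.data(), global.size()));

  // global -> local recovers the starting indices
  std::vector<std::int32_t> local_again(n);
  map.global_to_local(
      xtl::span<const std::int64_t>(global.data(), global.size()),
      xtl::span<std::int32_t>(local_again.data(), local_again.size()));
  assert(local == local_again);
}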

◆ scatter_fwd()

void scatter_fwd ( const xtl::span< const T > &  local_data,
xtl::span< T >  remote_data,
int  n 
) const
inline

Send n values for each index that is owned to processes that have the index as a ghost. The size of the input array local_data must be the same as n * size_local().

Parameters
[in] local_data Local data associated with each owned local index to be sent to the processes where the data is ghosted. Size must be n * size_local().
[in,out] remote_data Ghost data on this process received from the owning process. Size will be n * num_ghosts().
[in] n Number of data items per index
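
A typical use is refreshing the ghost part of an array laid out as [owned values | ghost values]. A sketch for block size 1 (the helper name is illustrative):

#include <cstdint>
#include <vector>
#include <xtl/xspan.hpp>
#include <dolfinx/common/IndexMap.h>

// After the call, the ghost entries of x hold the values stored on the owning ranks
void update_ghosts(const dolfinx::common::IndexMap& map, std::vector<double>& x)
{
  const std::int32_t n_owned = map.size_local();
  xtl::span<const double> owned(x.data(), n_owned);
  xtl::span<double> ghost_part(x.data() + n_owned, map.num_ghosts());
  map.scatter_fwd(owned, ghost_part, 1);
}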

◆ scatter_fwd_begin()

void scatter_fwd_begin ( const xtl::span< const T > &  send_buffer,
MPI_Datatype &  data_type,
MPI_Request &  request,
const xtl::span< T > &  recv_buffer 
) const
inline

Start a non-blocking send of owned data to ranks that ghost the data. The communication is completed by calling IndexMap::scatter_fwd_end. The send and receive buffers should not be changed until after IndexMap::scatter_fwd_end has been called.

Parameters
[in] send_buffer Local data associated with each owned local index to be sent to the processes where the data is ghosted. It must not be changed until after a call to IndexMap::scatter_fwd_end. The order of data in the buffer is given by IndexMap::scatter_fwd_indices.
data_type The MPI data type. To send data with a block size use MPI_Type_contiguous with size n
request The MPI request handle for tracking the status of the non-blocking communication
recv_buffer A buffer used for the received data. The position of ghost entries in the buffer is given by IndexMap::scatter_fwd_ghost_positions. The buffer must not be accessed or changed until after a call to IndexMap::scatter_fwd_end.
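
The sketch below strings the pieces together for data with block size n: pack the send buffer in the order given by scatter_fwd_indices(), start the exchange, do other work, then unpack ghost values using scatter_fwd_ghost_positions(). It is an illustration of the documented protocol (header paths and the helper name are assumptions), not code from the DOLFINx sources; the blocking scatter_fwd() provides the same functionality in one call.

#include <cstddef>
#include <cstdint>
#include <vector>
#include <mpi.h>
#include <xtl/xspan.hpp>
#include <dolfinx/common/IndexMap.h>
#include <dolfinx/graph/AdjacencyList.h>

void forward_scatter(const dolfinx::common::IndexMap& map,
                     const std::vector<double>& owned,   // size n * size_local()
                     std::vector<double>& ghost_values,  // size n * num_ghosts()
                     int n)
{
  // Pack owned values in the order expected by the forward scatter
  const std::vector<std::int32_t>& send_idx = map.scatter_fwd_indices().array();
  std::vector<double> send_buffer(n * send_idx.size());
  for (std::size_t i = 0; i < send_idx.size(); ++i)
    for (int k = 0; k < n; ++k)
      send_buffer[n * i + k] = owned[n * send_idx[i] + k];

  // Describe a block of n values as one MPI datatype
  MPI_Datatype block_type;
  MPI_Type_contiguous(n, MPI_DOUBLE, &block_type);
  MPI_Type_commit(&block_type);

  const std::vector<std::int32_t>& ghost_pos = map.scatter_fwd_ghost_positions();
  std::vector<double> recv_buffer(n * ghost_pos.size());

  MPI_Request request;
  map.scatter_fwd_begin(
      xtl::span<const double>(send_buffer.data(), send_buffer.size()),
      block_type, request,
      xtl::span<double>(recv_buffer.data(), recv_buffer.size()));

  // ... work that does not touch the buffers can overlap with communication ...

  map.scatter_fwd_end(request);
  MPI_Type_free(&block_type);

  // Place received blocks at the ghost positions:
  // ghost_value[i] = recv_buffer[scatter_fwd_ghost_positions()[i]] (block-wise)
  for (std::size_t i = 0; i < ghost_pos.size(); ++i)
    for (int k = 0; k < n; ++k)
      ghost_values[n * i + k] = recv_buffer[n * ghost_pos[i] + k];
}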

◆ scatter_fwd_end()

void scatter_fwd_end ( MPI_Request &  request) const
inline

Complete a non-blocking send from the local owner to process ranks that have the index as a ghost. This function completes the communication started by IndexMap::scatter_fwd_begin.

Parameters
[in] request The MPI request handle for tracking the status of the send

◆ scatter_fwd_ghost_positions()

const std::vector< std::int32_t > & scatter_fwd_ghost_positions ( ) const
noexcept

Position of ghost entries in the receive buffer after a forward scatter, e.g. for a receive buffer b and a set operation, the ghost values should be updated by ghost_value[i] = b[scatter_fwd_ghost_positions[i]].

Returns
Position of the ith ghost entry in the receive buffer

◆ scatter_fwd_indices()

const graph::AdjacencyList< std::int32_t > & scatter_fwd_indices ( ) const
noexcept

Local (owned) indices shared with neighbor processes, i.e. indices that are ghosts on other processes, grouped by sharing (neighbor) process (destination ranks in the forward communicator and source ranks in the reverse communicator). scatter_fwd_indices().links(p) gives the list of owned indices that need to be sent to neighbourhood rank p during a forward scatter.

Entries are ordered such that scatter_fwd_indices.offsets() is the send displacement array for a forward scatter and scatter_fwd_indices.array()[i] is the index of the owned entry that should be placed at position i in the send buffer for a forward scatter.

Returns
List of indices that are ghosted on other processes
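
A small sketch reporting the send pattern per neighbour (the helper name is illustrative):

#include <cstdint>
#include <iostream>
#include <dolfinx/common/IndexMap.h>
#include <dolfinx/graph/AdjacencyList.h>

// How many owned indices are sent to each neighbour during a forward scatter
void print_send_sizes(const dolfinx::common::IndexMap& map)
{
  const dolfinx::graph::AdjacencyList<std::int32_t>& shared = map.scatter_fwd_indices();
  for (std::int32_t p = 0; p < shared.num_nodes(); ++p)
    std::cout << "neighbour " << p << ": " << shared.links(p).size()
              << " owned indices" << std::endl;
}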

◆ scatter_rev()

void scatter_rev ( xtl::span< T >  local_data,
const xtl::span< const T > &  remote_data,
int  n,
IndexMap::Mode  op 
) const
inline

Send n values for each ghost index to the owning process.

Parameters
[in,out] local_data Local data associated with each owned local index, updated with the values received from ranks that ghost the index. Size must be n * size_local().
[in] remote_data Ghost data on this process that is sent to the owning process. Size must be n * num_ghosts().
[in] n Number of data items per index
[in] op Sum or set received values in local_data
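
A typical use is accumulating ghost contributions back into the owned part of an array laid out as [owned values | ghost values], e.g. after adding values into ghost rows during assembly. A sketch for block size 1 (the helper name is illustrative):

#include <cstdint>
#include <vector>
#include <xtl/xspan.hpp>
#include <dolfinx/common/IndexMap.h>

// Sum the ghost entries of x into the owned entries on the owning ranks
void accumulate_ghosts(const dolfinx::common::IndexMap& map, std::vector<double>& x)
{
  const std::int32_t n_owned = map.size_local();
  xtl::span<double> owned(x.data(), n_owned);
  xtl::span<const double> ghost_part(x.data() + n_owned, map.num_ghosts());
  map.scatter_rev(owned, ghost_part, 1, dolfinx::common::IndexMap::Mode::add);
}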

◆ scatter_rev_begin()

void scatter_rev_begin ( const xtl::span< const T > &  send_buffer,
MPI_Datatype &  data_type,
MPI_Request &  request,
const xtl::span< T > &  recv_buffer 
) const
inline

Start a non-blocking send of ghost values to the owning rank. The non-blocking communication is completed by calling IndexMap::scatter_rev_end. A reverse scatter is the transpose of IndexMap::scatter_fwd_begin.

Parameters
[in] send_buffer Send buffer filled with ghost data on this process to be sent to the owning rank. The order of the data is given by IndexMap::scatter_fwd_ghost_positions, with IndexMap::scatter_fwd_ghost_positions()[i] being the index of the ghost data that should be placed in position i of the buffer.
data_type The MPI data type. To send data with a block size use MPI_Type_contiguous with size n
request The MPI request handle for tracking the status of the send
recv_buffer A buffer used for the received data. It must not be changed until after a call to IndexMap::scatter_rev_end. The ordering of the data is given by IndexMap::scatter_fwd_indices, with IndexMap::scatter_fwd_indices()[i] being the position in the owned data array that corresponds to position i in the buffer.

◆ scatter_rev_end()

void scatter_rev_end ( MPI_Request &  request) const
inline

Complete a non-blocking send of ghost values to the owning rank. This function completes the communication started by IndexMap::scatter_rev_begin.

Parameters
[in]requestThe MPI request handle for tracking the status of the send
