DOLFINx 0.9.0
DOLFINx C++ interface
Functions

  std::tuple<graph::AdjacencyList<std::int64_t>, std::vector<int>, std::vector<std::int64_t>, std::vector<int>>
    distribute(MPI_Comm comm, const graph::AdjacencyList<std::int64_t>& list, const graph::AdjacencyList<std::int32_t>& destinations)
      Distribute adjacency list nodes to destination ranks.

  std::tuple<std::vector<std::int64_t>, std::vector<int>, std::vector<std::int64_t>, std::vector<int>>
    distribute(MPI_Comm comm, std::span<const std::int64_t> list, std::array<std::size_t, 2> shape, const graph::AdjacencyList<std::int32_t>& destinations)
      Distribute fixed-size nodes to destination ranks.

  std::vector<std::int64_t>
    compute_ghost_indices(MPI_Comm comm, std::span<const std::int64_t> owned_indices, std::span<const std::int64_t> ghost_indices, std::span<const int> ghost_owners)
      Take a set of distributed input global indices, including ghosts, and determine the new global indices after remapping.

  std::vector<std::int64_t>
    compute_local_to_global(std::span<const std::int64_t> global, std::span<const std::int32_t> local)
      Compute a local-to-global map for the links of an adjacency list.

  std::vector<std::int32_t>
    compute_local_to_local(std::span<const std::int64_t> local0_to_global, std::span<const std::int64_t> local1_to_global)
      Compute a local0-to-local1 map from two local-to-global maps with common global indices.
Tools for distributed graphs
std::vector<std::int64_t> compute_ghost_indices(MPI_Comm comm,
                                                std::span<const std::int64_t> owned_indices,
                                                std::span<const std::int64_t> ghost_indices,
                                                std::span<const int> ghost_owners)
Take a set of distributed input global indices, including ghosts, and determine the new global indices after remapping.
Each rank receives 'input' global indices [i0, i1, ..., i(m-1), im, ..., i(n-1)], where the first m indices are owned by the caller and the remainder are 'ghost' indices that are owned by other ranks.
Each rank assigns new global indices to its owned indices. The new index is the rank offset (an exclusive scan of the number of indices owned by lower ranks, typically computed using MPI_Exscan with MPI_SUM) plus the local position, i.e. i0 -> offset + 0, i1 -> offset + 1, etc. Ghost indices are numbered by the remote owning processes. The function returns the new ghost global indices by retrieving the new indices from the owning ranks.
Parameters
  [in] comm           MPI communicator.
  [in] owned_indices  List of owned global indices. It must not contain duplicates, and these indices must not appear in owned_indices on other ranks.
  [in] ghost_indices  List of ghost global indices.
  [in] ghost_owners   The owning rank for each entry in ghost_indices.
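The renumbering can be illustrated with a serial sketch. The helper below is hypothetical (no MPI; every rank's owned indices are visible in one process), but the offset computation mirrors the MPI_Exscan/MPI_SUM scan described above:

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <vector>

// Serial stand-in for the cross-rank exchange in compute_ghost_indices:
// all ranks' owned indices are passed in directly.
std::vector<std::int64_t>
remap_ghosts(const std::vector<std::vector<std::int64_t>>& owned_per_rank,
             const std::vector<std::int64_t>& ghost_indices)
{
  // Rank offsets: exclusive scan of owned counts (MPI_Exscan with MPI_SUM).
  std::vector<std::int64_t> offset(owned_per_rank.size(), 0);
  for (std::size_t r = 1; r < owned_per_rank.size(); ++r)
    offset[r] = offset[r - 1] + std::int64_t(owned_per_rank[r - 1].size());

  // Old global index -> new global index (rank offset + local position).
  std::map<std::int64_t, std::int64_t> old_to_new;
  for (std::size_t r = 0; r < owned_per_rank.size(); ++r)
    for (std::size_t i = 0; i < owned_per_rank[r].size(); ++i)
      old_to_new[owned_per_rank[r][i]] = offset[r] + std::int64_t(i);

  // Look up each ghost's new index (done via MPI in the real function).
  std::vector<std::int64_t> new_ghosts;
  new_ghosts.reserve(ghost_indices.size());
  for (std::int64_t g : ghost_indices)
    new_ghosts.push_back(old_to_new.at(g));
  return new_ghosts;
}
```

With rank 0 owning {10, 40} and rank 1 owning {7, 3, 99}, the offsets are 0 and 2, so old index 7 maps to new index 2 and old index 40 maps to new index 1.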
std::vector<std::int64_t> compute_local_to_global(std::span<const std::int64_t> global,
                                                  std::span<const std::int32_t> local)
Given an adjacency list with global, possibly non-contiguous, link indices and a local adjacency list with contiguous link indices starting from zero, compute a local-to-global map for the links. Both adjacency lists must have the same shape.
Parameters
  [in] global  Adjacency list with global link indices.
  [in] local   Adjacency list with local, contiguous link indices.
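A minimal sketch of the pairing this performs, using a hypothetical helper over the flattened link arrays (std::vector in place of std::span, not the library implementation):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Build a local-to-global map: the link stored with local index local[i]
// has global index global[i]. Both arrays hold the flattened link data of
// two adjacency lists with the same shape.
std::vector<std::int64_t>
local_to_global(const std::vector<std::int64_t>& global,
                const std::vector<std::int32_t>& local)
{
  std::int32_t num
      = local.empty() ? 0 : *std::max_element(local.begin(), local.end()) + 1;
  std::vector<std::int64_t> map(num, -1);
  for (std::size_t i = 0; i < local.size(); ++i)
    map[local[i]] = global[i];
  return map;
}
```

For global links {100, 5, 100, 62} and local links {0, 1, 0, 2}, the resulting map is {100, 5, 62}.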
std::vector<std::int32_t> compute_local_to_local(std::span<const std::int64_t> local0_to_global,
                                                 std::span<const std::int64_t> local1_to_global)
Compute a local0-to-local1 map from two local-to-global maps with common global indices.
Parameters
  [in] local0_to_global  Map from local0 indices to global indices.
  [in] local1_to_global  Map from local1 indices to global indices.
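The composition can be sketched as inverting the second map and chaining; a hypothetical plain-C++ stand-in (std::vector in place of std::span):

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <vector>

// Compose local0 -> global with the inverse of local1 -> global.
std::vector<std::int32_t>
local0_to_local1(const std::vector<std::int64_t>& local0_to_global,
                 const std::vector<std::int64_t>& local1_to_global)
{
  // Invert the second map: global index -> local1 index.
  std::map<std::int64_t, std::int32_t> global_to_local1;
  for (std::size_t i = 0; i < local1_to_global.size(); ++i)
    global_to_local1[local1_to_global[i]] = std::int32_t(i);

  // local0 index i maps to the local1 index of the same global index.
  std::vector<std::int32_t> map;
  map.reserve(local0_to_global.size());
  for (std::int64_t g : local0_to_global)
    map.push_back(global_to_local1.at(g));
  return map;
}
```

For local0_to_global = {40, 10, 7} and local1_to_global = {7, 40, 10}, local0 index 0 (global 40) maps to local1 index 1, giving {1, 2, 0}.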
std::tuple<graph::AdjacencyList<std::int64_t>, std::vector<int>, std::vector<std::int64_t>, std::vector<int>>
distribute(MPI_Comm comm,
           const graph::AdjacencyList<std::int64_t>& list,
           const graph::AdjacencyList<std::int32_t>& destinations)
Distribute adjacency list nodes to destination ranks.
The global index of each node is assumed to be the local index plus the offset for this rank.
Parameters
  [in] comm          MPI communicator.
  [in] list          The adjacency list to distribute.
  [in] destinations  Destination ranks for the ith node in the adjacency list. The first rank is the 'owner' of the node.
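The bucketing behind the distribution can be illustrated serially. The helper and types below are hypothetical (no MPI; "receive buffers" for all ranks live in one process), but they follow the ownership and global-index conventions stated above:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

struct ReceivedNode
{
  std::vector<std::int64_t> data; // the node's links
  std::int64_t global_index;      // offset + local index on the source rank
  bool is_owner;                  // first destination rank owns the node
};

// Every rank listed in destinations[i] receives a copy of node i.
std::vector<std::vector<ReceivedNode>>
distribute_serial(const std::vector<std::vector<std::int64_t>>& nodes,
                  const std::vector<std::vector<int>>& destinations,
                  std::int64_t offset, int num_ranks)
{
  std::vector<std::vector<ReceivedNode>> recv(num_ranks);
  for (std::size_t i = 0; i < nodes.size(); ++i)
    for (std::size_t d = 0; d < destinations[i].size(); ++d)
      recv[destinations[i][d]].push_back(
          {nodes[i], offset + std::int64_t(i), d == 0});
  return recv;
}
```

For two nodes with destinations {0, 1} and {1} and rank offset 5, rank 1 receives a ghost copy of node 0 (global index 5) and an owned copy of node 1 (global index 6).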
std::tuple<std::vector<std::int64_t>, std::vector<int>, std::vector<std::int64_t>, std::vector<int>>
distribute(MPI_Comm comm,
           std::span<const std::int64_t> list,
           std::array<std::size_t, 2> shape,
           const graph::AdjacencyList<std::int32_t>& destinations)
Distribute fixed size nodes to destination ranks.
The global index of each node is assumed to be the local index plus the offset for this rank.
Parameters
  [in] comm          MPI communicator.
  [in] list          Constant-degree (valency) adjacency list. The array shape is (num_nodes, degree). Storage is row-major.
  [in] shape         Shape (num_nodes, degree) of list.
  [in] destinations  Destination ranks for the ith node (row) of list. The first rank is the 'owner' of the node.
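The row-major (num_nodes, degree) layout means node i occupies the contiguous slice starting at i * degree. A hypothetical helper showing how a row of `list` is addressed under this convention (std::vector in place of std::span):

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Return row i of a row-major array with shape (num_nodes, degree),
// i.e. the `degree` links of node i.
std::vector<std::int64_t> row(const std::vector<std::int64_t>& list,
                              std::array<std::size_t, 2> shape, std::size_t i)
{
  return std::vector<std::int64_t>(list.begin() + i * shape[1],
                                   list.begin() + (i + 1) * shape[1]);
}
```

For list = {1, 2, 3, 4, 5, 6} with shape (3, 2), node 1 has links {3, 4}.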