DOLFINx 0.5.1
DOLFINx C++ interface

Tools for distributed graphs.

Functions
std::tuple<graph::AdjacencyList<std::int64_t>, std::vector<int>, std::vector<std::int64_t>, std::vector<int>>
distribute(MPI_Comm comm, const graph::AdjacencyList<std::int64_t>& list, const graph::AdjacencyList<std::int32_t>& destinations)
    Distribute adjacency list nodes to destination ranks.

std::vector<std::int64_t>
compute_ghost_indices(MPI_Comm comm, const std::span<const std::int64_t>& owned_indices, const std::span<const std::int64_t>& ghost_indices, const std::span<const int>& ghost_owners)
    Take a set of distributed input global indices, including ghosts, and determine the new global indices after remapping.

std::vector<std::int64_t>
compute_local_to_global_links(const graph::AdjacencyList<std::int64_t>& global, const graph::AdjacencyList<std::int32_t>& local)
    Given an adjacency list with global, possibly non-contiguous, link indices and a local adjacency list with contiguous link indices starting from zero, compute a local-to-global map for the links. Both adjacency lists must have the same shape.

std::vector<std::int32_t>
compute_local_to_local(const std::span<const std::int64_t>& local0_to_global, const std::span<const std::int64_t>& local1_to_global)
    Compute a local0-to-local1 map from two local-to-global maps with common global indices.
Function Documentation
std::vector<std::int64_t> compute_ghost_indices(MPI_Comm comm,
                                                const std::span<const std::int64_t>& owned_indices,
                                                const std::span<const std::int64_t>& ghost_indices,
                                                const std::span<const int>& ghost_owners)
Take a set of distributed input global indices, including ghosts, and determine the new global indices after remapping.

Each rank receives 'input' global indices [i0, i1, ..., i(m-1), im, ..., i(n-1)], where the first m indices are owned by the caller and the remainder are 'ghost' indices owned by other ranks.

Each rank assigns new global indices to its owned indices. The new index is the rank offset (the exclusive scan of the number of indices owned by lower-rank processes, typically computed using MPI_Exscan with MPI_SUM) plus the position in the owned list, i.e. i0 -> offset, i1 -> offset + 1, etc. Ghost indices are numbered by the remote owning processes; this function retrieves the new ghost global indices from the owning ranks and returns them.
Parameters
    [in] comm           MPI communicator.
    [in] owned_indices  List of owned global indices. It should not contain duplicates, and these indices must not appear in owned_indices on other ranks.
    [in] ghost_indices  List of ghost global indices.
    [in] ghost_owners   The owning rank for each entry in ghost_indices.
Returns
    The new global indices for the ghost entries.
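For orientation, a minimal sketch of the owned-index renumbering described above: the rank offset is obtained with MPI_Exscan and MPI_SUM, and owned index k becomes offset + k. The function name renumber_owned is illustrative only; this is not the library's implementation.

    // Sketch of the owned-index renumbering scheme described above.
    // Illustrative only -- not the DOLFINx implementation.
    #include <mpi.h>
    #include <cstdint>
    #include <vector>

    std::vector<std::int64_t> renumber_owned(MPI_Comm comm, std::size_t num_owned)
    {
      // Exclusive scan of the owned counts over the lower ranks.
      std::int64_t num_local = num_owned, offset = 0;
      MPI_Exscan(&num_local, &offset, 1, MPI_INT64_T, MPI_SUM, comm);
      int rank = 0;
      MPI_Comm_rank(comm, &rank);
      if (rank == 0)
        offset = 0; // MPI_Exscan leaves the result on rank 0 undefined

      // Owned index k (0 <= k < m) is renumbered to offset + k.
      std::vector<std::int64_t> new_indices(num_owned);
      for (std::size_t k = 0; k < num_owned; ++k)
        new_indices[k] = offset + static_cast<std::int64_t>(k);
      return new_indices;
    }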
std::vector<std::int64_t> compute_local_to_global_links(const graph::AdjacencyList<std::int64_t>& global,
                                                        const graph::AdjacencyList<std::int32_t>& local)
Given an adjacency list with global, possibly non-contiguous, link indices and a local adjacency list with contiguous link indices starting from zero, compute a local-to-global map for the links. Both adjacency lists must have the same shape.
Parameters
    [in] global  Adjacency list with global link indices.
    [in] local   Adjacency list with local, contiguous link indices.
Returns
    The local-to-global map for the links.
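As a hedged illustration of the semantics (not the DOLFINx implementation): because the two lists have the same shape, position i in the flattened link arrays pairs local link local[i] with global link global[i], and the map simply inverts that pairing.

    // Sketch of the local-to-global link map over flattened link arrays.
    // Illustrative only -- not the DOLFINx implementation.
    #include <algorithm>
    #include <cstdint>
    #include <span>
    #include <vector>

    std::vector<std::int64_t>
    local_to_global_links(std::span<const std::int64_t> global_links,
                          std::span<const std::int32_t> local_links)
    {
      // Local link indices are contiguous from zero, so the map has
      // max(local_links) + 1 entries.
      std::int32_t num_local
          = local_links.empty()
                ? 0
                : *std::max_element(local_links.begin(), local_links.end()) + 1;

      // Position i pairs local_links[i] with global_links[i].
      std::vector<std::int64_t> map(num_local, -1);
      for (std::size_t i = 0; i < local_links.size(); ++i)
        map[local_links[i]] = global_links[i];
      return map;
    }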
std::vector<std::int32_t> compute_local_to_local(const std::span<const std::int64_t>& local0_to_global,
                                                 const std::span<const std::int64_t>& local1_to_global)
Compute a local0-to-local1 map from two local-to-global maps with common global indices.
Parameters
    [in] local0_to_global  Map from local0 indices to global indices.
    [in] local1_to_global  Map from local1 indices to global indices.
Returns
    Map from local0 indices to local1 indices.
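One way to build such a map, sketched under the assumption that every global index in local0_to_global also appears in local1_to_global (the library's actual algorithm may differ): invert local1_to_global into a global-to-local1 lookup, then compose.

    // Sketch: compose local0 -> global -> local1 via a hash map.
    // Illustrative only -- not the DOLFINx implementation.
    #include <cstdint>
    #include <span>
    #include <unordered_map>
    #include <vector>

    std::vector<std::int32_t>
    local0_to_local1(std::span<const std::int64_t> local0_to_global,
                     std::span<const std::int64_t> local1_to_global)
    {
      // Invert the second map: global index -> local1 index.
      std::unordered_map<std::int64_t, std::int32_t> global_to_local1;
      for (std::size_t i = 0; i < local1_to_global.size(); ++i)
        global_to_local1.emplace(local1_to_global[i], static_cast<std::int32_t>(i));

      // Compose the maps; at() throws if a global index is missing.
      std::vector<std::int32_t> map(local0_to_global.size());
      for (std::size_t i = 0; i < local0_to_global.size(); ++i)
        map[i] = global_to_local1.at(local0_to_global[i]);
      return map;
    }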
std::tuple<graph::AdjacencyList<std::int64_t>, std::vector<int>, std::vector<std::int64_t>, std::vector<int>>
distribute(MPI_Comm comm,
           const graph::AdjacencyList<std::int64_t>& list,
           const graph::AdjacencyList<std::int32_t>& destinations)
Distribute adjacency list nodes to destination ranks.
The global index of each node is assumed to be the local index plus the offset for this rank.
Parameters
    [in] comm          MPI communicator.
    [in] list          The adjacency list to distribute.
    [in] destinations  Destination ranks for the ith node in the adjacency list. The first rank is the 'owner' of the node.
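A hedged usage sketch: each row of destinations lists the ranks that should receive the corresponding node, owner first. The AdjacencyList construction from flattened data plus offsets, the include paths, and the names given to the four returned items are assumptions here; check the DOLFINx sources before relying on them.

    // Hedged usage sketch for distribute(); the constructor, headers and
    // the interpretation of the returned tuple are assumed, not guaranteed.
    #include <cstdint>
    #include <vector>
    #include <mpi.h>
    #include <dolfinx/graph/AdjacencyList.h>
    #include <dolfinx/graph/partition.h> // assumed location of graph::build

    using namespace dolfinx;

    void example(MPI_Comm comm)
    {
      // Two local nodes: node 0 links to global nodes {1, 2}, node 1 to {0}.
      std::vector<std::int64_t> links{1, 2, 0};
      std::vector<std::int32_t> offsets{0, 2, 3};
      graph::AdjacencyList<std::int64_t> list(links, offsets);

      // Send node 0 to rank 0 (owner) and rank 1; send node 1 to rank 1 only.
      std::vector<std::int32_t> dest{0, 1, 1};
      std::vector<std::int32_t> dest_offsets{0, 2, 3};
      graph::AdjacencyList<std::int32_t> destinations(dest, dest_offsets);

      // Four returned items, bound here with assumed names; see the
      // DOLFINx sources for their precise meaning.
      auto [dist_list, src, original_idx, ghost_owners]
          = graph::build::distribute(comm, list, destinations);
    }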