Tools for distributed graphs.
Functions

std::tuple<graph::AdjacencyList<std::int64_t>, std::vector<int>, std::vector<std::int64_t>, std::vector<int>> distribute(MPI_Comm comm, const graph::AdjacencyList<std::int64_t>& list, const graph::AdjacencyList<std::int32_t>& destinations)
    Distribute adjacency list nodes to destination ranks. The global index of each node is assumed to be the local index plus the offset for this rank.

std::vector<std::int64_t> compute_ghost_indices(MPI_Comm comm, const std::vector<std::int64_t>& global_indices, const std::vector<int>& ghost_owners)
    Compute ghost indices in a global IndexMap space from a list of arbitrary global indices, where the ghosts are at the end of the list and their owning processes are known.

template<typename T>
xt::xtensor<T, 2> distribute_data(MPI_Comm comm, const std::vector<std::int64_t>& indices, const xt::xtensor<T, 2>& x)
    Distribute data to the process ranks where it is required.

std::vector<std::int64_t> compute_local_to_global_links(const graph::AdjacencyList<std::int64_t>& global, const graph::AdjacencyList<std::int32_t>& local)
    Given an adjacency list with global, possibly non-contiguous, link indices and a local adjacency list with contiguous link indices starting from zero, compute a local-to-global map for the links. Both adjacency lists must have the same shape.

std::vector<std::int32_t> compute_local_to_local(const std::vector<std::int64_t>& local0_to_global, const std::vector<std::int64_t>& local1_to_global)
    Compute a local0-to-local1 map from two local-to-global maps with common global indices.
Tools for distributed graphs.
Todo:
    Add a function that sends data to the 'owner'
◆ compute_ghost_indices()
std::vector<std::int64_t> dolfinx::graph::build::compute_ghost_indices(MPI_Comm comm, const std::vector<std::int64_t>& global_indices, const std::vector<int>& ghost_owners)
Compute ghost indices in a global IndexMap space from a list of arbitrary global indices, where the ghosts are at the end of the list and their owning processes are known.
Parameters:
    [in] comm            MPI communicator
    [in] global_indices  List of arbitrary global indices, ghosts at end
    [in] ghost_owners    List of owning processes of the ghost indices

Returns:
    Indexing of ghosts in a global space starting from 0 on process 0
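A minimal calling sketch for a two-rank run, assuming an initialized MPI environment; the index data and the header path dolfinx/graph/partition.h are illustrative assumptions, not taken from this page. Each rank owns two arbitrary global indices and ghosts one index owned by the other rank:

#include <cstdint>
#include <vector>
#include <mpi.h>
#include <dolfinx/graph/partition.h>

void ghost_indices_example(MPI_Comm comm)
{
  int rank = 0;
  MPI_Comm_rank(comm, &rank);

  // Owned indices first, then ghosts: rank 0 owns {3, 7} and ghosts 11,
  // rank 1 owns {11, 2} and ghosts 7 (hypothetical two-rank data)
  std::vector<std::int64_t> global_indices
      = (rank == 0) ? std::vector<std::int64_t>{3, 7, 11}
                    : std::vector<std::int64_t>{11, 2, 7};

  // Owning rank of each ghost (one ghost per rank here)
  std::vector<int> ghost_owners = {1 - rank};

  // New index of each ghost in a contiguous global numbering that
  // starts from 0 on rank 0
  std::vector<std::int64_t> ghosts
      = dolfinx::graph::build::compute_ghost_indices(comm, global_indices,
                                                     ghost_owners);
}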
◆ compute_local_to_global_links()
std::vector<std::int64_t> dolfinx::graph::build::compute_local_to_global_links(const graph::AdjacencyList<std::int64_t>& global, const graph::AdjacencyList<std::int32_t>& local)

Given an adjacency list with global, possibly non-contiguous, link indices and a local adjacency list with contiguous link indices starting from zero, compute a local-to-global map for the links. Both adjacency lists must have the same shape.
Parameters:
    [in] global  Adjacency list with global link indices
    [in] local   Adjacency list with local, contiguous link indices

Returns:
    Map from local index to global index which, when applied to the local adjacency list indices, yields the global adjacency list
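A sketch with hypothetical indices (header paths assumed): a two-node adjacency list with global links {10, 24, 51} and the matching local list numbered from zero.

#include <cstdint>
#include <vector>
#include <dolfinx/graph/AdjacencyList.h>
#include <dolfinx/graph/partition.h>

void local_to_global_example()
{
  // Global links: node 0 -> {10, 24}, node 1 -> {24, 51}
  dolfinx::graph::AdjacencyList<std::int64_t> global(
      std::vector<std::int64_t>{10, 24, 24, 51},
      std::vector<std::int32_t>{0, 2, 4});

  // Same shape, with contiguous local link indices starting from zero
  dolfinx::graph::AdjacencyList<std::int32_t> local(
      std::vector<std::int32_t>{0, 1, 1, 2},
      std::vector<std::int32_t>{0, 2, 4});

  // Expected result: {10, 24, 51}, i.e. local link i has global index map[i]
  std::vector<std::int64_t> map
      = dolfinx::graph::build::compute_local_to_global_links(global, local);
}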
◆ compute_local_to_local()
std::vector<std::int32_t> dolfinx::graph::build::compute_local_to_local(const std::vector<std::int64_t>& local0_to_global, const std::vector<std::int64_t>& local1_to_global)
Compute a local0-to-local1 map from two local-to-global maps with common global indices.
Parameters:
    [in] local0_to_global  Map from local0 indices to global indices
    [in] local1_to_global  Map from local1 indices to global indices

Returns:
    Map from local0 indices to local1 indices
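A sketch with hypothetical maps (header path assumed): two local numberings of the same three global indices, composed into a local0-to-local1 map.

#include <cstdint>
#include <vector>
#include <dolfinx/graph/partition.h>

void local_to_local_example()
{
  // Two local numberings of the same three global indices
  std::vector<std::int64_t> local0_to_global = {10, 24, 51};
  std::vector<std::int64_t> local1_to_global = {51, 10, 24};

  // Expected result: {1, 2, 0}, e.g. global index 10 is local0 index 0
  // and local1 index 1
  std::vector<std::int32_t> local0_to_local1
      = dolfinx::graph::build::compute_local_to_local(local0_to_global,
                                                      local1_to_global);
}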
◆ distribute()
std::tuple<graph::AdjacencyList<std::int64_t>, std::vector<int>, std::vector<std::int64_t>, std::vector<int>> dolfinx::graph::build::distribute(MPI_Comm comm, const graph::AdjacencyList<std::int64_t>& list, const graph::AdjacencyList<std::int32_t>& destinations)

Distribute adjacency list nodes to destination ranks. The global index of each node is assumed to be the local index plus the offset for this rank.
Parameters:
    [in] comm          MPI communicator
    [in] list          The adjacency list to distribute
    [in] destinations  Destination ranks for the ith node in the adjacency list

Returns:
    Adjacency list for this process, array of source ranks for each node in the adjacency list, and the original global index for each node.
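A calling sketch with a hypothetical two-node graph (header paths assumed); note that the return type is a four-element tuple, one element more than the three described above, so the last binding below is named only from the tuple type:

#include <cstdint>
#include <tuple>
#include <vector>
#include <mpi.h>
#include <dolfinx/graph/AdjacencyList.h>
#include <dolfinx/graph/partition.h>

void distribute_example(MPI_Comm comm)
{
  // Two nodes on this rank: node 0 -> {4, 5}, node 1 -> {5, 6}
  dolfinx::graph::AdjacencyList<std::int64_t> list(
      std::vector<std::int64_t>{4, 5, 5, 6},
      std::vector<std::int32_t>{0, 2, 4});

  // One destination rank per node: node 0 to rank 0, node 1 to rank 1
  dolfinx::graph::AdjacencyList<std::int32_t> destinations(
      std::vector<std::int32_t>{0, 1}, std::vector<std::int32_t>{0, 1, 2});

  // Received adjacency list, source rank per node, original global index
  // per node, and a fourth per-node rank array (see the tuple type)
  auto [recv_list, src, original_idx, owner]
      = dolfinx::graph::build::distribute(comm, list, destinations);
}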
◆ distribute_data()
template<typename T>
xt::xtensor<T, 2> dolfinx::graph::build::distribute_data(MPI_Comm comm, const std::vector<std::int64_t>& indices, const xt::xtensor<T, 2>& x)
Distribute data to the process ranks where it is required.
Parameters:
    [in] comm     The MPI communicator
    [in] indices  Global indices of the data required by this process
    [in] x        Data on this process which may be distributed (by row). The global index for the [0, ..., n) local rows is assumed to be the local index plus the offset for this rank

Returns:
    The data for each index in indices
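A sketch with hypothetical data (header paths assumed), assuming an initialized MPI environment: each rank holds two rows, and global row i of the distributed array lives at local row i - 2*rank on its owning rank.

#include <cstdint>
#include <vector>
#include <mpi.h>
#include <xtensor/xtensor.hpp>
#include <dolfinx/graph/partition.h>

void distribute_data_example(MPI_Comm comm)
{
  int rank = 0;
  MPI_Comm_rank(comm, &rank);

  // Two rows of data per rank; the global index of local row i is
  // i + 2 * rank (local index plus this rank's offset)
  xt::xtensor<double, 2> x = {{10.0 * rank, 0.0}, {10.0 * rank, 1.0}};

  // Global rows required on this rank, wherever they are stored
  std::vector<std::int64_t> indices = {0, 1};

  // Rows of the global array corresponding to `indices`
  xt::xtensor<double, 2> data
      = dolfinx::graph::build::distribute_data<double>(comm, indices, x);
}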