DOLFINx 0.8.0
DOLFINx C++ interface
dolfinx::graph::build Namespace Reference

Functions

std::tuple< graph::AdjacencyList< std::int64_t >, std::vector< int >, std::vector< std::int64_t >, std::vector< int > > distribute (MPI_Comm comm, const graph::AdjacencyList< std::int64_t > &list, const graph::AdjacencyList< std::int32_t > &destinations)
 Distribute adjacency list nodes to destination ranks.
 
std::vector< std::int64_t > compute_ghost_indices (MPI_Comm comm, std::span< const std::int64_t > owned_indices, std::span< const std::int64_t > ghost_indices, std::span< const int > ghost_owners)
 Take a set of distributed input global indices, including ghosts, and determine the new global indices after remapping.
 
std::vector< std::int64_t > compute_local_to_global (std::span< const std::int64_t > global, std::span< const std::int32_t > local)
 
std::vector< std::int32_t > compute_local_to_local (std::span< const std::int64_t > local0_to_global, std::span< const std::int64_t > local1_to_global)
 Compute a local0-to-local1 map from two local-to-global maps with common global indices.
 

Detailed Description

Tools for distributed graphs

Todo
Add a function that sends data to the 'owner'

Function Documentation

◆ compute_ghost_indices()

std::vector< std::int64_t > compute_ghost_indices ( MPI_Comm comm,
std::span< const std::int64_t > owned_indices,
std::span< const std::int64_t > ghost_indices,
std::span< const int > ghost_owners )

Take a set of distributed input global indices, including ghosts, and determine the new global indices after remapping.

Each rank receives 'input' global indices [i0, i1, ..., i(m-1), im, ..., i(n-1)], where the first m indices are owned by the caller and the remainder are 'ghost' indices that are owned by other ranks.

Each rank assigns new global indices to its owned indices. The new index is the rank offset (an exclusive scan of the number of indices owned by lower-ranked processes, typically computed using MPI_Exscan with MPI_SUM) plus the local position, i.e. i0 -> offset, i1 -> offset + 1, etc. Ghost indices are numbered by the remote owning processes. The function returns the new global indices for the ghosts by retrieving them from the owning ranks.

Parameters
[in]  comm  MPI communicator
[in]  owned_indices  List of owned global indices. It should not contain duplicates, and these indices must not appear in owned_indices on other ranks.
[in]  ghost_indices  List of ghost global indices.
[in]  ghost_owners  The owning rank for each entry in ghost_indices.
Returns
New global indices for the ghost indices.

◆ compute_local_to_global()

std::vector< std::int64_t > compute_local_to_global ( std::span< const std::int64_t > global,
std::span< const std::int32_t > local )

Given an adjacency list with global, possibly non-contiguous, link indices and a local adjacency list with contiguous link indices starting from zero, compute a local-to-global map for the links. Both adjacency lists must have the same shape.

Parameters
[in]  global  Adjacency list with global link indices.
[in]  local  Adjacency list with local, contiguous link indices.
Returns
Map from local index to global index, which, if applied to the local adjacency list indices, would yield the global adjacency list.

◆ compute_local_to_local()

std::vector< std::int32_t > compute_local_to_local ( std::span< const std::int64_t > local0_to_global,
std::span< const std::int64_t > local1_to_global )

Compute a local0-to-local1 map from two local-to-global maps with common global indices.

Parameters
[in]  local0_to_global  Map from local0 indices to global indices
[in]  local1_to_global  Map from local1 indices to global indices
Returns
Map from local0 indices to local1 indices

◆ distribute()

std::tuple< graph::AdjacencyList< std::int64_t >, std::vector< int >, std::vector< std::int64_t >, std::vector< int > > distribute ( MPI_Comm comm,
const graph::AdjacencyList< std::int64_t > & list,
const graph::AdjacencyList< std::int32_t > & destinations )

Distribute adjacency list nodes to destination ranks.

The global index of each node is assumed to be the local index plus the offset for this rank.

Parameters
[in]  comm  MPI communicator
[in]  list  The adjacency list to distribute
[in]  destinations  Destination ranks for the ith node in the adjacency list. The first rank is the 'owner' of the node.
Returns
  1. Received adjacency list for this process
  2. Source ranks for each node in the adjacency list
  3. Original global index for each node in the adjacency list
  4. Owner rank of ghost nodes