Graph (dolfinx::graph)
Adjacency list
-
template<typename T>
class AdjacencyList
This class provides a static adjacency list data structure. It is commonly used to store directed graphs. For each node in the contiguous list of nodes [0, 1, 2, …, n) it stores the connected nodes. The representation is strictly local, i.e. it is not parallel aware.
Public Functions
-
inline explicit AdjacencyList(const std::int32_t n)
Construct trivial adjacency list where each of the n nodes is connected to itself.
- Parameters
n – [in] Number of nodes
-
template<typename U, typename V, typename = std::enable_if_t<std::is_same<std::vector<T>, std::decay_t<U>>::value && std::is_same<std::vector<std::int32_t>, std::decay_t<V>>::value>>
inline AdjacencyList(U &&data, V &&offsets)
Construct adjacency list from arrays of data.
- Parameters
data – [in] Adjacency array
offsets – [in] The index into the data array at which the links for node i begin
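For illustration, a minimal sketch of this constructor (the header path and variable names are assumptions for the example): the graph node 0 -> {1, 2}, node 1 -> {0}, node 2 -> {0} stored in compressed form:

    #include <cstdint>
    #include <utility>
    #include <vector>
    #include <dolfinx/graph/AdjacencyList.h>

    // Edges of all nodes stored contiguously
    std::vector<std::int32_t> data = {1, 2, 0, 0};
    // offsets[i] is the position in data where the edges of node i begin;
    // the final entry is data.size()
    std::vector<std::int32_t> offsets = {0, 2, 3, 4};
    dolfinx::graph::AdjacencyList<std::int32_t> g(std::move(data), std::move(offsets));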
-
template<typename X>
inline explicit AdjacencyList(const std::vector<X> &data)
Set all connections for all entities (data is a ‘2D’ container, e.g. std::vector<std::vector<std::size_t>>, std::vector<std::set<std::size_t>>, etc.).
- Parameters
data – [in] Adjacency list data, where std::next(data.begin(), i) points to the container of edges for node i.
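The same graph can be built from a ‘2D’ container, a sketch under the same assumptions as the previous example:

    // Each inner container holds the edges of one node
    std::vector<std::vector<std::int32_t>> edges = {{1, 2}, {0}, {0}};
    dolfinx::graph::AdjacencyList<std::int32_t> g2(edges);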
-
AdjacencyList(const AdjacencyList &list) = default
Copy constructor.
-
AdjacencyList(AdjacencyList &&list) = default
Move constructor.
-
~AdjacencyList() = default
Destructor.
-
AdjacencyList &operator=(const AdjacencyList &list) = default
Assignment operator.
-
AdjacencyList &operator=(AdjacencyList &&list) = default
Move assignment operator.
-
inline bool operator==(const AdjacencyList &list) const
Equality operator.
- Returns
True if the adjacency lists are equal
-
inline std::int32_t num_nodes() const
Get the number of nodes.
- Returns
The number of nodes in the adjacency list
-
inline int num_links(int node) const
Number of connections for given node.
- Parameters
node – [in] Node index
- Returns
The number of outgoing links (edges) from the node
-
inline std::span<T> links(int node)
Get the links (edges) for given node.
- Parameters
node – [in] Node index
- Returns
Array of outgoing links for the node. The length will be AdjacencyList::num_links(node).
-
inline std::span<const T> links(int node) const
Get the links (edges) for given node (const version)
- Parameters
node – [in] Node index
- Returns
Array of outgoing links for the node. The length will be AdjacencyList::num_links(node).
-
inline const std::vector<T> &array() const
Return contiguous array of links for all nodes (const version)
-
inline const std::vector<std::int32_t> &offsets() const
Offset for each node in array() (const version)
-
inline std::string str() const
Informal string representation (pretty-print)
- Returns
String representation of the adjacency list
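A short usage sketch of the accessors above, reusing the g constructed earlier:

    // Visit every node and its outgoing edges
    for (std::int32_t n = 0; n < g.num_nodes(); ++n)
    {
      auto edges = g.links(n); // std::span of length g.num_links(n)
      for (std::int32_t e : edges)
      {
        // edge n -> e
      }
    }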
Adjacency list builders
-
template<typename U>
AdjacencyList<typename std::decay_t<U>::value_type> dolfinx::graph::regular_adjacency_list(U &&data, int degree)
Construct a constant degree (valency) adjacency list.
A constant degree graph has the same number of edges for every node.
- Parameters
data – [in] Adjacency array
degree – [in] The number of (outgoing) edges for each node
- Returns
An adjacency list
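For example (a sketch, reusing the assumptions above), a degree-2 ring graph on three nodes:

    // node 0 -> {1, 2}, node 1 -> {2, 0}, node 2 -> {0, 1}
    std::vector<std::int32_t> flat = {1, 2, 2, 0, 0, 1};
    auto ring = dolfinx::graph::regular_adjacency_list(std::move(flat), 2);
    // The constant degree implies the offsets, so none are supplied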
Re-ordering
-
std::vector<std::int32_t> dolfinx::graph::reorder_gps(const graph::AdjacencyList<std::int32_t> &graph)
Re-order a graph using the Gibbs-Poole-Stockmeyer algorithm.
The algorithm is described in
An Algorithm for Reducing the Bandwidth and Profile of a Sparse Matrix, SIAM Journal on Numerical Analysis, 13(2): 236-250, 1976, https://www.jstor.org/stable/2156090.
- Parameters
graph – [in] The graph to compute a re-ordering for
- Returns
Reordering array map, where map[i] is the new index of node i
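A minimal sketch of applying the re-ordering (g is an AdjacencyList<std::int32_t> as in the examples above):

    std::vector<std::int32_t> map = dolfinx::graph::reorder_gps(g);
    // map[i] is the new index of node i; apply it when relabelling data,
    // e.g. new_data[map[i]] = old_data[i]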
Partitioning
-
AdjacencyList<std::int32_t> dolfinx::graph::partition_graph(MPI_Comm comm, int nparts, const AdjacencyList<std::int64_t> &local_graph, bool ghosting)
Partition graph across processes using the default graph partitioner.
- Parameters
comm – [in] MPI Communicator that the graph is distributed across
nparts – [in] Number of partitions to divide graph nodes into
local_graph – [in] Node connectivity graph
ghosting – [in] Flag to enable ghosting of the output node distribution
- Returns
Destination rank for each input node
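A hedged sketch of a typical call (local_graph is assumed to be a process-local AdjacencyList<std::int64_t>, the header path is an assumption, and MPI must be initialised):

    #include <mpi.h>
    #include <dolfinx/graph/partition.h>

    int nparts = 0;
    MPI_Comm_size(MPI_COMM_WORLD, &nparts); // one partition per rank
    dolfinx::graph::AdjacencyList<std::int32_t> dest
        = dolfinx::graph::partition_graph(MPI_COMM_WORLD, nparts, local_graph, true);
    // dest.links(i) lists the destination rank(s) of local node i; with
    // ghosting enabled, additional entries are ranks that ghost the node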
-
graph::partition_fn dolfinx::graph::scotch::partitioner(scotch::strategy strategy = strategy::none, double imbalance = 0.025, int seed = 0)
Create a graph partitioning function that uses PT-SCOTCH.
- Parameters
strategy – [in] The SCOTCH strategy
imbalance – [in] The allowable imbalance (between 0 and 1). The smaller the value, the more balanced the partitioning must be.
seed – [in] Random number generator seed
- Returns
A graph partitioning function
-
graph::partition_fn dolfinx::graph::parmetis::partitioner(double imbalance = 1.02, std::array<int, 3> options = {1, 0, 5})
Create a graph partitioning function that uses ParMETIS.
- Parameters
imbalance – [in] Imbalance tolerance. See the ParMETIS manual for details.
options – [in] The ParMETIS options. See the ParMETIS manual for details.
- Returns
A graph partitioning function
-
graph::partition_fn dolfinx::graph::kahip::partitioner(int mode = 1, int seed = 1, double imbalance = 0.03, bool suppress_output = true)
Create a graph partitioning function that uses KaHIP.
- Parameters
mode – [in] The KaHIP partitioning mode (see https://github.com/KaHIP/KaHIP/blob/master/parallel/parallel_src/interface/parhip_interface.h)
seed – [in] The KaHIP random number generator seed
imbalance – [in] The allowable imbalance
suppress_output – [in] Suppresses KaHIP output if true
- Returns
A KaHIP graph partitioning function with specified parameter options
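All three factories above return a graph::partition_fn, so they are interchangeable at the call site; a sketch (which backends are available depends on how DOLFINx was configured, and nparts/local_graph are the assumed variables from the earlier example):

    // Build a PT-SCOTCH-backed partitioner and invoke it directly
    auto part = dolfinx::graph::scotch::partitioner();
    auto dest = part(MPI_COMM_WORLD, nparts, local_graph, true);
    // dolfinx::graph::parmetis::partitioner() or
    // dolfinx::graph::kahip::partitioner() can be substituted unchanged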
Enumerations and typedefs
-
using dolfinx::graph::partition_fn = std::function<graph::AdjacencyList<std::int32_t>(MPI_Comm, int, const AdjacencyList<std::int64_t>&, bool)>
Signature of functions for computing the parallel partitioning of a distributed graph.
- Param comm
[in] MPI Communicator that the graph is distributed across
- Param nparts
[in] Number of partitions to divide graph nodes into
- Param local_graph
[in] Node connectivity graph
- Param ghosting
[in] Flag to enable ghosting of the output node distribution
- Return
Destination rank for each input node
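As an illustration of this signature, a trivial function that matches it (not a real partitioner; it assigns every node to rank 0):

    dolfinx::graph::partition_fn trivial
        = [](MPI_Comm comm, int nparts,
             const dolfinx::graph::AdjacencyList<std::int64_t>& local_graph,
             bool ghosting)
    {
      // One destination rank per local node, all zero (illustration only)
      std::vector<std::int32_t> dest(local_graph.num_nodes(), 0);
      return dolfinx::graph::regular_adjacency_list(std::move(dest), 1);
    };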
Functions for building distributed graphs
-
namespace build
Tools for distributed graphs.
- Todo:
Add a function that sends data to the ‘owner’
Functions
-
std::tuple<graph::AdjacencyList<std::int64_t>, std::vector<int>, std::vector<std::int64_t>, std::vector<int>> distribute(MPI_Comm comm, const graph::AdjacencyList<std::int64_t> &list, const graph::AdjacencyList<std::int32_t> &destinations)
Distribute adjacency list nodes to destination ranks.
The global index of each node is assumed to be the local index plus the offset for this rank.
- Parameters
comm – [in] MPI Communicator
list – [in] The adjacency list to distribute
destinations – [in] Destination ranks for the ith node in the adjacency list. The first rank is the ‘owner’ of the node.
- Returns
1. Received adjacency list for this process
2. Source ranks for each node in the adjacency list
3. Original global index for each node in the adjacency list
4. Owner rank of ghost nodes
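For example, on two ranks, a node held on rank 0 with destinations {1, 0} is sent to rank 1 (its owner) and to rank 0, where it becomes a ghost; the remaining tuple entries let each receiving rank recover the node's source rank, its original global index, and the owning rank of each ghost.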
-
std::vector<std::int64_t> compute_ghost_indices(MPI_Comm comm, const std::span<const std::int64_t> &owned_indices, const std::span<const std::int64_t> &ghost_indices, const std::span<const int> &ghost_owners)
Take a set of distributed input global indices, including ghosts, and determine the new global indices after remapping.
Each rank receives ‘input’ global indices [i0, i1, ..., i(m-1), im, ..., i(n-1)], where the first m indices are owned by the caller and the remainder are ‘ghost’ indices owned by other ranks.
Each rank assigns new global indices to its owned indices. The new index is the rank offset (scan of the number of indices owned by lower-rank processes, typically computed using MPI_Exscan with MPI_SUM), i.e. i1 -> offset + 1, i2 -> offset + 2, etc. Ghost indices are numbered by the remote owning processes. The function returns the new ghost global indices by retrieving the new indices from the owning ranks.
- Parameters
comm – [in] MPI communicator
owned_indices – [in] List of owned global indices. It should not contain duplicates, and these indices must not appear in owned_indices on other ranks.
ghost_indices – [in] List of ghost global indices.
ghost_owners – [in] The owning rank for each entry in ghost_indices.
- Returns
New global indices for the ghost indices.
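A small worked example with hypothetical indices: on two ranks, rank 0 owns global indices {5, 7} and rank 1 owns {2, 9} with one ghost, 5, owned by rank 0. The offsets from the exclusive scan are 0 and 2, so the owned indices are renumbered {5, 7} -> {0, 1} on rank 0 and {2, 9} -> {2, 3} on rank 1; the call then returns {0} on rank 1, the new index of its ghost.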
-
std::vector<std::int64_t> compute_local_to_global_links(const graph::AdjacencyList<std::int64_t> &global, const graph::AdjacencyList<std::int32_t> &local)
Given an adjacency list with global, possibly non-contiguous, link indices and a local adjacency list with contiguous link indices starting from zero, compute a local-to-global map for the links. Both adjacency lists must have the same shape.
- Parameters
global – [in] Adjacency list with global link indices
local – [in] Adjacency list with local, contiguous link indices
- Returns
Map from local index to global index, which if applied to the local adjacency list indices would yield the global adjacency list
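For example, if global stores the links {10, 12, 10} and local stores {0, 1, 0} (same shape), the returned map is {10, 12}: local link 0 corresponds to global link 10 and local link 1 to global link 12.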
-
std::vector<std::int32_t> compute_local_to_local(const std::span<const std::int64_t> &local0_to_global, const std::span<const std::int64_t> &local1_to_global)
Compute a local0-to-local1 map from two local-to-global maps with common global indices.
- Parameters
local0_to_global – [in] Map from local0 indices to global indices
local1_to_global – [in] Map from local1 indices to global indices
- Returns
Map from local0 indices to local1 indices
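For example, with local0_to_global = {4, 7, 9} and local1_to_global = {9, 4, 7}, the result is {1, 2, 0}: local0 index 0 maps to global index 4, which is local1 index 1, and so on.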