DOLFINx 0.10.0.0
DOLFINx C++ interface
Graph data structures and algorithms.
Namespaces

namespace build

namespace kahip
Interfaces to the KaHIP parallel partitioner.
Classes

class AdjacencyList
This class provides a static adjacency list data structure.
Typedefs

using partition_fn
Signature of functions for computing the parallel partitioning of a distributed graph.
Functions

template<typename V = std::nullptr_t, typename U>
  requires requires { typename std::decay_t<U>::value_type;
                      requires std::convertible_to<U, std::vector<typename std::decay_t<U>::value_type>>; }
AdjacencyList<typename std::decay_t<U>::value_type, V> regular_adjacency_list(U&& data, int degree)
Construct a constant degree (valency) adjacency list.

std::vector<std::int32_t> reorder_gps(const graph::AdjacencyList<std::int32_t>& graph)
Re-order a graph using the Gibbs-Poole-Stockmeyer algorithm.

AdjacencyList<std::int32_t> partition_graph(MPI_Comm comm, int nparts, const AdjacencyList<std::int64_t>& local_graph, bool ghosting)
Partition graph across processes using the default graph partitioner.

template<typename T>
graph::AdjacencyList<int> compute_destination_ranks(MPI_Comm comm, const graph::AdjacencyList<std::int64_t>& graph, const std::vector<T>& node_disp, const std::vector<T>& part)

AdjacencyList<std::tuple<int, std::size_t, std::int8_t>, std::pair<std::int32_t, std::int32_t>> comm_graph(const common::IndexMap& map, int root = 0)
Compute a directed graph that describes the parallel communication patterns.

std::string comm_to_json(const AdjacencyList<std::tuple<int, std::size_t, std::int8_t>, std::pair<std::int32_t, std::int32_t>>& g)
Build communication graph data as a JSON string.
Graph data structures and algorithms.
Data structures for building and representing graphs, and algorithms on graphs, e.g., re-ordering and partitioning.
using partition_fn

Signature of functions for computing the parallel partitioning of a distributed graph.

Parameters
  [in] comm        MPI communicator that the graph is distributed across.
  [in] nparts      Number of partitions to divide graph nodes into.
  [in] local_graph Node connectivity graph.
  [in] ghosting    Flag to enable ghosting of the output node distribution.
graph::AdjacencyList<std::tuple<int, std::size_t, std::int8_t>, std::pair<std::int32_t, std::int32_t>> comm_graph(const common::IndexMap& map, int root = 0)

Compute a directed graph that describes the parallel communication patterns.

The graph describes the communication pattern for a 'forward scatter', i.e. sending owned data to ranks that ghost the data (an owner->ghost operation).

Each node in the graph corresponds to an MPI rank. A graph edge is a forward (owner->ghost) communication path, and the edge weight is the number of 'values' communicated along the edge. Each edge also carries a marker that indicates the type of destination the edge sends data to.

The graph data can be visualised using a tool like NetworkX.

Parameters
  [in] map  Index map to build the graph for.
  [in] root MPI rank on which to build the communication graph data.
std::string comm_to_json(const AdjacencyList<std::tuple<int, std::size_t, std::int8_t>, std::pair<std::int32_t, std::int32_t>>& g)

Build communication graph data as a JSON string.

The data string can be decoded (loaded) to create a Python object, from which a NetworkX graph can be constructed.

See comm_graph for a description of the data.

Parameters
  [in] g Communication graph.
graph::AdjacencyList<int> compute_destination_ranks(MPI_Comm comm, const graph::AdjacencyList<std::int64_t>& graph, const std::vector<T>& node_disp, const std::vector<T>& part)

Parameters
  [in] comm      The communicator.
  [in] graph     Graph, using global indices for graph edges.
  [in] node_disp The distribution of graph nodes across MPI ranks. The global index gidx of local index lidx is lidx + node_disp[my_rank].
  [in] part      The destination rank for owned nodes, i.e. dest[i] is the destination of the node with local index i.
graph::AdjacencyList<std::int32_t> partition_graph(MPI_Comm comm, int nparts, const AdjacencyList<std::int64_t>& local_graph, bool ghosting)

Partition graph across processes using the default graph partitioner.

Parameters
  [in] comm        MPI communicator that the graph is distributed across.
  [in] nparts      Number of partitions to divide graph nodes into.
  [in] local_graph Node connectivity graph.
  [in] ghosting    Flag to enable ghosting of the output node distribution.
AdjacencyList<typename std::decay_t<U>::value_type, V> regular_adjacency_list(U&& data, int degree)

Construct a constant degree (valency) adjacency list.

A constant degree graph has the same number of links (edges) for every node.

Parameters
  [in] data   Adjacency array.
  [in] degree Number of (outgoing) links for each node.
std::vector<std::int32_t> reorder_gps(const graph::AdjacencyList<std::int32_t>& graph)

Re-order a graph using the Gibbs-Poole-Stockmeyer algorithm.

The algorithm is described in "An Algorithm for Reducing the Bandwidth and Profile of a Sparse Matrix", SIAM Journal on Numerical Analysis, 13(2): 236-250, 1976, https://doi.org/10.1137/0713023.

Parameters
  [in] graph The graph to compute a re-ordering for.