Note: this is documentation for an old release. View the latest documentation at docs.fenicsproject.org.
DOLFINx  0.1.0
DOLFINx C++ interface
dolfinx::graph::build Namespace Reference

Tools for distributed graphs.

Functions

std::tuple<graph::AdjacencyList<std::int64_t>, std::vector<int>, std::vector<std::int64_t>, std::vector<int>> distribute (MPI_Comm comm, const graph::AdjacencyList<std::int64_t>& list, const graph::AdjacencyList<std::int32_t>& destinations)
    Distribute adjacency list nodes to destination ranks. The global index of each node is assumed to be the local index plus the offset for this rank.

std::vector<std::int64_t> compute_ghost_indices (MPI_Comm comm, const std::vector<std::int64_t>& global_indices, const std::vector<int>& ghost_owners)
    Compute ghost indices in a global IndexMap space from a list of arbitrary global indices, where the ghosts are at the end of the list and their owning processes are known.

template<typename T >
xt::xtensor<T, 2> distribute_data (MPI_Comm comm, const std::vector<std::int64_t>& indices, const xt::xtensor<T, 2>& x)
    Distribute data to the process ranks where it is required.

std::vector<std::int64_t> compute_local_to_global_links (const graph::AdjacencyList<std::int64_t>& global, const graph::AdjacencyList<std::int32_t>& local)
    Given an adjacency list with global, possibly non-contiguous, link indices and a local adjacency list with contiguous link indices starting from zero, compute a local-to-global map for the links. Both adjacency lists must have the same shape.

std::vector<std::int32_t> compute_local_to_local (const std::vector<std::int64_t>& local0_to_global, const std::vector<std::int64_t>& local1_to_global)
    Compute a local0-to-local1 map from two local-to-global maps with common global indices.


Detailed Description

Tools for distributed graphs.

Todo:
Add a function that sends data to the 'owner'

Function Documentation

◆ compute_ghost_indices()

std::vector<std::int64_t> dolfinx::graph::build::compute_ghost_indices (MPI_Comm comm, const std::vector<std::int64_t>& global_indices, const std::vector<int>& ghost_owners)

Compute ghost indices in a global IndexMap space from a list of arbitrary global indices, where the ghosts are at the end of the list and their owning processes are known.

Parameters
[in] comm: MPI communicator
[in] global_indices: List of arbitrary global indices, ghosts at end
[in] ghost_owners: List of owning processes of the ghost indices
Returns
Indexing of ghosts in a global space starting from 0 on process 0

◆ compute_local_to_global_links()

std::vector<std::int64_t> dolfinx::graph::build::compute_local_to_global_links (const graph::AdjacencyList<std::int64_t>& global, const graph::AdjacencyList<std::int32_t>& local)

Given an adjacency list with global, possibly non-contiguous, link indices and a local adjacency list with contiguous link indices starting from zero, compute a local-to-global map for the links. Both adjacency lists must have the same shape.

Parameters
[in] global: Adjacency list with global link indices
[in] local: Adjacency list with local, contiguous link indices
Returns
Map from local link index to global link index; applying this map to the local adjacency list indices yields the global adjacency list

◆ compute_local_to_local()

std::vector<std::int32_t> dolfinx::graph::build::compute_local_to_local (const std::vector<std::int64_t>& local0_to_global, const std::vector<std::int64_t>& local1_to_global)

Compute a local0-to-local1 map from two local-to-global maps with common global indices.

Parameters
[in] local0_to_global: Map from local0 indices to global indices
[in] local1_to_global: Map from local1 indices to global indices
Returns
Map from local0 indices to local1 indices

◆ distribute()

std::tuple<graph::AdjacencyList<std::int64_t>, std::vector<int>, std::vector<std::int64_t>, std::vector<int>> dolfinx::graph::build::distribute (MPI_Comm comm, const graph::AdjacencyList<std::int64_t>& list, const graph::AdjacencyList<std::int32_t>& destinations)

Distribute adjacency list nodes to destination ranks. The global index of each node is assumed to be the local index plus the offset for this rank.

Parameters
[in] comm: MPI communicator
[in] list: The adjacency list to distribute
[in] destinations: Destination ranks for the ith node in the adjacency list
Returns
Adjacency list for this process, array of source ranks for each node in the adjacency list, and the original global index for each node.

◆ distribute_data()

template<typename T >
xt::xtensor<T, 2> dolfinx::graph::build::distribute_data (MPI_Comm comm, const std::vector<std::int64_t>& indices, const xt::xtensor<T, 2>& x)

Distribute data to the process ranks where it is required.

Parameters
[in] comm: The MPI communicator
[in] indices: Global indices of the data required by this process
[in] x: Data on this process which may be distributed (by row). The global index for the [0, ..., n) local rows is assumed to be the local index plus the offset for this rank
Returns
The data for each index in indices