Common (dolfinx::common)
-
namespace common
Miscellaneous classes, functions and types.
Generic tools.
This namespace provides utility functions for managing subsystems, convenience classes and library-wide typedefs.
Enums
Functions
-
std::vector<int32_t> compute_owned_indices(std::span<const std::int32_t> indices, const IndexMap &map)
Given a sorted vector of indices (local numbering, owned or ghost) and an index map, this function returns the indices owned by the calling process, including owned indices that appear in the index list on other processes.
- Parameters:
indices – [in] List of indices
map – [in] The index map
- Returns:
Indices owned by the calling process
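A minimal usage sketch (not part of the reference; the wrapper function and header path are assumptions) showing how a mixed list of owned and ghost local indices is reduced to the indices owned by the calling rank:

#include <dolfinx/common/IndexMap.h>   // header path assumed

#include <cstdint>
#include <span>
#include <vector>

std::vector<std::int32_t>
owned_subset(const dolfinx::common::IndexMap& map,
             std::span<const std::int32_t> indices) // sorted; owned and/or ghost
{
  // Returns owned indices, including owned indices that appear (as ghosts)
  // in the index list on other ranks
  return dolfinx::common::compute_owned_indices(indices, map);
}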
-
std::tuple<std::int64_t, std::vector<std::int32_t>, std::vector<std::vector<std::int64_t>>, std::vector<std::vector<int>>> stack_index_maps(const std::vector<std::pair<std::reference_wrapper<const IndexMap>, int>> &maps)
Compute layout data and ghost indices for a stacked (concatenated) index map, i.e. ‘splice’ multiple maps into one.
The input maps are concatenated, with indices in maps and owned by the caller remaining owned by the caller. Ghost data is stored at the end of the local range as normal, with the ghosts in blocks in the order of the index maps in maps.
Note
Index maps with a block size are unrolled in the data for the concatenated index map.
Note
Communication is required to compute the new ghost indices.
- Parameters:
maps – [in] List of (index map, block size) pairs
- Returns:
The (0) global offset of a concatenated map for the calling rank, (1) local offset for the owned indices of each submap in the concatenated map, (2) new indices for the ghosts for each submap, and (3) owner rank of each ghost entry for each submap.
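A sketch (names are illustrative; header path assumed) of how two maps with different block sizes might be spliced and the returned tuple unpacked:

#include <dolfinx/common/IndexMap.h>   // header path assumed

#include <functional>
#include <utility>
#include <vector>

void stack_example(const dolfinx::common::IndexMap& map0,
                   const dolfinx::common::IndexMap& map1)
{
  using dolfinx::common::IndexMap;
  std::vector<std::pair<std::reference_wrapper<const IndexMap>, int>> maps
      = {{std::cref(map0), 1}, {std::cref(map1), 3}};
  auto [global_offset, local_offsets, new_ghosts, ghost_owners]
      = dolfinx::common::stack_index_maps(maps);
  // global_offset: start of this rank's owned range in the concatenated map
  // local_offsets[i]: local offset of submap i's (unrolled) owned indices
  // new_ghosts[i] / ghost_owners[i]: ghost indices and owning ranks for submap i
}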
-
std::pair<IndexMap, std::vector<std::int32_t>> create_sub_index_map(const IndexMap &imap, std::span<const std::int32_t> indices, IndexMapOrder order = IndexMapOrder::any, bool allow_owner_change = false)
Create a new index map from a subset of indices in an existing index map.
- Parameters:
imap – [in] Parent map to create a new sub-map from.
indices – [in] Local indices in imap (owned and ghost) to include in the new index map.
order – [in] Control the order in which ghost indices appear in the new map.
allow_owner_change – [in] If true, indices that are not included in indices by their owning process can be included in indices by processes that ghost the indices to be included in the new submap. These indices will be owned by one of the sharing processes in the submap. If false, an exception is raised if an index is included by a sharing process and not by the owning process.
- Returns:
The (i) new index map and (ii) a map from local indices in the submap to local indices in the original (this) map.
- Pre:
indices must be sorted and must not contain duplicates.
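A sketch (header path assumed; the choice of indices is illustrative) that builds a sub-map from the first half of the owned indices:

#include <dolfinx/common/IndexMap.h>   // header path assumed

#include <cstdint>
#include <numeric>
#include <vector>

void submap_example(const dolfinx::common::IndexMap& imap)
{
  // Sorted, duplicate-free local indices to keep (here: first half of owned)
  std::vector<std::int32_t> indices(imap.size_local() / 2);
  std::iota(indices.begin(), indices.end(), 0);

  auto [sub_map, sub_to_parent]
      = dolfinx::common::create_sub_index_map(imap, indices);
  // sub_to_parent[i] is the local index in imap of local index i in sub_map
}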
-
template<typename U, typename V>
std::pair<std::vector<typename U::value_type>, std::vector<typename V::value_type>> sort_unique(const U &indices, const V &values)
Sort two arrays based on the values in the array indices. Any duplicate indices and the corresponding values are removed. In the case of duplicates, the entry with the smallest value is retained.
- Parameters:
indices – [in] Array of indices.
values – [in] Array of values.
- Returns:
Sorted (indices, values), with sorting based on indices.
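An illustrative sketch (the header path is an assumption) of the sorting and duplicate-removal behaviour:

#include <dolfinx/common/utils.h>   // header path assumed

#include <cstdint>
#include <vector>

void sort_unique_example()
{
  std::vector<std::int32_t> indices{3, 1, 3, 2};
  std::vector<double> values{30.0, 10.0, 31.0, 20.0};
  auto [idx, val] = dolfinx::common::sort_unique(indices, values);
  // idx = {1, 2, 3}, val = {10.0, 20.0, 30.0}: for the duplicate index 3,
  // the entry with the smallest value (30.0) is kept
}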
-
template<class T>
std::size_t hash_local(const T &x)
Compute a hash of a given object.
The hash is computed using Boost container hash (https://www.boost.org/doc/libs/release/libs/container_hash/).
- Parameters:
x – [in] The object to compute a hash of.
- Returns:
The hash value.
-
template<class T>
std::size_t hash_global(MPI_Comm comm, const T &x)
Compute a hash for a distributed (MPI) object.
A hash is computed on each process for the local part of the object. Then, a hash of the std::vector containing each local hash key in rank order is returned.
Note
Collective
- Parameters:
comm – [in] The communicator on which to compute the hash.
x – [in] The object to compute a hash of.
- Returns:
The hash value.
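A sketch of both hash functions (header path assumed); hash_global is collective, hash_local is not:

#include <dolfinx/common/utils.h>   // header path assumed

#include <cstdint>
#include <mpi.h>
#include <vector>

void hash_example(MPI_Comm comm)
{
  std::vector<std::int32_t> data{1, 2, 3};
  std::size_t h_local = dolfinx::common::hash_local(data);         // per-rank hash
  std::size_t h_global = dolfinx::common::hash_global(comm, data); // collective
}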
-
class IndexMap
- #include <IndexMap.h>
This class represents the distribution of index arrays across processes. An index array is a contiguous collection of N+1 indices [0, 1, ..., N] that are distributed across M processes. On a given process, the IndexMap stores a portion of the index set using local indices [0, 1, ..., n], and a map from the local indices to a unique global index.
Public Functions
-
IndexMap(MPI_Comm comm, std::int32_t local_size)
Create a non-overlapping index map.
Note
Collective
- Parameters:
comm – [in] MPI communicator that the index map is distributed across.
local_size – [in] Local size of the index map, i.e. the number of owned entries.
-
IndexMap(MPI_Comm comm, std::int32_t local_size, std::span<const std::int64_t> ghosts, std::span<const int> owners)
Create an overlapping (ghosted) index map.
This constructor uses a ‘consensus’ algorithm to determine the ranks that ghost indices that are owned by the caller. This requires non-trivial MPI communication. If the ranks that ghost indices owned by the caller are known, it is more efficient to use the constructor that takes these ranks as an argument.
Note
Collective
- Parameters:
comm – [in] MPI communicator that the index map is distributed across.
local_size – [in] Local size of the index map, i.e. the number of owned entries
ghosts – [in] The global indices of ghost entries
owners – [in] Owner rank (on comm) of each entry in ghosts
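A sketch (assuming exactly two ranks and illustrative sizes) of building a ghosted map where each rank owns 10 indices and ghosts one index owned by the other rank:

#include <dolfinx/common/IndexMap.h>   // header path assumed

#include <cstdint>
#include <mpi.h>
#include <vector>

dolfinx::common::IndexMap make_ghosted_map(MPI_Comm comm)
{
  int rank = 0;
  MPI_Comm_rank(comm, &rank);
  const std::int32_t local_size = 10;   // owned indices per rank
  // Ghost the first global index owned by the other rank (assumes 2 ranks)
  std::vector<std::int64_t> ghosts{rank == 0 ? std::int64_t(10) : std::int64_t(0)};
  std::vector<int> owners{rank == 0 ? 1 : 0};
  return dolfinx::common::IndexMap(comm, local_size, ghosts, owners);
}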
-
IndexMap(MPI_Comm comm, std::int32_t local_size, const std::array<std::vector<int>, 2> &src_dest, std::span<const std::int64_t> ghosts, std::span<const int> owners)
Create an overlapping (ghosted) index map.
This constructor is optimised for the case where the ‘source’ (ranks that own indices ghosted by the caller) and ‘destination’ ranks (ranks that ghost indices owned by the caller) are already available. It avoids the communication-intensive computation of the destination ranks from owners.
Note
Collective
- Parameters:
comm – [in] MPI communicator that the index map is distributed across.
local_size – [in] Local size of the index map, i.e. the number of owned entries.
src_dest – [in] Lists of [0] src and [1] dest ranks. The list in each must be sorted and must not contain duplicates. src ranks are the owners of the indices in ghosts. dest ranks are the ranks that ghost indices owned by the caller.
ghosts – [in] The global indices of ghost entries
owners – [in] Owner rank (on comm) of each entry in ghosts
-
~IndexMap() = default
Destructor.
-
std::array<std::int64_t, 2> local_range() const noexcept
Range of indices (global) owned by this process.
-
std::int32_t num_ghosts() const noexcept
Number of ghost indices on this process.
-
std::int32_t size_local() const noexcept
Number of indices owned by this process.
-
std::int64_t size_global() const noexcept
Number of indices across the communicator.
-
std::span<const std::int64_t> ghosts() const noexcept
Local-to-global map for ghosts (local indexing beyond end of local range)
-
MPI_Comm comm() const
Return the MPI communicator that the map is defined on.
- Returns:
Communicator
-
void local_to_global(std::span<const std::int32_t> local, std::span<std::int64_t> global) const
Compute global indices for array of local indices.
- Parameters:
local – [in] Local indices
global – [out] The global indices
-
void global_to_local(std::span<const std::int64_t> global, std::span<std::int32_t> local) const
Compute local indices for array of global indices.
- Parameters:
global – [in] Global indices
local – [out] The local index of the corresponding global index in ‘global’. Set to -1 if the global index does not exist on this process.
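A small round-trip sketch (sizes are illustrative; header path assumed):

#include <dolfinx/common/IndexMap.h>   // header path assumed

#include <cstdint>
#include <vector>

void index_conversion(const dolfinx::common::IndexMap& map)
{
  std::vector<std::int32_t> local{0, 1, 2};   // assumes size_local() >= 3
  std::vector<std::int64_t> global(local.size());
  map.local_to_global(local, global);

  std::vector<std::int32_t> local_again(global.size());
  map.global_to_local(global, local_again);   // -1 where not on this rank
}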
-
std::vector<std::int64_t> global_indices() const
Build list of indices with global indexing.
- Returns:
The global index for all local indices (0, 1, 2, ...) on this process, including ghosts
-
inline std::span<const int> owners() const
The ranks that own each ghost index.
- Returns:
List of ghost owners. The owning rank of the ith ghost index is owners()[i].
-
graph::AdjacencyList<int> index_to_dest_ranks() const
Compute map from each local (owned) index to the set of ranks that have the index as a ghost.
- Todo:
Aim to remove this function?
- Returns:
Shared indices.
-
std::vector<std::int32_t> shared_indices() const
Build a list of owned indices that are ghosted by another rank.
- Returns:
The local index of owned indices that are ghosts on other rank(s). The indices are unique and sorted.
-
std::span<const int> src() const noexcept
Ordered set of MPI ranks that own caller’s ghost indices.
Typically used when creating neighbourhood communicators.
- Returns:
MPI ranks that own the caller’s ghost indices. The ranks are unique and sorted.
-
std::span<const int> dest() const noexcept
Ordered set of MPI ranks that ghost indices owned by caller.
Typically used when creating neighbourhood communicators.
- Returns:
MPI ranks that ghost indices owned by the caller. The ranks are unique and sorted.
-
std::array<double, 2> imbalance() const
Returns the imbalance of the current IndexMap.
The imbalance is a measure of load balancing across all processes, defined as the maximum number of indices on any process divided by the average number of indices per process. This function calculates the imbalance separately for owned indices and ghost indices and returns them as a std::array<double, 2>. If the total number of owned or ghost indices is zero, the respective entry in the array is set to -1.
Note
This is a collective operation and must be called by all processes in the communicator associated with the IndexMap.
- Returns:
An array containing the imbalance in owned indices (first element) and the imbalance in ghost indices (second element).
-
template<class Allocator = std::allocator<std::int32_t>>
class Scatterer
- #include <Scatterer.h>
A Scatterer supports the MPI scattering and gathering of data that is associated with a common::IndexMap.
Scatter and gather operations use MPI neighbourhood collectives. The implementation is designed for sparse communication patterns, as is typical of patterns based on an IndexMap.
Public Types
Public Functions
-
inline Scatterer(const IndexMap &map, int bs, const Allocator &alloc = Allocator())
Create a scatterer.
- Parameters:
map – [in] The index map that describes the parallel layout of data.
bs – [in] The block size of data associated with each index in map that will be scattered/gathered.
alloc – [in] The memory allocator for indices.
-
template<typename T>
inline void scatter_fwd_begin(std::span<const T> send_buffer, std::span<T> recv_buffer, std::span<MPI_Request> requests, Scatterer::type type = type::neighbor) const
Start a non-blocking send of owned data to ranks that ghost the data.
The communication is completed by calling Scatterer::scatter_fwd_end. The send and receive buffer should not be changed until after Scatterer::scatter_fwd_end has been called.
- Parameters:
send_buffer – [in] Local data associated with each owned local index to be sent to process where the data is ghosted. It must not be changed until after a call to Scatterer::scatter_fwd_end. The order of data in the buffer is given by Scatterer::local_indices.
recv_buffer – A buffer used for the received data. The position of ghost entries in the buffer is given by Scatterer::remote_indices. The buffer must not be accessed or changed until after a call to Scatterer::scatter_fwd_end.
requests – The MPI request handle for tracking the status of the non-blocking communication
type – [in] The type of MPI communication pattern used by the Scatterer, either Scatterer::type::neighbor or Scatterer::type::p2p.
-
inline void scatter_fwd_end(std::span<MPI_Request> requests) const
Complete a non-blocking send from the local owner to process ranks that have the index as a ghost.
This function completes the communication started by Scatterer::scatter_fwd_begin.
- Parameters:
requests – [in] The MPI request handle for tracking the status of the send
-
template<typename T, typename F>
inline void scatter_fwd_begin(std::span<const T> local_data, std::span<T> local_buffer, std::span<T> remote_buffer, F pack_fn, std::span<MPI_Request> requests, Scatterer::type type = type::neighbor) const
Scatter data associated with owned indices to ghosting ranks.
Note
This function is intended for advanced usage, and in particular when using CUDA/device-aware MPI.
- Parameters:
local_data – [in] All data associated with owned indices. Size is size_local() from the IndexMap used to create the scatterer, multiplied by the block size. The data for each index is blocked.
local_buffer – [in] Working buffer. The required size is given by Scatterer::local_buffer_size.
remote_buffer – [out] Working buffer. The required size is given by Scatterer::remote_buffer_size.
pack_fn – [in] Function to pack data from local_data into the send buffer. It is passed as an argument to support CUDA/device-aware MPI.
requests – [in] The MPI request handle for tracking the status of the send
type – [in] The type of MPI communication pattern used by the Scatterer, either Scatterer::type::neighbor or Scatterer::type::p2p.
-
template<typename T, typename F>
inline void scatter_fwd_end(std::span<const T> remote_buffer, std::span<T> remote_data, F unpack_fn, std::span<MPI_Request> requests) const
Complete a non-blocking send from the local owner to process ranks that have the index as a ghost, and unpack the received buffer into remote data.
This function completes the communication started by Scatterer::scatter_fwd_begin.
- Parameters:
remote_buffer – [in] Working buffer; the same buffer must be used in Scatterer::scatter_fwd_begin.
remote_data – [out] Received data associated with the ghost indices. The order follows the order of the ghost indices in the IndexMap used to create the scatterer. The size is equal to the number of ghosts in the index map multiplied by the block size. The data for each index is blocked.
unpack_fn – [in] Function to unpack the received buffer into remote_data. It is passed as an argument to support CUDA/device-aware MPI.
requests – [in] The MPI request handle for tracking the status of the send
-
template<typename T>
inline void scatter_fwd(std::span<const T> local_data, std::span<T> remote_data) const
Scatter data associated with owned indices to ghosting ranks.
- Parameters:
local_data – [in] All data associated with owned indices. Size is size_local() from the IndexMap used to create the scatterer, multiplied by the block size. The data for each index is blocked.
remote_data – [out] Received data associated with the ghost indices. The order follows the order of the ghost indices in the IndexMap used to create the scatterer. The size is equal to the number of ghosts in the index map multiplied by the block size. The data for each index is blocked.
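A sketch (block size 1; the array layout of owned values followed by ghost values is an assumption of this example) that updates the ghost section of an array from the owning ranks:

#include <dolfinx/common/IndexMap.h>
#include <dolfinx/common/Scatterer.h>   // header paths assumed

#include <span>
#include <vector>

void update_ghosts(const dolfinx::common::IndexMap& map, std::vector<double>& x)
{
  // x holds size_local() owned values followed by num_ghosts() ghost values
  dolfinx::common::Scatterer<> sct(map, 1);
  std::span<const double> owned(x.data(), map.size_local());
  std::span<double> ghosts(x.data() + map.size_local(), map.num_ghosts());
  sct.scatter_fwd(owned, ghosts);
}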
-
template<typename T>
inline void scatter_rev_begin(std::span<const T> send_buffer, std::span<T> recv_buffer, std::span<MPI_Request> requests, Scatterer::type type = type::neighbor) const
Start a non-blocking send of ghost data to ranks that own the data.
The communication is completed by calling Scatterer::scatter_rev_end. The send and receive buffers should not be changed until after Scatterer::scatter_rev_end has been called.
The buffers must not be accessed or changed until after a call to Scatterer::scatter_rev_end.
- Parameters:
send_buffer – [in] Data associated with each ghost index. This data is sent to the process that owns the index. It must not be changed until after a call to Scatterer::scatter_rev_end.
recv_buffer – Buffer used for the received data. The position of owned indices in the buffer is given by Scatterer::local_indices. Scatterer::local_displacements()[i] is the location of the first entry in recv_buffer received from neighbourhood rank i. The number of entries received from neighbourhood rank i is Scatterer::local_displacements()[i + 1] - Scatterer::local_displacements()[i]. recv_buffer[j] is the data associated with the index Scatterer::local_indices()[j] in the index map.
requests – The MPI request handle for tracking the status of the non-blocking communication
type – [in] The type of MPI communication pattern used by the Scatterer, either Scatterer::type::neighbor or Scatterer::type::p2p.
-
inline void scatter_rev_end(std::span<MPI_Request> request) const
End the reverse scatter communication.
This function must be called after Scatterer::scatter_rev_begin. The buffers passed to Scatterer::scatter_rev_begin must not be modified until after the function has been called.
- Parameters:
request – [in] The handle used when calling Scatterer::scatter_rev_begin
-
template<typename T, typename F>
inline void scatter_rev_begin(std::span<const T> remote_data, std::span<T> remote_buffer, std::span<T> local_buffer, F pack_fn, std::span<MPI_Request> request, Scatterer::type type = type::neighbor) const
Scatter data associated with ghost indices to owning ranks.
Note
This function is intended for advanced usage, and in particular when using CUDA/device-aware MPI.
- Template Parameters:
T – The data type to send
F – The pack function
- Parameters:
remote_data – [in] Received data associated with the ghost indices. The order follows the order of the ghost indices in the IndexMap used to create the scatterer. The size is equal to the number of ghosts in the index map multiplied by the block size. The data for each index is blocked.
remote_buffer – [out] Working buffer. The required size is given by Scatterer::remote_buffer_size.
local_buffer – [out] Working buffer. The required size is given by Scatterer::local_buffer_size.
pack_fn – [in] Function to pack data from local_data into the send buffer. It is passed as an argument to support CUDA/device-aware MPI.
request – MPI request handles for tracking the status of the non-blocking communication.
type – [in] Type of MPI communication pattern used by the Scatterer, either Scatterer::type::neighbor or Scatterer::type::p2p.
-
template<typename T, typename F, typename BinaryOp>
inline void scatter_rev_end(std::span<const T> local_buffer, std::span<T> local_data, F unpack_fn, BinaryOp op, std::span<MPI_Request> request)
End the reverse scatter communication, and unpack the received local buffer into local data.
This function must be called after Scatterer::scatter_rev_begin. The buffers passed to Scatterer::scatter_rev_begin must not be modified until after the function has been called.
- Parameters:
local_buffer – [in] Working buffer. Same buffer should be used in Scatterer::scatter_rev_begin.
local_data – [out] All data associated with owned indices. Size is size_local() from the IndexMap used to create the scatterer, multiplied by the block size. The data for each index is blocked.
unpack_fn – [in] Function to unpack the receive buffer into local_data. It is passed as an argument to support CUDA/device-aware MPI.
op – [in] The reduction operation when accumulating received values. To add the received values use std::plus<T>().
request – [in] The handle used when calling Scatterer::scatter_rev_begin
-
template<typename T, typename BinaryOp>
inline void scatter_rev(std::span<T> local_data, std::span<const T> remote_data, BinaryOp op)
Scatter data associated with ghost indices to ranks that own the indices.
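A matching reverse-scatter sketch (block size 1; same assumed array layout as the forward example) that accumulates ghost contributions into the owned section using std::plus:

#include <dolfinx/common/IndexMap.h>
#include <dolfinx/common/Scatterer.h>   // header paths assumed

#include <functional>
#include <span>
#include <vector>

void accumulate_ghosts(const dolfinx::common::IndexMap& map, std::vector<double>& x)
{
  dolfinx::common::Scatterer<> sct(map, 1);
  std::span<double> owned(x.data(), map.size_local());
  std::span<const double> ghosts(x.data() + map.size_local(), map.num_ghosts());
  sct.scatter_rev(owned, ghosts, std::plus<double>());
}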
-
inline std::int32_t local_buffer_size() const noexcept
Size of buffer for local data (owned and shared) used in forward and reverse communication.
- Returns:
The required buffer size
-
inline std::int32_t remote_buffer_size() const noexcept
Buffer size for remote data (ghosts) used in forward and reverse communication.
- Returns:
The required buffer size
-
inline const std::vector<std::int32_t> &local_indices() const noexcept
Return a vector of local indices (owned) used to pack/unpack local data. These indices are grouped by neighbor process (process for which an index is a ghost).
-
inline const std::vector<std::int32_t> &remote_indices() const noexcept
Return a vector of remote indices (ghosts) used to pack/unpack ghost data. These indices are grouped by neighbor process (ghost owners).
-
inline int bs() const noexcept
The number of values (block size) to send per index in the common::IndexMap used to create the scatterer.
- Returns:
The block size
-
inline std::vector<MPI_Request> create_request_vector(Scatterer::type type = type::neighbor)
Create a vector of MPI_Requests for a given Scatterer::type.
- Returns:
A vector of MPI requests
-
class TimeLogger
- #include <TimeLogger.h>
Timer logging.
Public Functions
-
TimeLogger() = default
Constructor.
-
~TimeLogger() = default
Destructor.
-
void register_timing(std::string task, double wall, double user, double system)
Register timing (for later summary)
-
Table timings(std::set<TimingType> type)
Return a summary of timings and tasks in a Table.
-
void list_timings(MPI_Comm comm, std::set<TimingType> type, Table::Reduction reduction)
List a summary of timings and tasks. Reduction type is printed.
- Parameters:
comm – MPI Communicator
type – Set of possible timings: wall, user or system
reduction – Reduction type (min, max or average)
-
std::tuple<int, double, double, double> timing(std::string task)
Return timing
- Parameters:
task – [in] The task name to retrieve the timing for
- Returns:
Values (count, total wall time, total user time, total system time) for given task.
-
class TimeLogManager
- #include <TimeLogManager.h>
Logger initialisation.
Public Static Functions
-
static TimeLogger &logger()
Singleton instance of logger.
-
class Timer
- #include <Timer.h>
A timer can be used for timing tasks. The basic usage is
Timer timer("Assembling over cells");
The timer is started at construction and timing ends when the timer is destroyed (goes out of scope). It is also possible to start and stop a timer explicitly by
timer.start();
timer.stop();
Timings are stored globally and a summary may be printed by calling
list_timings();
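A sketch of scoped and explicit use (the task name and header path are illustrative assumptions):

#include <dolfinx/common/Timer.h>   // header path assumed

void timed_task()
{
  dolfinx::common::Timer t("Assembling over cells"); // starts on construction
  // ... work to be timed ...
  double wall = t.stop(); // stop explicitly; otherwise stopped at destruction
}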
Public Functions
-
Timer(std::optional<std::string> task = std::nullopt)
Create a timer.
If a task name is provided, logging to the logger is enabled; otherwise (i.e. no task provided) nothing is logged.
-
~Timer()
Destructor.
-
void start()
Zero and start timer.
-
void resume()
Resume timer. Not well-defined for logging timer.
-
double stop()
Stop the timer, return the elapsed wall time, and store the timing data in the logger.
-
std::array<double, 3> elapsed() const
Return wall, user and system time in seconds.