Common (dolfinx::common)
-
namespace dolfinx::common
Miscellaneous classes, functions and types.
This namespace provides utility functions for managing subsystems, convenience classes and library-wide typedefs.
Functions
-
std::vector<int32_t> compute_owned_indices(const xtl::span<const std::int32_t> &indices, const IndexMap &map)
Given a vector of indices (local numbering, owned or ghost) and an index map, this function returns the indices owned by this process, including indices that might have appeared in the list of indices on other processes.
- Parameters
indices – [in] List of indices
map – [in] The index map
- Returns
Vector of indices owned by the process
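A minimal usage sketch (not part of the reference text; the header path and the simple non-ghosted map are assumptions):
// Which of these local indices does this rank own? Header path assumed;
// the map has no ghosts, so no indices are resolved from other ranks.
#include <dolfinx/common/IndexMap.h>
#include <cstdint>
#include <mpi.h>
#include <vector>

int main(int argc, char* argv[])
{
  MPI_Init(&argc, &argv);
  {
    // Each rank owns 10 indices
    dolfinx::common::IndexMap map(MPI_COMM_WORLD, 10);

    // Local indices (owned and/or ghost) to filter
    std::vector<std::int32_t> indices = {0, 3, 7};

    // Indices from 'indices' owned by this rank, including any that other
    // ranks listed as ghosts (requires communication, so call on all ranks)
    std::vector<std::int32_t> owned
        = dolfinx::common::compute_owned_indices(indices, map);
  }
  MPI_Finalize();
  return 0;
}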
-
std::tuple<std::int64_t, std::vector<std::int32_t>, std::vector<std::vector<std::int64_t>>, std::vector<std::vector<int>>> stack_index_maps(const std::vector<std::pair<std::reference_wrapper<const common::IndexMap>, int>> &maps)
Compute layout data and ghost indices for a stacked (concatenated) index map, i.e. ‘splice’ multiple maps into one. Communication is required to compute the new ghost indices.
- Parameters
maps – [in] List of (index map, block size) pairs
- Returns
The (0) global offset of the stacked map for this rank, (1) the local offset of each submap in the stacked map, (2) the new indices for the ghosts of each submap, and (3) the owner rank of each ghost entry for each submap
-
template<typename U, typename V>
std::pair<std::vector<typename U::value_type>, std::vector<typename V::value_type>> sort_unique(const U &indices, const V &values)
Sort two arrays based on the values in array indices. Any duplicate indices and the corresponding value are removed. In the case of duplicates, the entry with the smallest value is retained.
- Parameters
indices – [in] Array of indices
values – [in] Array of values
- Returns
Sorted (indices, values), with sorting based on indices
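A brief usage sketch (the dolfinx/common/utils.h header path is an assumption):
// Sort (indices, values) by index and drop duplicate indices; for the
// duplicated index 3, the entry with the smallest value (5.0) is kept.
#include <dolfinx/common/utils.h> // header path assumed
#include <cstdint>
#include <vector>

int main()
{
  std::vector<std::int32_t> indices = {3, 1, 3, 2};
  std::vector<double> values = {30.0, 10.0, 5.0, 20.0};

  auto [idx, val] = dolfinx::common::sort_unique(indices, values);
  // idx = {1, 2, 3}, val = {10.0, 20.0, 5.0}
  return 0;
}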
-
template<class T>
std::size_t hash_local(const T &x)
Compute a hash of a given object.
The hash is computed using Boost container hash (https://www.boost.org/doc/libs/release/libs/container_hash/).
- Parameters
x – [in] The object to compute a hash of.
- Returns
The hash value.
-
template<class T>
std::size_t hash_global(MPI_Comm comm, const T &x)
Compute a hash for a distributed (MPI) object.
A hash is computed on each process for the local part of the object. Then, a hash of the std::vector containing each local hash key in rank order is returned.
Note
Collective
- Parameters
comm – [in] The communicator on which to compute the hash.
x – [in] The object to compute a hash of.
- Returns
The hash value.
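A brief usage sketch for hash_local and hash_global (the utils.h header path is an assumption; hash_global is collective):
// Hash the local data, then form a rank-ordered global hash on the communicator
#include <dolfinx/common/utils.h> // header path assumed
#include <cstddef>
#include <cstdint>
#include <mpi.h>
#include <vector>

int main(int argc, char* argv[])
{
  MPI_Init(&argc, &argv);
  {
    std::vector<std::int64_t> x = {1, 2, 3, 4};

    // Hash of the local part only
    std::size_t h_local = dolfinx::common::hash_local(x);

    // Collective: hash of the vector of per-rank hashes, in rank order
    std::size_t h_global = dolfinx::common::hash_global(MPI_COMM_WORLD, x);
  }
  MPI_Finalize();
  return 0;
}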
-
class IndexMap
- #include <IndexMap.h>
This class represents the distribution of index arrays across processes. An index array is a contiguous collection of N+1 indices [0, 1, …, N] that are distributed across M processes. On a given process, the IndexMap stores a portion of the index set using local indices [0, 1, …, n], and a map from the local indices to a unique global index.
Public Types
Public Functions
-
IndexMap(MPI_Comm comm, std::int32_t local_size)
Create a non-overlapping index map with local_size indices owned by this process.
Note
Collective
- Parameters
comm – [in] The MPI communicator
local_size – [in] Local size of the IndexMap, i.e. the number of owned entries
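A minimal sketch of the non-overlapping constructor (install header path assumed):
#include <dolfinx/common/IndexMap.h>
#include <array>
#include <cstdint>
#include <mpi.h>

int main(int argc, char* argv[])
{
  MPI_Init(&argc, &argv);
  {
    // 100 indices owned per rank, no ghosts
    dolfinx::common::IndexMap map(MPI_COMM_WORLD, 100);

    // Owned (global) range on this rank and total size across all ranks
    std::array<std::int64_t, 2> range = map.local_range();
    std::int64_t N = map.size_global(); // 100 * number of ranks
  }
  MPI_Finalize();
  return 0;
}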
-
IndexMap(MPI_Comm comm, std::int32_t local_size, const xtl::span<const int> &dest_ranks, const xtl::span<const std::int64_t> &ghosts, const xtl::span<const int> &src_ranks)
Create an index map with local_size owned indices on this process.
Note
Collective
- Parameters
comm – [in] The MPI communicator
local_size – [in] Local size of the IndexMap, i.e. the number of owned entries
dest_ranks – [in] Ranks that ‘ghost’ indices that are owned by the calling rank. I.e., ranks that the caller will send data to when updating ghost values.
ghosts – [in] The global indices of ghost entries
src_ranks – [in] Owner rank (on global communicator) of each entry in ghosts
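A minimal sketch of the ghosted constructor; it assumes a run on exactly two MPI ranks, with each rank owning ten indices and ghosting one index owned by the other rank:
#include <dolfinx/common/IndexMap.h>
#include <cstdint>
#include <mpi.h>
#include <vector>

int main(int argc, char* argv[])
{
  MPI_Init(&argc, &argv);
  {
    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    const int other = 1 - rank; // valid only for a two-rank run

    // Global index ghosted here: the first index owned by the other rank
    std::vector<std::int64_t> ghosts = {10 * other};
    // Owning rank of each ghost entry
    std::vector<int> src_ranks = {other};
    // Ranks that ghost indices owned by this rank
    std::vector<int> dest_ranks = {other};

    dolfinx::common::IndexMap map(MPI_COMM_WORLD, 10, dest_ranks, ghosts,
                                  src_ranks);
    // map.size_local() == 10, map.num_ghosts() == 1
  }
  MPI_Finalize();
  return 0;
}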
-
~IndexMap() = default
Destructor.
-
std::array<std::int64_t, 2> local_range() const noexcept
Range of indices (global) owned by this process.
-
std::int32_t num_ghosts() const noexcept
Number of ghost indices on this process.
-
std::int32_t size_local() const noexcept
Number of indices owned by this process.
-
std::int64_t size_global() const noexcept
Number of indices across the communicator.
-
const std::vector<std::int64_t> &ghosts() const noexcept
Local-to-global map for ghosts (local indexing beyond end of local range)
-
MPI_Comm comm() const
Return the MPI communicator used to create the index map.
- Returns
Communicator
-
MPI_Comm comm(Direction dir) const
Return an MPI communicator with attached distributed graph topology information.
- Parameters
dir – [in] Edge direction of communicator (forward, reverse)
- Returns
A neighborhood communicator for the specified edge direction
-
void local_to_global(const xtl::span<const std::int32_t> &local, const xtl::span<std::int64_t> &global) const
Compute global indices for array of local indices.
- Parameters
local – [in] Local indices
global – [out] The global indices
-
void global_to_local(const xtl::span<const std::int64_t> &global, const xtl::span<std::int32_t> &local) const
Compute local indices for array of global indices.
- Parameters
global – [in] Global indices
local – [out] The local index of each corresponding global index in ‘global’. An entry is set to -1 if the global index does not exist on this process.
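A brief sketch of the two conversion functions on a simple non-ghosted map:
#include <dolfinx/common/IndexMap.h>
#include <cstdint>
#include <mpi.h>
#include <vector>

int main(int argc, char* argv[])
{
  MPI_Init(&argc, &argv);
  {
    dolfinx::common::IndexMap map(MPI_COMM_WORLD, 10);

    // Local -> global for the first three owned indices
    std::vector<std::int32_t> local = {0, 1, 2};
    std::vector<std::int64_t> global(local.size());
    map.local_to_global(local, global);

    // Global -> local round trip; indices not on this rank would map to -1
    std::vector<std::int32_t> local_back(global.size());
    map.global_to_local(global, local_back);
  }
  MPI_Finalize();
  return 0;
}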
-
std::vector<std::int64_t> global_indices() const
Global indices.
- Returns
The global index for all local indices (0, 1, 2, …) on this process, including ghosts
-
const graph::AdjacencyList<std::int32_t> &scatter_fwd_indices() const noexcept
Local (owned) indices shared with neighbor processes, i.e. indices that are ghosts on other processes, grouped by sharing (neighbor) process (destination ranks in the forward communicator and source ranks in the reverse communicator). scatter_fwd_indices().links(p) gives the list of owned indices that need to be sent to neighborhood rank p during a forward scatter. Entries are ordered such that scatter_fwd_indices().offsets() is the send displacement array for a forward scatter and scatter_fwd_indices().array()[i] is the index of the owned entry that should be placed at position i in the send buffer for a forward scatter.
- Returns
List of indices that are ghosted on other processes
-
const std::vector<std::int32_t> &scatter_fwd_ghost_positions() const noexcept
Position of ghost entries in the receive buffer after a forward scatter, e.g. for a receive buffer b and a set operation, the ghost values should be updated by ghost_value[i] = b[scatter_fwd_ghost_positions[i]].
- Returns
Position of the ith ghost entry in the receive buffer
-
std::vector<int> ghost_owner_rank() const
Owner rank on the global communicator of each ghost entry.
-
std::vector<int> ghost_owner_neighbor_rank() const
Compute the owner on the neighborhood communicator of ghost indices.
-
Compute the map from each local (owned) index to the set of ranks that have the index as a ghost.
- Todo:
Aim to remove this function? If it’s kept, should it work with neighborhood ranks?
- Returns
shared indices
-
std::pair<IndexMap, std::vector<std::int32_t>> create_submap(const xtl::span<const std::int32_t> &indices) const
Create a new index map from a subset of indices in this index map. The order of the indices is preserved, with the new map effectively a ‘compressed’ map.
- Parameters
indices – [in] Local indices in the map that should appear in the new index map. All indices must be owned, i.e. indices must be less than this->size_local().
- Pre
indices must be sorted and contain no duplicates.
- Returns
The (i) new index map and (ii) a map from the ghost position in the new map to the ghost position in the original (this) map
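A minimal create_submap sketch, keeping the first five owned indices of a non-ghosted map:
#include <dolfinx/common/IndexMap.h>
#include <cstdint>
#include <mpi.h>
#include <vector>

int main(int argc, char* argv[])
{
  MPI_Init(&argc, &argv);
  {
    dolfinx::common::IndexMap map(MPI_COMM_WORLD, 10);

    // Owned, sorted, duplicate-free local indices to retain
    std::vector<std::int32_t> indices = {0, 1, 2, 3, 4};

    // New 'compressed' map and the ghost position mapping (empty here,
    // since this map has no ghosts)
    auto [submap, ghost_pos] = map.create_submap(indices);
    // submap.size_local() == 5
  }
  MPI_Finalize();
  return 0;
}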
-
template<typename T>
inline void scatter_fwd_begin(const xtl::span<const T> &send_buffer, MPI_Datatype &data_type, MPI_Request &request, const xtl::span<T> &recv_buffer) const
Start a non-blocking send of owned data to ranks that ghost the data. The communication is completed by calling IndexMap::scatter_fwd_end. The send and receive buffers should not be changed until after IndexMap::scatter_fwd_end has been called.
- Parameters
send_buffer – [in] Local data associated with each owned local index to be sent to process where the data is ghosted. It must not be changed until after a call to IndexMap::scatter_fwd_end. The order of data in the buffer is given by IndexMap::scatter_fwd_indices.
data_type – The MPI data type. To send data with a block size use MPI_Type_contiguous with size n
request – The MPI request handle for tracking the status of the non-blocking communication
recv_buffer – A buffer used for the received data. The position of ghost entries in the buffer is given by IndexMap::scatter_fwd_ghost_positions. The buffer must not be accessed or changed until after a call to IndexMap::scatter_fwd_end.
-
inline void scatter_fwd_end(MPI_Request &request) const
Complete a non-blocking send from the local owner to process ranks that have the index as a ghost. This function completes the communication started by IndexMap::scatter_fwd_begin.
- Parameters
request – [in] The MPI request handle for tracking the status of the send
-
template<typename T>
inline void scatter_fwd(const xtl::span<const T> &local_data, xtl::span<T> remote_data, int n) const
Send n values for each index that is owned to processes that have the index as a ghost. The size of the input array local_data must be the same as n * size_local().
- Parameters
local_data – [in] Local data associated with each owned local index to be sent to process where the data is ghosted. Size must be n * size_local().
remote_data – [inout] Ghost data on this process received from the owning process. Size will be n * num_ghosts().
n – [in] Number of data items per index
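A minimal scatter_fwd sketch; it assumes a run on exactly two MPI ranks and reuses the two-rank ghosted map from the constructor sketch above:
#include <dolfinx/common/IndexMap.h>
#include <cstdint>
#include <mpi.h>
#include <vector>
#include <xtl/xspan.hpp>

int main(int argc, char* argv[])
{
  MPI_Init(&argc, &argv);
  {
    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    const int other = 1 - rank; // valid only for a two-rank run

    std::vector<std::int64_t> ghosts = {10 * other};
    std::vector<int> src_ranks = {other};
    std::vector<int> dest_ranks = {other};
    dolfinx::common::IndexMap map(MPI_COMM_WORLD, 10, dest_ranks, ghosts,
                                  src_ranks);

    // One value (n = 1) per owned index; ghost values are received from
    // the owning rank into remote_data
    std::vector<double> local_data(map.size_local(), double(rank));
    std::vector<double> remote_data(map.num_ghosts());
    map.scatter_fwd(xtl::span<const double>(local_data),
                    xtl::span<double>(remote_data), 1);
    // remote_data[0] == double(other)
  }
  MPI_Finalize();
  return 0;
}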
-
template<typename T>
inline void scatter_rev_begin(const xtl::span<const T> &send_buffer, MPI_Datatype &data_type, MPI_Request &request, const xtl::span<T> &recv_buffer) const
Start a non-blocking send of ghost values to the owning rank. The non-blocking communication is completed by calling IndexMap::scatter_rev_end. A reverse scatter is the transpose of IndexMap::scatter_fwd_begin.
- Parameters
send_buffer – [in] Send buffer filled with ghost data on this process to be sent to the owning rank. The order of the data is given by IndexMap::scatter_fwd_ghost_positions, with IndexMap::scatter_fwd_ghost_positions()[i] being the index of the ghost data that should be placed in position i of the buffer.
data_type – The MPI data type. To send data with a block size use MPI_Type_contiguous with size n
request – The MPI request handle for tracking the status of the send
recv_buffer – A buffer used for the received data. It must not be changed until after a call to IndexMap::scatter_rev_end. The ordering of the data is given by IndexMap::scatter_fwd_indices, with IndexMap::scatter_fwd_indices()[i] being the position in the owned data array that corresponds to position i in the buffer.
-
inline void scatter_rev_end(MPI_Request &request) const
Complete a non-blocking send of ghost values to the owning rank. This function completes the communication started by IndexMap::scatter_rev_begin.
- Parameters
request – [in] The MPI request handle for tracking the status of the send
-
template<typename T>
inline void scatter_rev(xtl::span<T> local_data, const xtl::span<const T> &remote_data, int n, IndexMap::Mode op) const
Send n values for each ghost index to the owning process.
- Parameters
local_data – [inout] Owned data on this process, updated with the values received from ranks that ghost the index. Size must be n * size_local().
remote_data – [in] Ghost data on this process to be sent to the owning process. Size must be n * num_ghosts().
n – [in] Number of data items per index
op – [in] Sum or set received values in local_data
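A minimal scatter_rev sketch under the same two-rank assumption; the Mode enumerator name (add) is an assumption, since the enumerators are not listed in this section:
#include <dolfinx/common/IndexMap.h>
#include <cstdint>
#include <mpi.h>
#include <vector>
#include <xtl/xspan.hpp>

int main(int argc, char* argv[])
{
  MPI_Init(&argc, &argv);
  {
    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    const int other = 1 - rank; // valid only for a two-rank run

    std::vector<std::int64_t> ghosts = {10 * other};
    std::vector<int> src_ranks = {other};
    std::vector<int> dest_ranks = {other};
    dolfinx::common::IndexMap map(MPI_COMM_WORLD, 10, dest_ranks, ghosts,
                                  src_ranks);

    // Owned values and a contribution held in the single ghost entry
    std::vector<double> local_data(map.size_local(), 1.0);
    std::vector<double> remote_data(map.num_ghosts(), 0.5);

    // Accumulate ghost contributions onto the owning rank's data
    // (Mode::add assumed; a set-style enumerator would overwrite instead)
    map.scatter_rev(xtl::span<double>(local_data),
                    xtl::span<const double>(remote_data), 1,
                    dolfinx::common::IndexMap::Mode::add);
  }
  MPI_Finalize();
  return 0;
}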
-
class TimeLogger
- #include <TimeLogger.h>
Timer logging.
Public Functions
-
TimeLogger() = default
Constructor.
-
~TimeLogger() = default
Destructor.
-
void register_timing(std::string task, double wall, double user, double system)
Register timing (for later summary)
-
Table timings(std::set<TimingType> type)
Return a summary of timings and tasks in a Table.
-
void list_timings(MPI_Comm comm, std::set<TimingType> type, Table::Reduction reduction)
List a summary of timings and tasks. Reduction type is printed.
- Parameters
comm – MPI Communicator
type – Set of possible timings: wall, user or system
reduction – Reduction type (min, max or average)
-
std::tuple<int, double, double, double> timing(std::string task)
Return timing.
- Parameters
task – [in] The task name to retrieve the timing for
- Returns
Values (count, total wall time, total user time, total system time) for given task.
-
class TimeLogManager
- #include <TimeLogManager.h>
Logger initialisation.
Public Static Functions
-
static TimeLogger &logger()
Singleton instance of logger.
-
class Timer
- #include <Timer.h>
A timer can be used for timing tasks. The basic usage is:
Timer timer("Assembling over cells");
The timer is started at construction and timing ends when the timer is destroyed (goes out of scope). It is also possible to start and stop a timer explicitly by
timer.start(); timer.stop();
Timings are stored globally and a summary may be printed by calling
list_timings();
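A complete, hedged example of the pattern above; the install header paths and the namespace placement and spelling of TimingType and Table::Reduction are assumptions, while the TimeLogManager/TimeLogger calls follow the signatures documented in this section:
// Time a task and print a summary; header paths and enumerator names assumed
#include <dolfinx/common/Table.h>
#include <dolfinx/common/TimeLogManager.h>
#include <dolfinx/common/TimeLogger.h>
#include <dolfinx/common/Timer.h>
#include <mpi.h>
#include <set>

int main(int argc, char* argv[])
{
  MPI_Init(&argc, &argv);
  {
    // Started at construction; the timing is registered when the timer is
    // stopped or goes out of scope
    dolfinx::common::Timer timer("Assembling over cells");
    // ... work being timed ...
    timer.stop();
  }

  // Print a wall-time summary (max over ranks) via the logger singleton
  dolfinx::common::TimeLogManager::logger().list_timings(
      MPI_COMM_WORLD, {dolfinx::TimingType::wall},
      dolfinx::Table::Reduction::max);

  MPI_Finalize();
  return 0;
}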
Public Functions
-
Timer()
Create timer without logging.
-
Timer(const std::string &task)
Create timer with logging.
-
~Timer()
Destructor.
-
void start()
Zero and start timer.
-
void resume()
Resume timer. Not well-defined for logging timer.
-
double stop()
Stop timer, return wall time elapsed and store timing data into logger.
-
std::array<double, 3> elapsed() const
Return wall, user and system time in seconds.
-
namespace impl