DOLFINx 0.9.0
DOLFINx C++ interface
Scatterer< Allocator > Class Template Reference

A Scatterer supports the MPI scattering and gathering of data that is associated with a common::IndexMap. More...

#include <Scatterer.h>

Public Types

enum class type { neighbor, p2p }
 Types of MPI communication pattern used by the Scatterer.
 
using allocator_type = Allocator
 The allocator type.
 

Public Member Functions

 Scatterer (const IndexMap &map, int bs, const Allocator &alloc=Allocator())
 Create a scatterer.
 
template<typename T >
void scatter_fwd_begin (std::span< const T > send_buffer, std::span< T > recv_buffer, std::span< MPI_Request > requests, Scatterer::type type=type::neighbor) const
 Start a non-blocking send of owned data to ranks that ghost the data.
 
void scatter_fwd_end (std::span< MPI_Request > requests) const
 Complete a non-blocking send from the local owner to process ranks that have the index as a ghost.
 
template<typename T , typename F >
requires std::is_invocable_v<F, std::span<const T>, std::span<const std::int32_t>, std::span<T>>
void scatter_fwd_begin (std::span< const T > local_data, std::span< T > local_buffer, std::span< T > remote_buffer, F pack_fn, std::span< MPI_Request > requests, Scatterer::type type=type::neighbor) const
 Scatter data associated with owned indices to ghosting ranks.
 
template<typename T , typename F >
requires std::is_invocable_v<F, std::span<const T>, std::span<const std::int32_t>, std::span<T>, std::function<T(T, T)>>
void scatter_fwd_end (std::span< const T > remote_buffer, std::span< T > remote_data, F unpack_fn, std::span< MPI_Request > requests) const
 Complete a non-blocking send from the local owner to process ranks that have the index as a ghost, and unpack received buffer into remote data.
 
template<typename T >
void scatter_fwd (std::span< const T > local_data, std::span< T > remote_data) const
 Scatter data associated with owned indices to ghosting ranks.
 
template<typename T >
void scatter_rev_begin (std::span< const T > send_buffer, std::span< T > recv_buffer, std::span< MPI_Request > requests, Scatterer::type type=type::neighbor) const
 Start a non-blocking send of ghost data to ranks that own the data.
 
void scatter_rev_end (std::span< MPI_Request > request) const
 End the reverse scatter communication.
 
template<typename T , typename F >
requires std::is_invocable_v<F, std::span<const T>, std::span<const std::int32_t>, std::span<T>>
void scatter_rev_begin (std::span< const T > remote_data, std::span< T > remote_buffer, std::span< T > local_buffer, F pack_fn, std::span< MPI_Request > request, Scatterer::type type=type::neighbor) const
 Scatter data associated with ghost indices to owning ranks.
 
template<typename T , typename F , typename BinaryOp >
requires std::is_invocable_v<F, std::span<const T>, std::span<const std::int32_t>, std::span<T>, BinaryOp> and std::is_invocable_r_v<T, BinaryOp, T, T>
void scatter_rev_end (std::span< const T > local_buffer, std::span< T > local_data, F unpack_fn, BinaryOp op, std::span< MPI_Request > request)
 End the reverse scatter communication, and unpack the received local buffer into local data.
 
template<typename T , typename BinaryOp >
void scatter_rev (std::span< T > local_data, std::span< const T > remote_data, BinaryOp op)
 Scatter data associated with ghost indices to ranks that own the indices.
 
std::int32_t local_buffer_size () const noexcept
 Size of buffer for local data (owned and shared) used in forward and reverse communication.
 
std::int32_t remote_buffer_size () const noexcept
 Buffer size for remote data (ghosts) used in forward and reverse communication.
 
const std::vector< std::int32_t > & local_indices () const noexcept
 
const std::vector< std::int32_t > & remote_indices () const noexcept
 
int bs () const noexcept
The number of values (block size) to send per index in the common::IndexMap used to create the scatterer.
 
std::vector< MPI_Request > create_request_vector (Scatterer::type type=type::neighbor)
 Create a vector of MPI_Requests for a given Scatterer::type.
 

Detailed Description

template<class Allocator = std::allocator<std::int32_t>>
class dolfinx::common::Scatterer< Allocator >

A Scatterer supports the MPI scattering and gathering of data that is associated with a common::IndexMap.

Scatter and gather operations use MPI neighbourhood collectives. The implementation is designed for sparse communication patterns, as is typical of patterns based on an IndexMap.
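A minimal forward-scatter sketch (assumptions: an existing dolfinx::common::IndexMap named map, double-valued data, and the installed header paths dolfinx/common/IndexMap.h and dolfinx/common/Scatterer.h):

#include <dolfinx/common/IndexMap.h>
#include <dolfinx/common/Scatterer.h>
#include <span>
#include <vector>

// Update ghost values from the owning ranks (forward scatter)
void update_ghosts(const dolfinx::common::IndexMap& map)
{
  const int bs = 1; // one value per index
  dolfinx::common::Scatterer<> sc(map, bs);

  // Owned data (size_local() * bs) and storage for ghost values
  std::vector<double> owned(map.size_local() * bs, 1.0);
  std::vector<double> ghosts(map.num_ghosts() * bs);

  // Blocking convenience call: packs, communicates and unpacks
  sc.scatter_fwd(std::span<const double>(owned), std::span<double>(ghosts));
}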

Constructor & Destructor Documentation

◆ Scatterer()

template<class Allocator = std::allocator<std::int32_t>>
Scatterer ( const IndexMap & map,
int bs,
const Allocator & alloc = Allocator() )
inline

Create a scatterer.

Parameters
[in] map The index map that describes the parallel layout of data.
[in] bs The block size of data associated with each index in map that will be scattered/gathered.
[in] alloc The memory allocator for indices.

Member Function Documentation

◆ bs()

template<class Allocator = std::allocator<std::int32_t>>
int bs ( ) const
inline noexcept

The number of values (block size) to send per index in the common::IndexMap used to create the scatterer.

Returns
The block size

◆ create_request_vector()

template<class Allocator = std::allocator<std::int32_t>>
std::vector< MPI_Request > create_request_vector ( Scatterer< Allocator >::type type = type::neighbor)
inline

Create a vector of MPI_Requests for a given Scatterer::type.

Returns
A vector of MPI requests
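For the non-blocking begin/end variants documented below, working buffers and requests are typically sized and created by the scatterer itself. A sketch (sc is a Scatterer as constructed in the earlier example; double-valued data is an assumption):

// Working buffers sized by the scatterer, plus matching MPI requests
std::vector<double> local_buf(sc.local_buffer_size());
std::vector<double> remote_buf(sc.remote_buffer_size());
std::vector<MPI_Request> requests = sc.create_request_vector();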

◆ local_buffer_size()

template<class Allocator = std::allocator<std::int32_t>>
std::int32_t local_buffer_size ( ) const
inline noexcept

Size of buffer for local data (owned and shared) used in forward and reverse communication.

Returns
The required buffer size

◆ local_indices()

template<class Allocator = std::allocator<std::int32_t>>
const std::vector< std::int32_t > & local_indices ( ) const
inline noexcept

Return a vector of local indices (owned) used to pack/unpack local data. These indices are grouped by neighbor process (process for which an index is a ghost).

◆ remote_buffer_size()

template<class Allocator = std::allocator<std::int32_t>>
std::int32_t remote_buffer_size ( ) const
inline noexcept

Buffer size for remote data (ghosts) used in forward and reverse communication.

Returns
The required buffer size

◆ remote_indices()

template<class Allocator = std::allocator<std::int32_t>>
const std::vector< std::int32_t > & remote_indices ( ) const
inline noexcept

Return a vector of remote indices (ghosts) used to pack/unpack ghost data. These indices are grouped by neighbor process (ghost owners).

◆ scatter_fwd()

template<class Allocator = std::allocator<std::int32_t>>
template<typename T >
void scatter_fwd ( std::span< const T > local_data,
std::span< T > remote_data ) const
inline

Scatter data associated with owned indices to ghosting ranks.

Parameters
[in] local_data All data associated with owned indices. Size is size_local() from the IndexMap used to create the scatterer, multiplied by the block size. The data for each index is blocked.
[out] remote_data Received data associated with the ghost indices. The order follows the order of the ghost indices in the IndexMap used to create the scatterer. The size is equal to the number of ghosts in the index map multiplied by the block size. The data for each index is blocked.
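As a sketch of the size contract above, with block size 3 (three values stored contiguously per index; map is assumed as before):

dolfinx::common::Scatterer<> sc3(map, 3);
std::vector<double> local(map.size_local() * 3);   // owned, blocked
std::vector<double> remote(map.num_ghosts() * 3);  // ghosts, blocked
sc3.scatter_fwd(std::span<const double>(local), std::span<double>(remote));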

◆ scatter_fwd_begin() [1/2]

template<class Allocator = std::allocator<std::int32_t>>
template<typename T , typename F >
requires std::is_invocable_v<F, std::span<const T>, std::span<const std::int32_t>, std::span<T>>
void scatter_fwd_begin ( std::span< const T > local_data,
std::span< T > local_buffer,
std::span< T > remote_buffer,
F pack_fn,
std::span< MPI_Request > requests,
Scatterer< Allocator >::type type = type::neighbor ) const
inline

Scatter data associated with owned indices to ghosting ranks.

Note
This function is intended for advanced usage, and in particular when using CUDA/device-aware MPI.
Parameters
[in] local_data All data associated with owned indices. Size is size_local() from the IndexMap used to create the scatterer, multiplied by the block size. The data for each index is blocked.
[in] local_buffer Working buffer. The required size is given by Scatterer::local_buffer_size.
[out] remote_buffer Working buffer. The required size is given by Scatterer::remote_buffer_size.
[in] pack_fn Function to pack data from local_data into the send buffer. It is passed as an argument to support CUDA/device-aware MPI.
[in] requests The MPI request handle for tracking the status of the send.
[in] type The type of MPI communication pattern used by the Scatterer, either Scatterer::type::neighbor or Scatterer::type::p2p.
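A plausible pack function satisfying the invocable requirement on F simply gathers in[idx[i]] into out[i]. A sketch, reusing the buffers and requests from the create_request_vector example (local_data is an assumed container of owned values):

// Pack: gather values at the given indices into the send buffer
auto pack_fn = [](std::span<const double> in,
                  std::span<const std::int32_t> idx, std::span<double> out)
{
  for (std::size_t i = 0; i < idx.size(); ++i)
    out[i] = in[idx[i]];
};

sc.scatter_fwd_begin(std::span<const double>(local_data),
                     std::span<double>(local_buf),
                     std::span<double>(remote_buf), pack_fn,
                     std::span<MPI_Request>(requests));
// ... overlap with computation, then complete with scatter_fwd_end ...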

◆ scatter_fwd_begin() [2/2]

template<class Allocator = std::allocator<std::int32_t>>
template<typename T >
void scatter_fwd_begin ( std::span< const T > send_buffer,
std::span< T > recv_buffer,
std::span< MPI_Request > requests,
Scatterer< Allocator >::type type = type::neighbor ) const
inline

Start a non-blocking send of owned data to ranks that ghost the data.

The communication is completed by calling Scatterer::scatter_fwd_end. The send and receive buffers should not be changed until after Scatterer::scatter_fwd_end has been called.

Parameters
[in] send_buffer Local data associated with each owned local index to be sent to the processes where the data is ghosted. It must not be changed until after a call to Scatterer::scatter_fwd_end. The order of data in the buffer is given by Scatterer::local_indices.
[out] recv_buffer A buffer used for the received data. The position of ghost entries in the buffer is given by Scatterer::remote_indices. The buffer must not be accessed or changed until after a call to Scatterer::scatter_fwd_end.
[in] requests The MPI request handle for tracking the status of the non-blocking communication.
[in] type The type of MPI communication pattern used by the Scatterer, either Scatterer::type::neighbor or Scatterer::type::p2p.
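A buffer-level sketch of this begin/end pair, packing by hand in the order given by Scatterer::local_indices (owned_data is an assumed container of owned values):

std::vector<double> send(sc.local_buffer_size());
std::vector<double> recv(sc.remote_buffer_size());
const std::vector<std::int32_t>& idx = sc.local_indices();
for (std::size_t i = 0; i < idx.size(); ++i)
  send[i] = owned_data[idx[i]]; // pack owned values for sending

std::vector<MPI_Request> requests = sc.create_request_vector();
sc.scatter_fwd_begin(std::span<const double>(send), std::span<double>(recv),
                     std::span<MPI_Request>(requests));
// ... overlap communication with local computation ...
sc.scatter_fwd_end(std::span<MPI_Request>(requests));
// recv[j] now holds the value for ghost position remote_indices()[j]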

◆ scatter_fwd_end() [1/2]

template<class Allocator = std::allocator<std::int32_t>>
template<typename T , typename F >
requires std::is_invocable_v<F, std::span<const T>, std::span<const std::int32_t>, std::span<T>, std::function<T(T, T)>>
void scatter_fwd_end ( std::span< const T > remote_buffer,
std::span< T > remote_data,
F unpack_fn,
std::span< MPI_Request > requests ) const
inline

Complete a non-blocking send from the local owner to process ranks that have the index as a ghost, and unpack received buffer into remote data.

This function completes the communication started by Scatterer::scatter_fwd_begin.

Parameters
[in] remote_buffer Working buffer; the same buffer must be used in Scatterer::scatter_fwd_begin.
[out] remote_data Received data associated with the ghost indices. The order follows the order of the ghost indices in the IndexMap used to create the scatterer. The size is equal to the number of ghosts in the index map multiplied by the block size. The data for each index is blocked.
[in] unpack_fn Function to unpack the received buffer into remote_data. It is passed as an argument to support CUDA/device-aware MPI.
[in] requests The MPI request handle for tracking the status of the send.

◆ scatter_fwd_end() [2/2]

template<class Allocator = std::allocator<std::int32_t>>
void scatter_fwd_end ( std::span< MPI_Request > requests) const
inline

Complete a non-blocking send from the local owner to process ranks that have the index as a ghost.

This function completes the communication started by Scatterer::scatter_fwd_begin.

Parameters
[in] requests The MPI request handle for tracking the status of the send.

◆ scatter_rev_begin() [1/2]

template<class Allocator = std::allocator<std::int32_t>>
template<typename T , typename F >
requires std::is_invocable_v<F, std::span<const T>, std::span<const std::int32_t>, std::span<T>>
void scatter_rev_begin ( std::span< const T > remote_data,
std::span< T > remote_buffer,
std::span< T > local_buffer,
F pack_fn,
std::span< MPI_Request > request,
Scatterer< Allocator >::type type = type::neighbor ) const
inline

Scatter data associated with ghost indices to owning ranks.

Note
This function is intended for advanced usage, and in particular when using CUDA/device-aware MPI.
Template Parameters
T The data type to send
F The pack function
Parameters
[in] remote_data Data associated with the ghost indices, to be sent to the owning ranks. The order follows the order of the ghost indices in the IndexMap used to create the scatterer. The size is equal to the number of ghosts in the index map multiplied by the block size. The data for each index is blocked.
[out] remote_buffer Working buffer. The required size is given by Scatterer::remote_buffer_size.
[out] local_buffer Working buffer. The required size is given by Scatterer::local_buffer_size.
[in] pack_fn Function to pack data from remote_data into the send buffer. It is passed as an argument to support CUDA/device-aware MPI.
[in] request MPI request handles for tracking the status of the non-blocking communication.
[in] type Type of MPI communication pattern used by the Scatterer, either Scatterer::type::neighbor or Scatterer::type::p2p.

◆ scatter_rev_begin() [2/2]

template<class Allocator = std::allocator<std::int32_t>>
template<typename T >
void scatter_rev_begin ( std::span< const T > send_buffer,
std::span< T > recv_buffer,
std::span< MPI_Request > requests,
Scatterer< Allocator >::type type = type::neighbor ) const
inline

Start a non-blocking send of ghost data to ranks that own the data.

The communication is completed by calling Scatterer::scatter_rev_end. The send and receive buffers should not be changed until after Scatterer::scatter_rev_end has been called.

Parameters
[in] send_buffer Data associated with each ghost index. This data is sent to the process that owns the index. It must not be changed until after a call to Scatterer::scatter_rev_end.
[out] recv_buffer Buffer used for the received data. The position of owned indices in the buffer is given by Scatterer::local_indices: recv_buffer[j] is the data associated with the index Scatterer::local_indices()[j] in the index map. Scatterer::local_displacements()[i] is the location of the first entry in recv_buffer received from neighbourhood rank i, and the number of entries received from neighbourhood rank i is Scatterer::local_displacements()[i + 1] - Scatterer::local_displacements()[i]. The buffer must not be accessed or changed until after a call to Scatterer::scatter_rev_end.
[in] requests The MPI request handle for tracking the status of the non-blocking communication.
[in] type The type of MPI communication pattern used by the Scatterer, either Scatterer::type::neighbor or Scatterer::type::p2p.
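A buffer-level sketch of the reverse direction, packing ghost values in the order given by Scatterer::remote_indices (ghost_data is an assumed container of ghost values):

std::vector<double> send(sc.remote_buffer_size());
std::vector<double> recv(sc.local_buffer_size());
const std::vector<std::int32_t>& ridx = sc.remote_indices();
for (std::size_t i = 0; i < ridx.size(); ++i)
  send[i] = ghost_data[ridx[i]]; // pack ghost values for their owners

std::vector<MPI_Request> requests = sc.create_request_vector();
sc.scatter_rev_begin(std::span<const double>(send), std::span<double>(recv),
                     std::span<MPI_Request>(requests));
sc.scatter_rev_end(std::span<MPI_Request>(requests));
// recv[j] contributes to the owned entry at local_indices()[j]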

◆ scatter_rev_end() [1/2]

template<class Allocator = std::allocator<std::int32_t>>
template<typename T , typename F , typename BinaryOp >
requires std::is_invocable_v<F, std::span<const T>, std::span<const std::int32_t>, std::span<T>, BinaryOp> and std::is_invocable_r_v<T, BinaryOp, T, T>
void scatter_rev_end ( std::span< const T > local_buffer,
std::span< T > local_data,
F unpack_fn,
BinaryOp op,
std::span< MPI_Request > request )
inline

End the reverse scatter communication, and unpack the received local buffer into local data.

This function must be called after Scatterer::scatter_rev_begin. The buffers passed to Scatterer::scatter_rev_begin must not be modified until after the function has been called.

Parameters
[in] local_buffer Working buffer. The same buffer should be used in Scatterer::scatter_rev_begin.
[out] local_data All data associated with owned indices. Size is size_local() from the IndexMap used to create the scatterer, multiplied by the block size. The data for each index is blocked.
[in] unpack_fn Function to unpack the receive buffer into local_data. It is passed as an argument to support CUDA/device-aware MPI.
[in] op The reduction operation when accumulating received values. To add the received values use std::plus<T>().
[in] request The handle used when calling Scatterer::scatter_rev_begin.
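A plausible unpack function satisfying the requirements on F and BinaryOp accumulates each received value into local_data with op. A sketch, completing a reverse scatter started with the pack variant of scatter_rev_begin (buffers and requests as in the earlier examples; std::plus is from <functional>):

// Unpack: reduce received values into local data using op
auto unpack_fn = [](std::span<const double> in,
                    std::span<const std::int32_t> idx, std::span<double> out,
                    auto op)
{
  for (std::size_t i = 0; i < idx.size(); ++i)
    out[idx[i]] = op(out[idx[i]], in[i]);
};

sc.scatter_rev_end(std::span<const double>(local_buf),
                   std::span<double>(local_data), unpack_fn,
                   std::plus<double>(), std::span<MPI_Request>(requests));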

◆ scatter_rev_end() [2/2]

template<class Allocator = std::allocator<std::int32_t>>
void scatter_rev_end ( std::span< MPI_Request > request) const
inline

End the reverse scatter communication.

This function must be called after Scatterer::scatter_rev_begin. The buffers passed to Scatterer::scatter_rev_begin must not be modified until after the function has been called.

Parameters
[in] request The handle used when calling Scatterer::scatter_rev_begin.

The documentation for this class was generated from the following file:
Scatterer.h