DOLFINx 0.10.0.0
DOLFINx C++ interface
Scatterer< Container > Class Template Reference

A Scatterer supports the scattering and gathering of distributed data that is associated with a common::IndexMap, using MPI. More...

#include <Scatterer.h>

Public Types

using container_type = Container
 Container type used to store local and remote indices.

Public Member Functions

 Scatterer (const IndexMap &map, int bs)
 Create a scatterer for data with a layout described by an IndexMap, and with a block size.
template<typename T>
void scatter_fwd_begin (const T *send_buffer, T *recv_buffer, MPI_Request &request) const
 Start a non-blocking send of owned data to ranks that ghost the data using MPI neighbourhood collective communication (recommended).
template<typename T>
void scatter_fwd_begin (const T *send_buffer, T *recv_buffer, std::span< MPI_Request > requests) const
 Start a non-blocking send of owned data to ranks that ghost the data using point-to-point MPI communication.
template<typename T>
void scatter_rev_begin (const T *send_buffer, T *recv_buffer, MPI_Request &request) const
 Start a non-blocking send of ghost data to ranks that own the data using MPI neighbourhood collective communication (recommended).
template<typename T>
void scatter_rev_begin (const T *send_buffer, T *recv_buffer, std::span< MPI_Request > requests) const
 Start a non-blocking send of ghost data to ranks that own the data using point-to-point MPI communication.
void scatter_end (std::span< MPI_Request > requests) const
 Complete non-blocking MPI point-to-point sends.
void scatter_end (MPI_Request &request) const
 Complete a non-blocking MPI neighbourhood collective send.
const container_type & local_indices () const noexcept
 Array of indices for packing/unpacking owned data to/from a send/receive buffer.
const container_type & remote_indices () const noexcept
 Array of indices for packing/unpacking ghost data to/from a send/receive buffer.
std::size_t num_p2p_requests ()
 Number of required MPI_Requests for point-to-point communication.

Detailed Description

template<class Container = std::vector<std::int32_t>>
requires std::is_integral_v<typename Container::value_type>
class dolfinx::common::Scatterer< Container >

A Scatterer supports the scattering and gathering of distributed data that is associated with a common::IndexMap, using MPI.

Scatter and gather operations can use:

  1. MPI neighbourhood collectives (recommended), or
  2. Non-blocking point-to-point communication modes.

The implementation is designed for the sparse communication patterns that are typical of data distributed according to an IndexMap.

Template Parameters
Container: Container type for storing the 'local' and 'remote' indices. On CPUs this is normally std::vector<std::int32_t>. For GPUs the container should store the indices on the device, e.g. using thrust::device_vector<std::int32_t>.
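
As an illustration of how the pieces fit together, the sketch below updates the ghost entries of a distributed array using the neighbourhood collective path. The helper name update_ghosts, the use of double-valued data and the block size of 1 are assumptions made for the example, not part of the API.

#include <cstddef>
#include <mpi.h>
#include <span>
#include <vector>
#include <dolfinx/common/IndexMap.h>
#include <dolfinx/common/Scatterer.h>

// Send owned values to the ranks that ghost them. x holds owned values
// followed by ghost values (block size 1).
void update_ghosts(const dolfinx::common::IndexMap& map, std::vector<double>& x)
{
  dolfinx::common::Scatterer scatterer(map, 1);

  // Pack owned values into the send buffer (see local_indices()).
  const auto& local = scatterer.local_indices();
  std::vector<double> send_buffer(local.size());
  for (std::size_t i = 0; i < local.size(); ++i)
    send_buffer[i] = x[local[i]];

  // Start and complete the forward scatter (owned -> ghost).
  const auto& remote = scatterer.remote_indices();
  std::vector<double> recv_buffer(remote.size());
  MPI_Request request = MPI_REQUEST_NULL;
  scatterer.scatter_fwd_begin(send_buffer.data(), recv_buffer.data(), request);
  scatterer.scatter_end(request);

  // Unpack the receive buffer into the ghost part of x (see remote_indices()).
  std::span<double> xg(x.data() + map.size_local(), map.num_ghosts());
  for (std::size_t i = 0; i < remote.size(); ++i)
    xg[remote[i]] = recv_buffer[i];
}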

Constructor & Destructor Documentation

◆ Scatterer()

template<class Container = std::vector<std::int32_t>>
Scatterer ( const IndexMap & map,
int bs )
inline

Create a scatterer for data with a layout described by an IndexMap, and with a block size.

Parameters
[in] map: Index map that describes the parallel layout of data.
[in] bs: Number of values associated with each map index (block size).
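
For example, a scatterer for 3-component data sharing the layout of an existing IndexMap could be created as follows (the variable names are illustrative):

// 'map' is an existing dolfinx::common::IndexMap. Each index carries three
// values (e.g. a 3D vector per degree of freedom), so the block size is 3.
dolfinx::common::Scatterer scatterer(map, 3);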

Member Function Documentation

◆ local_indices()

template<class Container = std::vector<std::int32_t>>
const container_type & local_indices ( ) const
inline noexcept

Array of indices for packing/unpacking owned data to/from a send/receive buffer.

For a forward scatter, the indices are used to copy the required entries in the owned part of the data array into the appropriate positions in a send buffer. For a reverse scatter, the indices are used to assign (accumulate) the receive buffer values to the correct positions in the owned part of the data array.

For a forward scatter, if x is the owned part of an array and send_buffer is the send buffer, send_buffer is packed such that:

auto& idx = scatterer.local_indices();
std::vector<T> send_buffer(idx.size());
for (std::size_t i = 0; i < idx.size(); ++i)
    send_buffer[i] = x[idx[i]];

For a reverse scatter, if recv_buffer is the receive buffer, then x is updated by

auto& idx = scatterer.local_indices();
std::vector<T> recv_buffer(idx.size());
for (std::size_t i = 0; i < idx.size(); ++i)
    x[idx[i]] = op(recv_buffer[i], x[idx[i]]);

where op is a binary operation, e.g. x[idx[i]] = buffer[i] or x[idx[i]] += buffer[i].

Returns
Indices container.

◆ num_p2p_requests()

template<class Container = std::vector<std::int32_t>>
std::size_t num_p2p_requests ( )
inline

Number of required MPI_Requests for point-to-point communication.

Returns
Number of required MPI request handles.
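
A typical pattern, sketched here as an assumption rather than a prescribed usage, is to size the request array from this value before starting point-to-point communication:

// Allocate one request handle per point-to-point message. The same vector
// can be reused for subsequent scatter operations.
std::vector<MPI_Request> requests(scatterer.num_p2p_requests(),
                                  MPI_REQUEST_NULL);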

◆ remote_indices()

template<class Container = std::vector<std::int32_t>>
const container_type & remote_indices ( ) const
inline noexcept

Array of indices for packing/unpacking ghost data to/from a send/receive buffer.

For a forward scatter, the indices are used to assign the receive buffer values to the correct positions in the ghost part of the data array. For a reverse scatter, the indices are used to copy the required entries in the ghost part of the data array into the appropriate positions in a send buffer.

For a forward scatter, if xg is the ghost part of the data array and recv_buffer is the receive buffer, xg is updated such that:

auto& idx = scatterer.remote_indices();
std::vector<T> recv_buffer(idx.size());
for (std::size_t i = 0; i < idx.size(); ++i)
    xg[idx[i]] = recv_buffer[i];

For a reverse scatter, if send_buffer is the send buffer, then send_buffer is packed such that:

auto& idx = scatterer.remote_indices();
std::vector<T> send_buffer(idx.size());
for (std::size_t i = 0; i < idx.size(); ++i)
    send_buffer[i] = xg[idx[i]];

Returns
Indices container.

◆ scatter_end() [1/2]

template<class Container = std::vector<std::int32_t>>
void scatter_end ( MPI_Request & request) const
inline

Complete a non-blocking MPI neighbourhood collective send.

This function completes the communication started by scatter_fwd_begin or scatter_rev_begin.

Parameters
[in] request: MPI request handle for tracking the status of the send.

◆ scatter_end() [2/2]

template<class Container = std::vector<std::int32_t>>
void scatter_end ( std::span< MPI_Request > requests) const
inline

Complete non-blocking MPI point-to-point sends.

This function completes the communication started by scatter_fwd_begin or scatter_rev_begin.

Parameters
[in] requests: MPI request handles for tracking the status of sends.

◆ scatter_fwd_begin() [1/2]

template<class Container = std::vector<std::int32_t>>
template<typename T>
void scatter_fwd_begin ( const T * send_buffer,
T * recv_buffer,
MPI_Request & request ) const
inline

Start a non-blocking send of owned data to ranks that ghost the data using MPI neighbourhood collective communication (recommended).

The communication is completed by calling Scatterer::scatter_end. See local_indices for instructions on packing send_buffer and remote_indices for instructions on unpacking recv_buffer.

Note
The send and receive buffers must not be changed or accessed until after a call to Scatterer::scatter_end.
The pointers send_buffer and recv_buffer must be pointers to the data on the target device. E.g., if the send and receive buffers are allocated on a GPU, the send_buffer and recv_buffer should be device pointers.
Parameters
[in] send_buffer: Packed local data associated with each owned local index to be sent to processes where the data is ghosted. See Scatterer::local_indices for the order of the buffer and how to pack.
[in,out] recv_buffer: Buffer for storing received data. See Scatterer::remote_indices for the order of the buffer and how to unpack.
[in] request: MPI request handle for tracking the status of the non-blocking communication. The same request handle should be passed to Scatterer::scatter_end to complete the communication.
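
Because the call is non-blocking, work that touches only owned data can overlap with the communication. A sketch of this pattern, where send_buffer and recv_buffer are assumed to be already packed and sized, and compute_on_owned_data is a hypothetical placeholder for such work:

MPI_Request request = MPI_REQUEST_NULL;
scatterer.scatter_fwd_begin(send_buffer.data(), recv_buffer.data(), request);

// Work that reads/writes only owned data (and neither buffer) can proceed
// while the messages are in flight.
compute_on_owned_data(x); // hypothetical work on the owned part of x

// Complete the communication before unpacking recv_buffer.
scatterer.scatter_end(request);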

◆ scatter_fwd_begin() [2/2]

template<class Container = std::vector<std::int32_t>>
template<typename T>
void scatter_fwd_begin ( const T * send_buffer,
T * recv_buffer,
std::span< MPI_Request > requests ) const
inline

Start a non-blocking send of owned data to ranks that ghost the data using point-to-point MPI communication.

See the neighbourhood collective version of scatter_fwd_begin for a detailed explanation of usage, including how the send and receive buffers are packed and unpacked.

Note
Use of the neighbourhood version of scatter_fwd_begin is recommended over this version.
Parameters
[in] send_buffer: Send buffer.
[in,out] recv_buffer: Receive buffer.
[in] requests: List of MPI request handles. The length of the list must be num_p2p_requests().
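
An illustrative call sequence for the point-to-point variant, assuming send_buffer and recv_buffer have already been packed and sized as for the neighbourhood collective overload:

// One request handle per point-to-point message.
std::vector<MPI_Request> requests(scatterer.num_p2p_requests(),
                                  MPI_REQUEST_NULL);
scatterer.scatter_fwd_begin(send_buffer.data(), recv_buffer.data(),
                            std::span(requests));
scatterer.scatter_end(std::span(requests));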

◆ scatter_rev_begin() [1/2]

template<class Container = std::vector<std::int32_t>>
template<typename T>
void scatter_rev_begin ( const T * send_buffer,
T * recv_buffer,
MPI_Request & request ) const
inline

Start a non-blocking send of ghost data to ranks that own the data using MPI neighbourhood collective communication (recommended).

The communication is completed by calling Scatterer::scatter_end. See remote_indices for instructions on packing send_buffer and local_indices for instructions on unpacking recv_buffer.

Note
The send and receive buffers must not be changed or accessed until after a call to Scatterer::scatter_end.
The pointers send_buffer and recv_buffer must be pointers to the data on the target device. E.g., if the send and receive buffers are allocated on a GPU, the send_buffer and recv_buffer should be device pointers.
Parameters
[in] send_buffer: Data associated with each ghost index. This data is sent to the process that owns the index. See Scatterer::remote_indices for the order of the buffer and how to pack.
[in,out] recv_buffer: Buffer for storing received data. See Scatterer::local_indices for the order of the buffer and how to unpack.
[in] request: MPI request handle for tracking the status of the non-blocking communication. The same request handle should be passed to Scatterer::scatter_end to complete the communication.
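
An illustrative reverse scatter that accumulates ghost contributions into the owned part of a distributed array x (with scatterer, map and x as in the sketch in the class description); the block size of 1 and the choice of += as the accumulation operation are assumptions made for the example:

// Pack ghost values into the send buffer (see remote_indices()).
const auto& remote = scatterer.remote_indices();
std::vector<double> send_buffer(remote.size());
std::span<const double> xg(x.data() + map.size_local(), map.num_ghosts());
for (std::size_t i = 0; i < remote.size(); ++i)
  send_buffer[i] = xg[remote[i]];

// Start and complete the reverse scatter (ghost -> owner).
const auto& local = scatterer.local_indices();
std::vector<double> recv_buffer(local.size());
MPI_Request request = MPI_REQUEST_NULL;
scatterer.scatter_rev_begin(send_buffer.data(), recv_buffer.data(), request);
scatterer.scatter_end(request);

// Accumulate the received contributions into the owned part of x
// (see local_indices()).
for (std::size_t i = 0; i < local.size(); ++i)
  x[local[i]] += recv_buffer[i];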

◆ scatter_rev_begin() [2/2]

template<class Container = std::vector<std::int32_t>>
template<typename T>
void scatter_rev_begin ( const T * send_buffer,
T * recv_buffer,
std::span< MPI_Request > requests ) const
inline

Start a non-blocking send of ghost data to ranks that own the data using point-to-point MPI communication.

See the neighbourhood collective version of scatter_rev_begin for a detailed explanation of usage, including how the send and receive buffers are packed and unpacked.

Note
Use of the neighbourhood version of scatter_rev_begin is recommended over this version.
Parameters
[in] send_buffer: Send buffer.
[in,out] recv_buffer: Receive buffer.
[in] requests: List of MPI request handles. The length of the list must be num_p2p_requests().

The documentation for this class was generated from the following file: Scatterer.h