DOLFINx 0.10.0.0
DOLFINx C++ interface
Vector< T, Container, ScatterContainer > Class Template Reference

A vector that can be distributed across processes. More...

#include <Vector.h>

Public Types

using container_type = Container
 Container type.
using value_type = container_type::value_type
 Scalar type.

Public Member Functions

 Vector (std::shared_ptr< const common::IndexMap > map, int bs)
 Create a distributed vector.
 Vector (const Vector &x)=default
 Copy constructor.
 Vector (Vector &&x)=default
 Move constructor.
Vector & operator= (const Vector &x)=delete
Vector & operator= (Vector &&x)=default
 Move assignment operator.
void set (value_type v)
 Set all entries (including ghosts).
template<typename U, typename GetPtr>
requires VectorPackKernel<U, container_type, ScatterContainer> and GetPtrConcept<GetPtr, Container, T>
void scatter_fwd_begin (U pack, GetPtr get_ptr)
 Begin scatter (send) of local data that is ghosted on other processes.
void scatter_fwd_begin ()
 Begin scatter (send) of local data that is ghosted on other processes (simplified CPU version).
template<typename U>
requires VectorPackKernel<U, container_type, ScatterContainer>
void scatter_fwd_end (U unpack)
 End scatter (send) of local data values that are ghosted on other processes.
void scatter_fwd_end ()
 End scatter (send) of local data values that are ghosted on other processes (simplified CPU version).
void scatter_fwd ()
 Scatter (send) of local data values that are ghosted on other processes and update ghost entry values (simplified CPU version).
template<typename U, typename GetPtr>
requires VectorPackKernel<U, container_type, ScatterContainer> and GetPtrConcept<GetPtr, Container, T>
void scatter_rev_begin (U pack, GetPtr get_ptr)
 Start scatter (send) of ghost entry data to the owning process of an index.
void scatter_rev_begin ()
 Start scatter (send) of ghost entry data to the owning process of an index (simplified CPU version).
template<typename U>
requires VectorPackKernel<U, container_type, ScatterContainer>
void scatter_rev_end (U unpack)
 End scatter of ghost data to owner and update owned entries.
template<class BinaryOperation>
requires requires(Container c) { { c.data() } -> std::same_as<T*>; }
void scatter_rev (BinaryOperation op)
 Scatter (send) of ghost data values to the owning process and assign/accumulate into the owned data entries (simplified CPU version).
std::shared_ptr< const common::IndexMap > index_map () const
 Get IndexMap.
constexpr int bs () const
 Get block size.
container_type & array ()
 Get the process-local part of the vector.
const container_type & array () const
 Get the process-local part of the vector (const version).
container_type & mutable_array ()
 Get local part of the vector.

Detailed Description

template<typename T, typename Container = std::vector<T>, typename ScatterContainer = std::vector<std::int32_t>>
class dolfinx::la::Vector< T, Container, ScatterContainer >

A vector that can be distributed across processes.

Template Parameters
T: Scalar type of the vector.
Container: Data container type. This is typically std::vector<T> on CPUs, and thrust::device_vector<T> on GPUs.
ScatterContainer: Storage container type for the scatterer indices. This is typically std::vector<std::int32_t> on CPUs, and thrust::device_vector<std::int32_t> on GPUs.

Constructor & Destructor Documentation

◆ Vector()

template<typename T, typename Container = std::vector<T>, typename ScatterContainer = std::vector<std::int32_t>>
Vector ( std::shared_ptr< const common::IndexMap > map,
int bs )
inline

Create a distributed vector.

Parameters
map: Index map that describes the parallel layout of the data.
bs: Number of entries per index map 'index' (block size).

Member Function Documentation

◆ array() [1/2]

template<typename T, typename Container = std::vector<T>, typename ScatterContainer = std::vector<std::int32_t>>
container_type & array ( )
inline

Get the process-local part of the vector.

Owned entries appear first, followed by ghosted entries.

◆ array() [2/2]

template<typename T, typename Container = std::vector<T>, typename ScatterContainer = std::vector<std::int32_t>>
const container_type & array ( ) const
inline

Get the process-local part of the vector (const version).

Owned entries appear first, followed by ghosted entries.

◆ mutable_array()

template<typename T, typename Container = std::vector<T>, typename ScatterContainer = std::vector<std::int32_t>>
container_type & mutable_array ( )
inline

Get the process-local part of the vector.

Deprecated
Use array instead.

◆ scatter_fwd()

template<typename T, typename Container = std::vector<T>, typename ScatterContainer = std::vector<std::int32_t>>
void scatter_fwd ( )
inline

Scatter (send) of local data values that are ghosted on other processes and update ghost entry values (simplified CPU version).

Suitable for scatter operations on a CPU with std::vector storage. The send buffer is packed and the receive buffer unpacked by a function that is suitable for use on a CPU.

Note
Collective MPI operation

◆ scatter_fwd_begin() [1/2]

template<typename T, typename Container = std::vector<T>, typename ScatterContainer = std::vector<std::int32_t>>
void scatter_fwd_begin ( )
inline

Begin scatter (send) of local data that is ghosted on other processes (simplified CPU version).

Suitable for scatter operations on a CPU with std::vector storage. The send buffer is packed internally by a function that is suitable for use on a CPU.

Note
Collective MPI operation.

◆ scatter_fwd_begin() [2/2]

template<typename T, typename Container = std::vector<T>, typename ScatterContainer = std::vector<std::int32_t>>
template<typename U, typename GetPtr>
requires VectorPackKernel<U, container_type, ScatterContainer> and GetPtrConcept<GetPtr, Container, T>
void scatter_fwd_begin ( U pack,
GetPtr get_ptr )
inline

Begin scatter (send) of local data that is ghosted on other processes.

The user provides the function that packs data into the send buffer. Typical usage would be a specialised function to pack data that resides on a GPU.

Note
Collective MPI operation
Template Parameters
U: Pack function type.
GetPtr: Type of the function that returns a pointer to a container's underlying data.
Parameters
pack: Function that packs owned data into the send buffer.
get_ptr: Function that, for a Container, returns a pointer to the underlying data.

◆ scatter_fwd_end() [1/2]

template<typename T, typename Container = std::vector<T>, typename ScatterContainer = std::vector<std::int32_t>>
void scatter_fwd_end ( )
inline

End scatter (send) of local data values that are ghosted on other processes (simplified CPU version).

Suitable for scatter operations on a CPU with std::vector storage. The receive buffer is unpacked internally by a function that is suitable for use on a CPU.

Note
Collective MPI operation.

◆ scatter_fwd_end() [2/2]

template<typename T, typename Container = std::vector<T>, typename ScatterContainer = std::vector<std::int32_t>>
template<typename U>
requires VectorPackKernel<U, container_type, ScatterContainer>
void scatter_fwd_end ( U unpack)
inline

End scatter (send) of local data values that are ghosted on other processes.

The user provides the function that unpacks the receive buffer. Typical usage would be a specialised function to unpack data that resides on a GPU.

Note
Collective MPI operation.
Parameters
unpack: Function to unpack the receive buffer into the ghost entries.

◆ scatter_rev()

template<typename T, typename Container = std::vector<T>, typename ScatterContainer = std::vector<std::int32_t>>
template<class BinaryOperation>
requires requires(Container c) { { c.data() } -> std::same_as<T*>; }
void scatter_rev ( BinaryOperation op)
inline

Scatter (send) of ghost data values to the owning process and assign/accumulate into the owned data entries (simplified CPU version).

For an owned entry, data from more than one process may be received. The received data can be summed or inserted into the owning entry; this is controlled by the op function.

Note
Collective MPI operation
Parameters
op: Binary operation that combines a received value with the owned entry (e.g. add or insert).

◆ scatter_rev_begin() [1/2]

template<typename T, typename Container = std::vector<T>, typename ScatterContainer = std::vector<std::int32_t>>
void scatter_rev_begin ( )
inline

Start scatter (send) of ghost entry data to the owning process of an index (simplified CPU version).

Suitable for scatter operations on a CPU with std::vector storage. The send buffer is packed internally by a function that is suitable for use on a CPU.

Note
Collective MPI operation

◆ scatter_rev_begin() [2/2]

template<typename T, typename Container = std::vector<T>, typename ScatterContainer = std::vector<std::int32_t>>
template<typename U, typename GetPtr>
requires VectorPackKernel<U, container_type, ScatterContainer> and GetPtrConcept<GetPtr, Container, T>
void scatter_rev_begin ( U pack,
GetPtr get_ptr )
inline

Start scatter (send) of ghost entry data to the owning process of an index.

The user provides the function that packs data into the send buffer. Typical usage would be a specialised function to pack data that resides on a GPU.

Note
Collective MPI operation
Template Parameters
U: Pack function type.
GetPtr: Type of the function that returns a pointer to a container's underlying data.
Parameters
pack: Function that packs ghost data into the send buffer.
get_ptr: Function that, for a Container, returns a pointer to the underlying data.

◆ scatter_rev_end()

template<typename T, typename Container = std::vector<T>, typename ScatterContainer = std::vector<std::int32_t>>
template<typename U>
requires VectorPackKernel<U, container_type, ScatterContainer>
void scatter_rev_end ( U unpack)
inline

End scatter of ghost data to owner and update owned entries.

For an owned entry, data from more than one process may be received. The received data can be summed or inserted into the owning entry by the unpack function.

Note
Collective MPI operation

◆ set()

template<typename T, typename Container = std::vector<T>, typename ScatterContainer = std::vector<std::int32_t>>
void set ( value_type v)
inline

Set all entries (including ghosts).

Deprecated
Use std::ranges::fill(u.array(), v) instead.
Parameters
[in] v: Value to set all entries to (on calling rank).

The documentation for this class was generated from the following file: