Linear algebra (dolfinx::la)

namespace la

Linear algebra interface.

Interface to linear algebra data structures and solvers.

Enums

enum class BlockMode : int

Modes for representing block structured matrices.

Values:

enumerator compact

Each entry in the sparsity pattern of the matrix refers to a block of data of size (bs[0], bs[1]).

enumerator expanded

The sparsity pattern is expanded for every entry in a block, so each matrix entry refers to an individual value and the block size of the matrix is (1, 1).

enum class Norm

Norm types.

Values:

enumerator l1
enumerator l2
enumerator linf
enumerator frobenius

Functions

template<class V>
auto inner_product(const V &a, const V &b)

Compute the inner product of two vectors. The two vectors must have the same parallel layout.

Note

Collective MPI operation

Parameters:
  • a – A vector

  • b – A vector

Returns:

Returns \(a^{H} b\) (\(a^{T} b\) if a and b are real)

template<class V>
auto squared_norm(const V &a)

Compute the squared L2 norm of a vector.

Note

Collective MPI operation

template<class V>
auto norm(const V &x, Norm type = Norm::l2)

Compute the norm of the vector

Note

Collective MPI operation

Parameters:
  • x – A vector

  • type – Norm type
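A minimal sketch of calling these free functions, assuming x and y are existing dolfinx::la::Vector<double> objects with identical parallel layouts:

  // All calls are collective over the vectors' communicator
  auto xy = dolfinx::la::inner_product(x, y);              // x^H y
  auto n2 = dolfinx::la::norm(x);                          // l2 norm (default)
  auto ni = dolfinx::la::norm(x, dolfinx::la::Norm::linf); // max norm
  auto sq = dolfinx::la::squared_norm(x);                  // ||x||_2^2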

template<class V>
void orthonormalize(std::vector<std::reference_wrapper<V>> basis)

Orthonormalize a set of vectors

Parameters:

basis[inout] The set of vectors to orthonormalize. The vectors must have identical parallel layouts and are modified in-place.

Template Parameters:

V – dolfinx::la::Vector

template<class V>
bool is_orthonormal(std::vector<std::reference_wrapper<const V>> basis, dolfinx::scalar_value_type_t<typename V::value_type> eps = std::numeric_limits<dolfinx::scalar_value_type_t<typename V::value_type>>::epsilon())

Test if basis is orthonormal.

Returns true if |x_i · x_j - delta_{ij}| < eps for all i, j, and otherwise false.

Parameters:
  • basis[in] Set of vectors to check.

  • eps[in] Tolerance.

Returns:

True if the basis is orthonormal, otherwise false.
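A sketch of the intended pairing of these two utilities, assuming v0, v1 and v2 are dolfinx::la::Vector<double> objects with identical parallel layouts:

  std::vector<std::reference_wrapper<dolfinx::la::Vector<double>>> basis{v0, v1, v2};
  dolfinx::la::orthonormalize(basis);   // vectors are modified in-place (collective)

  // Check the result using the default tolerance
  std::vector<std::reference_wrapper<const dolfinx::la::Vector<double>>> cbasis{v0, v1, v2};
  bool ok = dolfinx::la::is_orthonormal(cbasis);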

template<class Scalar, class Container = std::vector<Scalar>, class ColContainer = std::vector<std::int32_t>, class RowPtrContainer = std::vector<std::int64_t>>
class MatrixCSR
#include <MatrixCSR.h>

Distributed sparse matrix.

The matrix storage format is compressed sparse row. The matrix is partitioned row-wise across MPI ranks.

Warning

Experimental storage of a matrix in CSR format which can be assembled into using the usual DOLFINx assembly routines. Matrix internal data can be accessed for interfacing with other code.

Template Parameters:
  • Scalar – Scalar type of matrix entries

  • Container – Sequence container type to store matrix entries

  • ColContainer – Column index container type

  • RowPtrContainer – Row pointer container type

Public Types

using value_type = Scalar

Scalar type.

using container_type = Container

Matrix entries container type.

using column_container_type = ColContainer

Column index container type.

using rowptr_container_type = RowPtrContainer

Row pointer container type.

Public Functions

template<int BS0 = 1, int BS1 = 1>
inline auto mat_set_values()

Insertion functor for setting values in a matrix. It is typically used in finite element assembly functions.

Create a function to set values in a MatrixCSR. The function signature is int mat_set_fn(std::span<const std::int32_t> rows, std::span<const std::int32_t> cols, std::span<const value_type> data). The rows and columns use process-local indexing, and the given rows and columns must pre-exist in the sparsity pattern of the matrix. Insertion into “ghost” rows (in the ghost region of the row IndexMap) is permitted, so long as the corresponding entries exist in the sparsity pattern.

Note

Using rows or columns which are not in the sparsity pattern will result in undefined behaviour (or an assert failure in Debug mode).

Note

Matrix block size may be (1, 1) or (BS0, BS1)

Note

Data block size may be (1, 1) or (BS0, BS1)

Template Parameters:
  • BS0 – Row block size of data for insertion

  • BS1 – Column block size of data for insertion

Returns:

Function for inserting values into A
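The returned functor is normally passed to an assembly routine, but it can also be called directly. A minimal sketch, assuming A is a dolfinx::la::MatrixCSR<double> whose sparsity pattern contains the entries below:

  auto set = A.mat_set_values();                    // BS0 = BS1 = 1
  std::array<std::int32_t, 2> rows{0, 1};
  std::array<std::int32_t, 2> cols{0, 1};
  std::array<double, 4> vals{1.0, 2.0, 3.0, 4.0};   // 2x2 dense block, row-major
  set(rows, cols, vals);                            // sets A(0,0), A(0,1), A(1,0), A(1,1)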

template<int BS0 = 1, int BS1 = 1>
inline auto mat_add_values()

Insertion functor for adding values to a matrix. It is typically used in finite element assembly functions.

Create a function to add values to a MatrixCSR. The function signature is int mat_add_fn(std::span<const std::int32_t> rows, std::span<const std::int32_t> cols, std::span<const value_type> data). The rows and columns use process-local indexing, and the given rows and columns must pre-exist in the sparsity pattern of the matrix. Insertion into “ghost” rows (in the ghost region of the row IndexMap) is permitted, so long as the corresponding entries exist in the sparsity pattern.

Note

Using rows or columns which are not in the sparsity pattern will result in undefined behaviour (or an assert failure in Debug mode).

Note

Matrix block size may be (1, 1) or (BS0, BS1)

Note

Data block size may be (1, 1) or (BS0, BS1)

Template Parameters:
  • BS0 – Row block size of data for insertion

  • BS1 – Column block size of data for insertion

Returns:

Function for inserting values into A

MatrixCSR(const SparsityPattern &p, BlockMode mode = BlockMode::compact)

Create a distributed matrix.

The structure of the matrix depends entirely on the input SparsityPattern, which must be finalized. The matrix storage is distributed compressed sparse row: the matrix is distributed by row across processes, and on each process a list of column indices and matrix entries is stored for each row. This exactly matches the layout of the SparsityPattern. There is some overlap of matrix rows between processes to allow for independent finite element assembly, after which the ghost rows should be sent to the row-owning processes by calling scatter_rev().

Note

The block size of the matrix is given by the block size of the input SparsityPattern.

Parameters:
  • p[in] The sparsity pattern which describes the parallel distribution and the non-zero structure.

  • mode[in] Block mode. When the block size is greater than one, the storage can be “compact” where each matrix entry refers to a block of data (stored row major), or “expanded” where each matrix entry is individual. In the “expanded” case, the sparsity is expanded for every entry in the block, and the block size of the matrix is set to (1, 1).
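A minimal end-to-end sketch, assuming sp is a finalized SparsityPattern that contains the entry (0, 0) on this rank:

  dolfinx::la::MatrixCSR<double> A(sp);     // BlockMode::compact by default
  A.set(0.0);                               // zero all stored entries

  std::array<std::int32_t, 1> row{0}, col{0};
  std::array<double, 1> val{10.0};
  A.add(val, row, col);                     // accumulate into A(0, 0)

  A.scatter_rev();                          // send ghost-row contributions to owners (collective)
  std::vector<double> dense = A.to_dense(); // row-major local copy, for debugging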

MatrixCSR(MatrixCSR &&A) = default

Move constructor

Todo:

Check handling of MPI_Request

inline void set(value_type x)

Set all non-zero local entries to a value including entries in ghost rows.

Parameters:

x[in] The value to set non-zero matrix entries to

template<int BS0, int BS1>
inline void set(std::span<const value_type> x, std::span<const std::int32_t> rows, std::span<const std::int32_t> cols)

Set values in the matrix.

Note

Only entries included in the sparsity pattern used to initialize the matrix can be set.

Note

All indices are local to the calling MPI rank and entries cannot be set in ghost rows.

Note

This should be called after scatter_rev. Using it before scatter_rev will set the values correctly, but incoming values may be added to them during a subsequent reverse scatter operation.

Template Parameters:
  • BS0 – Data row block size

  • BS1 – Data column block size

Parameters:
  • x[in] The m by n dense block of values (row-major) to set in the matrix

  • rows[in] The row indices of x

  • cols[in] The column indices of x

template<int BS0 = 1, int BS1 = 1>
inline void add(std::span<const value_type> x, std::span<const std::int32_t> rows, std::span<const std::int32_t> cols)

Accumulate values in the matrix.

Note

Only entries included in the sparsity pattern used to initialize the matrix can be accumulated into.

Note

All indices are local to the calling MPI rank and entries may go into ghost rows.

Note

Use scatter_rev after all entries have been added to send ghost rows to owners. Adding more entries after scatter_rev is allowed, but another call to scatter_rev will then be required.

Template Parameters:
  • BS0 – Row block size of data

  • BS1 – Column block size of data

Parameters:
  • x[in] The m by n dense block of values (row-major) to add to the matrix

  • rows[in] The row indices of x

  • cols[in] The column indices of x

inline std::int32_t num_owned_rows() const

Number of local rows excluding ghost rows.

inline std::int32_t num_all_rows() const

Number of local rows including ghost rows.

std::vector<value_type> to_dense() const

Copy to a dense matrix

Note

This function is typically used for debugging and not used in production

Note

Ghost rows are also returned, and these can be truncated manually by using num_owned_rows() if required.

Note

If the block size is greater than 1, the entries are expanded.

Returns:

Dense copy of the part of the matrix on the calling rank. Storage is row-major.

inline void scatter_rev()

Transfer ghost row data to the owning ranks accumulating received values on the owned rows, and zeroing any existing data in ghost rows. This process is analogous to scatter_rev for Vector except that the values are always accumulated on the owning process.

void scatter_rev_begin()

Begin transfer of ghost row data to owning ranks, where it will be accumulated into existing owned rows.

Note

Calls to this function must be followed by MatrixCSR::scatter_rev_end(). Between the two calls matrix values must not be changed.

Note

This function does not change the matrix data. Data update only occurs with scatter_rev_end().

void scatter_rev_end()

End transfer of ghost row data to owning ranks.

Note

Must be preceded by MatrixCSR::scatter_rev_begin().

Note

Matrix data received from other processes will be accumulated into locally owned rows, and ghost rows will be zeroed.
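The split-phase form allows communication to be overlapped with computation. A sketch of the usual pattern, assuming A is a MatrixCSR that has been assembled into (including ghost rows):

  A.scatter_rev_begin();   // start sending ghost-row data to the owning ranks
  // ... other work that does not modify the matrix values ...
  A.scatter_rev_end();     // received data is accumulated into owned rows; ghost rows are zeroed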

double squared_norm() const

Compute the Frobenius norm squared across all processes.

Note

MPI Collective

inline std::shared_ptr<const common::IndexMap> index_map(int dim) const

Index maps for the row and column space.

The row IndexMap contains ghost entries for rows that may be inserted into, and the column IndexMap contains all local and ghost columns that may exist in the owned rows.

Returns:

Row (0) or column (1) index maps

inline container_type &values()

Get local data values

Note

Includes ghost values

inline const container_type &values() const

Get local values (const version)

Note

Includes ghost values

inline const rowptr_container_type &row_ptr() const

Get local row pointers

Note

Includes pointers to ghost rows

inline const column_container_type &cols() const

Get local column indices

Note

Includes columns in ghost rows

inline const rowptr_container_type &off_diag_offset() const

Get the start of off-diagonal (unowned columns) on each row, allowing the matrix to be split (virtually) into two parts. Operations (such as matrix-vector multiply) between the owned parts of the matrix and vector can then be performed separately from operations on the unowned parts.

Note

Includes ghost rows, which should be truncated manually if not required.
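As an illustration only (not part of the library), a sketch of a local matrix-vector product over the owned rows that uses off_diag_offset() to split each row into owned-column and unowned-column parts. It assumes a matrix block size of (1, 1), with x spanning the owned plus ghost entries of the input vector and y spanning the owned entries of the result:

  const auto& vals = A.values();
  const auto& cols = A.cols();
  const auto& rptr = A.row_ptr();
  const auto& odiag = A.off_diag_offset();
  for (std::int32_t i = 0; i < A.num_owned_rows(); ++i)
  {
    double yi = 0.0;
    // Owned columns: can be applied before ghost values of x are updated
    for (auto k = rptr[i]; k < odiag[i]; ++k)
      yi += vals[k] * x[cols[k]];
    // Unowned (ghost) columns: requires up-to-date ghost values of x
    for (auto k = odiag[i]; k < rptr[i + 1]; ++k)
      yi += vals[k] * x[cols[k]];
    y[i] = yi;
  }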

inline std::array<int, 2> block_size() const

Block size

Returns:

block sizes for rows and columns

class SLEPcEigenSolver
#include <slepc.h>

This class provides an eigenvalue solver for PETSc matrices. It is a wrapper for the SLEPc eigenvalue solver.

Public Functions

explicit SLEPcEigenSolver(MPI_Comm comm)

Create eigenvalue solver.

SLEPcEigenSolver(EPS eps, bool inc_ref_count)

Create eigenvalue solver from EPS object.

SLEPcEigenSolver(SLEPcEigenSolver &&solver)

Move constructor.

~SLEPcEigenSolver()

Destructor.

SLEPcEigenSolver &operator=(SLEPcEigenSolver &&solver)

Move assignment.

void set_operators(const Mat A, const Mat B)

Set operators (B may be nullptr for regular eigenvalue problems)

void solve()

Compute all eigenpairs of the matrix A (solve \(A x = \lambda x\))

void solve(std::int64_t n)

Compute the first n eigenpairs of the matrix A (solve \(A x = \lambda x\))
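A sketch of a typical call sequence, assuming A and B are assembled PETSc Mat objects for a generalized problem \(A x = \lambda B x\):

  dolfinx::la::SLEPcEigenSolver solver(MPI_COMM_WORLD);
  solver.set_operators(A, B);        // pass nullptr for B for a standard problem
  solver.set_from_options();
  solver.solve(5);                   // request the first 5 eigenpairs
  for (std::int64_t i = 0; i < solver.get_number_converged(); ++i)
  {
    auto lambda = solver.get_eigenvalue(i);   // std::complex<PetscReal>
    // ... use lambda ...
  }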

std::complex<PetscReal> get_eigenvalue(int i) const

Get ith eigenvalue.

void get_eigenpair(PetscScalar &lr, PetscScalar &lc, Vec r, Vec c, int i) const

Get ith eigenpair.

int get_iteration_number() const

Get the number of iterations used by the solver.

std::int64_t get_number_converged() const

Get the number of converged eigenvalues.

void set_options_prefix(std::string options_prefix)

Sets the prefix used by PETSc when searching the PETSc options database

std::string get_options_prefix() const

Returns the prefix used by PETSc when searching the PETSc options database

void set_from_options() const

Set options from PETSc options database.

EPS eps() const

Return SLEPc EPS pointer.

MPI_Comm comm() const

Return MPI communicator.

class SparsityPattern
#include <SparsityPattern.h>

Sparsity pattern data structure that can be used to initialize sparse matrices. After assembly, column indices are always sorted in increasing order. Ghost entries are kept after assembly.

Public Functions

SparsityPattern(MPI_Comm comm, const std::array<std::shared_ptr<const common::IndexMap>, 2> &maps, const std::array<int, 2> &bs)

Create an empty sparsity pattern with specified dimensions.

Parameters:
  • comm[in] Communicator that the pattern is defined on.

  • maps[in] Index maps describing the [0] row and [1] column index ranges (up to a block size).

  • bs[in] Block sizes for the [0] row and [1] column maps.

SparsityPattern(MPI_Comm comm, const std::vector<std::vector<const SparsityPattern*>> &patterns, const std::array<std::vector<std::pair<std::reference_wrapper<const common::IndexMap>, int>>, 2> &maps, const std::array<std::vector<int>, 2> &bs)

Create a new sparsity pattern by concatenating sub-patterns, e.g.

  pattern = [ pattern00 ][ pattern01 ]
            [ pattern10 ][ pattern11 ]

Parameters:
  • comm[in] Communicator that the pattern is defined on.

  • patterns[in] Rectangular array of sparsity patterns. The patterns must not be finalized. Null blocks are permitted.

  • maps[in] Pairs of (index map, block size) for each row block (maps[0]) and column block (maps[1]).

  • bs[in] Block sizes for the sparsity pattern entries.

SparsityPattern(SparsityPattern &&pattern) = default

Move constructor.

~SparsityPattern() = default

Destructor.

SparsityPattern &operator=(SparsityPattern &&pattern) = default

Move assignment.

void insert(std::int32_t row, std::int32_t col)

Insert non-zero locations using local (process-wise) indices.

Parameters:
  • row[in] local row index

  • col[in] local column index

void insert(std::span<const std::int32_t> rows, std::span<const std::int32_t> cols)

Insert non-zero locations using local (process-wise) indices.

This routine inserts non-zero locations at the outer product of rows and cols into the sparsity pattern, i.e. it adds the matrix entries at A[rows[i], cols[j]] for all i, j.

Parameters:
  • rows[in] list of the local row indices

  • cols[in] list of the local column indices
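A sketch of building a small pattern by hand, assuming map0 and map1 are std::shared_ptr<const common::IndexMap> objects describing the row and column layouts (block size 1 here):

  dolfinx::la::SparsityPattern sp(MPI_COMM_WORLD, {map0, map1}, {1, 1});
  std::array<std::int32_t, 2> rows{0, 1};
  std::array<std::int32_t, 2> cols{0, 1};
  sp.insert(rows, cols);    // inserts (0,0), (0,1), (1,0), (1,1)
  sp.finalize();            // collective: communicate off-process entries
  dolfinx::la::MatrixCSR<double> A(sp);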

void insert_diagonal(std::span<const std::int32_t> rows)

Insert non-zero locations on the diagonal.

Parameters:

rows[in] Rows in local (process-wise) indices. The indices must exist in the row IndexMap.

void finalize()

Finalize sparsity pattern and communicate off-process entries.

std::shared_ptr<const common::IndexMap> index_map(int dim) const

Index map for the given dimension. Returns the index map for rows and columns that will be set by the current MPI rank.

Parameters:

dim[in] Requested map, row (0) or column (1).

Returns:

The index map.

std::vector<std::int64_t> column_indices() const

Global indices of non-zero columns on owned rows.

Note

The ghosts are computed only once SparsityPattern::finalize has been called.

Returns:

Global indices of non-zero columns on this process, including ghosts.

common::IndexMap column_index_map() const

Builds the index map for columns after assembly of the sparsity pattern.

Todo:

Should this be computed and stored when finalizing the SparsityPattern?

Returns:

Map for all non-zero columns on this process, including ghosts

int block_size(int dim) const

Return index map block size for dimension dim.

std::int64_t num_nonzeros() const

Number of nonzeros on this rank after assembly, including ghost rows.

std::int32_t nnz_diag(std::int32_t row) const

Number of non-zeros in owned columns (diagonal block) on a given row.

Note

Can also be used on ghost rows

std::int32_t nnz_off_diag(std::int32_t row) const

Number of non-zeros in unowned columns (off-diagonal block) on a given row.

Note

Can also be used on ghost rows

std::pair<std::span<const std::int32_t>, std::span<const std::int64_t>> graph() const

Sparsity pattern graph after assembly. Uses local indices for the columns.

Note

Column global indices can be obtained from SparsityPattern::column_index_map()

Note

Includes ghost rows

Returns:

Adjacency list edges and offsets

std::span<const std::int32_t> off_diagonal_offsets() const

Row-wise start of off-diagonals (unowned columns) for each row.

Note

Includes ghost rows

MPI_Comm comm() const

Return MPI communicator.

template<typename T, typename Container = std::vector<T>>
class Vector
#include <Vector.h>

Distributed vector

Template Parameters:
  • T – Scalar type

  • Container – data container type

Public Types

using value_type = T

Scalar type.

using container_type = Container

Container type.

Public Functions

inline Vector(std::shared_ptr<const common::IndexMap> map, int bs)

Create a distributed vector

Parameters:
  • map – IndexMap for parallel distribution of the data

  • bs – Block size
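A minimal sketch, assuming each rank owns 100 entries, a block size of 1, and that common::IndexMap can be constructed from a communicator and a local size:

  auto map = std::make_shared<dolfinx::common::IndexMap>(MPI_COMM_WORLD, 100);
  dolfinx::la::Vector<double> x(map, 1);
  x.set(0.0);                                // owned and ghost entries
  std::span<double> xl = x.mutable_array();  // local data (owned followed by ghosts)
  xl[0] = 1.0;
  x.scatter_fwd();                           // update ghosts on other ranks (collective)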

inline Vector(const Vector &x)

Copy constructor.

inline Vector(Vector &&x)

Move constructor.

Vector &operator=(Vector &&x) = default

Move Assignment operator.

inline void set(value_type v)

Set all entries (including ghosts)

Parameters:

v[in] The value to set all entries to (on calling rank)

inline void scatter_fwd_begin()

Begin scatter of local data from owner to ghosts on other ranks

Note

Collective MPI operation

inline void scatter_fwd_end()

End scatter of local data from owner to ghosts on other ranks

Note

Collective MPI operation

inline void scatter_fwd()

Scatter local data to ghost positions on other ranks

Note

Collective MPI operation

inline void scatter_rev_begin()

Start scatter of ghost data to owner

Note

Collective MPI operation

template<class BinaryOperation>
inline void scatter_rev_end(BinaryOperation op)

End scatter of ghost data to owner. This process may receive data from more than one process, and the received data can be summed or inserted into the local portion of the vector.

Note

Collective MPI operation

Parameters:

op – The operation to perform when adding/setting received values (add or insert)

template<class BinaryOperation>
inline void scatter_rev(BinaryOperation op)

Scatter ghost data to owner. This process may receive data from more than one process, and the received data can be summed or inserted into the local portion of the vector.

Note

Collective MPI operation

Parameters:

op – The operation to perform when adding/setting received values (add or insert)
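A typical pattern after accumulating contributions into ghost entries (e.g. during assembly): reduce the ghost contributions onto the owning ranks with an additive operation, then refresh the ghost values. Here x is assumed to be a dolfinx::la::Vector<double>, and <functional> provides std::plus:

  x.scatter_rev(std::plus<double>());   // add ghost contributions into owned entries (collective)
  x.scatter_fwd();                      // update ghost entries from their owners (collective)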

inline std::shared_ptr<const common::IndexMap> index_map() const

Get IndexMap.

inline int bs() const

Get block size.

inline std::span<const value_type> array() const

Get local part of the vector (const version)

inline std::span<value_type> mutable_array()

Get local part of the vector.

namespace impl

Functions

template<int BS0, int BS1, typename OP, typename U, typename V, typename W, typename X, typename Y>
void insert_csr(U &&data, const V &cols, const W &row_ptr, const X &x, const Y &xrows, const Y &xcols, OP op, typename Y::value_type num_rows)

Incorporate data into a CSR matrix.

Note

In the case of block data, where BS0 or BS1 are greater than one, the layout of the input data is still the same. For example, the following can be inserted into the top-left corner with xrows={0,1} and xcols={0,1}, BS0=2, BS1=2 and x={0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15}:

  0  1  | 2  3
  4  5  | 6  7
  ------|------
  8  9  | 10 11
  12 13 | 14 15

Template Parameters:
  • BS0 – Row block size (of both matrix and data)

  • BS1 – Column block size (of both matrix and data)

  • OP – The operation (usually “set” or “add”)

Parameters:
  • data[out] The CSR matrix data

  • cols[in] The CSR column indices

  • row_ptr[in] The pointer to the ith row in the CSR data

  • x[in] The m by n dense block of values (row-major) to add to the matrix

  • xrows[in] The row indices of x

  • xcols[in] The column indices of x

  • op[in] The operation (set or add)

  • num_rows[in] The maximum row index that can be set. Used when debugging to check that rows beyond a permitted range are not being set.

template<int BS0, int BS1, typename OP, typename U, typename V, typename W, typename X, typename Y>
void insert_blocked_csr(U &&data, const V &cols, const W &row_ptr, const X &x, const Y &xrows, const Y &xcols, OP op, typename Y::value_type num_rows)

Incorporate blocked data with given block sizes into a non-blocked MatrixCSR.

Note

The matrix block size must be 1 (bs=1). The matrix sparsity must be correct to accept the data.

Note

see insert_csr for data layout

Template Parameters:
  • BS0 – Row block size of Data

  • BS1 – Column block size of Data

  • OP – The operation (usually “set” or “add”)

Parameters:
  • data[out] The CSR matrix data

  • cols[in] The CSR column indices

  • row_ptr[in] The pointer to the ith row in the CSR data

  • x[in] The m by n dense block of values (row-major) to add to the matrix

  • xrows[in] The row indices of x

  • xcols[in] The column indices of x

  • op[in] The operation (set or add)

  • num_rows[in] The maximum row index that can be set. Used when debugging to check that rows beyond a permitted range are not being set.

template<typename OP, typename U, typename V, typename W, typename X, typename Y>
void insert_nonblocked_csr(U &&data, const V &cols, const W &row_ptr, const X &x, const Y &xrows, const Y &xcols, OP op, typename Y::value_type num_rows, int bs0, int bs1)

Incorporate non-blocked data into a blocked matrix (Data block size = 1)

Note

Matrix sparsity must be correct to accept the data

Note

see insert_csr for data layout

Parameters:
  • data[out] The CSR matrix data

  • cols[in] The CSR column indices

  • row_ptr[in] The pointer to the ith row in the CSR data

  • x[in] The m by n dense block of values (row-major) to add to the matrix

  • xrows[in] The row indices of x

  • xcols[in] The column indices of x

  • op[in] The operation (set or add)

  • num_rows[in] The maximum row index that can be set. Used when debugging to check that rows beyond a permitted range are not being set.

  • bs0[in] Row block size of Matrix

  • bs1[in] Column block size of Matrix

namespace petsc

PETSc linear algebra functions.

Functions

void error(int error_code, std::string filename, std::string petsc_function)

Print error message for PETSc calls that return an error.

std::vector<Vec> create_vectors(MPI_Comm comm, const std::vector<std::span<const PetscScalar>> &x)

Create PETSc vectors from the local data. The data is copied into the PETSc vectors and is not shared.

Note

Caller is responsible for destroying the returned object

Parameters:
  • comm[in] The MPI communicator

  • x[in] The vector data owned by the calling rank. All components must have the same length.

Returns:

Array of PETSc vectors

Vec create_vector(const common::IndexMap &map, int bs)

Create a ghosted PETSc Vec

Note

Caller is responsible for destroying the returned object

Parameters:
  • map[in] The index map describing the parallel layout (by block)

  • bs[in] The block size

Returns:

A PETSc Vec

Vec create_vector(MPI_Comm comm, std::array<std::int64_t, 2> range, std::span<const std::int64_t> ghosts, int bs)

Create a ghosted PETSc Vec from a local range and ghost indices

Note

Caller is responsible for freeing the returned object

Parameters:
  • comm[in] The MPI communicator

  • range[in] The local ownership range (by blocks)

  • ghosts[in] Ghost blocks

  • bs[in] The block size. The total number of local entries is bs * (range[1] - range[0]).

Returns:

A PETSc Vec

Vec create_vector_wrap(const common::IndexMap &map, int bs, std::span<const PetscScalar> x)

Create a PETSc Vec that wraps the data in an array

Note

The array x must be kept alive to use the PETSc Vec object

Note

The caller should call VecDestroy to free the returned PETSc Vec.

Parameters:
  • map[in] The index map that describes the parallel layout of the distributed vector (by block)

  • bs[in] Block size

  • x[in] The local part of the vector, including ghost entries

Returns:

A PETSc Vec object that shares the data in x

template<class V>
Vec create_vector_wrap(const la::Vector<V> &x)

Create a PETSc Vec that wraps the data in an array

Parameters:

x[in] The vector to be wrapped

Returns:

A PETSc Vec object that shares the data in x
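A sketch of wrapping an existing dolfinx::la::Vector without copying; x must stay alive while the Vec is in use, and the caller destroys the Vec:

  dolfinx::la::Vector<PetscScalar> x(map, 1);
  Vec x_petsc = dolfinx::la::petsc::create_vector_wrap(x);
  // ... use x_petsc with PETSc ...
  VecDestroy(&x_petsc);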

std::vector<IS> create_index_sets(const std::vector<std::pair<std::reference_wrapper<const common::IndexMap>, int>> &maps)

Compute PETSc IndexSets (IS) for a stack of index maps.

If map[0] = {0, 1, 2, 3, 4, 5, 6} and map[1] = {0, 1, 2, 4} (in local indices) then IS[0] = {0, 1, 2, 3, 4, 5, 6} and IS[1] = {7, 8, 9, 10}.

Todo:

This function could take just the local sizes.

Note

The caller is responsible for destruction of each IS.

Parameters:

maps[in] Vector of IndexMaps and corresponding block sizes

Returns:

Vector of PETSc Index Sets, created on PETSC_COMM_SELF

std::vector<std::vector<PetscScalar>> get_local_vectors(const Vec x, const std::vector<std::pair<std::reference_wrapper<const common::IndexMap>, int>> &maps)

Copy blocks from Vec into local vectors.

void scatter_local_vectors(Vec x, const std::vector<std::span<const PetscScalar>> &x_b, const std::vector<std::pair<std::reference_wrapper<const common::IndexMap>, int>> &maps)

Scatter local vectors to Vec.

Mat create_matrix(MPI_Comm comm, const SparsityPattern &sp, std::string type = std::string())

Create a PETSc Mat. Caller is responsible for destroying the returned object.

MatNullSpace create_nullspace(MPI_Comm comm, std::span<const Vec> basis)

Create a PETSc MatNullSpace. Caller is responsible for destruction of the returned object.

Parameters:
  • comm[in] The MPI communicator

  • basis[in] The nullspace basis vectors

Returns:

A PETSc nullspace object

class KrylovSolver
#include <petsc.h>

This class implements Krylov methods for linear systems of the form Ax = b. It is a wrapper for the Krylov solvers of PETSc.

Public Functions

explicit KrylovSolver(MPI_Comm comm)

Create Krylov solver for a particular method and named preconditioner

KrylovSolver(KSP ksp, bool inc_ref_count)

Create solver wrapper of a PETSc KSP object

Parameters:
  • ksp[in] The PETSc KSP object. It should already have been created

  • inc_ref_count[in] Increment the reference count on ksp if true

KrylovSolver(KrylovSolver &&solver)

Move constructor.

~KrylovSolver()

Destructor.

KrylovSolver &operator=(KrylovSolver &&solver)

Move assignment.

void set_operator(const Mat A)

Set operator (Mat)

void set_operators(const Mat A, const Mat P)

Set operator and preconditioner matrix (Mat)

int solve(Vec x, const Vec b, bool transpose = false) const

Solve linear system Ax = b and return number of iterations (A^t x = b if transpose is true)
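A sketch of a typical solve, assuming A is an assembled PETSc Mat and b, x are compatible Vecs; the options prefix "sys0_" is an arbitrary example:

  dolfinx::la::petsc::KrylovSolver solver(MPI_COMM_WORLD);
  solver.set_options_prefix("sys0_");
  dolfinx::la::petsc::options::set("sys0_ksp_type", "cg");
  dolfinx::la::petsc::options::set("sys0_pc_type", "jacobi");
  solver.set_from_options();
  solver.set_operator(A);
  int its = solver.solve(x, b);   // returns the number of iterations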

void set_options_prefix(std::string options_prefix)

Sets the prefix used by PETSc when searching the PETSc options database

std::string get_options_prefix() const

Returns the prefix used by PETSc when searching the PETSc options database

void set_from_options() const

Set options from PETSc options database.

KSP ksp() const

Return PETSc KSP pointer.

class Matrix : public Operator
#include <petsc.h>

This class is a simple wrapper for a PETSc matrix pointer (Mat). Its main purpose is to assist memory management of PETSc Mat objects.

For advanced usage, access the PETSc Mat pointer using the function mat() and use the standard PETSc interface.

Public Types

enum class AssemblyType : std::int32_t

Assembly type. FINAL corresponds to PETSc MAT_FINAL_ASSEMBLY; FLUSH corresponds to PETSc MAT_FLUSH_ASSEMBLY.

Values:

enumerator FINAL
enumerator FLUSH

Public Functions

Matrix(MPI_Comm comm, const SparsityPattern &sp, std::string type = std::string())

Create holder for a PETSc Mat object from a sparsity pattern.

Matrix(Mat A, bool inc_ref_count)

Create holder of a PETSc Mat object/pointer. The Mat A object should already be created. If inc_ref_count is true, the reference counter of the Mat will be increased. The Mat reference count will always be decreased upon destruction of the petsc::Matrix.

Matrix(Matrix &&A) = default

Move constructor (falls through to base class move constructor)

~Matrix() = default

Destructor.

Matrix &operator=(const Matrix &A) = delete

Assignment operator (deleted)

Matrix &operator=(Matrix &&A) = default

Move assignment operator.

void apply(AssemblyType type)

Finalize assembly of the tensor. The following values are recognized for the type parameter:

Parameters:

type – FINAL corresponds to PETSc MatAssemblyBegin+End(MAT_FINAL_ASSEMBLY); FLUSH corresponds to PETSc MatAssemblyBegin+End(MAT_FLUSH_ASSEMBLY)

double norm(Norm norm_type) const

Return norm of matrix.

void set_options_prefix(std::string options_prefix)

Sets the prefix used by PETSc when searching the options database

std::string get_options_prefix() const

Returns the prefix used by PETSc when searching the options database

void set_from_options()

Call PETSc function MatSetFromOptions on the PETSc Mat object.

Public Static Functions

static inline auto set_fn(Mat A, InsertMode mode)

Return a function with an interface for adding or inserting values into the matrix A (calls MatSetValuesLocal)

Parameters:
  • A[in] The matrix to set values in

  • mode[in] The PETSc insert mode (ADD_VALUES, INSERT_VALUES, …)
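A sketch of direct use of the returned insertion function, assuming sp is a finalized SparsityPattern containing the entry (0, 0):

  dolfinx::la::petsc::Matrix A(MPI_COMM_WORLD, sp);
  auto add = dolfinx::la::petsc::Matrix::set_fn(A.mat(), ADD_VALUES);
  std::array<std::int32_t, 1> rows{0}, cols{0};
  std::array<PetscScalar, 1> vals{1.0};
  add(rows, cols, vals);          // forwards to MatSetValuesLocal
  A.apply(dolfinx::la::petsc::Matrix::AssemblyType::FINAL);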

static inline auto set_block_fn(Mat A, InsertMode mode)

Return a function with an interface for adding or inserting values into the matrix A using blocked indices (calls MatSetValuesBlockedLocal)

Parameters:
  • A[in] The matrix to set values in

  • mode[in] The PETSc insert mode (ADD_VALUES, INSERT_VALUES, …)

static inline auto set_block_expand_fn(Mat A, int bs0, int bs1, InsertMode mode)

Return a function with an interface for adding or inserting blocked values to the matrix A using non-blocked insertion (calls MatSetValuesLocal). Internally it expands the blocked indices into non-blocked arrays.

Parameters:
  • A[in] The matrix to set values in

  • bs0[in] Block size for the matrix rows

  • bs1[in] Block size for the matrix columns

  • mode[in] The PETSc insert mode (ADD_VALUES, INSERT_VALUES, …)

class Operator
#include <petsc.h>

This class is a base class for matrices that can be used in petsc::KrylovSolver.

Subclassed by Matrix

Public Functions

Operator(Mat A, bool inc_ref_count)

Constructor.

Operator(Operator &&A)

Move constructor.

virtual ~Operator()

Destructor.

Operator &operator=(const Operator &A) = delete

Assignment operator (deleted)

Operator &operator=(Operator &&A)

Move assignment operator.

std::array<std::int64_t, 2> size() const

Return number of rows and columns (num_rows, num_cols). PETSc returns -1 if size has not been set.

Vec create_vector(std::size_t dim) const

Create a vector z that is compatible with the matrix-vector product y = Ax. In the parallel case, both the size and the layout are important.

Parameters:

dim[in] The dimension (axis): dim = 0 –> z = y, dim = 1 –> z = x

Mat mat() const

Return PETSc Mat pointer.

class Vector
#include <petsc.h>

A simple wrapper for a PETSc vector pointer (Vec). Its main purpose is to assist with memory/lifetime management of PETSc Vec objects.

Access the underlying PETSc Vec pointer using the function Vector::vec() and use the full PETSc interface.

Public Functions

Vector(const common::IndexMap &map, int bs)

Create a vector

Note

Collective

Parameters:
  • map[in] Index map describing the parallel layout

  • bs[in] the block size

Vector(Vector &&x)

Move constructor.

Vector(Vec x, bool inc_ref_count)

Create holder of a PETSc Vec object/pointer. The Vec x object should already be created. If inc_ref_count is true, the reference counter of the Vec object will be increased. The Vec reference count will always be decreased upon destruction of the petsc::Vector.

Note

Collective

Parameters:
  • x[in] The PETSc Vec

  • inc_ref_count[in] True if the reference count of x should be incremented

virtual ~Vector()

Destructor.

Vector &operator=(Vector &&x)

Move Assignment operator.

Vector copy() const

Create a copy of the vector

Note

Collective

std::int64_t size() const

Return global size of the vector.

std::int32_t local_size() const

Return local size of the vector (belonging to the calling rank)

std::array<std::int64_t, 2> local_range() const

Return ownership range for calling rank.

MPI_Comm comm() const

Return MPI communicator.

void set_options_prefix(std::string options_prefix)

Sets the prefix used by PETSc when searching the options database.

std::string get_options_prefix() const

Returns the prefix used by PETSc when searching the options database

void set_from_options()

Call PETSc function VecSetFromOptions on the underlying Vec object.

Vec vec() const

Return pointer to PETSc Vec object.

namespace options

These functions permit users to set and retrieve PETSc options via the PETSc options database. The option must not be prefixed by ‘-’, e.g.

la::petsc::options::set("mat_mumps_icntl_14", 40);

Functions

void set(std::string option)

Set PETSc option that takes no value.

template<typename T>
void set(std::string option, const T value)

Generic function for setting PETSc option.

void clear(std::string option)

Clear a PETSc option.

void clear()

Clear PETSc global options database.