Matrices are just as easy and intuitive to create as vectors. Still, there are a few rules to be aware of:
- The elements of a StaticMatrix or HybridMatrix are default initialized (i.e. built-in data types are initialized to 0, class types are initialized via the default constructor).
- The elements of a DynamicMatrix or CompressedMatrix remain uninitialized if they are of built-in type and are default constructed if they are of class type.
The DynamicMatrix, HybridMatrix, and CompressedMatrix classes offer a constructor that allows the matrix to be given a specific number of rows and columns immediately:
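For illustration, a minimal sketch of these constructors (element types and dimensions are arbitrary):

```cpp
#include <blaze/Math.h>
using namespace blaze;

DynamicMatrix<double> M1( 3UL, 4UL );         // 3x4 dynamic matrix of double
HybridMatrix<int,6UL,6UL> M2( 2UL, 3UL );     // 2x3 matrix within a 6x6 hybrid matrix
CompressedMatrix<float> M3( 100UL, 200UL );   // 100x200 compressed (sparse) matrix
```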
Note that dense matrices (in this case DynamicMatrix and HybridMatrix) immediately allocate enough capacity for all matrix elements. Sparse matrices on the other hand (in this example CompressedMatrix) merely acquire the size, but don't necessarily allocate memory.
All dense matrix classes offer a constructor for a direct, homogeneous initialization of all matrix elements. In contrast, for sparse matrices the predicted number of non-zero elements can be specified.
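A brief sketch of both variants (the initial values and the predicted number of non-zero elements are arbitrary):

```cpp
blaze::StaticMatrix<int,3UL,3UL> M1( 2 );            // all nine elements initialized to 2
blaze::DynamicMatrix<double> M2( 3UL, 4UL, 7.0 );    // 3x4 matrix, all elements initialized to 7.0
blaze::CompressedMatrix<int> M3( 4UL, 5UL, 6UL );    // 4x5 sparse matrix with capacity for 6 non-zero elements
```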
Alternatively, all dense matrix classes offer a constructor for an initialization with a dynamic or static array. If the matrix is initialized from a dynamic array, the constructor expects the dimensions of the values provided by the array as first and second argument, and the array itself as third argument. In case of a static array, the fixed size of the array is used:
In addition, all dense and sparse matrix classes can be directly initialized by means of an initializer list:
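A small illustrative sketch (the values are arbitrary):

```cpp
blaze::DynamicMatrix<int> M1{ { 1, 2, 3 },
                              { 4, 5 },
                              { 7, 8, 9 } };        // 3x3 matrix; the missing element (1,2) is 0

blaze::StaticMatrix<int,3UL,3UL> M2{ { 1, 2, 3 },
                                     { 4, 5 },
                                     { 7, 8, 9 } }; // missing values are default initialized
```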
Dynamically sized matrices (such as HybridMatrix, DynamicMatrix or CompressedMatrix) are sized according to the size of the initializer list and all their elements are (copy) assigned the values of the list. For fixed size matrices (such as StaticMatrix) missing values are default initialized. In case the size of the top-level initializer list does not match the number of rows of the matrix, or the size of any nested list exceeds the number of columns, a std::invalid_argument exception is thrown. In case of sparse matrices, only the non-zero elements are used to initialize the matrix.
All dense and sparse matrices can be created as a copy of another dense or sparse matrix. Note that it is not possible to create a StaticMatrix as a copy of a matrix with a different number of rows and/or columns:
There are several types of assignment to dense and sparse matrices: Homogeneous Assignment, Array Assignment, Copy Assignment, and Compound Assignment.
It is possible to assign the same value to all elements of a dense matrix. All dense matrix classes provide an according assignment operator:
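For instance (illustrative values):

```cpp
blaze::StaticMatrix<int,3UL,2UL> M1;           // 3x2 static matrix
blaze::DynamicMatrix<double> M2( 2UL, 5UL );   // 2x5 dynamic matrix

M1 = 4;     // assigns 4 to all six elements of M1
M2 = 1.5;   // assigns 1.5 to all ten elements of M2
```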
Dense matrices can also be assigned a static array. Note that the dimensions of the static array have to match the size of a StaticMatrix, whereas a DynamicMatrix is resized according to the array dimensions:
Alternatively, it is possible to directly assign an initializer list to a dense or sparse matrix:
Dynamically sized matrices (such as HybridMatrix, DynamicMatrix or CompressedMatrix) are resized according to the size of the initializer list and all their elements are (copy) assigned the values of the list. For fixed size matrices (such as StaticMatrix) missing values are reset to their default value. In case the size of the top-level initializer list does not match the number of rows of the matrix, or the size of any nested list exceeds the number of columns, a std::invalid_argument exception is thrown. In case of sparse matrices, only the non-zero elements are considered.
All kinds of matrices can be assigned to each other. The only restriction is that, since a StaticMatrix cannot change its size, the assigned matrix must match both in the number of rows and in the number of columns.
Compound assignment is also available for matrices: addition assignment, subtraction assignment, and multiplication assignment. In contrast to plain assignment, however, the number of rows and columns of the two operands have to match according to the arithmetic operation.
Note that the multiplication assignment potentially changes the number of columns of the target matrix:
Since a StaticMatrix cannot change its size, only a square StaticMatrix can be used in a multiplication assignment with other square matrices of the same dimensions.
The easiest way to access a specific dense or sparse matrix element is via the function call operator. The indices to access a matrix are zero-based:
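A minimal sketch of the element access (dimensions and values are arbitrary):

```cpp
blaze::DynamicMatrix<int> M1( 4UL, 6UL );
M1(0,0) = 1;            // write access to the element in row 0, column 0
M1(3,5) = 2;            // write access to the last element (indices are zero-based)
const int a = M1(2,4);  // read access

blaze::CompressedMatrix<int> M2( 5UL, 3UL );
M2(4,2) = 3;            // inserts the element (4,2) into the sparse matrix
```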
Since dense matrices allocate enough memory for all contained elements, using the function call operator on a dense matrix directly returns a reference to the accessed value. In case of a sparse matrix, if the accessed value is currently not contained in the matrix, the value is inserted into the matrix prior to returning a reference to the value, which can be much more expensive than the direct access to a dense matrix. Consider the following example:
Although the compressed matrix is only used for read access within the for loop, using the function call operator temporarily inserts 16 non-zero elements into the matrix. Therefore the preferred way to traverse the non-zero elements of a sparse matrix is to use iterators.
All matrices (sparse as well as dense) offer an alternate way to traverse all contained elements by iterator via the begin(), cbegin(), end() and cend() functions. Note that it is not possible to traverse all elements of the matrix at once; elements can only be traversed in a row/column-wise fashion. In case of a non-const matrix, begin() and end() return an Iterator, which allows a manipulation of the (non-zero) value; in case of a constant matrix or in case cbegin() or cend() are used, a ConstIterator is returned:
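A hedged sketch of both iterator flavors (the matrices are assumed to be row-major; the loop bodies are illustrative):

```cpp
// Traversing the elements of row 0 of a dense matrix via Iterator.
blaze::DynamicMatrix<int,blaze::rowMajor> A( 4UL, 6UL );
for( blaze::DynamicMatrix<int,blaze::rowMajor>::Iterator it=A.begin(0UL); it!=A.end(0UL); ++it ) {
   *it = 5;   // dense matrix iterators grant direct access to the element value
}

// Traversing the non-zero elements of row 1 of a sparse matrix via ConstIterator.
blaze::CompressedMatrix<int,blaze::rowMajor> B( 4UL, 6UL );
for( blaze::CompressedMatrix<int,blaze::rowMajor>::ConstIterator it=B.cbegin(1UL); it!=B.cend(1UL); ++it ) {
   const int         value = it->value();   // value of the non-zero element
   const std::size_t index = it->index();   // column index of the non-zero element
}
```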
Note that begin(), cbegin(), end(), and cend() are also available as free functions:
Whereas a dense matrix always provides enough capacity to store all matrix elements, a sparse matrix only stores the non-zero elements. Therefore it is necessary to explicitly add elements to the matrix.
The first possibility to add elements to a sparse matrix is the function call operator:
In case the element at the given position is not yet contained in the sparse matrix, it is automatically inserted. Otherwise the old value is replaced by the new value. The operator returns a reference to the sparse matrix element.
An alternative to the function call operator is the set() function: in case the element is not yet contained in the matrix the element is inserted, otherwise the element's value is modified:
The insertion of elements can be better controlled via the insert() function. In contrast to the function call operator and the set() function, it emits an exception in case the element is already contained in the matrix. In order to check for this case, the find() function can be used:
Although the insert() function is very flexible, for performance reasons it is not suited for the setup of large sparse matrices. A very efficient, yet also very low-level way to fill a sparse matrix is the append() function. It requires the sparse matrix to provide enough capacity to insert a new element in the specified row/column. Additionally, the index of the new element must be larger than the index of the previous element in the same row/column. Violating these conditions results in undefined behavior!
The most efficient way to fill a sparse matrix with elements, however, is a combination of the reserve(), append(), and finalize() functions:
Note that the finalize() function has to be explicitly called for each row or column, even for empty ones! Also note that although append() does not allocate new memory, it still invalidates all iterators returned by the end() functions!
The erase() member functions can be used to remove elements from a sparse matrix. The following example gives an impression of the five different flavors of erase():
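A sketch of a few of these flavors (the shown overloads, in particular the predicate-based one, are assumptions based on the description above; dimensions are arbitrary):

```cpp
blaze::CompressedMatrix<int,blaze::rowMajor> A( 42UL, 53UL );
// ... initialization of the sparse matrix

A.erase( 21UL, 23UL );                             // erasing the element (21,23)
A.erase( 21UL, A.begin(21UL), A.end(21UL) );       // erasing all elements of row 21
A.erase( []( int value ){ return value < 0; } );   // erasing all elements with a negative value
```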
A sparse matrix only stores the non-zero elements contained in the matrix. Therefore, whenever accessing a matrix element at a specific position, a lookup operation is required. Whereas the function call operator performs this lookup automatically, it is also possible to use the find(), lowerBound(), and upperBound() member functions for a manual lookup.
The find() function can be used to check whether a specific element is contained in the sparse matrix. It specifically searches for the element at the specified position. In case the element is found, the function returns an iterator to the element. Otherwise an iterator just past the last non-zero element of the according row or column (the end() iterator) is returned. Note that the returned iterator is subject to invalidation due to inserting operations via the function call operator, the set() function or the insert() function!
In case of a row-major matrix, the lowerBound() function returns a row iterator to the first element with an index not less than the given column index. In case of a column-major matrix, it returns a column iterator to the first element with an index not less than the given row index. In combination with the upperBound() function, this function can be used to create a pair of iterators specifying a range of indices. Note that the returned iterator is subject to invalidation due to inserting operations via the function call operator, the set() function or the insert() function!
In case of a row-major matrix, the upperBound() function returns a row iterator to the first element with an index greater than the given column index. In case of a column-major matrix, it returns a column iterator to the first element with an index greater than the given row index. In combination with the lowerBound() function, this function can be used to create a pair of iterators specifying a range of indices. Note that the returned iterator is subject to invalidation due to inserting operations via the function call operator, the set() function or the insert() function!
The current number of rows of a matrix can be acquired via the rows() member function:
Alternatively, the free function rows() can be used to query the current number of rows of a matrix. In contrast to the member function, the free function can also be used to query the number of rows of a matrix expression:
The current number of columns of a matrix can be acquired via the columns() member function:
There is also a free function columns() available, which can also be used to query the number of columns of a matrix expression:
The size() function returns the total number of elements of a matrix:
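A small sketch combining the three queries (here size() is used as a free function; the dimensions are arbitrary):

```cpp
blaze::DynamicMatrix<int> M( 10UL, 8UL );

M.rows();      // returns 10, the current number of rows
M.columns();   // returns 8, the current number of columns
size( M );     // returns 80, i.e. the total number of elements

rows( M + M );      // the free functions can also be applied to matrix expressions
columns( 2 * M );
```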
The total number of elements of a row or column of a dense matrix, including potential padding elements, can be acquired via the spacing() member function. In case of a row-major matrix (i.e. in case the storage order is set to blaze::rowMajor) the function returns the spacing between two rows, in case of a column-major matrix (i.e. in case the storage flag is set to blaze::columnMajor) the function returns the spacing between two columns:
Alternatively, the free function spacing() can be used to query the current number of elements in a row/column.
The capacity() member function returns the internal capacity of a dense or sparse matrix. Note that the capacity of a matrix doesn't have to be equal to the size of a matrix. In case of a dense matrix the capacity will always be greater than or equal to the total number of elements of the matrix. In case of a sparse matrix, the capacity will usually be much less than the total number of elements.
There is also a free function capacity() available to query the capacity. However, please note that this function cannot be used to query the capacity of a matrix expression:
For both dense and sparse matrices the current number of non-zero elements can be queried via the nonZeros() member function. In case of matrices there are two flavors of the nonZeros() function: one returns the total number of non-zero elements in the matrix, the other returns the number of non-zero elements in a specific row (in case of a row-major matrix) or column (in case of a column-major matrix). Sparse matrices directly return their number of non-zero elements, dense matrices traverse their elements and count the number of non-zero elements.
The free nonZeros() function can also be used to query the number of non-zero elements in a matrix expression. However, the result is not the exact number of non-zero elements, but may be a rough estimation:
The isEmpty() function returns whether the total number of elements of the matrix is zero:
The isnan() function provides the means to check a dense or sparse matrix for not-a-number elements:
If at least one element of the matrix is not-a-number, the function returns true, otherwise it returns false. Please note that this function only works for matrices with floating point elements. The attempt to use it for a matrix with a non-floating point element type results in a compile time error.
The isDefault() function returns whether the given dense or sparse matrix is in default state:
A matrix is in default state if it appears to just have been default constructed. All resizable matrices (HybridMatrix, DynamicMatrix, or CompressedMatrix) and CustomMatrix are in default state if their size is equal to zero. A non-resizable matrix (StaticMatrix and all submatrices) is in default state if all its elements are in default state. For instance, in case the matrix is instantiated for a built-in integral or floating point data type, the function returns true in case all matrix elements are 0 and false in case any matrix element is not 0.
Whether a dense or sparse matrix is a square matrix (i.e. if the number of rows is equal to the number of columns) can be checked via the isSquare() function:
Via the isSymmetric() function it is possible to check whether a dense or sparse matrix is symmetric:
Note that non-square matrices are never considered to be symmetric!
In order to check if all matrix elements are identical, the isUniform() function can be used:
Note that in case of a sparse matrix the zero elements are also taken into account!
In order to check if all matrix elements are zero, the isZero() function can be used:
Via the isLower() function it is possible to check whether a dense or sparse matrix is lower triangular:
Note that non-square matrices are never considered to be lower triangular!
Via the isUniLower() function it is possible to check whether a dense or sparse matrix is lower unitriangular:
Note that non-square matrices are never considered to be lower unitriangular!
Via the isStrictlyLower() function it is possible to check whether a dense or sparse matrix is strictly lower triangular:
Note that non-square matrices are never considered to be strictly lower triangular!
Via the isUpper() function it is possible to check whether a dense or sparse matrix is upper triangular:
Note that non-square matrices are never considered to be upper triangular!
Via the isUniUpper() function it is possible to check whether a dense or sparse matrix is upper unitriangular:
Note that non-square matrices are never considered to be upper unitriangular!
Via the isStrictlyUpper() function it is possible to check whether a dense or sparse matrix is strictly upper triangular:
Note that non-square matrices are never considered to be strictly upper triangular!
The isDiagonal() function checks if the given dense or sparse matrix is a diagonal matrix, i.e. if it has only elements on its diagonal and if the non-diagonal elements are default elements:
Note that non-square matrices are never considered to be diagonal!
The isIdentity() function checks if the given dense or sparse matrix is an identity matrix, i.e. if all diagonal elements are 1 and all non-diagonal elements are 0:
Note that non-square matrices are never considered to be identity matrices!
The determinant of a square dense matrix can be computed by means of the det() function:
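For instance (assuming a suitably initialized square matrix):

```cpp
blaze::DynamicMatrix<double,blaze::rowMajor> A( 3UL, 3UL );
// ... initialization of the square matrix

const double d = det( A );   // computes the determinant of the square matrix A
```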
In case the given dense matrix is not a square matrix, a std::invalid_argument exception is thrown.
Note that the det() function can only be used for dense matrices with float, double, complex<float> or complex<double> element type. The attempt to call the function with matrices of any other element type or with a sparse matrix results in a compile time error!
Matrices can be transposed via the trans() function. Row-major matrices are transposed into a column-major matrix and vice versa:
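A brief sketch (the dimensions are arbitrary):

```cpp
blaze::DynamicMatrix<int,blaze::rowMajor>    M1( 5UL, 2UL );
blaze::DynamicMatrix<int,blaze::columnMajor> M2( 3UL, 7UL );

M1 = trans( M2 );    // transposing the column-major matrix M2 into a 7x3 row-major matrix
M1 += trans( M2 );   // the transpose can also be used within arbitrary expressions
```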
The conjugate transpose of a dense or sparse matrix (also called adjoint matrix, Hermitian conjugate, or transjugate) can be computed via the ctrans() function:
Note that the ctrans() function has the same effect as manually applying the conj() and trans() functions in any order:
Via the reverse() function it is possible to reverse the rows or columns of a dense or sparse matrix. The following examples give an impression of both alternatives:
The evaluate() function forces an evaluation of the given matrix expression and enables an automatic deduction of the correct result type of an operation. The following code example demonstrates its intended use for the multiplication of a lower and a strictly lower dense matrix:
In this scenario, the evaluate() function assists in deducing the exact result type of the operation via the auto keyword. Please note that if evaluate() is used in this way, no temporary matrix is created and no copy operation is performed. Instead, the result is directly written to the target matrix due to the return value optimization (RVO). However, if evaluate() is used in combination with an explicit target type, a temporary will be created and a copy operation will be performed if the used type differs from the type returned from the function:
Sometimes it might be desirable to explicitly evaluate a sub-expression within a larger expression. However, please note that evaluate() is not intended to be used for this purpose. This task is more elegantly and efficiently handled by the eval() function:
In contrast to the evaluate() function, eval() can take the complete expression into account and therefore can guarantee the most efficient way to evaluate it (see also Intra-Statement Optimization).
The dimensions of a StaticMatrix are fixed at compile time by the second and third template parameter and a CustomMatrix cannot be resized. In contrast, the number of rows and columns of DynamicMatrix, HybridMatrix, and CompressedMatrix can be changed at runtime:
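A minimal sketch (the chosen dimensions and the use of the optional preserve flag are illustrative):

```cpp
blaze::DynamicMatrix<int,blaze::rowMajor> M1;
M1.resize( 10UL, 20UL );        // resizing to a 10x20 matrix
M1.resize( 6UL, 4UL, false );   // resizing to 6x4 without preserving the existing elements

blaze::CompressedMatrix<double> M2( 3UL, 3UL );
M2.resize( 5UL, 2UL );          // resizing the sparse matrix to 5x2
```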
Note that resizing a matrix invalidates all existing views (see e.g. Submatrices) on the matrix:
When the internal capacity of a matrix is no longer sufficient, the allocation of a larger chunk of memory is triggered. In order to avoid frequent reallocations, the reserve() function can be used up front to set the internal capacity:
Additionally it is possible to reserve memory in a specific row (for a row-major matrix) or column (for a column-major matrix):
The internal capacity of matrices with dynamic memory is preserved in order to minimize the number of reallocations. For that reason, the resize() and reserve() functions can lead to memory overhead. The shrinkToFit() member function can be used to minimize the internal capacity:
Please note that due to padding the capacity might not be reduced exactly to rows() times columns(). Please also note that in case a reallocation occurs, all iterators (including end() iterators), all pointers and references to elements of this matrix are invalidated.
In order to reset all elements of a dense or sparse matrix, the reset() function can be used. The number of rows and columns of the matrix are preserved:
Alternatively, only a single row or column of the matrix can be reset:
In order to reset a row of a column-major matrix or a column of a row-major matrix, use a row or column view (see Rows and Columns).
In order to return a matrix to its default state (i.e. the state of a default constructed matrix), the clear() function can be used:
In addition to the non-modifying trans() function, matrices can be transposed in-place via the transpose() function:
Note however that the transpose operation fails if ...
The ctranspose() function can be used to perform an in-place conjugate transpose operation:
Note however that the conjugate transpose operation fails if ...
Via the swap() function it is possible to completely swap the contents of two matrices of the same type:
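For instance:

```cpp
blaze::DynamicMatrix<int,blaze::rowMajor> M1( 10UL, 15UL );
blaze::DynamicMatrix<int,blaze::rowMajor> M2( 20UL, 10UL );

swap( M1, M2 );   // swapping the contents of M1 and M2
```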
The min() and max() functions can be used for a single matrix or multiple matrices. If passed a single matrix, the functions return the smallest and largest element of the given dense matrix or the smallest and largest non-zero element of the given sparse matrix, respectively:
For more information on the unary min() and max() reduction operations see the Reduction Operations section.
If passed two or more dense matrices, the min() and max() functions compute the componentwise minimum or maximum of the given matrices, respectively:
Please note that sparse matrices can only be used in the unary min() and max() functions. Also note that all forms of the min() and max() functions can be used to compute the smallest and largest element of a matrix expression:
The softmax function, also called the normalized exponential function, of a given dense matrix can be computed via softmax(). The resulting dense matrix consists of real values in the range (0..1], which add up to 1.
Alternatively it is possible to compute the softmax() function row- or columnwise. The resulting dense matrix consists of real values in the range (0..1], which add up to the number of rows or columns, respectively.
The trace() function sums the diagonal elements of a square dense or sparse matrix:
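For instance (illustrative values):

```cpp
blaze::DynamicMatrix<int> A{ { 1, 2, 3 }
                           , { 4, 5, 6 }
                           , { 7, 8, 9 } };

trace( A );   // computes the trace 1 + 5 + 9 = 15
```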
In case the given matrix is not a square matrix, a std::invalid_argument exception is thrown.
The abs() function can be used to compute the absolute values of each element of a matrix. For instance, the following computation
results in the matrix
The sign() function can be used to evaluate the sign of each element of a matrix A. For each element (i,j) the corresponding result is 1 if A(i,j) is greater than zero, 0 if A(i,j) is zero, and -1 if A(i,j) is less than zero. For instance, the following use of the sign() function
results in the matrix
The floor(), ceil(), trunc(), and round() functions can be used to round down/up each element of a matrix, respectively:
The conj() function can be applied on a dense or sparse matrix to compute the complex conjugate of each element of the matrix:
Additionally, matrices can be conjugated in-place via the conjugate() function:
The real() function can be used on a dense or sparse matrix to extract the real part of each element of the matrix:
The imag() function can be used on a dense or sparse matrix to extract the imaginary part of each element of the matrix:
Via the sqrt() and invsqrt() functions the (inverse) square root of each element of a matrix can be computed:
Note that in case of sparse matrices only the non-zero elements are taken into account!
The cbrt() and invcbrt() functions can be used to compute the (inverse) cubic root of each element of a matrix:
Note that in case of sparse matrices only the non-zero elements are taken into account!
The hypot() function can be used to compute the componentwise hypotenuse for a pair of dense matrices:
The clamp() function can be used to restrict all elements of a matrix to a specific range:
Note that in case of sparse matrices only the non-zero elements are taken into account!
The pow() function can be used to compute the exponential value of each element of a matrix. If passed a matrix and a numeric exponent, the function computes the exponential value of each element of the matrix using the same exponent. If passed a second matrix, the function computes the componentwise exponential value:
The exp(), exp2() and exp10() functions compute the base e/2/10 exponential of each element of a matrix, respectively:
Note that in case of sparse matrices only the non-zero elements are taken into account!
The log(), log2() and log10() functions can be used to compute the natural, binary and common logarithm of each element of a matrix:
The following trigonometric functions are available for both dense and sparse matrices:
Note that in case of sparse matrices only the non-zero elements are taken into account!
The following hyperbolic functions are available for both dense and sparse matrices:
The multi-valued inverse tangent is available for a pair of dense matrices:
The erf() and erfc() functions compute the (complementary) error function of each element of a matrix:
Note that in case of sparse matrices only the non-zero elements are taken into account!
Via the unary and binary map() functions it is possible to execute componentwise custom operations on matrices. The unary map() function can be used to apply a custom operation on each element of a dense or sparse matrix. For instance, the following example demonstrates a custom square root computation via a lambda:
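A minimal sketch of such a unary operation (the matrix values and the lambda are illustrative):

```cpp
blaze::DynamicMatrix<double> A{ { 1.0, 4.0 }, { 9.0, 16.0 } }, B;

// Applying a custom square root operation to every element of A.
B = map( A, []( double d ){ return std::sqrt( d ); } );
```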
The binary map() function can be used to apply an operation pairwise to the elements of two dense matrices. The following example demonstrates the merging of two matrices of double precision values into a matrix of double precision complex numbers:
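A hedged sketch of such a binary operation (matrix names and values are illustrative):

```cpp
blaze::DynamicMatrix<double> realPart{ { 2.1, -4.2 }, { 1.0, 0.6 } };   // matrix of real parts
blaze::DynamicMatrix<double> imagPart{ { 0.3,  1.4 }, { 2.9, -3.4 } };  // matrix of imaginary parts

blaze::DynamicMatrix< std::complex<double> > cplx;

// Combining the two matrices element-wise into a matrix of complex numbers.
cplx = map( realPart, imagPart, []( double r, double i ){ return std::complex<double>( r, i ); } );
```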
Although the computation can be parallelized it is not vectorized and thus cannot perform at peak performance. However, it is also possible to create vectorized custom operations. See Custom Operations for a detailed overview of the possibilities of custom operations.
Please note that unary custom operations on vectors have been introduced in Blaze 3.0 in form of the forEach() function. With the introduction of binary custom functions, the forEach() function has been renamed to map(). The forEach() function can still be used (even for binary custom operations), but it might be deprecated in future releases of Blaze.
The reduce() function performs either a total reduction, a rowwise reduction or a columnwise reduction of the elements of the given dense matrix or the non-zero elements of the given sparse matrix. The following examples demonstrate the total reduction of a dense and sparse matrix:
By specifying blaze::columnwise or blaze::rowwise the reduce() function performs a column-wise or row-wise reduction, respectively. In case blaze::columnwise is specified, the (non-zero) elements of the matrix are reduced column-wise and the result is a row vector. In case blaze::rowwise is specified, the (non-zero) elements of the matrix are reduced row-wise and the result is a column vector:
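An illustrative sketch using an addition lambda (the matrix values and the commented results are examples, not taken from the library reference):

```cpp
blaze::DynamicMatrix<int> A{ { 1, 0, 2 }
                           , { 1, 3, 4 } };

// Total reduction by means of addition; the result is 11.
const int total = reduce( A, []( int a, int b ){ return a + b; } );

// Column-wise reduction; the result is the row vector ( 2, 3, 6 ).
blaze::DynamicVector<int,blaze::rowVector> colsum =
   reduce<blaze::columnwise>( A, []( int a, int b ){ return a + b; } );

// Row-wise reduction; the result is the column vector ( 3, 8 ).
blaze::DynamicVector<int,blaze::columnVector> rowsum =
   reduce<blaze::rowwise>( A, []( int a, int b ){ return a + b; } );
```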
As demonstrated in the examples, it is possible to pass any binary callable as custom reduction operation. However, for instance in the case of lambdas the vectorization of the reduction operation is compiler dependent and might not perform at peak performance. Note that it is also possible to create vectorized custom operations; see Custom Operations for a detailed overview of the possibilities of custom operations.
Please note that the evaluation order of the reduce() function is unspecified. Thus the behavior is non-deterministic if the given reduction operation is not associative or not commutative. Also, the operation is undefined if the given reduction operation modifies the values.
The sum() function reduces the elements of the given dense matrix or the non-zero elements of the given sparse matrix by means of addition:
By specifying blaze::columnwise or blaze::rowwise the sum() function performs a column-wise or row-wise summation, respectively. In case blaze::columnwise is specified, the (non-zero) elements of the matrix are summed up column-wise and the result is a row vector. In case blaze::rowwise is specified, the (non-zero) elements of the matrix are summed up row-wise and the result is a column vector:
Please note that the evaluation order of the sum() function is unspecified.
The prod() function reduces the elements of the given dense matrix or the non-zero elements of the given sparse matrix by means of multiplication:
By specifying blaze::columnwise or blaze::rowwise the prod() function performs a column-wise or row-wise multiplication, respectively. In case blaze::columnwise is specified, the (non-zero) elements of the matrix are multiplied column-wise and the result is a row vector. In case blaze::rowwise is specified, the (non-zero) elements of the matrix are multiplied row-wise and the result is a column vector:
Please note that the evaluation order of the prod() function is unspecified.
The unary min() function returns the smallest element of the given dense matrix or the smallest non-zero element of the given sparse matrix. This function can only be used for element types that support the smaller-than relationship. In case the given matrix currently has either 0 rows or 0 columns, the returned value is the default value (e.g. 0 in case of fundamental data types).
By specifying blaze::columnwise or blaze::rowwise the min() function determines the smallest (non-zero) element in each row or column, respectively. In case blaze::columnwise is specified, the smallest (non-zero) element of each column is determined and the result is a row vector. In case blaze::rowwise is specified, the smallest (non-zero) element of each row is determined and the result is a column vector.
The unary max() function returns the largest element of the given dense matrix or the largest non-zero element of the given sparse matrix. This function can only be used for element types that support the smaller-than relationship. In case the given matrix currently has either 0 rows or 0 columns, the returned value is the default value (e.g. 0 in case of fundamental data types).
By specifying blaze::columnwise or blaze::rowwise the max() function determines the largest (non-zero) element in each row or column, respectively. In case blaze::columnwise is specified, the largest (non-zero) element of each column is determined and the result is a row vector. In case blaze::rowwise is specified, the largest (non-zero) element of each row is determined and the result is a column vector.
The norm() function computes the L2 norm of the given dense or sparse matrix:
The sqrNorm() function computes the squared L2 norm of the given dense or sparse matrix:
The l1Norm() function computes the L1 norm of the given dense or sparse matrix:
The l2Norm() function computes the L2 norm of the given dense or sparse matrix:
The l3Norm() function computes the L3 norm of the given dense or sparse matrix:
The l4Norm() function computes the L4 norm of the given dense or sparse matrix:
The lpNorm() function computes the general Lp norm of the given dense or sparse matrix, where the norm is specified by either a compile time or a runtime argument:
The linfNorm() and maxNorm() functions compute the infinity/maximum norm of the given dense or sparse matrix:
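A combined sketch of the norm functions described above (the commented results refer to the illustrative 2x2 matrix):

```cpp
blaze::DynamicMatrix<double> A{ { 3.0, 0.0 }
                              , { 0.0, 4.0 } };

norm( A );                 // L2 (Frobenius) norm: 5
sqrNorm( A );              // squared L2 norm: 25
l1Norm( A );               // L1 norm: 7
blaze::lpNorm<3UL>( A );   // Lp norm with compile time argument p = 3
blaze::lpNorm( A, 2.3 );   // Lp norm with runtime argument p = 2.3
maxNorm( A );              // maximum norm: 4
```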
By means of the uniform() function it is possible to expand a scalar value into a dense, uniform matrix. By default, the resulting uniform matrix is a row-major matrix, but it is possible to specify the storage order explicitly:
The (arithmetic) mean of a dense or sparse matrix can be computed via the mean() function. In case of a sparse matrix, both the non-zero and zero elements are taken into account. The following example demonstrates the computation of the mean of a dense matrix:
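For instance (illustrative values):

```cpp
blaze::DynamicMatrix<int> A{ { 1, 4, 3, 6 }
                           , { 2, 6, 3, 1 } };

const double m = mean( A );   // results in 3.25 (i.e. 26/8)
```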
In case the number of rows or columns of the given matrix is 0, a std::invalid_argument is thrown.
Alternatively it is possible to compute the row- or columnwise mean:
In case the rowwise mean is computed and the number of columns of the given matrix is 0 or in case the columnwise mean is computed and the number of rows of the given matrix is 0, a std::invalid_argument is thrown.
The variance of a dense or sparse matrix can be computed via the var() function. In case of a sparse matrix, both the non-zero and zero elements are taken into account. The following example demonstrates the computation of the variance of a dense matrix:
In case the size of the given matrix is smaller than 2, a std::invalid_argument is thrown.
Alternatively it is possible to compute the row- or columnwise variance:
In case the rowwise variance is computed and the number of columns of the given matrix is smaller than 2, or in case the columnwise variance is computed and the number of rows of the given matrix is smaller than 2, a std::invalid_argument is thrown.
The standard deviation of a dense or sparse matrix can be computed via the stddev() function. In case of a sparse matrix, both the non-zero and zero elements are taken into account. The following example demonstrates the computation of the standard deviation of a dense matrix:
In case the size of the given matrix is smaller than 2, a std::invalid_argument is thrown.
Alternatively it is possible to compute the row- or columnwise standard deviation:
In case the rowwise standard deviation is computed and the number of columns of the given matrix is smaller than 2, or in case the columnwise standard deviation is computed and the number of rows of the given matrix is smaller than 2, a std::invalid_argument is thrown.
The declsym() operation can be used to explicitly declare any matrix or matrix expression as symmetric:
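A brief sketch of both typical uses (the matrices are assumed to be suitably sized and A to be truly symmetric):

```cpp
blaze::DynamicMatrix<double> A, B, C;
// ... resizing and initialization; A is assumed to be symmetric

B = declsym( A );        // declaring the matrix A as symmetric
C = declsym( A ) * B;    // declaring the left-hand side operand of the multiplication as symmetric
```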
Any matrix or matrix expression that has been declared as symmetric via declsym() will gain all the benefits of a symmetric matrix, which range from reduced runtime checking to a considerable speed-up in computations:
Note that the declsym() operation has the semantics of a cast: the caller is completely responsible and the system trusts the given information. Declaring a non-symmetric matrix or matrix expression as symmetric via the declsym() operation leads to undefined behavior (which can be violated invariants or wrong computation results)!
The declherm() operation can be used to explicitly declare any matrix or matrix expression as Hermitian:
Any matrix or matrix expression that has been declared as Hermitian via declherm() will gain all the benefits of an Hermitian matrix, which range from reduced runtime checking to a considerable speed-up in computations:
Note that the declherm() operation has the semantics of a cast: the caller is completely responsible and the system trusts the given information. Declaring a non-Hermitian matrix or matrix expression as Hermitian via the declherm() operation leads to undefined behavior (which can be violated invariants or wrong computation results)!
The decllow() operation can be used to explicitly declare any matrix or matrix expression as lower triangular:
Any matrix or matrix expression that has been declared as lower triangular via decllow() will gain all the benefits of a lower triangular matrix, which range from reduced runtime checking to a considerable speed-up in computations:
Note that the decllow() operation has the semantics of a cast: the caller is completely responsible and the system trusts the given information. Declaring a non-lower matrix or matrix expression as lower triangular via the decllow() operation leads to undefined behavior (which can be violated invariants or wrong computation results)!
The declupp() operation can be used to explicitly declare any matrix or matrix expression as upper triangular:
Any matrix or matrix expression that has been declared as upper triangular via declupp() will gain all the benefits of an upper triangular matrix, which range from reduced runtime checking to a considerable speed-up in computations:
Note that the declupp() operation has the semantics of a cast: the caller is completely responsible and the system trusts the given information. Declaring a non-upper matrix or matrix expression as upper triangular via the declupp() operation leads to undefined behavior (which can be violated invariants or wrong computation results)!
The decldiag() operation can be used to explicitly declare any matrix or matrix expression as diagonal:
Any matrix or matrix expression that has been declared as diagonal via decldiag() will gain all the benefits of a diagonal matrix, which range from reduced runtime checking to a considerable speed-up in computations:
Note that the decldiag() operation has the semantics of a cast: the caller is completely responsible and the system trusts the given information. Declaring a non-diagonal matrix or matrix expression as diagonal via the decldiag() operation leads to undefined behavior (which can be violated invariants or wrong computation results)!
The declid() operation can be used to explicitly declare any matrix or matrix expression as an identity matrix:
Any matrix or matrix expression that has been declared as an identity matrix via declid() will gain all the benefits of an identity matrix, which range from reduced runtime checking to a considerable speed-up in computations:
Note that the declid() operation has the semantics of a cast: the caller is completely responsible and the system trusts the given information. Declaring a non-identity matrix or matrix expression as an identity matrix via the declid() operation leads to undefined behavior (which can be violated invariants or wrong computation results)!
The declzero() operation can be used to explicitly declare any matrix or matrix expression as a zero matrix:
Any matrix or matrix expression that has been declared as a zero matrix via declzero() will gain all the benefits of a zero matrix, which range from reduced runtime checking to a considerable speed-up in computations:
Note that the declzero() operation has the semantics of a cast: the caller is completely responsible and the system trusts the given information. Declaring a non-zero matrix or matrix expression as a zero matrix via the declzero() operation leads to undefined behavior (which can be violated invariants or wrong computation results)!
The inverse of a square dense matrix can be computed via the inv() function:
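For instance (assuming a suitably initialized, invertible square matrix):

```cpp
blaze::DynamicMatrix<float,blaze::rowMajor> A( 3UL, 3UL ), B;
// ... initialization of the invertible square matrix A

B = inv( A );   // computes the inverse of A
```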
Alternatively, an in-place inversion of a dense matrix can be performed via the invert() function:
Both the inv() and the invert() functions will automatically select the most suited matrix inversion algorithm depending on the size and type of the given matrix. For small matrices of up to 6x6, both functions use manually optimized kernels for maximum performance. For matrices larger than 6x6 the inversion is performed by means of the most suited matrix decomposition method: in case of a general matrix the LU decomposition is used, for symmetric matrices the LDLT decomposition is applied, for Hermitian matrices the LDLH decomposition is performed, and for triangular matrices the inverse is computed via a forward or back substitution.
In case the type of the matrix does not provide additional compile time information about its structure (symmetric, lower, upper, diagonal, ...), the information can be provided manually when calling the invert() function:
Alternatively, via the invert() function it is possible to explicitly specify the inversion algorithm:
Whereas the inversion by means of an LU decomposition works for every general square matrix, the inversion by LDLT only works for symmetric indefinite matrices, the inversion by LDLH is restricted to Hermitian indefinite matrices, and the Cholesky decomposition (LLH) only works for Hermitian positive definite matrices. Please note that it is the responsibility of the function caller to guarantee that the selected algorithm is suited for the given matrix. In case this precondition is violated the result can be wrong and might not represent the inverse of the given matrix!
For both the inv() and invert() functions the matrix inversion fails if ...
In all failure cases either a compilation error is created if the failure can be predicted at compile time or a std::invalid_argument exception is thrown.
Note that the matrix inversion can only be used for dense matrices with float, double, complex<float> or complex<double> element type. The attempt to call the function with matrices of any other element type or with a sparse matrix results in a compile time error! Furthermore, it is not possible to use any kind of view on the expression object returned by the inv() function. Also, it is not possible to access individual elements via the function call operator on the expression object:
Note that the matrix decomposition functions can only be used for dense matrices with float, double, complex<float> or complex<double> element type. The attempt to call the function with matrices of any other element type or with a sparse matrix results in a compile time error!
The LU decomposition of a dense matrix can be computed via the lu() function:
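A minimal sketch of the call (the matrix contents are assumed; the exact relation between A, L, U, and P depends on the storage order as described below):

```cpp
blaze::DynamicMatrix<double,blaze::rowMajor> A( 32UL, 32UL );
// ... resizing and initialization

blaze::DynamicMatrix<double,blaze::rowMajor> L, U, P;

lu( A, L, U, P );   // LU decomposition of the row-major matrix A
```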
The function works for both rowMajor and columnMajor matrices. Note, however, that the three matrices A, L and U are required to have the same storage order. Also, please note that the way the permutation matrix P needs to be applied differs between row-major and column-major matrices, since the algorithm uses column interchanges for row-major matrices and row interchanges for column-major matrices.
Furthermore, lu() can be used with adaptors. For instance, the following example demonstrates the LU decomposition of a symmetric matrix into a lower and upper triangular matrix:
The Cholesky (LLH) decomposition of a dense matrix can be computed via the llh() function:
The function works for both rowMajor and columnMajor matrices and the two matrices A and L can have any storage order.
Furthermore, llh() can be used with adaptors. For instance, the following example demonstrates the LLH decomposition of a symmetric matrix into a lower triangular matrix:
The QR decomposition of a dense matrix can be computed via the qr() function:
The function works for both rowMajor and columnMajor matrices and the three matrices A, Q and R can have any storage order.
Furthermore, qr() can be used with adaptors. For instance, the following example demonstrates the QR decomposition of a symmetric matrix into a general matrix and an upper triangular matrix:
Similar to the QR decomposition, the RQ decomposition of a dense matrix can be computed via the rq() function:
The function works for both rowMajor and columnMajor matrices and the three matrices A, R and Q can have any storage order.
Also the rq() function can be used in combination with matrix adaptors. For instance, the following example demonstrates the RQ decomposition of an Hermitian matrix into a general matrix and an upper triangular matrix:
The QL decomposition of a dense matrix can be computed via the ql() function:
The function works for both rowMajor and columnMajor matrices and the three matrices A, Q and L can have any storage order.
Also the ql() function can be used in combination with matrix adaptors. For instance, the following example demonstrates the QL decomposition of a symmetric matrix into a general matrix and a lower triangular matrix:
The LQ decomposition of a dense matrix can be computed via the lq() function:
The function works for both rowMajor and columnMajor matrices and the three matrices A, L and Q can have any storage order.
Furthermore, lq() can be used with adaptors. For instance, the following example demonstrates the LQ decomposition of an Hermitian matrix into a lower triangular matrix and a general matrix:
The eigenvalues and eigenvectors of a dense matrix can be computed via the eigen() functions:
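A hedged sketch for a general real matrix, for which both eigenvalues and eigenvectors are of complex type (dimensions are arbitrary):

```cpp
blaze::DynamicMatrix<double,blaze::rowMajor> A( 5UL, 5UL );   // general (non-symmetric) square matrix
// ... initialization

blaze::DynamicVector< std::complex<double>, blaze::columnVector > w;   // vector for the eigenvalues
blaze::DynamicMatrix< std::complex<double>, blaze::rowMajor >     V;   // matrix for the eigenvectors

eigen( A, w );      // computes only the eigenvalues of A
eigen( A, w, V );   // computes both the eigenvalues and the eigenvectors of A
```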
The first function computes only the eigenvalues of the given n-by-n matrix, the second function additionally computes the eigenvectors. The eigenvalues are returned in the given vector w and the eigenvectors are returned in the given matrix V, which are both resized to the correct dimensions (if possible and necessary).
Depending on the given matrix type, the resulting eigenvalues are either of floating point or complex type: In case the given matrix is either a compile time symmetric matrix with floating point elements or an Hermitian matrix with complex elements, the resulting eigenvalues will be of floating point type and therefore the elements of the given eigenvalue vector are expected to be of floating point type. In all other cases they are expected to be of complex type. Please note that for complex eigenvalues no order of eigenvalues can be assumed, except that complex conjugate pairs of eigenvalues appear consecutively with the eigenvalue having the positive imaginary part first.
In case A is a row-major matrix, V will contain the left eigenvectors, otherwise V will contain the right eigenvectors. In case V is a row-major matrix the eigenvectors are returned in the rows of V, in case V is a column-major matrix the eigenvectors are returned in the columns of V. In case the given matrix is a compile time symmetric matrix with floating point elements, the resulting eigenvectors will be of floating point type and therefore the elements of the given eigenvector matrix are expected to be of floating point type. In all other cases they are expected to be of complex type.
The following examples give an impression of the computation of eigenvalues and eigenvectors for a general, a symmetric, and an Hermitian matrix:
The functions fail if ...
In all failure cases an exception is thrown.
Note that the eigen() functions can only be used for dense matrices with float, double, complex<float> or complex<double> element type. The attempt to call the function with matrices of any other element type or with a sparse matrix results in a compile time error!
The singular value decomposition (SVD) of a dense matrix can be computed via the svd() functions:
The first and third function compute only singular values of the given general m-by-n matrix, the second and fourth function additionally compute singular vectors. The resulting singular values are returned in the given vector s, the left singular vectors are returned in the given matrix U, and the right singular vectors are returned in the matrix V. s, U, and V are resized to the correct dimensions (if possible and necessary).
The third and fourth function allow for the specification of a subset of singular values and/or vectors. The number of singular values and vectors to be computed is specified by the lower bound low and the upper bound upp, which either form an integral or a floating point range.
In case low and upp are of integral type, the function computes all singular values in the index range [low..upp]. The num resulting real and non-negative singular values are stored in descending order in the given vector s, which is either resized (if possible) or expected to be a num-dimensional vector. The resulting left singular vectors are stored in the given matrix U, which is either resized (if possible) or expected to be a m-by-num matrix. The resulting right singular vectors are stored in the given matrix V, which is either resized (if possible) or expected to be a num-by-n matrix.
In case low and upp are of floating point type, the function computes all singular values in the half-open interval (low..upp]. The resulting real and non-negative singular values are stored in descending order in the given vector s, which is either resized (if possible) or expected to be a min(m,n)-dimensional vector. The resulting left singular vectors are stored in the given matrix U, which is either resized (if possible) or expected to be a m-by-min(m,n) matrix. The resulting right singular vectors are stored in the given matrix V, which is either resized (if possible) or expected to be a min(m,n)-by-n matrix.
The functions fail if ...
In all failure cases an exception is thrown.
Examples:
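A hedged sketch of the first two overloads (dimensions are arbitrary):

```cpp
blaze::DynamicMatrix<double,blaze::rowMajor> A( 5UL, 8UL );   // general m-by-n matrix
// ... initialization

blaze::DynamicMatrix<double,blaze::rowMajor>     U;   // matrix for the left singular vectors
blaze::DynamicVector<double,blaze::columnVector> s;   // vector for the singular values
blaze::DynamicMatrix<double,blaze::rowMajor>     V;   // matrix for the right singular vectors

svd( A, s );         // computes only the singular values of A
svd( A, U, s, V );   // additionally computes the singular vectors
```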
Note that the svd() functions can only be used for dense matrices with float, double, complex<float> or complex<double> element type. The attempt to call the function with matrices of any other element type or with a sparse matrix results in a compile time error!