
\setcounter{tocdepth}{2} %%where n is the level,starting with 0 (chapters only)
% \matlab: typeset MATLAB identifiers in typewriter font.
\newcommand{\matlab}[1]{\texttt{#1}}

\geometry{inner=4.5cm} \geometry{outer=4cm}
\geometry{top=0.6cm} \geometry{bottom=9cm}


\chapter{Appendix: Basic Matrix Properties}

\section{Norm of a Vector}
To get a precise and reliable measure of nearness to singularity, we need to
introduce the concept of a \keyword{norm of a vector}. This is a single number
that measures the general size of the elements of the vector.
The family of vector norms known as $l_p$ depends on a parameter $p$. Let $p \geq 1$
be a real number. The $p$-norm of a vector $\mathbf{x} = (x_1, \ldots, x_n)$ is defined as
\[
\|\mathbf{x}\|_p := \bigg( \sum_{i=1}^n |x_i|^p \bigg)^{1/p}.
\]
The general $p$-norm is sometimes called the \keyword{H\"{o}lder norm}.

\subsection{Frequently used norms}
There are some frequently used norms:
\begin{itemize}
 \item For $p = 1$ we get the \keyword{Manhattan norm}: $\|\boldsymbol{x}\|_1
:= \sum_{i=1}^{n} |x_i|$. The name relates to the distance a taxi has to drive
in a rectangular street grid to get from the origin to the point
$x$\cite{molernumericalcompmatlab}. The 1-norm is simply the sum of the absolute
values of the components.
 \item For $p = 2$ we get the \keyword{Euclidean norm}, which captures the
intuitive notion of the length of the vector $x = (x_1, x_2, \ldots, x_n)$:
$\|\boldsymbol{x}\|_2 := \sqrt{x_1^2 + \cdots + x_n^2}$.
 \item For $p \rightarrow \infty$ we get the \keyword{Chebyshev norm} (or
maximum norm): $\|\boldsymbol{x}\|_\infty := \max_{1 \leq i \leq n} |x_i|$.
\end{itemize}

The particular value of $p$ is often unimportant\cite{molernumericalcompmatlab}
and we simply use $\|x\|$.

\subsection{MATLAB computation of vector norms}
In MATLAB\cite{molernumericalcompmatlab}, $\|x\|_p$ is computed by
\texttt{norm(x,p)}, and \texttt{norm(x)} is the same as \texttt{norm(x,2)}. For
example:
\begin{verbatim}
   x = (1:4)/5
   norm1 = norm(x,1)
   norm2 = norm(x)
   norminf = norm(x,inf)

   x =
        0.2000    0.4000    0.6000    0.8000
   norm1 =
        2
   norm2 =
        1.0954
   norminf =
        0.8000
\end{verbatim}

This is the meaning of \textit{normalisation}: dividing a vector (or matrix)
by its \textit{norm} produces a rescaled object whose norm equals one.
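The idea can be sketched in a few lines. The following is a minimal pure-Python illustration (the helper names \texttt{p\_norm} and \texttt{normalise} are ours, not a library API): dividing a vector by its $p$-norm produces a vector whose $p$-norm is 1.

```python
# Minimal pure-Python sketch of vector norms and normalisation.
# The helper names (p_norm, normalise) are illustrative, not a library API.

def p_norm(x, p):
    """l_p norm of a vector given as a list of numbers."""
    if p == float("inf"):
        return max(abs(xi) for xi in x)          # Chebyshev norm
    return sum(abs(xi) ** p for xi in x) ** (1.0 / p)

def normalise(x, p=2):
    """Divide every component of x by its p-norm."""
    n = p_norm(x, p)
    return [xi / n for xi in x]

x = [0.2, 0.4, 0.6, 0.8]                         # same vector as (1:4)/5
u = normalise(x)
print(p_norm(x, 1), p_norm(x, float("inf")))     # 1-norm and infinity norm
print(p_norm(u, 2))                              # unit 2-norm, up to rounding
```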

\section{Norm of a Matrix}
A \keyword{matrix norm}\cite{hogbenhandbook} is a family of real-valued
functions on $\mathbb{F}^{m\times n}$ for all positive integers $m$ and $n$,
denoted uniformly by $\|A\|$, with the following properties for all matrices
$A$ and $B$ and all scalars $\alpha \in \mathbb{F}$:
\begin{itemize}
 \item Positive definiteness: $\|A\| \geq 0$, and $\|A\| = 0$ if and only if $A = 0$.
 \item Homogeneity: $\|\alpha A\| = |\alpha| \, \|A\|$ for all $\alpha$.
 \item Triangle inequality: $\|A+B\| \le \|A\|+\|B\|$ for all matrices $A$ and
$B$ in $\mathbb{F}^{m\times n}$.
 \item Consistency: $\|AB\| \le \|A\| \, \|B\|$ whenever $A$ and $B$ are compatible
for matrix multiplication\cite{hogbenhandbook}.
\end{itemize}

If $\|\cdot\|_p$ is a family of vector norms on $\mathbb{F}^{n}$ for $n =
1, 2, 3, \ldots,$ then the matrix norm on $\mathbb{F}^{m\times n}$
\textbf{induced} by $\|\cdot\|_p$ is:
\[
\left\| A \right\|_p = \max\limits_{x \ne 0} \frac{\left\| A x \right\|_p}{\left\| x \right\|_p}.
\]
Induced matrix norms are also called \keyword{operator norms} or
\keyword{natural norms}\cite{hogbenhandbook}.
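For the 1-norm the induced definition can be checked directly: the maximum of $\|Ax\|_1/\|x\|_1$ is attained at a standard basis vector $e_j$, which recovers the maximum-absolute-column-sum formula. A small pure-Python check of this fact (all helper names are ours):

```python
# Numerical check (pure Python) that the induced matrix 1-norm,
# max_{x != 0} ||Ax||_1 / ||x||_1, equals the maximum absolute column sum.
# For the 1-norm, the maximum is attained at a standard basis vector e_j.

def vec_norm1(x):
    return sum(abs(xi) for xi in x)

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def induced_norm1(A):
    """Evaluate ||A e_j||_1 over the standard basis vectors e_j."""
    n = len(A[0])
    best = 0.0
    for j in range(n):
        e = [1.0 if k == j else 0.0 for k in range(n)]
        best = max(best, vec_norm1(matvec(A, e)))
    return best

def max_abs_col_sum(A):
    return max(sum(abs(row[j]) for row in A) for j in range(len(A[0])))

A = [[1.0, -2.0], [3.0, 4.0]]
print(induced_norm1(A), max_abs_col_sum(A))   # both give 6.0
```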

\subsection{Frequently used matrix norms}
The following are commonly encountered matrix norms:
\begin{itemize}
 \item \keyword{Maximum absolute column sum norm}: $\left\| A \right\|_1 =
\max\limits_{1 \leq j \leq n} \sum_{i=1}^m |a_{ij}|$, which is simply the
maximum absolute column sum of the matrix.

 \item \keyword{Maximum absolute row sum norm}: $\left\| A \right\|_\infty =
\max\limits_{1 \leq i \leq m} \sum_{j=1}^n |a_{ij}|$, which is simply the
maximum absolute row sum of the matrix.

 \item \keyword{Spectral norm}: $\left\| A \right\|_2 =
\sqrt{\lambda_{\text{max}}(A^{*} A)} = \sigma_{\text{max}}(A)$, where $A^*$
denotes the conjugate transpose of $A$. The spectral norm of a matrix $A$ is the
largest singular value $\sigma_{\text{max}}(A)$ of $A$, or the square root of
the largest eigenvalue of the positive-semidefinite matrix $A^* A$.

 \item \keyword{Euclidean norm} or \keyword{Frobenius norm}:
$\left\| A \right\|_F = \sqrt{\sum_{i=1}^m \sum_{j=1}^n |a_{ij}|^2} =
\sqrt{\operatorname{trace}(A^* A)} = \sqrt{\sum_{i=1}^{\min\{m,n\}} \sigma_{i}^2}$,
where $A^*$ denotes the conjugate transpose of $A$ and $\sigma_{i}$ are the
singular values of $A$. The Frobenius norm comes from an inner product on the
space of all matrices.
\end{itemize}

The Frobenius norm is useful in numerical linear algebra since it is often
easier to compute than induced norms. The Frobenius norm also has the useful
property of being invariant under rotations (unitary transformations).
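The rotation-invariance of the Frobenius norm can be checked numerically. A minimal pure-Python sketch (helper names are ours) computes $\|A\|_F$ and verifies that it is unchanged when $A$ is multiplied by a rotation matrix $Q$:

```python
import math

# Pure-Python sketch: the Frobenius norm is the square root of the sum of
# squared entries, and it is unchanged by multiplication with a rotation Q.

def frobenius(A):
    return math.sqrt(sum(a * a for row in A for a in row))

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

theta = 0.7
Q = [[math.cos(theta), -math.sin(theta)],   # a 2x2 rotation (orthogonal)
     [math.sin(theta),  math.cos(theta)]]
A = [[1.0, 2.0], [3.0, 4.0]]

print(frobenius(A))             # sqrt(1 + 4 + 9 + 16) = sqrt(30) ~ 5.477
print(frobenius(matmul(Q, A)))  # same value: invariant under rotation
```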

\subsection{MATLAB computation of matrix norms}

\paragraph{Function \texttt{norm}}
The norm of a matrix is a scalar that gives some measure of the magnitude of the
elements of the matrix. The norm function calculates several different types of
matrix norms:

\texttt{n = norm(A)} returns the largest singular value of $A$.

\texttt{n = norm(A,p)} returns a different kind of norm, depending on the value
of $p$:
\begin{itemize}
 \item \texttt{p = 1} gives the 1-norm, or largest absolute column sum of $A$: \texttt{max(sum(abs(A)))}.
 \item \texttt{p = 2} gives the largest singular value (same as \texttt{norm(A)}).
 \item \texttt{p = 'fro'} gives the Frobenius norm of $A$.
 \item \texttt{p = inf} gives the infinity norm, or largest absolute row sum of $A$.
\end{itemize}

\paragraph{Function \texttt{normest}}
Returns 2-norm estimate. \textbf{This function is intended primarily for sparse
matrices}, although it works correctly and may be useful for large, full
matrices as well. 

\texttt{nrm = normest(S)} returns an estimate of the 2-norm of the matrix S. 

\texttt{nrm = normest(S,tol)} uses relative error \texttt{tol} instead of the default
tolerance \texttt{1.e-6}. The value of \texttt{tol} determines when the estimate is
considered acceptable.

The power iteration involves repeated multiplication by the matrix \texttt{S} and its
transpose, \texttt{S'}. The iteration is carried out until two successive estimates agree
to within the specified relative tolerance.
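The power-iteration idea can be sketched as follows in pure Python. This is only an illustration of the principle under the stated stopping rule, not MATLAB's actual \matlab{normest} implementation; all names are ours.

```python
import math

# Sketch of a 2-norm estimate by power iteration: repeatedly multiply by
# S and its transpose S^T, stopping when two successive estimates agree
# to within a relative tolerance.  Illustration only, not MATLAB's code.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def normest_sketch(S, tol=1e-6, max_iter=100):
    St = transpose(S)
    x = [1.0] * len(S[0])               # simple deterministic start vector
    nx = math.sqrt(sum(xi * xi for xi in x))
    x = [xi / nx for xi in x]
    est = 0.0
    for _ in range(max_iter):
        y = matvec(S, x)                # y = S x, with ||x||_2 = 1
        new_est = math.sqrt(sum(yi * yi for yi in y))   # ||S x||_2
        if abs(new_est - est) <= tol * new_est:
            return new_est              # successive estimates agree
        est = new_est
        x = matvec(St, y)               # power step on S^T S
        nx = math.sqrt(sum(xi * xi for xi in x))
        x = [xi / nx for xi in x]
    return est

print(normest_sketch([[3.0, 0.0], [0.0, 1.0]]))   # converges to 3.0
```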

\section{Condition number}
How can we measure the sensitivity of $x$ to changes in $A$ and $b$ in a system
of linear equations $Ax = b$?
The answer to this question lies in making the idea of \textit{nearly singular}
precise.
Data have limited precision. Measurements are inexact, floating point arithmetic
introduces errors. Consequently, the results of non-trivial calculations using
data of limited precision also have
limited precision\cite{hogbenhandbook}. The topic of conditioning concerns how
much errors in the data can affect the
results of a calculation. See\cite{rice1966theory} for an authoritative
treatment of conditioning.

The relationship between the size of the residual and the size of the error is
determined in part by a quantity known as the \keyword{condition number} of the
matrix.

We use the condition number of a matrix because it tells us when to expect
problems when solving systems of linear equations. One looks at the order of magnitude of
the condition number, not its exact value. There are many methods to compute the
condition number.

\subsection{Norm-wise condition number}
The \keyword{norm-wise condition number} of a non-singular matrix $A$ (for
solving a linear
system) is $\kappa(A) = ||A^{-1}||\cdot ||A||$ . If $A$ is singular, then by
convention, $\kappa(A) = \infty$. For a specific norm $||\cdot||_\mu$, the
condition number of $A$ is denoted $\kappa_\mu (A)$. A large condition
number indicates that the matrix $A$ is ill-conditioned.

The condition number is also a measure of nearness to singularity.
Although we have not yet developed the mathematical tools necessary to make the
idea precise, the condition number can be thought of as the reciprocal of the
relative distance
from the matrix to the set of singular matrices. So, if $\kappa(A)$ is large,
$A$ is close to singular.

The condition number may also be infinite, in which case the algorithm will not
reliably find a solution to the problem, not even a weak approximation of it
(and not even its order of magnitude) with any reasonable and provable accuracy.

The basic properties of the condition number are\cite{hogbenhandbook}:
\begin{itemize}
 \item $\kappa(A) \geq 1$.
 \item $\kappa(AB) \leq \kappa(A)\kappa(B)$.
 \item $\kappa_2 (A) = 1$ if and only if $A$ is a non-zero scalar multiple of
an orthogonal matrix, i.e., $A^T A = \alpha I$ for some scalar $\alpha$.
 \item $\kappa_2 (A) = \kappa_2 (A^T )$.
 \item $\kappa_2 (A^T A) = (\kappa_2 (A))^2 $.
 \item $\kappa_2 (A) = \|A\|_2 \|A^{-1}\|_2 = \sigma_{\max} /\sigma_{\min}$,
where $\sigma_{\max}$ and $\sigma_{\min}$ are the largest and smallest singular
values of $A$.
 \item $\kappa(\alpha A) = \kappa(A)$, for all scalars $\alpha \neq 0$.
 \item $\kappa(D) = \frac{\max_i |d_{ii}|}{\min_i |d_{ii}|}$, where $D$ is a
diagonal matrix.
\end{itemize}

These last two properties are two of the reasons that $\kappa(A)$ is a better
measure of nearness to singularity than the determinant of $A$.
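Several of these properties can be verified numerically on a small example. The following pure-Python sketch (helper names are ours; the explicit $2\times 2$ inverse is for illustration only) computes $\kappa_\infty(A) = \|A\|_\infty \|A^{-1}\|_\infty$ and checks that $\kappa(A) \geq 1$ and $\kappa(\alpha A) = \kappa(A)$:

```python
# Pure-Python sketch of the norm-wise condition number
# kappa_inf(A) = ||A||_inf * ||A^{-1}||_inf for a 2x2 matrix,
# illustrating kappa(A) >= 1 and kappa(alpha * A) = kappa(A).

def norm_inf(A):
    """Maximum absolute row sum."""
    return max(sum(abs(a) for a in row) for row in A)

def inv2(A):
    """Explicit inverse of a 2x2 matrix (for illustration only)."""
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def kappa_inf(A):
    return norm_inf(A) * norm_inf(inv2(A))

A = [[1.0, 2.0], [3.0, 4.0]]
scaled = [[5.0 * a for a in row] for row in A]      # alpha = 5

print(kappa_inf(A))        # 21.0, which is >= 1
print(kappa_inf(scaled))   # same value: scaling leaves kappa unchanged
```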

One should remember that the definition of the condition number depends on the
choice of norm. Usually the $l_2$ norm is used, in which case $\kappa(A) =
\frac{\sigma_{\max}(A)}{\sigma_{\min}(A)}$, where $\sigma_{\max}(A)$ and
$\sigma_{\min}(A)$ are maximal and minimal singular values of the matrix $A$.

When the matrix $A$ is normal, $\kappa(A) =
\left|\frac{\lambda_{\max}(A)}{\lambda_{\min}(A)}\right| $, where
$\lambda_{\max}(A)$ and $\lambda_{\min}(A)$ are maximal and minimal (by modulus)
eigenvalues of $A$.

If $A$ is unitary, then $\kappa(A) = 1$.

The condition number also plays a fundamental role in the analysis of the
round-off errors introduced during the solution by Gaussian elimination.

\subsection{MATLAB computation of condition number}
MATLAB computes matrix norms with \matlab{norm(A,p)} for $p = 1, 2$, or
\matlab{inf}. The \textit{actual computation} of $\kappa(A)$ requires knowing
$\|A^{-1}\|$. But computing
$A^{-1}$ requires roughly three times as much work as solving a single linear
system.
Computing the $l_2$ condition number requires the singular value decomposition
and even more work. Fortunately, the exact value of $\kappa(A)$ is rarely
required. Any reasonably good estimate of it is satisfactory.

MATLAB has several functions\cite{molernumericalcompmatlab} for computing or
estimating condition numbers. When a system is badly conditioned, MATLAB issues
a warning reporting a very small value of \texttt{RCOND}. This
indicates that the condition number is very large, since \matlab{rcond} estimates the
reciprocal condition number in the 1-norm. There are several other tools for
computing condition numbers, including \matlab{cond} and \matlab{condest}. The most
accurate is \texttt{cond}, but \texttt{rcond} and \texttt{condest} are faster.

\paragraph{Function \matlab{cond(A)}}
or \matlab{cond(A,2)} computes $\kappa_2(A)$. It uses
\matlab{svd(A)} and is suitable for smaller matrices where the geometric properties
of the $l_2$ norm are important.

\paragraph{Function \matlab{cond(A,1)}}
computes $\kappa_1(A)$. It uses \matlab{inv(A)} and requires less work
than \matlab{cond(A,2)}.

\paragraph{Function \matlab{cond(A,inf)}}
computes $\kappa_\infty(A)$. It uses \matlab{inv(A)}. Same as \matlab{cond(A',1)}.

\paragraph{Function \matlab{cond(A,'fro')}}
computes the Frobenius norm condition number.

\paragraph{Function \matlab{condest(A)}}
estimates $\kappa_1(A)$, giving a 1-norm condition number estimate. The MATLAB
command \matlab{c = condest(A)} computes a lower bound $c$ for the 1-norm
condition number of a square matrix $A$. It is especially suitable for large,
sparse matrices.

The algorithm uses \matlab{lu(A)} and is based on the 1-norm condition estimator of
Hager\cite{hager1984condition} and a block-oriented generalization of Hager's
estimator given by Higham and Tisseur.
The heart of the algorithm is an iterative search that estimates $\|A^{-1}\|_1$
without computing $A^{-1}$. This is posed as the convex, but non-differentiable,
optimization problem
\[
\max \|A^{-1}x\|_1 \quad \mbox{subject to } \|x\|_1 = 1.
\]
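The search can be sketched in a few lines. The following pure-Python version of Hager's estimator is an illustration of the idea, not MATLAB's actual code; it needs only matrix--vector products with $B$ and $B^T$, and \matlab{condest} applies it to $B = A^{-1}$ by solving systems with the LU factors instead of forming the inverse.

```python
# Sketch of Hager's 1-norm estimator.  It needs only matrix-vector
# products with B and B^T; condest applies it to B = A^{-1}, solving
# systems with the LU factors of A instead of forming the inverse.
# Illustration of the idea only, not MATLAB's actual implementation.

def hager_est_1norm(matvec, matvec_t, n, max_iter=10):
    x = [1.0 / n] * n                    # start at the centre of the ball
    est = 0.0
    for _ in range(max_iter):
        y = matvec(x)
        est = sum(abs(yi) for yi in y)   # current estimate ||B x||_1
        xi = [1.0 if yi >= 0 else -1.0 for yi in y]   # subgradient signs
        z = matvec_t(xi)
        j = max(range(n), key=lambda k: abs(z[k]))
        if abs(z[j]) <= sum(zk * xk for zk, xk in zip(z, x)):
            return est                   # local maximum reached
        x = [1.0 if k == j else 0.0 for k in range(n)]  # move to e_j
    return est

B = [[1.0, 2.0], [3.0, 4.0]]
mv  = lambda x: [sum(b * xi for b, xi in zip(row, x)) for row in B]
mvt = lambda x: [sum(B[i][j] * x[i] for i in range(2)) for j in range(2)]
print(hager_est_1norm(mv, mvt, 2))       # 6.0, the exact 1-norm here
```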

Note that \matlab{condest} invokes \matlab{rand}. If repeatable results are
required, then invoke \matlab{rand('state',j)}, for some \texttt{j}, before calling
this function.
\paragraph{Function \matlab{rcond(A)}}
estimates the reciprocal condition number $1/\kappa_1(A)$. It uses \matlab{lu(A)} and
an older algorithm developed by the LINPACK and LAPACK
projects\cite{lapackuserguide1999}. Primarily of
historical interest.

If $A$ is well conditioned, \matlab{rcond(A)} is near 1.0. 
If $A$ is badly conditioned, \matlab{rcond(A)} is near 0.0. Compared to \matlab{cond},
\texttt{rcond} is a more efficient, but less reliable, method of estimating the
condition of a matrix. 

\paragraph{Function \matlab{condeig(A)}}
computes the condition number with respect to eigenvalues. The command 
\texttt{c = condeig(A)} returns a vector of condition numbers for the
eigenvalues of $A$.
These condition numbers are the reciprocals of the cosines of the angles between
the left and right eigenvectors.

Large condition numbers imply that $A$ is near a matrix with multiple
eigenvalues.

\subsection{Skeel condition number}
Define\cite{watkins2002fundamentals} the Skeel condition number of $A$ as:
\[
\operatorname{skeel}(A) = \left\| \, |A^{-1}| \cdot |A| \, \right\|_{\infty}.
\]
The Skeel condition number is usually determined by an iterative procedure. It
is insensitive to row scaling, so it remains small while the norm-wise condition
number $\kappa_\infty (A)$ becomes large in proportion to the badness of the
scaling.

For further information, see\cite{skeel1980iterative} for the general definition
of the Skeel number and\cite{higham1996accuracy} for inexpensive methods of
estimating $\operatorname{skeel}(A)$.
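A minimal pure-Python sketch (helper names are ours; the explicit $2\times 2$ inverse is for illustration only) computes $\operatorname{skeel}(A)$ and contrasts its behaviour under row scaling with that of $\kappa_\infty(A)$:

```python
# Pure-Python sketch of the Skeel condition number for a 2x2 matrix,
# skeel(A) = || |A^{-1}| . |A| ||_inf, contrasted with kappa_inf under
# row scaling.  The helper names are illustrative, not a library API.

def norm_inf(A):
    """Maximum absolute row sum."""
    return max(sum(abs(a) for a in row) for row in A)

def inv2(A):
    """Explicit inverse of a 2x2 matrix (for illustration only)."""
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul_abs(A, B):
    """Product |A| . |B| of the entrywise absolute values."""
    return [[sum(abs(A[i][k]) * abs(B[k][j]) for k in range(2))
             for j in range(2)] for i in range(2)]

def skeel(A):
    return norm_inf(matmul_abs(inv2(A), A))

def kappa_inf(A):
    return norm_inf(A) * norm_inf(inv2(A))

A = [[1.0, 2.0], [3.0, 4.0]]
badly_scaled = [[1e6 * A[0][0], 1e6 * A[0][1]], A[1]]  # scale first row

print(skeel(A), skeel(badly_scaled))          # Skeel number is unchanged
print(kappa_inf(A), kappa_inf(badly_scaled))  # kappa_inf blows up
```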

\bibliographystyle{unsrt}  \bibliography{biblio/kmvlinalgdecompose}