Commits

Anonymous committed 22d290e Draft

Cleaning up the little mess with duplicates

Files changed (2)

decompose-matrixproperties.tex

-% \documentclass[a4paper,10pt]{extreport}
-% \usepackage{amssymb,amsfonts,amsmath,cite,enumerate}
-% \setcounter{tocdepth}{2} %%where n is the level,
-% starting with 0 (chapters only)
-% \usepackage{makeidx}
-% \makeindex 
-% \newcommand{\keyword}[1]{\textit{#1}\index{#1}} %%%% this command adds the
-% keyword: an italic emphasis plus an entry in the index.
-% \newcommand{\matlab}[1]{\texttt{#1}} %%%% typesets a MATLAB identifier
-% 
-% 
-% \usepackage{geometry}
-% \geometry{inner=4.5cm} \geometry{outer=4cm}
-% \geometry{top=0.6cm} \geometry{bottom=9cm}
-% 
-% \begin{document}
-
-
-\chapter{Appendix: Basic Matrix Properties}
-
-
-\section{Norm of a Vector}
-To get a precise and reliable measure of nearness to singularity, we need to
-introduce the concept of a \keyword{norm of a vector}. This is a single number
-that measures the general size of the elements of the vector.
-The family of vector norms known as $l_p$ depends on a parameter $p$. Let
-$p \geq 1$ be a real number (the value $p = \infty$ is understood as a limit):
-\begin{equation}
-    \|\mathbf{x}\|_p := \bigg( \sum_{i=1}^n |x_i|^p \bigg)^{1/p}.
-\end{equation}
-The general $p$-norm is sometimes called the \keyword{H\"{o}lder
-norm}\cite{hogbenhandbook}.
-
-
-\subsection{Frequently used norms}
-There are some frequently used norms:
-\begin{itemize}
- \item for $p = 1$ we get the \keyword{Manhattan norm}: $ \|\boldsymbol{x}\|_1
-:= \sum_{i=1}^{n} |x_i|$, i.e.\ the sum of the absolute values of the
-components. The name relates to the distance a taxi has to drive in a
-rectangular street grid to get from the origin to the point
-$x$\cite{molernumericalcompmatlab}.
- \item for $p = 2$ the intuitive notion of the length of the vector $x =
-(x_1, x_2, \ldots, x_n)$ is captured by the \keyword{Euclidean norm}:
-$\|\boldsymbol{x}\|_2 := \sqrt{x_1^2 + \cdots + x_n^2}$.
- \item for $p \rightarrow \infty$ we get the \keyword{Chebyshev norm}:
-$\|\boldsymbol{x}\|_\infty := \max_{1 \leq i \leq n} |x_i|$.
-\end{itemize}
-
-The particular value of $p$ is often
-unimportant\cite{molernumericalcompmatlab}, in which case we simply write
-$\|x\|$.
-
-
-
-
-
-\subsection{MATLAB computation of vector norms}
-In MATLAB\cite{molernumericalcompmatlab}, $\|x\|_p$ is computed by
-\texttt{norm(x,p)} and \texttt{norm(x)} is the same as \texttt{norm(x,2)}. For
-example:
-
-\begin{verbatim}
-   x = (1:4)/5
-   norm1 = norm(x,1)
-   norm2 = norm(x)
-   norminf = norm(x,inf)
-produces
-   x =
-        0.2000    0.4000    0.6000    0.8000
-   norm1 =
-         2.0000
-   norm2 =
-         1.0954
-   norminf =
-         0.8000
-\end{verbatim}
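The same numbers follow directly from the definition of the $l_p$ norm; the
following pure-Python sketch is my own illustration (not part of MATLAB or the
cited text):

```python
import math

def p_norm(x, p):
    """l_p norm from the definition; p = math.inf gives the Chebyshev norm."""
    if p == math.inf:
        return max(abs(v) for v in x)
    return sum(abs(v) ** p for v in x) ** (1.0 / p)

x = [i / 5 for i in range(1, 5)]   # the same vector as in the MATLAB example
print(p_norm(x, 1))                # approx. 2.0
print(p_norm(x, 2))                # approx. 1.0954 = sqrt(1.2)
print(p_norm(x, math.inf))         # 0.8
```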
-
-This is the essence of \textit{normalisation}: dividing a vector (or matrix)
-by its \textit{norm}.
-
-
-
-
-\section{Norm of a Matrix}
-A \keyword{matrix norm}\cite{hogbenhandbook} is a family of real-valued
-functions on $\mathbb{F}^{m\times n}$ for all positive integers $m$ and $n$,
-denoted
-uniformly by $\|A\|$, with the following properties for all matrices $A$ and
-$B$ and all scalars $\alpha \in \mathbb{F}$:
-\begin{itemize}
- \item Positive definiteness: $\|A\| \geq 0$; $\|A\| = 0$ if and only if $A=0$.
- \item Homogeneity: $\|\alpha A\| = |\alpha| \, \|A\|$ for all scalars $\alpha$.
- \item Triangle inequality: $\|A+B\| \le \|A\|+\|B\|$ for all matrices $A$ and
-$B$ in $\mathbb{F}^{m\times n}$.
- \item Consistency: $\|AB\| \le \|A\| \|B\|$ whenever $A$ and $B$ are
-compatible for matrix multiplication\cite{hogbenhandbook}.
-\end{itemize}
-
-If $\|\cdot\|$ is a family of vector norms on $\mathbb{F}^{n}$ for $n =
-1, 2, 3, \ldots,$ then the matrix norm on $\mathbb{F}^{m\times n}$
-\textbf{induced} by $\|\cdot\|$ is:
- \begin{align}
-\left \| A \right \| _p = \max \limits _{x \ne 0} \frac{\left \| A x\right \|
-_p}{\left \| x\right \| _p}.  
-\end{align} 
-Induced matrix norms are also called \keyword{operator norms} or
-\keyword{natural norms}\cite{hogbenhandbook}.
-
-
-\subsection{Frequently used matrix norms}
-The following are commonly encountered matrix norms:
-\begin{itemize}
-
- \item \keyword{Maximum absolute column sum norm}: $ \left \| A \right \| _1 =
-\max \limits _{1 \leq j \leq n} \sum _{i=1} ^m | a_{ij} |$, which is simply the
-maximum absolute column sum of the matrix.
-
- \item \keyword{Maximum absolute row sum norm}: $\left \| A \right \| _\infty =
-\max \limits _{1 \leq i \leq m} \sum _{j=1}
-^n | a_{ij} |$, which is simply the maximum absolute row sum of the matrix.
-
- \item \keyword{Spectral norm}: $\left \| A \right \|
-_2=\sqrt{\lambda_{\text{max}}(A^{*} A)}=\sigma_{\text{max}}(A)$, where $A^*$
-denotes the conjugate transpose of $A$. The spectral norm of a matrix $A$ is
-its largest singular value $\sigma_{\text{max}}(A)$, or equivalently the square
-root of the largest eigenvalue of the positive-semidefinite matrix $A^*A$.
-
-\item \keyword{Euclidean norm} or \keyword{Frobenius norm}:
-$\|A\|_F=\sqrt{\sum_{i=1}^m\sum_{j=1}^n |a_{ij}|^2}
-=\sqrt{\operatorname{trace}(A^{*}A)}
-=\sqrt{\sum_{i=1}^{\min\{m,\,n\}} \sigma_{i}^2}$, where $A^*$ denotes the
-conjugate transpose of $A$ and $\sigma_{i}$ are the singular values of $A$. The
-Frobenius norm comes from an inner product on the space of all matrices.
-\end{itemize}
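To make the formulas concrete, here is a small pure-Python sketch (my own
illustration, with my own function names) computing the column-sum, row-sum,
and Frobenius norms of a $2 \times 2$ example:

```python
import math

def one_norm(A):
    """Maximum absolute column sum."""
    return max(sum(abs(A[i][j]) for i in range(len(A)))
               for j in range(len(A[0])))

def inf_norm(A):
    """Maximum absolute row sum."""
    return max(sum(abs(a) for a in row) for row in A)

def fro_norm(A):
    """Frobenius norm: square root of the sum of squared entries."""
    return math.sqrt(sum(a * a for row in A for a in row))

A = [[1, -2], [3, 4]]
print(one_norm(A))   # 6: the columns sum to 4 and 6
print(inf_norm(A))   # 7: the rows sum to 3 and 7
print(fro_norm(A))   # sqrt(1 + 4 + 9 + 16) = sqrt(30), approx. 5.477
```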
-
-The Frobenius norm is useful in numerical linear algebra since it is often
-easier to compute than the induced norms. It also has the useful property of
-being invariant under rotations\cite{navarra2010guide}.
-
-
-\subsection{MATLAB computation of matrix norms}
-
-\paragraph{Function \texttt{norm}}
-The norm of a matrix is a scalar that gives some measure of the magnitude of
-the elements of the matrix. The \texttt{norm} function calculates several
-different types of matrix norms:
-
-\texttt{n = norm(A)} returns the largest singular value of $A$,
-\texttt{max(svd(A))}.
-
-\texttt{n = norm(A,p)} returns a different kind of norm, depending on the value
-of $p$:
-
-\begin{itemize}
- \item $p=1$ gives the 1-norm, or largest column sum of $A$,
-\texttt{max(sum(abs(A)))}.
- \item $p=2$ gives the largest singular value (same as \texttt{norm(A)}).
- \item \texttt{p = 'fro'} gives the Frobenius norm of the matrix $A$,
-\texttt{sqrt(sum(diag(A'*A)))}.
- \item \texttt{p = inf} gives the infinity norm, or largest row sum of $A$,
-\texttt{max(sum(abs(A')))}.
-\end{itemize}
-
-
-\paragraph{Function \texttt{normest}}
-Returns 2-norm estimate. \textbf{This function is intended primarily for sparse
-matrices}, although it works correctly and may be useful for large, full
-matrices as well. 
-
-\texttt{nrm = normest(S)} returns an estimate of the 2-norm of the matrix
-\texttt{S}.
-
-\texttt{nrm = normest(S,tol)} uses relative error \texttt{tol} instead of the
-default tolerance \texttt{1.e-6}. The value of \texttt{tol} determines when the
-estimate is considered acceptable.
-
-The power iteration involves repeated multiplication by the matrix \texttt{S}
-and its transpose, \texttt{S'}. The iteration is carried out until two
-successive estimates agree to within the specified relative tolerance.
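This iteration can be sketched in a few lines of pure Python. The sketch below
mirrors the structure just described (estimate $\sigma_{\max}$ by alternating
products with $S$ and $S^T$), but the names and stopping details are my own,
not MATLAB's implementation:

```python
def normest_sketch(S, tol=1e-6, max_iter=100):
    """Estimate the 2-norm of S by power iteration: repeatedly multiply by
    S and its transpose until two successive estimates agree to within the
    relative tolerance (illustrative, not MATLAB's normest)."""
    m, n = len(S), len(S[0])
    x = [1.0] * n
    est = 0.0
    for _ in range(max_iter):
        y = [sum(S[i][j] * x[j] for j in range(n)) for i in range(m)]  # y = S x
        norm_y = sum(v * v for v in y) ** 0.5
        if norm_y == 0.0:
            return 0.0
        x = [sum(S[i][j] * y[i] for i in range(m)) for j in range(n)]  # x = S' y
        norm_x = sum(v * v for v in x) ** 0.5
        new_est = norm_x / norm_y      # estimate of the largest singular value
        if abs(new_est - est) <= tol * new_est:
            return new_est
        est = new_est
        x = [v / norm_x for v in x]    # normalise before the next pass
    return est

print(normest_sketch([[3.0, 0.0], [0.0, 1.0]]))   # converges towards 3.0
```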
-
-
-
-
-
-
-\section{Condition number}
-How can we measure the sensitivity of $x$ to changes in $A$ and $b$ in a system
-of linear equations $Ax = b$?
-The answer to this question lies in making the idea of \textit{nearly singular}
-precise\cite{molernumericalcompmatlab}.
-
-
-Data have limited precision. Measurements are inexact, and floating point
-arithmetic introduces errors. Consequently, the results of non-trivial
-calculations using data of limited precision also have limited
-precision\cite{hogbenhandbook}. The topic of conditioning concerns how much
-errors in the data can affect the results of a calculation.
-See~\cite{rice1966theory} for an authoritative treatment of conditioning.
-
-The relationship between the size of the residual and the size of the error is
-determined in part by a quantity known as the \keyword{condition number} of the
-matrix. 
-
-We use the condition number of a matrix because it tells us when to expect
-problems when solving systems of linear equations. One looks at the order of
-magnitude of the condition number, not its exact value. There are many methods
-to compute the condition number.
-
-
-
-
-\subsection{Norm-wise condition number}
-The \keyword{norm-wise condition number} of a non-singular matrix $A$ (for
-solving a linear system) is $\kappa(A) = \|A^{-1}\| \cdot \|A\|$. If $A$ is
-singular, then by convention, $\kappa(A) = \infty$. For a specific norm
-$\|\cdot\|_\mu$, the condition number of $A$ is denoted $\kappa_\mu (A)$. A
-large condition number indicates that the matrix $A$ is ill-conditioned.
-
-
-The condition number is also a measure of nearness to singularity.
-Although we have not yet developed the mathematical tools necessary to make the
-idea precise, the condition number can be thought of as the reciprocal of the
-relative distance
-from the matrix to the set of singular matrices. So, if $\kappa(A)$ is large,
-$A$ is close to being singular.
-
-The basic properties of the condition number are\cite{hogbenhandbook}:
-\begin{enumerate}
- \item $\kappa(A) \geq 1$.
- \item  $\kappa(AB) \leq \kappa(A)\kappa(B)$.
- \item  $\kappa_2 (A) = 1$ if and only if $A$ is a non-zero scalar multiple of
-an
-orthogonal matrix, i.e., $A^T A = \alpha I$ for some scalar $\alpha$.
-  \item  $\kappa_2 (A) = \kappa_2 (A^T )$.
-  \item  $\kappa_2 (A^T A) = (\kappa_2 (A))^2 $.
-   \item  $\kappa_2 (A) = \|A\|_2 \|A^{-1}\|_2 = \sigma_{\max}/\sigma_{\min}$,
-where $\sigma_{\max}$ and $\sigma_{\min}$ are the largest and smallest singular
-values of $A$.
-   \item  $\kappa(\alpha A) = \kappa(A)$, for all scalars $\alpha \neq 0$.
-   \item  $\kappa(D) = \frac{\max_i |d_{ii}|}{\min_i |d_{ii}|}$, where $D$ is a
-diagonal matrix with diagonal entries $d_{ii}$.
-\end{enumerate}
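The last two properties are easy to check directly; a small pure-Python sketch
(illustrative only, with my own function name):

```python
def kappa_diag(d):
    """kappa(D) = max|d_i| / min|d_i| for a diagonal matrix D with
    diagonal entries d (the last property in the list above)."""
    mags = [abs(v) for v in d]
    return max(mags) / min(mags)

d = [10.0, 2.0, -0.5]
print(kappa_diag(d))                       # 20.0
# Scaling the matrix leaves kappa unchanged: kappa(alpha A) = kappa(A)
print(kappa_diag([3.0 * v for v in d]))    # still 20.0
```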
-
-These last two properties are two of the reasons that $\kappa(A)$ is a better
-measure of nearness to singularity than the determinant of
-$A$\cite{molernumericalcompmatlab}.
-
-The condition number also plays a fundamental role in the analysis of the
-round-off errors introduced during the solution by Gaussian elimination.
-
-
-
-
-
-\subsection{MATLAB computation of condition number}
-Computing the matrix norm corresponding to the $l_2$ vector norm involves the
-SVD. MATLAB computes matrix norms with \matlab{norm(A,p)} for $p = 1$, $2$, or
-\matlab{inf}.
-
-
-
-The \textit{actual computation} of $\kappa(A)$ requires knowing $||A^{-1}||$.
-But computing
-$A^{-1}$ requires roughly three times as much work as solving a single linear
-system\cite{molernumericalcompmatlab}.
-Computing the $l_2$ condition number requires the singular value decomposition
-and
-even more work. Fortunately, the exact value of $\kappa(A)$ is rarely required.
-Any
-reasonably good estimate of it is satisfactory.
-
-MATLAB has several functions\cite{molernumericalcompmatlab} for computing or
-estimating condition numbers. If MATLAB warns about a very small value of
-RCOND, this indicates that the condition number is very large, since
-\matlab{rcond} estimates the reciprocal condition number in the 1-norm. There
-are several other tools for computing condition numbers, including
-\matlab{cond} and \matlab{condest}. The most accurate is \texttt{cond}, but
-\texttt{rcond} and \texttt{condest} are faster.
-
-
-\paragraph{Function \matlab{cond(A)}}
-or \matlab{cond(A,2)} computes $\kappa_2(A)$. It uses \matlab{svd(A)} and is
-suitable for smaller matrices where the geometric properties of the $l_2$ norm
-are important.
-
-
-\paragraph{Function \matlab{cond(A,1)}}
-computes $\kappa_1(A)$. Uses \matlab{inv(A)}. Requires less work
-than \matlab{cond(A,2)}.
-
-
-\paragraph{Function \matlab{cond(A,inf)}}
-computes $\kappa_\infty(A)$. Uses \matlab{inv(A)}. Same as
-\matlab{cond($A^T$,1)}.
-
-
-\paragraph{Function \matlab{cond(A,'fro')}}
-computes the Frobenius-norm condition number.
-
-
-
-
-\paragraph{Function \matlab{condest(A)}}
-estimates $\kappa_1(A)$, giving a 1-norm condition number estimate. The MATLAB
-command \matlab{c = condest(A)} computes a lower bound $c$ for the 1-norm
-condition number of a square matrix $A$. It is especially suitable for large,
-sparse matrices.
-
-The algorithm uses \matlab{lu(A)} and a recent algorithm of Higham and
-Tisseur\cite{higham2000block}.
-The function \matlab{condest} is based on the 1-norm condition estimator of
-Hager\cite{hager1984condition} and a block
-oriented generalization of Hager's estimator given by Higham and
-Tisseur\cite{higham2000block}.
-The heart of the algorithm involves an iterative search to estimate
-$\|A^{-1}\|$ without computing $A^{-1}$. This is posed as the convex, but
-non-differentiable, optimization problem
-$\max \|A^{-1}x\|$ subject to $\|x\| = 1$.
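To convey the flavour of this search, here is a pure-Python sketch of a
Hager-style 1-norm estimator that uses only matrix-vector products with $B$
and $B^T$; \matlab{condest} applies this idea with $B = A^{-1}$, using LU
solves instead of an explicit inverse. The code below is my own simplified
illustration, not the Higham--Tisseur block algorithm:

```python
def one_norm_estimate(B):
    """Hager-style lower-bound estimate of ||B||_1 using only products
    with B and B^T (illustrative sketch; condest uses B = inv(A))."""
    n = len(B)
    matvec = lambda M, v: [sum(M[i][j] * v[j] for j in range(n))
                           for i in range(n)]
    Bt = [[B[j][i] for j in range(n)] for i in range(n)]
    x = [1.0 / n] * n                  # start with the uniform vector
    est = 0.0
    for _ in range(5):                 # a handful of iterations suffices
        y = matvec(B, x)
        new_est = sum(abs(v) for v in y)
        xi = [1.0 if v >= 0 else -1.0 for v in y]   # subgradient direction
        z = matvec(Bt, xi)
        j = max(range(n), key=lambda k: abs(z[k]))
        if abs(z[j]) <= sum(z[k] * x[k] for k in range(n)):
            return max(new_est, est)   # converged: no ascent direction left
        est = new_est
        x = [0.0] * n
        x[j] = 1.0                     # jump to the most promising unit vector
    return est

B = [[1.0, -4.0], [2.0, 3.0]]
print(one_norm_estimate(B))   # 7.0, the exact ||B||_1 for this small example
```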
-
-
-Note that \matlab{condest} invokes \matlab{rand}. If repeatable results are
-required, invoke \matlab{rand('state',j)}, for some $j$, before calling this
-function.
-
-
-
-
-
-
-\paragraph{Function \matlab{rcond(A)}}
-estimates the reciprocal condition number $1/\kappa_1(A)$. It uses
-\matlab{lu(A)} and an older algorithm developed by the LINPACK and LAPACK
-projects\cite{lapackuserguide1999}. Primarily of historical interest.
-
-
-If $A$ is well conditioned, \matlab{rcond(A)} is near 1.0.
-If $A$ is badly conditioned, \matlab{rcond(A)} is near 0.0. Compared to
-\matlab{cond}, \texttt{rcond} is a more efficient, but less reliable, method
-of estimating the condition of a matrix.
-
-
-
-
-\paragraph{Function \matlab{condeig(A)}}
-computes the condition number with respect to eigenvalues. The command \texttt{c
-= condeig(A)} returns a vector of condition numbers for the eigenvalues of $A$.
-These condition numbers are the reciprocals of the cosines of the angles between
-the left and right eigenvectors.
-
-Large condition numbers imply that $A$ is near a matrix with multiple
-eigenvalues.
-
-
-
 % \documentclass[a4paper,10pt]{extreport}
 % \usepackage{amssymb,amsfonts,amsmath,cite,enumerate}
 % \setcounter{tocdepth}{2} %%where n is the level,

digest-LinearAlgDecompositions.tex

 This brief digest aims to collect the knowledge (and its god-damned
 application!) for not-too-exotic decompositions.
 
+
 Rules for contributors:
 \begin{itemize}
- \item use static word wrap (Word Wrap Document in Kile) - it is easier for revision control systems to track editions;
- \item contributions with unclear sources or \textit{direct} wikipedia copypaste\footnote{There is nothing wrong with Wikipedia - you can add the mathematical
-formulas from there to save your time, but \textbf{please take a time to check the math!} Better use some reliable scientific sources like \textit{Linear
-Algebra Handbook}.} will be challenged and erased;
  \item use static word wrap (Word Wrap Document in Kile) - it is easier for
 revision control systems to track edits;
  \item contributions with unclear sources or \textit{direct} wikipedia
 copypaste\footnote{There is nothing wrong with Wikipedia - you can add the
-mathematical formulas from there to save your time, but \textbf{please take a
-time to check the math!} Better use some reliable scientific sources like
-\textit{Linear Algebra Handbook}.} will be challenged and erased;
+mathematical
+formulas from there to save your time, but \textbf{please take the time to
+check the math!} Better to use a reliable scientific source such as the
+\textit{Linear Algebra Handbook}.} will be challenged and erased;
  \item add citations in BiBTeX database in \verb+biblio/linalgdecompose+;
  \item use British/Australian variant of English.
 \end{itemize}
 \input{decompose-matrixproperties}
 %%%%%%%%%%%% Chapter:  Matrix properties, like norm, condition number
 
-\bibliographystyle{unsrt}  \bibliography{biblio/linalgdecompose}
+\bibliographystyle{unsrt} 
+\bibliography{biblio/linalgdecompose,biblio/kmvlinalgdecompose}
 \end{document}