\( \newcommand{\matr}[1] {\mathbf{#1}} \newcommand{\vertbar} {\rule[-1ex]{0.5pt}{2.5ex}} \newcommand{\horzbar} {\rule[.5ex]{2.5ex}{0.5pt}} \newcommand{\E} {\mathrm{E}} \)
\( \newcommand{\cat}[1] {\mathrm{#1}} \newcommand{\catobj}[1] {\operatorname{Obj}(\mathrm{#1})} \newcommand{\cathom}[1] {\operatorname{Hom}_{\cat{#1}}} \newcommand{\multiBetaReduction}[0] {\twoheadrightarrow_{\beta}} \newcommand{\betaReduction}[0] {\rightarrow_{\beta}} \newcommand{\betaEq}[0] {=_{\beta}} \newcommand{\string}[1] {\texttt{"}\mathtt{#1}\texttt{"}} \newcommand{\symbolq}[1] {\texttt{`}\mathtt{#1}\texttt{'}} \newcommand{\groupMul}[1] { \cdot_{\small{#1}}} \newcommand{\groupAdd}[1] { +_{\small{#1}}} \newcommand{\inv}[1] {#1^{-1} } \newcommand{\bm}[1] { \boldsymbol{#1} } \require{physics} \require{ams} \require{mathtools} \)
Math and science::Algebra

A = LU decomposition when A is symmetric

The LU decomposition of a symmetric matrix is special in that it can be further decomposed as follows:

\[ A = LU = L D L^T \]

when \( A \) is a symmetric matrix.


Intuition

Consider the first column of a symmetric matrix \( A \), and remember that it equals the first row. During elimination, \( L \) records in its first column the multipliers used to eliminate the entries below the first pivot. These multipliers are the first column of \( A \) divided by the first pivot. When \( D \) is then factored out of \( U \), the first row of \( U \) (which is the unchanged first row of \( A \)) gets divided by that same pivot. By symmetry, the scaled row and the column of multipliers match, so the first row of \( U' = D^{-1}U \) equals the first column of \( L \). Applying this argument at each pivot in turn gives \( U' = L^T \).
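The intuition above can be sketched as code. Below is a minimal, hypothetical `ldl_decompose` helper (a sketch assuming no row exchanges are needed, i.e. every pivot is nonzero) that records the multipliers in \( L \) during elimination, factors \( D \) out of \( U \), and confirms that \( U' = L^T \) for a symmetric input:

```python
import numpy as np

def ldl_decompose(A):
    """LDL^T via Gaussian elimination. Sketch only: assumes all
    pivots are nonzero, so no row exchanges are required."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for j in range(n - 1):
        for i in range(j + 1, n):
            L[i, j] = U[i, j] / U[j, j]      # multiplier used to eliminate U[i, j]
            U[i, :] -= L[i, j] * U[j, :]     # the elimination row operation
    d = np.diag(U).copy()
    D = np.diag(d)
    Uprime = U / d[:, None]                  # divide each row of U by its pivot
    return L, D, Uprime

A = np.array([[4., 2., 0.],
              [2., 2., 3.],
              [0., 3., 1.]])
L, D, Uprime = ldl_decompose(A)
print(np.allclose(Uprime, L.T))    # True: U' = L^T when A is symmetric
print(np.allclose(L @ D @ L.T, A)) # True: A = L D L^T
```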

Illustrated with an example

The matrix:

\[ \begin{bmatrix} 4 & 2 & 0 \\ 2 & 2 & 3 \\ 0 & 3 & 1 \end{bmatrix} \]

goes through the steps of elimination:

\[ \begin{bmatrix} 4 & 2 & 0 \\ 2 & 2 & 3 \\ 0 & 3 & 1 \end{bmatrix} \to \begin{bmatrix} 4 & 2 & 0 \\ 0 & 1 & 3 \\ 0 & 3 & 1 \end{bmatrix} \to U = \begin{bmatrix} 4 & 2 & 0 \\ 0 & 1 & 3 \\ 0 & 0 & -8 \end{bmatrix} \]

Corresponding to the elimination matrices:

\[ \begin{bmatrix} 1 & 0 & 0 \\ -\frac{1}{2} & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \to \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -3 & 1 \end{bmatrix} \]

Knowing the elimination matrices gives us \( L \): its subdiagonal entries are the negatives of the entries in the elimination matrices (i.e., the multipliers themselves):

\[ L = \begin{bmatrix} 1 & 0 & 0 \\ \frac{1}{2} & 1 & 0 \\ 0 & 3 & 1 \end{bmatrix} \]

And so we have:

\[ A = LU = \begin{bmatrix} 1 & 0 & 0 \\ \frac{1}{2} & 1 & 0 \\ 0 & 3 & 1 \end{bmatrix} \begin{bmatrix} 4 & 2 & 0 \\ 0 & 1 & 3 \\ 0 & 0 & -8 \end{bmatrix} \]
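As a quick sanity check, multiplying these factors back together recovers the original symmetric matrix (using NumPy for the arithmetic):

```python
import numpy as np

L = np.array([[1., 0., 0.],
              [0.5, 1., 0.],
              [0., 3., 1.]])
U = np.array([[4., 2., 0.],
              [0., 1., 3.],
              [0., 0., -8.]])

A = L @ U
print(A)  # recovers [[4, 2, 0], [2, 2, 3], [0, 3, 1]]
```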

So far, this is standard practice. Now we seek to normalize the diagonal entries of \( U \) to 1. We do this by factoring out a diagonal matrix \( D \):

\[ A = LDU' = \begin{bmatrix} 1 & 0 & 0 \\ \frac{1}{2} & 1 & 0 \\ 0 & 3 & 1 \end{bmatrix} \begin{bmatrix} 4 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -8 \end{bmatrix} \begin{bmatrix} 1 & \frac{1}{2} & 0 \\ 0 & 1 & 3 \\ 0 & 0 & 1 \end{bmatrix} \]

Notice that \( U' = L^T \). Why is this no surprise? The entries \( \bigl( \begin{smallmatrix} 1 \\ 1/2 \\ 0 \end{smallmatrix} \bigr) \) in the first column of \( L \) are the \( \bigl( \begin{smallmatrix}4 \\ 2 \\ 0 \end{smallmatrix} \bigr) \) entries of \( A \) divided by the first pivot. But the act of separating \( D \) out of \( U \) applies the exact same scaling to the first row of \( U \), which is also the first row of \( A \) and, by symmetry, its first column. This argument extends sequentially to the whole matrix as it undergoes elimination and then decomposition with \( D \).
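This identity can be confirmed numerically: dividing each row of \( U \) by its pivot (i.e. multiplying by \( D^{-1} \)) reproduces \( L^T \) exactly:

```python
import numpy as np

# L, D, and U from the worked example above
L = np.array([[1., 0., 0.],
              [0.5, 1., 0.],
              [0., 3., 1.]])
D = np.diag([4., 1., -8.])
U = np.array([[4., 2., 0.],
              [0., 1., 3.],
              [0., 0., -8.]])

Uprime = np.linalg.inv(D) @ U            # divide each row of U by its pivot
print(np.allclose(Uprime, L.T))          # True: U' = L^T
print(np.allclose(L @ D @ L.T, L @ U))   # True: L D L^T = L U = A
```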