

Spectral theorem

Every real symmetric matrix \( S \) can be expressed as

\[ S = Q \Lambda Q^T \]

where \( Q \) is an orthogonal matrix whose columns are eigenvectors of \( S \), and \( \Lambda \) is a diagonal matrix holding the corresponding (real) eigenvalues.
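
For a concrete example, take \( S = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix} \). Its eigenpairs are \( \lambda_1 = 3 \) with \( v_1 = \tfrac{1}{\sqrt{2}}(1, 1)^T \) and \( \lambda_2 = 1 \) with \( v_2 = \tfrac{1}{\sqrt{2}}(1, -1)^T \), giving

\[ S = \underbrace{\tfrac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}}_{Q} \underbrace{\begin{pmatrix} 3 & 0 \\ 0 & 1 \end{pmatrix}}_{\Lambda} \underbrace{\tfrac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}}_{Q^T} \]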

There are two arguments on the reverse that motivate this statement.


Neither of the arguments below covers the existence of the eigenvectors.

Motivation 1: eigenvectors are in the column space and row space

Consider two eigenvector-value pairs \( (v_1, \lambda_1) \) and \( (v_2, \lambda_2) \).

Firstly, observe that for an eigenvector-value pair \( (v_1, \lambda_1) \), the defining statement:

\[ Sv_1 = \lambda_1 v_1 \]

also gives us the property: \[ (S - \alpha I)v_1 = (\lambda_1 - \alpha)v_1 \]
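
This follows by expanding the product:

\[ (S - \alpha I)v_1 = Sv_1 - \alpha v_1 = \lambda_1 v_1 - \alpha v_1 = (\lambda_1 - \alpha)v_1 \]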

This implies that \( v_1 \) is in the column space of \( S - \alpha I \) as long as \( \lambda_1 \neq \alpha \) (it is the image of \( \tfrac{1}{\lambda_1 - \alpha} v_1 \) under \( S - \alpha I \)). When \( \lambda_1 = \alpha \), the eigenvector is instead in the null space of \( S - \alpha I \); being in the null space means it is orthogonal to every row of \( S - \alpha I \).

Now set \( \alpha = \lambda_1 \): \( v_1 \) is in the null space of \( S - \lambda_1 I \), while \( v_2 \) is in its column space (provided \( \lambda_2 \neq \lambda_1 \)). Because \( S - \lambda_1 I \) is symmetric, its column space equals its row space, so \( v_2 \) is also in the row space of \( S - \lambda_1 I \). The null space is orthogonal to the row space, so we conclude that \( v_1 \) is orthogonal to \( v_2 \).

This argument extends to the remaining eigenvectors, so we can conclude that a symmetric matrix can be diagonalized by a set of mutually orthogonal eigenvectors.
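
As a quick check with the \( 2 \times 2 \) example above: \( v_1^T v_2 = \tfrac{1}{2}\big(1 \cdot 1 + 1 \cdot (-1)\big) = 0 \), so the two eigenvectors are indeed orthogonal.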

Motivation 2: SVD of a symmetric matrix

If we assume that a symmetric matrix \( S \) can be decomposed via the SVD as:

\[ S = U \Sigma V^T \]

where \( U \) and \( V \) are orthogonal matrices with the same dimensions as \( S \), and \( \Sigma \) is a diagonal matrix of non-negative singular values.

Then, taking the transpose (and using \( \Sigma^T = \Sigma \)):

\[ S^T = V \Sigma U^T \]

Since \( S \) is symmetric, \( S = S^T \), so equating the two expressions gives:

\[ U \Sigma V^T = V \Sigma U^T \]

TODO: how to justify this next jump?

This equality suggests that \( U = V \), in which case the SVD is in fact an eigendecomposition:

\[ S = U \Sigma U^T \]
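
One way to justify the jump, as a sketch: since \( S \) is symmetric, \( S^T S = S S^T = S^2 \), so \( V \Sigma^2 V^T = U \Sigma^2 U^T \), and \( U \) and \( V \) diagonalize the same matrix. If the singular values are distinct, each column of \( U \) must equal the matching column of \( V \) up to a sign, and \( S v_i = \sigma_i u_i \) then shows that each \( v_i \) is an eigenvector of \( S \) with eigenvalue \( \pm\sigma_i \). So \( U = V \) can be taken exactly when every eigenvalue of \( S \) is non-negative; otherwise the two matrices agree only up to sign flips of the columns corresponding to negative eigenvalues.

For the \( 2 \times 2 \) example above, \( S \) is positive definite, so the SVD and the eigendecomposition coincide: \( U = V = Q \) and \( \Sigma = \Lambda = \operatorname{diag}(3, 1) \).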