
Product of eigenvalues and sum of eigenvalues

Let \( A \) be an \( N \times N \) matrix. Then the following two statements are true.

  1. The product of the \( N \) eigenvalues equals the determinant of \( A \). The product includes repeated eigenvalues.
  2. The sum of the \( N \) eigenvalues equals the trace of \( A \).
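
As a quick numerical sanity check of both statements (a sketch using NumPy; the matrix below is arbitrary):

```python
import numpy as np

# An arbitrary matrix chosen for illustration.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

eigenvalues = np.linalg.eigvals(A)

# Product of the eigenvalues (with repeats) equals det(A).
assert np.isclose(np.prod(eigenvalues), np.linalg.det(A))

# Sum of the eigenvalues equals trace(A).
assert np.isclose(np.sum(eigenvalues), np.trace(A))
```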


Product of eigenvalues equals the determinant.

Intuition first, then the angle of attack for a more formal justification.

Intuition

Consider the 2D case. The column space of \( A \) is spanned by the two independent eigenvectors (assuming \( A \) has two of them). Consider the parallelogram formed by these two vectors. Applying \( A \) to the space stretches areas by a factor of \( \det(A) \), so the parallelogram grows in area by a factor of \( \det(A) \). The two sides defining this parallelogram don't rotate or skew under the transformation \( A \), as they are eigenvectors; they only scale. So the factor by which the parallelogram grows must decompose into the two scaling factors by which each eigenvector grows: \( \det(A) = \lambda_1 \lambda_2 \).
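
This intuition can be checked numerically. Below is a sketch, assuming a real matrix with two independent real eigenvectors (a symmetric matrix guarantees this):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 4.0]])  # symmetric, so real eigenvectors exist

eigenvalues, V = np.linalg.eig(A)  # columns of V are the eigenvectors

# Area of the parallelogram spanned by the eigenvectors, before and after A.
area_before = abs(np.linalg.det(V))
area_after = abs(np.linalg.det(A @ V))

# The area grows by |det(A)|, which is also the product of the eigenvalues
# (up to sign).
assert np.isclose(area_after / area_before, abs(np.linalg.det(A)))
assert np.isclose(np.linalg.det(A), np.prod(eigenvalues))
```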

Proof idea

A more formal justification: express \( \det(A - \lambda I) \) in terms of its \( N \) polynomial roots (always possible over the complex numbers), then set \( \lambda = 0 \), leaving \( \det(A) = \lambda_1 \lambda_2 \cdots \lambda_N \).

\[ \begin{array}{rcl} \det (A-\lambda I)=p(\lambda)&=&(-1)^N (\lambda - \lambda_1 )(\lambda - \lambda_2)\cdots (\lambda - \lambda_N) \\ &=&(-1) (\lambda - \lambda_1 )(-1)(\lambda - \lambda_2)\cdots (-1)(\lambda - \lambda_N) \\ &=&(\lambda_1 - \lambda )(\lambda_2 - \lambda)\cdots (\lambda_N - \lambda) \end{array} \]
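
For a concrete matrix, SymPy can confirm the factored form and the effect of setting \( \lambda = 0 \) (a sketch; the matrix is the triangular \( A \) used again in the examples below):

```python
import sympy as sp

lam = sp.symbols('lambda')
A = sp.Matrix([[1, 7],
               [0, 2]])

# Characteristic polynomial det(A - lambda*I).
p = (A - lam * sp.eye(2)).det()
print(sp.factor(p))  # (lambda - 1)*(lambda - 2)

# Setting lambda = 0 recovers det(A), the product of the eigenvalues.
assert p.subs(lam, 0) == A.det()
```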

The trace equals the sum of eigenvalues.

For the 2D case, this can be shown easily through the quadratic formula.

Let:

\[ A = \begin{bmatrix} a & b \\ c & d \end{bmatrix} \]

and then observe that:

\[ \begin{align*} \det(A - \lambda I) = 0 &\implies (a - \lambda)(d - \lambda) - bc = 0 \\ &\implies \lambda^2 - \lambda(a + d) + ad - bc = 0 \\ &\implies \left(\lambda - \frac{a+d}{2}\right)^2 + ad - bc - \left(\frac{a+d}{2}\right)^2 = 0 \\ &\implies \lambda = \frac{a+d}{2} \pm \sqrt{bc - ad + \left(\frac{a+d}{2}\right)^2}. \end{align*} \]

The square-root terms cancel when the two roots are added, so the sum of the two eigenvalues is the trace, \( a + d \).
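
The cancellation can be verified symbolically (a sketch using SymPy):

```python
import sympy as sp

a, b, c, d, lam = sp.symbols('a b c d lambda')
A = sp.Matrix([[a, b],
               [c, d]])

# Solve the characteristic equation for the two eigenvalues.
roots = sp.solve((A - lam * sp.eye(2)).det(), lam)

# The square roots cancel in the sum: the sum of the roots is the trace.
assert sp.simplify(roots[0] + roots[1] - (a + d)) == 0
```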

Some easy examples of the trace = sum of eigenvalues proposition.

A few forms of 2D matrices make it easy to see that the trace is the sum of the eigenvalues. Some examples follow.

General observation

An observation: adding or subtracting a multiple of \( \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \) only affects the diagonal \( \begin{bmatrix}x & . \\ . & y\end{bmatrix} \). Finding a \( c \) that makes \( A - cI \) singular finds an eigenvalue \( c \).
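
This shift behaviour is easy to verify numerically (a sketch; the matrix and shift are arbitrary):

```python
import numpy as np

A = np.array([[1.0, 7.0],
              [0.0, 2.0]])
c = 3.0

# Subtracting c*I shifts every eigenvalue down by c.
shifted = np.linalg.eigvals(A - c * np.eye(2))
original = np.linalg.eigvals(A)
assert np.allclose(np.sort(shifted), np.sort(original) - c)
```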

A zero entry

When there is a zero entry, there are two ways to make the matrix singular by subtracting a multiple of \( I \):

  1. the "axis" column vector (the one with only 1 non-zero value) falls to the origin, \( [0, 0] \);
  2. the other column vector falls onto the axis of the "axis" column vector.
\[ \begin{align*} A &= \begin{bmatrix} 1 & 7 \\ 0 & 2 \\ \end{bmatrix} \\ & = \begin{bmatrix} 0 & 7 \\ 0 & 1 \\ \end{bmatrix} + \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix} && \text{(first eigenvalue)} \\ &= \begin{bmatrix} -1 & 7 \\ 0 & 0 \\ \end{bmatrix} + \begin{bmatrix} 2 & 0 \\ 0 & 2 \\ \end{bmatrix} && \text{(second eigenvalue)} \end{align*} \]
The same idea works for \( B \):
\[ B = \begin{bmatrix} 3 & 0 \\ 1 & 1 \\ \end{bmatrix} \]
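
Both \( A \) and \( B \) are triangular, so their eigenvalues are exactly their diagonal entries; a quick NumPy check (a sketch):

```python
import numpy as np

for M in (np.array([[1.0, 7.0],
                    [0.0, 2.0]]),   # A, upper triangular
          np.array([[3.0, 0.0],
                    [1.0, 1.0]])):  # B, lower triangular
    # For a triangular matrix, the eigenvalues are the diagonal entries,
    # so their sum is plainly the trace.
    assert np.allclose(np.sort(np.linalg.eigvals(M)), np.sort(np.diag(M)))
    assert np.isclose(np.sum(np.linalg.eigvals(M)), np.trace(M))
```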

Symmetric matrices with equal diagonal

Symmetric matrices with the additional property that all diagonal entries are equal have easily found eigenvalues, as adding or subtracting a multiple of \( I \) acts the same way on all columns.

\[ \begin{align*} C &= \begin{bmatrix} 4 & 1 \\ 1 & 4 \\ \end{bmatrix} \\ &= \begin{bmatrix} 1 & 1 \\ 1 & 1 \\ \end{bmatrix} + \begin{bmatrix} 3 & 0 \\ 0 & 3 \\ \end{bmatrix} && \text{(first eigenvalue: } 3 \text{)} \\ &= \begin{bmatrix} -1 & 1 \\ 1 & -1 \\ \end{bmatrix} + \begin{bmatrix} 5 & 0 \\ 0 & 5 \\ \end{bmatrix} && \text{(second eigenvalue: } 5 \text{)} \end{align*} \]
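
A quick numerical check of the decomposition above (a sketch using NumPy):

```python
import numpy as np

C = np.array([[4.0, 1.0],
              [1.0, 4.0]])

# For a symmetric 2x2 matrix with equal diagonal d and off-diagonal b,
# the eigenvalues are d - b and d + b: here 3 and 5.
assert np.allclose(np.sort(np.linalg.eigvals(C)), [3.0, 5.0])
assert np.isclose(np.trace(C), 3.0 + 5.0)
```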