From the eigen-equation $Ax=\lambda x$, we have $(A-\lambda I)x=0$ for some $x\neq 0$. That is, the matrix $A-\lambda I$ must be singular and $\det(A-\lambda I)=0$.
The degree-$n$ polynomial $p_A(\lambda)=\det(\lambda I-A)$ is called the characteristic polynomial. The eigenvalues are the roots of $p_A(\lambda)$.
Example: For $A=\begin{pmatrix}2 & 1\\ 1 & 2\end{pmatrix}$, the characteristic polynomial is $p_A(\lambda)=\det\begin{pmatrix}\lambda-2 & -1\\ -1 & \lambda-2\end{pmatrix}=\lambda^2-4\lambda+3=(\lambda-1)(\lambda-3)$. Therefore $A$'s eigenvalues are $\lambda_1=1$ and $\lambda_2=3$. Solving the linear equations $\begin{pmatrix}\lambda-2 & -1\\ -1 & \lambda-2\end{pmatrix}x=0$ at each eigenvalue gives the corresponding eigenvectors $x_1=\begin{pmatrix}1\\ -1\end{pmatrix}$ and $x_2=\begin{pmatrix}1\\ 1\end{pmatrix}$. We observe that (1) $\mathrm{tr}(A)=\lambda_1+\lambda_2$, (2) $\det(A)=\lambda_1\lambda_2$, and (3) the two eigenvectors are orthogonal to each other.
```julia
using LinearAlgebra

A = [2.0 1.0; 1.0 2.0]    # the symmetric matrix from the example above
eigen(A)                  # eigenvalues 1 and 3; note the orthogonal eigenvectors

Q = [0.0 -1.0; 1.0 0.0]   # a 90° rotation: no real eigenvectors
eigen(Q)                  # the eigenvalues are complex (±i)
```
If $Ax=\lambda x$, then $(BAB^{-1})(Bx)=BAx=\lambda(Bx)$. That is, $Bx$ is an eigenvector of the matrix $BAB^{-1}$ with the same eigenvalue $\lambda$.
We say the matrix $BAB^{-1}$ is similar to $A$ (for any invertible $B$); as shown above, similar matrices share the same eigenvalues.
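A quick numerical check of both claims (a minimal sketch; the invertible matrix $B$ below is an arbitrary choice):

```julia
using LinearAlgebra

A = [2.0 1.0; 1.0 2.0]
B = [1.0 2.0; 0.0 1.0]                # any invertible matrix
C = B * A * inv(B)                    # C is similar to A

sort(eigvals(C)) ≈ sort(eigvals(A))   # true: similar matrices share eigenvalues

λ, X = eigen(A)
x = X[:, 1]                           # eigenvector of A for λ[1]
C * (B * x) ≈ λ[1] * (B * x)          # true: Bx is an eigenvector of BAB⁻¹
```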
Geometric multiplicity (GM) of an eigenvalue $\lambda$: the number of linearly independent eigenvectors for $\lambda$, i.e., the dimension of the null space $N(A-\lambda I)$.
Algebraic multiplicity (AM) of an eigenvalue $\lambda$: the number of times $\lambda$ is repeated as a root of the characteristic polynomial $\det(\lambda I-A)$.
Always GM ≤ AM.
The shortage of eigenvectors when GM < AM means that $A$ is not diagonalizable: there is no invertible matrix $X$ such that $A=X\Lambda X^{-1}$ with $\Lambda$ diagonal.
Classical example: $A=\begin{pmatrix}0 & 1\\ 0 & 0\end{pmatrix}$ has AM = 2 and GM = 1. The eigenvalue 0 is repeated twice, but there is only one independent eigenvector, $(1,0)'$.
More examples: all three matrices $\begin{pmatrix}5 & 1\\ 0 & 5\end{pmatrix}$, $\begin{pmatrix}6 & -1\\ 1 & 4\end{pmatrix}$, $\begin{pmatrix}7 & 2\\ -2 & 3\end{pmatrix}$ have AM = 2 and GM = 1.
```julia
# each has a repeated eigenvalue (AM = 2) but only one independent eigenvector (GM = 1)
eigen([0 1; 0 0])
eigen([5 1; 0 5])
eigen([6 -1; 1 4])
eigen([7 2; -2 3])
```
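The geometric multiplicity can also be computed directly as the dimension of the null space; a minimal sketch for the classical example:

```julia
using LinearAlgebra

A = [0.0 1.0; 0.0 0.0]
GM = size(nullspace(A - 0.0 * I), 2)   # dim N(A - 0I) = 1
# AM = 2 (the root 0 of λ² is repeated twice), so GM < AM: A is not diagonalizable
```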
Multiplying both sides of the eigen-equation $Ax=\lambda x$ by $A$ gives $A^2x=\lambda Ax=\lambda^2x$, showing that $\lambda^2$ is an eigenvalue of $A^2$ with eigenvector $x$.
Similarly, $\lambda^k$ is an eigenvalue of $A^k$ with the same eigenvector $x$.
For a diagonalizable matrix $A=X\Lambda X^{-1}$, we have $A^k=X\Lambda^kX^{-1}$.
Shifting $A$ by $sI$ shifts every eigenvalue by $s$: $(A+sI)x=Ax+sx=\lambda x+sx=(\lambda+s)x$.
A is singular if and only if it has at least one 0 eigenvalue.
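These facts about powers, shifts, and singularity are easy to verify numerically (a minimal sketch reusing the symmetric example; the exponent $k=3$ and shift $s=10$ are arbitrary):

```julia
using LinearAlgebra

A = [2.0 1.0; 1.0 2.0]                            # eigenvalues 1 and 3

sort(eigvals(A^3)) ≈ sort(eigvals(A)) .^ 3        # true: λᵏ is an eigenvalue of Aᵏ
sort(eigvals(A + 10I)) ≈ sort(eigvals(A)) .+ 10   # true: shifting by sI shifts λ by s

λ, X = eigen(A)
A^3 ≈ X * Diagonal(λ .^ 3) * inv(X)               # true: Aᵏ = XΛᵏX⁻¹

det(A) ≈ prod(eigvals(A))                         # 3 ≠ 0, so A is nonsingular
```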
Eigenvectors associated with distinct eigenvalues are linearly independent.
Proof: Let $Ax_1=\lambda_1x_1$, $Ax_2=\lambda_2x_2$, and $\lambda_1\neq\lambda_2$. Suppose $x_1$ and $x_2$ are linearly dependent. Then there is $\alpha\neq 0$ such that $x_2=\alpha x_1$. Hence $\alpha\lambda_1x_1=\alpha Ax_1=Ax_2=\lambda_2x_2=\alpha\lambda_2x_1$, so $\alpha(\lambda_1-\lambda_2)x_1=0$. Since $\alpha\neq 0$ and $\lambda_1\neq\lambda_2$, it follows that $x_1=0$, a contradiction.
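Consequently, a matrix with $n$ distinct eigenvalues has $n$ independent eigenvectors and is diagonalizable; a quick check (the matrix below is an arbitrary example with distinct eigenvalues 2 and 5):

```julia
using LinearAlgebra

A = [4.0 1.0; 2.0 3.0]           # eigenvalues 2 and 5
λ, X = eigen(A)
rank(X) == 2                     # true: the eigenvectors are linearly independent
A ≈ X * Diagonal(λ) * inv(X)     # true: A is diagonalizable
```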
The eigenvalues of a triangular matrix are its diagonal entries.
Proof: $p_A(\lambda)=(\lambda-a_{11})\cdots(\lambda-a_{nn})$, since $\lambda I-A$ is triangular and the determinant of a triangular matrix is the product of its diagonal entries.
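A quick check with an arbitrary upper-triangular matrix:

```julia
using LinearAlgebra

U = [5.0 2.0 7.0; 0.0 3.0 1.0; 0.0 0.0 9.0]   # upper triangular
sort(eigvals(U)) ≈ sort(diag(U))              # true: eigenvalues are the diagonal entries
```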
Eigenvalues of an idempotent matrix ($A^2=A$) are either 0 or 1.
Proof: $\lambda x=Ax=A^2x=\lambda^2x$. Hence $\lambda=\lambda^2$, so $\lambda\in\{0,1\}$.
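Projection matrices are the canonical idempotent example; a minimal sketch projecting onto the column space of an arbitrary full-column-rank $X$:

```julia
using LinearAlgebra

X = [1.0 0.0; 1.0 1.0; 1.0 2.0]     # arbitrary matrix with independent columns
P = X * inv(X' * X) * X'            # projection onto C(X)
P^2 ≈ P                             # true: P is idempotent
sort(eigvals(Symmetric(P)))         # ≈ [0.0, 1.0, 1.0]
```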
Eigenvalues of an orthogonal matrix have complex modulus 1.
Proof: Since $A'A=I$, $x^*x=x^*A'Ax=(Ax)^*(Ax)=(\lambda x)^*(\lambda x)=\bar\lambda\lambda\,x^*x$. Since $x^*x\neq 0$, we have $\bar\lambda\lambda=|\lambda|^2=1$, i.e., $|\lambda|=1$.
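Beyond the rotation matrix $Q$ above, we can check this on a random orthogonal matrix obtained from a QR factorization (a minimal sketch):

```julia
using LinearAlgebra

Q, _ = qr(randn(4, 4))      # the Q factor of a QR factorization is orthogonal
Q = Matrix(Q)
abs.(eigvals(Q))            # all moduli ≈ 1.0 (the eigenvalues are generally complex)
```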
Let $A\in\mathbb{R}^{n\times n}$ (not required to be diagonalizable); then $\mathrm{tr}(A)=\sum_i\lambda_i$ and $\det(A)=\prod_i\lambda_i$. The general version can be proved by the Jordan canonical form, a generalization of the eigen-decomposition.
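A quick numerical check on a random (generally nonsymmetric) matrix:

```julia
using LinearAlgebra

A = randn(5, 5)
tr(A) ≈ sum(eigvals(A))     # true up to rounding, even with complex eigenvalues
det(A) ≈ prod(eigvals(A))   # true up to rounding
```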
For a symmetric matrix $A\in\mathbb{R}^{n\times n}$,
1. all eigenvalues are real;
2. the eigenvectors corresponding to distinct eigenvalues are orthogonal to each other.
Proof of 1: Pre-multiplying the eigen-equation $Ax=\lambda x$ by $x^*$ (conjugate transpose) gives $x^*Ax=\lambda x^*x$. Now both $x^*x=x_1^*x_1+\cdots+x_n^*x_n$ and $x^*Ax=\sum_{i,j}a_{ij}x_i^*x_j=\sum_i a_{ii}x_i^*x_i+\sum_{i<j}a_{ij}(x_i^*x_j+x_ix_j^*)=a_{11}x_1^*x_1+a_{12}(x_1^*x_2+x_1x_2^*)+\cdots$ are real numbers: symmetry ($a_{ij}=a_{ji}$) pairs the off-diagonal terms, each $x_i^*x_i=|x_i|^2$ is real, and each $x_i^*x_j+x_ix_j^*=2\,\mathrm{Re}(x_i^*x_j)$ is real. Therefore $\lambda$ is a real number.
Proof of 2: Suppose $Ax_1=\lambda_1x_1$, $Ax_2=\lambda_2x_2$, and $\lambda_1\neq\lambda_2$. Then $(A-\lambda_2I)x_1=(\lambda_1-\lambda_2)x_1$ and $(A-\lambda_2I)x_2=0$. Since $\lambda_1-\lambda_2\neq 0$, we have $x_1\in C(A-\lambda_2I)$, while $x_2\in N(A-\lambda_2I)$. Because $A-\lambda_2I$ is symmetric, its column space equals its row space, so by the fundamental theorem of linear algebra $x_1\perp x_2$.
For an eigenvalue with multiplicity, we can choose its eigenvectors to be orthogonal to each other. We also normalize each eigenvector to have unit $\ell_2$ norm. Thus we obtain the extremely useful spectral decomposition
$$A=Q\Lambda Q'=\begin{pmatrix}\vert & & \vert\\ q_1 & \cdots & q_n\\ \vert & & \vert\end{pmatrix}\begin{pmatrix}\lambda_1 & & \\ & \ddots & \\ & & \lambda_n\end{pmatrix}\begin{pmatrix}-\,q_1'\,-\\ \vdots\\ -\,q_n'\,-\end{pmatrix}=\sum_{i=1}^n\lambda_iq_iq_i',$$
where $Q$ is orthogonal (its columns are the eigenvectors) and $\Lambda=\mathrm{diag}(\lambda_1,\ldots,\lambda_n)$ (its diagonal entries are the eigenvalues).
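A numerical verification of the spectral decomposition (a minimal sketch with a random symmetric matrix):

```julia
using LinearAlgebra

S = randn(4, 4)
A = Symmetric(S + S')                 # random symmetric matrix
λ, Q = eigen(A)                       # real eigenvalues, orthonormal eigenvectors

Q' * Q ≈ I                            # true: Q is orthogonal
A ≈ Q * Diagonal(λ) * Q'              # true: A = QΛQ'
A ≈ sum(λ[i] * Q[:, i] * Q[:, i]' for i in 1:4)   # true: A = Σᵢ λᵢ qᵢqᵢ'
```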