4.2: Properties of Eigenvalues and Eigenvectors

The Eigenvalues of \(A\) and the Determinant

Again, the eigenvalues of \(A\) are \(-6\) and \(12\), and the determinant of \(A\) is \(-72\). The eigenvalues of \(B\) are \(-1\), \(2\) and \(3\); the determinant of \(B\) is \(-6\). It seems as though the product of the eigenvalues is the determinant.

This is indeed true; we defend this with our argument from above. We know that the determinant of a triangular matrix is the product of its diagonal elements. Therefore, given a matrix \(A\), we can find \(P\) such that \(P^{-1}AP\) is upper triangular with the eigenvalues of \(A\) on the diagonal. Thus \(\text{det}(P^{-1}AP)\) is the product of the eigenvalues. Using Theorem 3.4.3, which lets us split the determinant across a product, we know that \(\text{det}(P^{-1}AP)=\text{det}(P^{-1})\,\text{det}(A)\,\text{det}(P)=\text{det}(P^{-1}P)\,\text{det}(A)=\text{det}(A)\). Thus the determinant of \(A\) is the product of the eigenvalues.
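As a quick numerical sanity check of this claim, one can compare the product of the eigenvalues with the determinant directly. Below is a minimal sketch using NumPy (not part of the original text); the matrix is the one from Example 4.1.1, which is revisited later in this section.

```python
import numpy as np

# The matrix from Example 4.1.1.
A = np.array([[1.0, 4.0],
              [2.0, 3.0]])

eigenvalues = np.linalg.eigvals(A)   # -1 and 5 (order may vary)

# The product of the eigenvalues should equal det(A) = -5.
print(np.prod(eigenvalues))          # -5.0
print(np.linalg.det(A))              # approximately -5.0 (floating point)
```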

We summarize the results of our example with the following theorem.

Theorem \(\PageIndex{1}\)

Properties of Eigenvalues and Eigenvectors

Let \(A\) be an \(n\times n\) invertible matrix. The following are true:

1. If \(A\) is triangular, then the diagonal elements of \(A\) are the eigenvalues of \(A\).
2. If \(\lambda\) is an eigenvalue of \(A\) with eigenvector \(\vec{x}\), then \(\frac{1}{\lambda}\) is an eigenvalue of \(A^{-1}\) with eigenvector \(\vec{x}\).
3. If \(\lambda\) is an eigenvalue of \(A\), then \(\lambda\) is an eigenvalue of \(A^{T}\).
4. The sum of the eigenvalues of \(A\) is equal to \(\text{tr}(A)\), the trace of \(A\).
5. The product of the eigenvalues of \(A\) is equal to \(\text{det}(A)\), the determinant of \(A\).
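Each of these properties is easy to check numerically. Below is a minimal sketch using NumPy (not part of the original text); the test matrix is the one from Example 4.1.1, chosen purely for convenience, and the check covers properties 2 through 4.

```python
import numpy as np

# An invertible test matrix (the matrix from Example 4.1.1).
A = np.array([[1.0, 4.0],
              [2.0, 3.0]])

evals = np.linalg.eigvals(A)   # eigenvalues -1 and 5 (order may vary)

# Property 4: the sum of the eigenvalues equals the trace.
print(np.isclose(evals.sum(), np.trace(A)))                  # True

# Property 2: the eigenvalues of A^{-1} are the reciprocals of those of A.
inv_evals = np.linalg.eigvals(np.linalg.inv(A))
print(np.allclose(np.sort(inv_evals), np.sort(1 / evals)))   # True

# Property 3: A and A^T share the same eigenvalues.
print(np.allclose(np.sort(evals), np.sort(np.linalg.eigvals(A.T))))  # True
```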

There is one more concept concerning eigenvalues and eigenvectors that we will explore. We do so in the context of an example.

Example \(\PageIndex{3}\)

Find the eigenvalues and eigenvectors of the matrix \(A=\left[\begin{array}{cc}{1}&{2}\\{1}&{2}\end{array}\right]\).

Solution

To find the eigenvalues, we compute \(\text{det}(A-\lambda I)\):

\[\begin{align}\begin{aligned}\text{det}(A-\lambda I)&=\left|\begin{array}{cc}{1-\lambda}&{2}\\{1}&{2-\lambda}\end{array}\right| \\ &=(1-\lambda)(2-\lambda)-2 \\ &=\lambda^{2}-3\lambda \\ &=\lambda (\lambda -3)\end{aligned}\end{align} \nonumber \]

Our eigenvalues are therefore \(\lambda = 0, 3\).

For \(\lambda = 0\), we find the eigenvectors:

\[\left[\begin{array}{ccc}{1}&{2}&{0}\\{1}&{2}&{0}\end{array}\right]\quad\overrightarrow{\text{rref}}\quad\left[\begin{array}{ccc}{1}&{2}&{0}\\{0}&{0}&{0}\end{array}\right] \nonumber \]

This shows that \(x_1 = -2x_2\), and so our eigenvectors \(\vec{x}\) are

\[\vec{x}=x_{2}\left[\begin{array}{c}{-2}\\{1}\end{array}\right]. \nonumber \]

For \(\lambda = 3\), we find the eigenvectors:

\[\left[\begin{array}{ccc}{-2}&{2}&{0}\\{1}&{-1}&{0}\end{array}\right]\quad\overrightarrow{\text{rref}}\quad\left[\begin{array}{ccc}{1}&{-1}&{0}\\{0}&{0}&{0}\end{array}\right] \nonumber \]

This shows that \(x_1 = x_2\), and so our eigenvectors \(\vec{x}\) are

\[\vec{x}=x_{2}\left[\begin{array}{c}{1}\\{1}\end{array}\right]. \nonumber \]
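As a cross-check of this example, a computer gives the same eigenvalues and eigenvectors, up to the scaling of the eigenvectors. Here is a minimal NumPy sketch (not part of the original text):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [1.0, 2.0]])

evals, evecs = np.linalg.eig(A)
print(evals)    # 3 and 0 (order may vary)

# The columns of evecs are unit-length eigenvectors; each is a scalar
# multiple of the vectors found above: (1, 1) for lambda = 3 and
# (-2, 1) for lambda = 0.
print(evecs)
```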

One interesting thing about the above example is that \(0\) is an eigenvalue of \(A\); we have not officially encountered this before. Does this mean anything significant?\(^{3}\)

Think about what an eigenvalue of \(0\) means: there exists a nonzero vector \(\vec{x}\) where \(A\vec{x}=0\vec{x}=\vec{0}\). That is, we have a nontrivial solution to \(A\vec{x}=\vec{0}\). We know this only happens when \(A\) is not invertible.

So if \(A\) is invertible, there is no nontrivial solution to \(A\vec{x}=\vec{0}\), and hence \(0\) is not an eigenvalue of \(A\). If \(A\) is not invertible, then there is a nontrivial solution to \(A\vec{x}=\vec{0}\), and hence \(0\) is an eigenvalue of \(A\). This leads us to our final addition to the Invertible Matrix Theorem.

Theorem \(\PageIndex{2}\)

Invertible Matrix Theorem

Let \(A\) be an \(n\times n\) matrix. The following statements are equivalent.

1. \(A\) is invertible.
2. \(A\) does not have an eigenvalue of \(0\).

This section is about the properties of eigenvalues and eigenvectors. Of course, we have not investigated all of the numerous properties of eigenvalues and eigenvectors; we have just surveyed some of the most common (and most important) concepts. Here are four quick examples of the many things that remain to be explored.

First, recall the matrix

\[A=\left[\begin{array}{cc}{1}&{4}\\{2}&{3}\end{array}\right] \nonumber \]

that we used in Example 4.1.1. Its characteristic polynomial is \(p(\lambda)=\lambda^2-4\lambda-5\). Compute \(p(A)\); that is, compute \(A^2-4A-5I\). You should get something “interesting,” and you should wonder “does this always work?”\(^{4}\)
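If you would rather let a computer do the arithmetic, the following short NumPy sketch (not part of the original text) evaluates \(p(A)\) directly; run it and compare with your hand computation before reading the footnote.

```python
import numpy as np

A = np.array([[1.0, 4.0],
              [2.0, 3.0]])

# Evaluate the characteristic polynomial at the matrix itself:
# p(A) = A^2 - 4A - 5I.
p_of_A = A @ A - 4 * A - 5 * np.eye(2)
print(p_of_A)   # the "interesting" result
```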

Second, in all of our examples, we have considered matrices where eigenvalues “appeared only once.” Since we know that the eigenvalues of a triangular matrix appear on the diagonal, we know that the eigenvalues of

\[A=\left[\begin{array}{cc}{1}&{1}\\{0}&{1}\end{array}\right] \nonumber \]

are “1 and 1;” that is, the eigenvalue \(\lambda = 1\) appears twice. What does that mean when we consider the eigenvectors of \(\lambda = 1\)? Compare the result of this to the matrix

\[A=\left[\begin{array}{cc}{1}&{0}\\{0}&{1}\end{array}\right], \nonumber \]

which also has the eigenvalue \(\lambda =1\) appearing twice.\(^{5}\)
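A quick numerical comparison of the two matrices (a NumPy sketch, not part of the original text) makes the difference in their eigenvectors visible:

```python
import numpy as np

# A repeated eigenvalue with, as it turns out, only one independent
# eigenvector direction:
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
evals, evecs = np.linalg.eig(A)
print(evals)    # [1. 1.]
print(evecs)    # both columns are (numerically) multiples of (1, 0)

# The identity also has lambda = 1 twice, but here eig returns two
# independent eigenvectors:
evals, evecs = np.linalg.eig(np.eye(2))
print(evecs)    # the standard basis vectors (1, 0) and (0, 1)
```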

Third, consider the matrix

\[A=\left[\begin{array}{cc}{0}&{-1}\\{1}&{0}\end{array}\right]. \nonumber \]

What are the eigenvalues?\(^{6}\) We quickly compute the characteristic polynomial to be \(p(\lambda) = \lambda^2 + 1\). Therefore the eigenvalues are \(\pm \sqrt{-1} = \pm i\). What does this mean?
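Software handles this case without complaint; the eigenvalues simply come back as complex numbers. A small NumPy sketch (not part of the original text):

```python
import numpy as np

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

print(np.linalg.eigvals(A))   # [0.+1.j  0.-1.j], that is, +i and -i
```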

Finally, we have found the eigenvalues of matrices by finding the roots of the characteristic polynomial. We have limited our examples to quadratic and cubic polynomials; one would expect for larger matrices that a computer would be used to factor the characteristic polynomials. However, in general, this is not how the eigenvalues are found. Factoring high-order polynomials is too unreliable, even with a computer; round-off errors can cause unpredictable results. Also, to even compute the characteristic polynomial, one needs to compute the determinant, which is also expensive (as discussed in the previous chapter).

So how are eigenvalues found? There are iterative processes that can progressively transform a matrix \(A\) into another matrix that is almost an upper triangular matrix (the entries below the diagonal are almost zero) where the entries on the diagonal are the eigenvalues. The more iterations one performs, the better the approximation is.
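The best-known process of this kind is the QR algorithm (the text above does not name it; that identification is ours). The sketch below is a bare-bones, unshifted version in NumPy: an illustration of the idea, not production code, since practical implementations first reduce \(A\) to Hessenberg form and add shifts.

```python
import numpy as np

def qr_iteration(A, iterations=100):
    """Repeatedly factor A = QR and form the new iterate RQ.

    Each iterate R @ Q equals Q.T @ A @ Q, so it is similar to A and
    has the same eigenvalues. For many matrices the iterates approach
    an upper triangular matrix whose diagonal holds the eigenvalues.
    """
    A = np.array(A, dtype=float)
    for _ in range(iterations):
        Q, R = np.linalg.qr(A)
        A = R @ Q
    return A

A = np.array([[1.0, 4.0],
              [2.0, 3.0]])
T = qr_iteration(A)
print(np.diag(T))   # approximately [ 5. -1.], the eigenvalues of A
```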

These methods are so fast and reliable that some computer programs convert polynomial root finding problems into eigenvalue problems!
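For instance, to find the roots of \(p(x)=x^{2}-4x-5\), one can build the polynomial's companion matrix and compute its eigenvalues; NumPy's `numpy.roots` is documented to work exactly this way. A small sketch (not part of the original text):

```python
import numpy as np

# Companion matrix of the monic polynomial x^2 - 4x - 5; its
# characteristic polynomial is that same polynomial.
companion = np.array([[4.0, 5.0],
                      [1.0, 0.0]])

print(np.linalg.eigvals(companion))   # 5 and -1 (order may vary)
print(np.roots([1.0, -4.0, -5.0]))    # the same roots
```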

Most textbooks on Linear Algebra will provide direction on exploring the above topics and give further insight into what is going on. We have mentioned all the eigenvalue and eigenvector properties in this section for the same reasons we gave in the previous section. First, knowing these properties helps us solve numerous real world problems, and second, it is fascinating to see how rich and deep the theory of matrices is.
