@@ -816,12 +819,12 @@ plane, although some might be repeated.
Some nice facts about the eigenvalues of a square matrix $A$ are as follows:

-1. The determinant of $A$ equals the product of the eigenvalues.
-1. The trace of $A$ (the sum of the elements on the principal diagonal) equals the sum of the eigenvalues.
-1. If $A$ is symmetric, then all of its eigenvalues are real.
-1. If $A$ is invertible and $\lambda_1, \ldots, \lambda_n$ are its eigenvalues, then the eigenvalues of $A^{-1}$ are $1/\lambda_1, \ldots, 1/\lambda_n$.
+1. the determinant of $A$ equals the product of the eigenvalues
+2. the trace of $A$ (the sum of the elements on the principal diagonal) equals the sum of the eigenvalues
+3. if $A$ is symmetric, then all of its eigenvalues are real
+4. if $A$ is invertible and $\lambda_1, \ldots, \lambda_n$ are its eigenvalues, then the eigenvalues of $A^{-1}$ are $1/\lambda_1, \ldots, 1/\lambda_n$.

-A corollary of the first statement is that a matrix is invertible if and only if all its eigenvalues are nonzero.
+A corollary of the last statement is that a matrix is invertible if and only if all its eigenvalues are nonzero.
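These four facts are easy to check numerically. Below is a minimal sketch assuming only NumPy; the symmetric, invertible matrix `A` is an arbitrary example for illustration, not one taken from the lecture.

```python
import numpy as np

# An arbitrary symmetric, invertible matrix (illustration only)
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
eigvals = np.linalg.eigvals(A)

# 1. the determinant equals the product of the eigenvalues
print(np.isclose(np.linalg.det(A), np.prod(eigvals)))            # True

# 2. the trace equals the sum of the eigenvalues
print(np.isclose(np.trace(A), np.sum(eigvals)))                  # True

# 3. A is symmetric, so its eigenvalues are real
print(np.all(np.isreal(eigvals)))                                # True

# 4. the eigenvalues of A^{-1} are the reciprocals 1/lambda_i
inv_eigvals = np.linalg.eigvals(np.linalg.inv(A))
print(np.allclose(np.sort(inv_eigvals), np.sort(1 / eigvals)))   # True
```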
### Computation
@@ -866,7 +869,7 @@ many applications in economics.
### Scalar series

-Here's a fundamental result about series that you surely know:
+Here's a fundamental result about series:

If $a$ is a number and $|a| < 1$, then
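The hunk breaks off just before the formula, which is the geometric series identity $\sum_{k=0}^{\infty} a^k = \frac{1}{1-a}$. A quick numerical check of the scalar result, and of its matrix analogue, the Neumann Series Lemma referenced in the next hunk, follows; the matrix `B` is an arbitrary example with spectral radius below one, not from the lecture.

```python
import numpy as np

# Scalar case: partial sums of 1 + a + a^2 + ... approach 1/(1 - a)
a = 0.5
partial_sum = sum(a**k for k in range(100))
print(np.isclose(partial_sum, 1 / (1 - a)))           # True

# Matrix analogue (Neumann series): if the spectral radius of B is
# below one, then I + B + B^2 + ... converges to (I - B)^{-1}
B = np.array([[0.2, 0.1],
              [0.3, 0.4]])
S = sum(np.linalg.matrix_power(B, k) for k in range(100))
print(np.allclose(S, np.linalg.inv(np.eye(2) - B)))   # True
```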
@@ -971,7 +974,7 @@ result which illustrates the result of the Neumann Series Lemma.
```{exercise}
:label: eig1_ex1

-Power iteration is a method for finding the largest absolute eigenvalue of a diagonalizable matrix.
+Power iteration is a method for finding the greatest absolute eigenvalue of a diagonalizable matrix.

The method starts with a random vector $b_0$ and repeatedly applies the matrix $A$ to it
@@ -981,7 +984,7 @@ $$
A thorough discussion of the method can be found [here](https://pythonnumericalmethods.berkeley.edu/notebooks/chapter15.02-The-Power-Method.html).

-In this exercise, first implement the power iteration method and use it to find the largest eigenvalue and its corresponding eigenvector.
+In this exercise, first implement the power iteration method and use it to find the greatest absolute eigenvalue and its corresponding eigenvector.

Then visualize the convergence.
```
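The revised solution appears in the hunks below; for orientation, here is a minimal self-contained sketch of the method. The helper name `power_iteration`, the seed, and the matrix `A` are illustrative assumptions, not the lecture's code.

```python
import numpy as np

def power_iteration(A, num_iters=1000, seed=0):
    """Approximate the dominant eigenpair of A (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    b = rng.random(A.shape[1])
    for _ in range(num_iters):
        b = A @ b                    # apply the matrix
        b = b / np.linalg.norm(b)    # renormalize to prevent overflow
    # Rayleigh quotient: eigenvalue estimate for the current vector
    eigval = (b @ A @ b) / (b @ b)
    return eigval, b

A = np.array([[1.0, 2.0],
              [1.0, 1.0]])
eigval, b = power_iteration(A)
print(eigval)                   # ~2.414, i.e. 1 + sqrt(2)
print(np.linalg.eigvals(A))     # exact eigenvalues for comparison
```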
@@ -1014,7 +1017,7 @@ b = np.random.rand(A.shape[1])
# Get the leading eigenvector of matrix A
eigenvector = np.linalg.eig(A)[1][:, 0]

-norm_ls = []
+errors = []
res = []
# Power iteration loop
@@ -1025,24 +1028,25 @@ for i in range(num_iters):
    b = b / np.linalg.norm(b)
    # Append b to the list of eigenvector approximations
    res.append(b)
-    norm = np.linalg.norm(np.array(b)
+    err = np.linalg.norm(np.array(b)
                          - eigenvector)
-    norm_ls.append(norm)
+    errors.append(err)

-dominant_eigenvalue = np.dot(A @ b, b) / np.dot(b, b)
-print(f'The approximated dominant eigenvalue is {dominant_eigenvalue:.2f}')
+greatest_eigenvalue = np.dot(A @ b, b) / np.dot(b, b)
+print(f'The approximated greatest absolute eigenvalue is \
+{greatest_eigenvalue:.2f}')
print('The real eigenvalue is', np.linalg.eig(A)[0])
# Plot the eigenvector approximations for each iteration
-plt.figure(figsize=(10, 6))
+fig, ax = plt.subplots(figsize=(10, 6))
+ax.plot(errors)
plt.xlabel('iterations')
-plt.ylabel('Norm')
-_ = plt.plot(norm_ls)
+plt.ylabel('error')
```
+++ {"user_expressions": []}
-Then we can look at the trajectory of the eigenvector approximation
+Then we can look at the trajectory of the eigenvector approximation.
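A minimal sketch of one such trajectory plot, reusing the list `res` of normalized iterates built in the loop above; the component-wise view is an illustrative choice, not necessarily the lecture's figure.

```python
import numpy as np
import matplotlib.pyplot as plt

# Each row of `approximations` is one normalized iterate b
approximations = np.array(res)

fig, ax = plt.subplots(figsize=(10, 6))
for j in range(approximations.shape[1]):
    ax.plot(approximations[:, j], label=f'component {j}')
ax.set_xlabel('iterations')
ax.set_ylabel('component value')
ax.legend()
plt.show()
```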