lectures/greek_square.md
kernelspec:
  name: python3
---

+++ {"user_expressions": []}

# Computing Square Roots

## Introduction

This lecture can be viewed as a sequel to {doc}`eigen_I`.

It provides an example of how eigenvectors isolate *invariant subspaces* that help construct and analyze solutions of linear difference equations.

When the vector $x_t$ starts in an invariant subspace, iterating the difference equation keeps $x_{t+j}$ in that subspace for all $j \geq 1$.

Invariant subspace methods are used throughout applied economic dynamics, for example, in the lecture {doc}`money_inflation`.

Our approach here is to illustrate the method with an ancient example, one that ancient Greek mathematicians used to compute square roots of positive integers.
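To make the invariant-subspace idea concrete, here is a minimal sketch (not from the lecture; the matrix values are purely illustrative): a vector that starts in the span of an eigenvector of a transition matrix stays in that span under iteration of the difference equation $x_{t+1} = M x_t$.

```python
import numpy as np

# An illustrative 2x2 transition matrix with distinct real eigenvalues.
M = np.array([[2.0, -0.5],
              [1.0,  0.0]])
eigvals, V = np.linalg.eig(M)

# Start x_0 on the one-dimensional subspace spanned by the first eigenvector.
x = V[:, 0]
for _ in range(5):
    x = M @ x
    # x remains a scalar multiple of V[:, 0]: the 2x2 determinant of
    # [x, V[:, 0]] stays (numerically) zero at every iteration.
    assert abs(np.linalg.det(np.column_stack([x, V[:, 0]]))) < 1e-10

print("direction of x_5:", x / np.linalg.norm(x))
```

Because both eigenvalues of this illustrative matrix are positive, the direction of $x_t$ is exactly the starting eigenvector at every step; only its length changes, by a factor equal to the associated eigenvalue.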
There is one equation each for $t = 0, 1, 2, \ldots$.

We could follow an approach taken in the lecture on {doc}`present values<pv>` and stack all of these equations into a single matrix equation that we would then solve by using matrix inversion.

```{note}
In the present instance, the matrix equation would multiply a countably infinite dimensional square matrix by a countably infinite dimensional vector. With some qualifications, matrix multiplication and inversion tools apply to such an equation.
```
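A finite truncation conveys the flavor of the stacking approach. The sketch below (my own, not the lecture's code) assumes the second-order recursion $y_t = 2 y_{t-1} - (1 - \sigma) y_{t-2}$ that appears later in the lecture, with $\sigma = 2$ and hypothetical initial conditions, and solves a $T \times T$ stacked system by matrix inversion.

```python
import numpy as np

sigma = 2.0              # assumption: we seek sqrt(2)
y_m1, y_m2 = 2.0, 1.0    # hypothetical initial conditions y_{-1}, y_{-2}
T = 20                   # finite truncation of the infinite system

# Stack y_t - 2 y_{t-1} + (1 - sigma) y_{t-2} = 0 for t = 0, ..., T-1
# into A y = b; terms involving known initial conditions move to b.
A = np.eye(T)
b = np.zeros(T)
for t in range(T):
    if t - 1 >= 0:
        A[t, t - 1] = -2.0
    else:
        b[t] += 2.0 * y_m1
    if t - 2 >= 0:
        A[t, t - 2] = 1.0 - sigma
    else:
        b[t] -= (1.0 - sigma) * (y_m2 if t == 0 else y_m1)

y = np.linalg.solve(A, b)

# Ratios of successive terms converge to the dominant characteristic
# root 1 + sqrt(sigma), so subtracting 1 approximates sqrt(sigma).
print(y[-1] / y[-2] - 1.0)
```

For a banded lower-triangular system like this one, `np.linalg.solve` is effectively forward substitution, so the truncation is cheap even for large $T$.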
If we choose $(y_{-1}, y_{-2})$ to set $(\eta_1, \eta_2) = (1, 0)$, then $y_t = \delta_1^t$ for all $t \geq 0$.

If we choose $(y_{-1}, y_{-2})$ to set $(\eta_1, \eta_2) = (0, 1)$, then $y_t = \delta_2^t$ for all $t \geq 0$.

Soon we'll relate the preceding calculations to components of an eigendecomposition of a transition matrix that represents difference equation {eq}`eq:2diff1` in a very convenient way.

We'll turn to that after we describe how the ancient Greeks figured out how to compute square roots of positive integers that are not perfect squares.

## Algorithm of the Ancient Greeks
together with a pair of integers that are initial conditions for $y_{-1}, y_{-2}$.

First, we'll deploy some techniques for solving the difference equations that are also deployed in {doc}`dynam:samuelson`.
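Before turning to those techniques, here is a minimal sketch (assuming $\sigma = 2$ and a hypothetical pair of initial conditions) of the iteration itself: running the recursion $y_t = 2 y_{t-1} - (1 - \sigma) y_{t-2}$ forward and taking ratios of successive terms recovers $\sqrt{\sigma}$, because the ratios converge to the dominant characteristic root $1 + \sqrt{\sigma}$.

```python
import numpy as np

sigma = 2.0        # assumption: computing sqrt(2)
y = [1.0, 2.0]     # hypothetical initial conditions y_{-2}, y_{-1}

# Iterate y_t = 2 y_{t-1} - (1 - sigma) y_{t-2}.
for _ in range(25):
    y.append(2.0 * y[-1] - (1.0 - sigma) * y[-2])

# y_t / y_{t-1} -> 1 + sqrt(sigma), so subtract 1 to estimate the root.
print(y[-1] / y[-2] - 1.0)
```

Convergence is geometric at rate $|(1 - \sqrt{\sigma}) / (1 + \sqrt{\sigma})|$, so a couple of dozen iterations already give close to machine precision for $\sigma = 2$.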
$$
c(x) \equiv x^2 - 2 x + (1 - \sigma) = 0
$$ (eq:cha_eq0)

(Notice how this is an instance of equation {eq}`eq:2diff6` above.)

Next, we'll represent the preceding analysis by first vectorizing our second-order difference equation {eq}`eq:second_order` and then using eigendecompositions of an associated state transition matrix.
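As a quick numerical check of this vectorized representation (a sketch with $\sigma = 2$ assumed): writing the state as $x_t = (y_t, y_{t-1})'$ turns the recursion into $x_{t+1} = M x_t$, and the eigenvalues of $M$ are exactly the roots $1 \pm \sqrt{\sigma}$ of the characteristic polynomial {eq}`eq:cha_eq0`.

```python
import numpy as np

sigma = 2.0   # assumption: sigma = 2, as when computing sqrt(2)

# With state x_t = (y_t, y_{t-1})', the recursion is x_{t+1} = M x_t.
M = np.array([[2.0, -(1.0 - sigma)],
              [1.0,  0.0]])

eigvals, V = np.linalg.eig(M)
print(np.sort(eigvals))                      # eigenvalues of M
print(1 - np.sqrt(sigma), 1 + np.sqrt(sigma))  # roots of c(x)
```

The two printed pairs agree, confirming that the state transition matrix encodes the same dynamics as the scalar characteristic equation.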
## Vectorizing the difference equation

Let's verify {eq}`eq:deactivate1` and {eq}`eq:deactivate2` below.

To deactivate $\lambda_1$ we use {eq}`eq:deactivate1`:

```{code-cell} ipython3
np.round(V_inv @ xd_1, 8)
```

We find $x_{1,0}^* = 0$.

Now we deactivate $\lambda_2$ using {eq}`eq:deactivate2`
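The deactivation logic can be sketched self-containedly (the lecture's own `V`, `V_inv`, and `xd_1` come from code not shown in this excerpt, so the construction below is my own, with $\sigma = 2$ assumed): choosing the initial state proportional to one eigenvector of the transition matrix zeroes out the other eigenvalue's coordinate in $x_0^* = V^{-1} x_0$, so that eigenvalue never enters the solution path.

```python
import numpy as np

sigma = 2.0
M = np.array([[2.0, -(1.0 - sigma)],
              [1.0,  0.0]])
eigvals, V = np.linalg.eig(M)
V_inv = np.linalg.inv(V)

# Pick x_0 proportional to the eigenvector in column 1 of V; then
# x*_0 = V^{-1} x_0 has a zero in position 0, so eigvals[0] is
# "deactivated": it contributes nothing to x_t = V diag(lambda)^t x*_0.
x0 = V[:, 1]
xstar = V_inv @ x0
print(np.round(xstar, 8))   # first component is 0
```

Note that `np.linalg.eig` does not guarantee any particular ordering of eigenvalues, so which root is deactivated depends on which column of `V` is chosen.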