lectures/greek_square.md
## Introduction
This lecture can be viewed as a sequel to this QuantEcon lecture {doc}`eigen_I`.
It provides an example of how eigenvectors isolate *invariant subspaces* that help construct and analyze solutions of linear difference equations.
When a vector $x_t$ starts in an invariant subspace, iterating the difference equation keeps $x_{t+j}$
in that subspace for all $j \geq 1$.
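
Here is a minimal numerical sketch of that property (the matrix $A$ below is an illustrative choice, not one taken from this lecture): a vector that starts in the span of an eigenvector of $A$ stays in that span as we iterate $x_{t+1} = A x_t$.

```{code-cell} ipython3
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])       # an illustrative 2 x 2 transition matrix
eigvals, eigvecs = np.linalg.eig(A)

v = eigvecs[:, 0]                # eigenvector spanning a 1-dimensional invariant subspace
x = v.copy()
for j in range(1, 6):
    x = A @ x                    # iterate the difference equation x_{t+1} = A x_t
    # x stays a scalar multiple of v, so this 2 x 2 determinant remains zero
    in_span = np.isclose(np.linalg.det(np.column_stack((v, x))), 0.0)
    print(f"j={j}: x = {np.round(x, 3)}, in span(v): {in_span}")
```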
Invariant subspace methods are used throughout applied economic dynamics, for example, in this QuantEcon lecture {doc}`money_inflation`.
Our approach here is to illustrate the method with an ancient example, one that ancient Greek mathematicians used to compute square roots of positive integers.
In this lecture we assume that we have yet
## Perfect squares and irrational numbers
An integer is called a **perfect square** if its square root is also an integer.

In this lecture, we'll describe this method.
We'll also use invariant subspaces to describe variations on this method that are faster.
## Second-order linear difference equations
Before telling how the ancient Greeks computed square roots, we'll provide a quick introduction
to second-order linear difference equations.
We'll study the following second-order linear difference equation

$$
y_t = a_1 y_{t-1} + a_2 y_{t-2}, \quad t \geq 0
$$ (eq:2diff1)

There is one equation each for $t = 0, 1, 2, \ldots$.
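
For concreteness, here is a small sketch that generates such a sequence by iterating the recursion directly; the coefficients $a_1 = 2, a_2 = 1$ and the initial conditions below are illustrative choices that we'll reuse in later sketches.

```{code-cell} ipython3
import numpy as np

def iterate(a1, a2, y_m1, y_m2, T):
    "Iterate y_t = a1 * y_{t-1} + a2 * y_{t-2} forward from y_{-1} = y_m1, y_{-2} = y_m2."
    y = np.empty(T)
    prev, prev2 = y_m1, y_m2
    for t in range(T):
        y[t] = a1 * prev + a2 * prev2
        prev, prev2 = y[t], prev
    return y

print(iterate(a1=2.0, a2=1.0, y_m1=2.0, y_m2=1.0, T=6))
```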
We could follow an approach taken in this QuantEcon lecture {doc}`present values <pv>` and stack all of these equations into a single matrix equation that we would then solve by using matrix inversion.
```{note}
In the present instance, the matrix equation would multiply a countably infinite dimensional square matrix by a countably infinite dimensional vector. With some qualifications, matrix multiplication and inversion tools apply to such an equation.
```
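
To make the note concrete, here is a sketch of the stacking idea for a finite truncation $t = 0, 1, \ldots, T-1$ of the infinite system, using the illustrative parameters from above.

```{code-cell} ipython3
import numpy as np

a1, a2 = 2.0, 1.0                 # illustrative coefficients
y_m1, y_m2 = 2.0, 1.0             # initial conditions y_{-1}, y_{-2}
T = 6                             # truncate the infinite system at T equations

# rows stack y_t - a1 * y_{t-1} - a2 * y_{t-2} = 0 for t = 0, ..., T-1
L = np.eye(T) - a1 * np.eye(T, k=-1) - a2 * np.eye(T, k=-2)
b = np.zeros(T)
b[0] = a1 * y_m1 + a2 * y_m2      # terms involving the known initial conditions
b[1] = a2 * y_m1

y = np.linalg.solve(L, b)         # a single matrix solve recovers the whole truncated path
print(y)
```

The solve reproduces the path obtained by iterating the recursion one period at a time.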
But we won't pursue that approach here.
Instead, we'll seek to find a time-invariant function that *solves* our difference equation, meaning
that it provides a formula for a $\{y_t\}_{t=0}^\infty$ sequence that satisfies
equation {eq}`eq:2diff1` for each $t \geq 0$.
We seek an expression for $y_t, t \geq 0$ as functions of the initial conditions $(y_{-1}, y_{-2})$:

$$
y_t = g((y_{-1}, y_{-2});t), \quad t \geq 0
$$ (eq:2diff2)
We call such a function $g$ a *solution* of the difference equation {eq}`eq:2diff1`.
One way to discover a solution is to use a guess-and-verify method.

Substituting the guess $y_t = \delta^t$ into equation {eq}`eq:2diff1` and dividing both sides by $\delta^{t-1}$ gives

$$
\left(a_1 + \frac{a_2}{\delta}\right) = \delta
$$ (eq:2diff5)
which we can rewrite as the *characteristic equation*
$$
\delta^2 - a_1 \delta - a_2 = 0
$$

A root $\delta$ of the characteristic equation delivers a solution of difference equation {eq}`eq:2diff1` of the form

$$
y_t = \delta^t y_0 , \forall t \geq 0
$$ (eq:2diff8)
provided that we set
$$
y_0 = \delta y_{-1} .
$$
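
As a quick symbolic check of this guess-and-verify logic, here is a sketch using sympy (not part of the lecture's own code):

```{code-cell} ipython3
import sympy as sp

a1, a2 = sp.symbols('a_1 a_2')
delta, t = sp.symbols('delta t', positive=True)

# substitute the guess y_t = delta**t into y_t - a1 * y_{t-1} - a2 * y_{t-2}
residual = delta**t - a1 * delta**(t - 1) - a2 * delta**(t - 2)

# after dividing by delta**(t - 2), the residual vanishes exactly when
# delta solves the characteristic equation delta**2 - a1*delta - a2 = 0
print(sp.simplify(residual / delta**(t - 2)))
```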
The *general* solution of difference equation {eq}`eq:2diff1` takes the form

$$
y_t = \eta_1 \delta_1^t + \eta_2 \delta_2^t
$$

where $\delta_1$ and $\delta_2$ are the two roots of the characteristic equation and $\eta_1, \eta_2$ are chosen to satisfy a pair of initial conditions for $y_{-1}, y_{-2}$.
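
Here is a numerical sketch of this general solution, continuing our illustrative parameters $a_1 = 2, a_2 = 1$, for which the roots are $1 \pm \sqrt{2}$.

```{code-cell} ipython3
import numpy as np

a1, a2 = 2.0, 1.0
roots = np.roots([1, -a1, -a2])               # roots of delta**2 - a1*delta - a2 = 0
delta_1, delta_2 = np.sort(roots)[::-1]       # label the dominant root delta_1

# choose eta_1, eta_2 to match the initial conditions y_{-1} = 2, y_{-2} = 1
M = np.array([[delta_1**(-1), delta_2**(-1)],
              [delta_1**(-2), delta_2**(-2)]])
eta_1, eta_2 = np.linalg.solve(M, np.array([2.0, 1.0]))

t = np.arange(6)
y = eta_1 * delta_1**t + eta_2 * delta_2**t   # evaluate the general solution at t = 0, ..., 5
print(np.round(y, 6))                         # matches the directly iterated path
```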
First, we'll deploy some techniques for solving difference equations that are also used in this QuantEcon lecture about the multiplier-accelerator model:
<https://python.quantecon.org/samuelson.html>
where $\eta_1$ and $\eta_2$ are chosen to satisfy prescribed initial conditions $y_{-1}, y_{-2}$:

$$
\begin{bmatrix} y_{-1} \\ y_{-2} \end{bmatrix} =
\begin{bmatrix} \delta_1^{-1} & \delta_2^{-1} \\ \delta_1^{-2} & \delta_2^{-2} \end{bmatrix}
\begin{bmatrix} \eta_1 \\ \eta_2 \end{bmatrix}
$$ (eq:leq_sq)

System {eq}`eq:leq_sq` of simultaneous linear equations can be used in various ways.
Notice how we used the second approach above when we set $\eta_1, \eta_2$ either to $(0, 1)$ or to $(1, 0)$.
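
Here is a sketch of that second approach with the same illustrative parameters: setting $(\eta_1, \eta_2) = (1, 0)$ deactivates the root $\delta_2$, and successive ratios $y_{t+1}/y_t$ then all equal $\delta_1$.

```{code-cell} ipython3
import numpy as np

a1, a2 = 2.0, 1.0
delta_1, delta_2 = np.sort(np.roots([1, -a1, -a2]))[::-1]

eta_1, eta_2 = 1.0, 0.0                       # deactivate delta_2 by setting eta_2 = 0
t = np.arange(6)
y = eta_1 * delta_1**t + eta_2 * delta_2**t

# with delta_2 deactivated, the path grows at the single gross rate delta_1
print(np.round(y[1:] / y[:-1], 6), delta_1)
```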
In taking this second approach, we constructed an *invariant subspace* of ${\bf R}^2$.
Here is what is going on.

We find that convergence is immediate.

+++
Next, we'll represent the preceding analysis by first vectorizing our second-order difference equation {eq}`eq:second_order` and then using eigendecompositions of an associated state transition matrix.
## Vectorizing the difference equation
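
As a sketch of what this vectorization looks like with our illustrative coefficients, we can stack $x_t = (y_t, y_{t-1})'$ so that the second-order scalar equation becomes the first-order vector equation $x_{t+1} = A x_t$ (the stacking convention here is one common choice).

```{code-cell} ipython3
import numpy as np

a1, a2 = 2.0, 1.0
A = np.array([[a1, a2],
              [1.0, 0.0]])          # x_{t+1} = A x_t with x_t = (y_t, y_{t-1})'

x = np.array([5.0, 2.0])            # x_0 = (y_0, y_{-1})' from the path computed earlier
for t in range(4):
    x = A @ x                       # each iteration advances the stacked system one period
    print(x)

# the eigenvalues of A coincide with the roots of the characteristic equation
print(np.linalg.eigvals(A), np.roots([1, -a1, -a2]))
```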

```{code-cell} ipython3
# (earlier plotting code omitted in this excerpt)
plt.ylim(-1.5, 1.5)
plt.show()
```
## Invariant subspace approach
The preceding calculation indicates that we can use the eigenvectors $V$ to construct 2-dimensional *invariant subspaces*.
We'll pursue that possibility now.
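
As a preview, here is a minimal sketch with the same illustrative transition matrix: the eigendecomposition $A = V \Lambda V^{-1}$ hands us the invariant subspaces directly, since a path that starts on an eigenvector of $A$ stays on the ray that the eigenvector spans.

```{code-cell} ipython3
import numpy as np

a1, a2 = 2.0, 1.0
A = np.array([[a1, a2],
              [1.0, 0.0]])
Lam, V = np.linalg.eig(A)           # columns of V are eigenvectors of A

x = V[:, 0]                         # start x_0 on the first eigenvector
for t in range(1, 5):
    x = A @ x
    # each iterate equals Lam[0]**t times V[:, 0]; the ratio is a constant vector
    print(np.round(x / V[:, 0], 6))
```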

```{code-cell} ipython3
# (earlier plotting code omitted in this excerpt)
plt.tight_layout()
plt.show()
```
## Concluding remarks
This lecture sets the stage for many other applications of *invariant subspace* methods.
All of these exploit very similar equations based on eigendecompositions.
We shall encounter equations very similar to {eq}`eq:deactivate1` and {eq}`eq:deactivate2`
in this QuantEcon lecture {doc}`money_inflation`
and in many other places in dynamic economic theory.