Commit 901a940

Remove inline magic
1 parent 1d2563d commit 901a940

4 files changed, 31 additions and 36 deletions

lectures/lp_intro.md (0 additions, 1 deletion)

````diff
@@ -51,7 +51,6 @@ from ortools.linear_solver import pywraplp
 from scipy.optimize import linprog
 import matplotlib.pyplot as plt
 from matplotlib.patches import Polygon
-%matplotlib inline
 ```

 Let's start with some examples of linear programming problems.
````

lectures/scalar_dynam.md (12 additions, 13 deletions)

````diff
@@ -37,7 +37,6 @@ and understand key concepts.
 Let's start with some standard imports:

 ```{code-cell} ipython
-%matplotlib inline
 import matplotlib.pyplot as plt
 import numpy as np
 ```
````
````diff
@@ -50,27 +49,27 @@ This section sets out the objects of interest and the kinds of properties we stu

 For this lecture you should know the following.

-If
+If

 * $g$ is a function from $A$ to $B$ and
-* $f$ is a function from $B$ to $C$,
+* $f$ is a function from $B$ to $C$,

 then the **composition** $f \circ g$ of $f$ and $g$ is defined by

-$$
+$$
 (f \circ g)(x) = f(g(x))
 $$

-For example, if
+For example, if

-* $A=B=C=\mathbb R$, the set of real numbers,
+* $A=B=C=\mathbb R$, the set of real numbers,
 * $g(x)=x^2$ and $f(x)=\sqrt{x}$, then $(f \circ g)(x) = \sqrt{x^2} = |x|$.

 If $f$ is a function from $A$ to itself, then $f^2$ is the composition of $f$
 with itself.

 For example, if $A = (0, \infty)$, the set of positive numbers, and $f(x) =
-\sqrt{x}$, then
+\sqrt{x}$, then

 $$
 f^2(x) = \sqrt{\sqrt{x}} = x^{1/4}
````
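The composition facts in this hunk are easy to check numerically; a minimal sketch, not part of the commit:

```python
import numpy as np

# With g(x) = x^2 and f(x) = sqrt(x), the composition (f ∘ g)(x) = |x|
g = lambda x: x**2
f = lambda x: np.sqrt(x)

assert f(g(-3.0)) == 3.0            # sqrt((-3)^2) = |-3| = 3

# f^2 = f ∘ f on (0, ∞) gives x^(1/4)
assert np.isclose(f(f(16.0)), 16.0**0.25)
```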
````diff
@@ -113,7 +112,7 @@ a sequence $\{x_t\}$ of points in $S$ by setting
 ```{math}
 :label: sdsod
 x_{t+1} = g(x_t)
-\quad \text{ with }
+\quad \text{ with }
 x_0 \text{ given}.
 ```
````
````diff
@@ -131,8 +130,8 @@ This sequence $\{x_t\}$ is called the **trajectory** of $x_0$ under $g$.
 In this setting, $S$ is called the **state space** and $x_t$ is called the
 **state variable**.

-Recalling that $g^n$ is the $n$-fold composition of $g$ with itself,
-we can write the trajectory more simply as
+Recalling that $g^n$ is the $n$-fold composition of $g$ with itself,
+we can write the trajectory more simply as

 $$
 x_t = g^t(x_0) \quad \text{ for } t \geq 0.
````
````diff
@@ -155,13 +154,13 @@ b$, where $a, b$ are fixed constants.
 This leads to the **linear difference equation**

 $$
-x_{t+1} = a x_t + b
-\quad \text{ with }
+x_{t+1} = a x_t + b
+\quad \text{ with }
 x_0 \text{ given}.
 $$


-The trajectory of $x_0$ is
+The trajectory of $x_0$ is

 ```{math}
 :label: sdslinmodpath
````
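The linear difference equation in this hunk can be iterated directly and checked against the standard closed form $x_t = a^t x_0 + b \, (1 - a^t)/(1 - a)$ (valid for $a \neq 1$); a sketch with illustrative parameter values, not part of the commit:

```python
# Iterate x_{t+1} = a * x_t + b and compare with the closed form.
# The parameter values below are illustrative, not from the lecture.
a, b, x0, T = 0.9, 1.0, 2.0, 50

x = x0
path = [x]
for _ in range(T):
    x = a * x + b
    path.append(x)

# Closed form of the trajectory (requires a != 1)
closed_form = a**T * x0 + b * (1 - a**T) / (1 - a)
assert abs(path[-1] - closed_form) < 1e-9
```

With $|a| < 1$, the iterates converge to the fixed point $b/(1-a)$, which is $10$ for these values.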

lectures/schelling.md (0 additions, 1 deletion)

````diff
@@ -70,7 +70,6 @@ awarded the 2005 Nobel Prize in Economic Sciences (joint with Robert Aumann).
 Let's start with some imports:

 ```{code-cell} ipython3
-%matplotlib inline
 import matplotlib.pyplot as plt
 from random import uniform, seed
 from math import sqrt
````

lectures/time_series_with_matrices.md (19 additions, 21 deletions)

````diff
@@ -50,7 +50,6 @@ We will use the following imports:
 ```{code-cell} ipython
 import numpy as np
-%matplotlib inline
 import matplotlib.pyplot as plt
 from matplotlib import cm
 plt.rcParams["figure.figsize"] = (11, 5)  # set default figure size
````
````diff
@@ -180,15 +179,15 @@ Now let’s solve for the path of $y$.
 If $y_t$ is GNP at time $t$, then we have a version of
 Samuelson’s model of the dynamics for GNP.

-To solve $y = A^{-1} b$ we can either invert $A$ directly, as in
+To solve $y = A^{-1} b$ we can either invert $A$ directly, as in

 ```{code-cell} python3
 A_inv = np.linalg.inv(A)

 y = A_inv @ b
 ```

-or we can use `np.linalg.solve`:
+or we can use `np.linalg.solve`:


 ```{code-cell} python3
````
````diff
@@ -204,7 +203,7 @@ np.allclose(y, y_second_method)
 ```{note}
 In general, `np.linalg.solve` is more numerically stable than using
-`np.linalg.inv` directly.
+`np.linalg.inv` directly.
 However, stability is not an issue for this small example. Moreover, we will
 repeatedly use `A_inv` in what follows, so there is added value in computing
 it directly.
````
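The note's point can be seen on a toy system; the matrix and vector below are illustrative assumptions, not the lecture's `A` and `b`:

```python
import numpy as np

# A small invertible system, chosen only for illustration
A = np.array([[2.0, 0.0],
              [1.0, 3.0]])
b = np.array([2.0, 7.0])

y = np.linalg.solve(A, b)                      # solves A y = b without forming A^{-1}
assert np.allclose(A @ y, b)                   # y really solves the system
assert np.allclose(y, np.linalg.inv(A) @ b)    # agrees with the inverse route here
```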
````diff
@@ -366,9 +365,9 @@
 You can read about multivariate normal distributions in this lecture [Multivariate Normal Distribution](https://python.quantecon.org/multivariate_normal.html).

-Let's write our model as
+Let's write our model as

-$$
+$$
 y = \tilde A (b + u)
 $$
@@ -382,11 +381,11 @@
 where

-$$
+$$
 \mu_y = \tilde A b
 $$

-and
+and

 $$
 \Sigma_y = \tilde A (\sigma_u^2 I_{T \times T} ) \tilde A^T
````
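These moment formulas can be verified by Monte Carlo on a small example; the matrix, vector, and shock scale below are illustrative assumptions, not the lecture's $\tilde A$, $b$, and $\sigma_u$:

```python
import numpy as np

# With y = Ã(b + u) and u ~ N(0, σ_u² I), the implied moments are
# μ_y = Ã b  and  Σ_y = Ã (σ_u² I) Ãᵀ.
A_tilde = np.array([[1.0, 0.0],
                    [0.5, 1.0]])
b = np.array([2.0, 1.0])
sigma_u = 0.5

mu_y = A_tilde @ b
Sigma_y = A_tilde @ (sigma_u**2 * np.eye(2)) @ A_tilde.T

# Monte Carlo check of the formulas
rng = np.random.default_rng(0)
u = sigma_u * rng.standard_normal((100_000, 2))
ys = (b + u) @ A_tilde.T          # each row is one draw of y = Ã(b + u)
assert np.allclose(ys.mean(axis=0), mu_y, atol=0.01)
assert np.allclose(np.cov(ys.T), Sigma_y, atol=0.01)
```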
````diff
@@ -425,7 +424,7 @@ class population_moments:
         A_inv = np.linalg.inv(A)

         self.A, self.b, self.A_inv, self.sigma_u, self.T = A, b, A_inv, sigma_u, T
-
+
     def sample_y(self, n):
         """
         Give a sample of size n of y.
````
````diff
@@ -451,14 +450,14 @@ class population_moments:
 my_process = population_moments(
     alpha0=10.0, alpha1=1.53, alpha2=-.9, T=80, y_1=28., y0=24., sigma_u=1)
-
+
 mu_y, Sigma_y = my_process.get_moments()
 A_inv = my_process.A_inv
 ```

 It is enlightening to study the $\mu_y, \Sigma_y$'s implied by various parameter values.

-Among other things, we can use the class to exhibit how **statistical stationarity** of $y$ prevails only for very special initial conditions.
+Among other things, we can use the class to exhibit how **statistical stationarity** of $y$ prevails only for very special initial conditions.

 Let's begin by generating $N$ time realizations of $y$, plotting them together with the population mean $\mu_y$.
````
````diff
@@ -496,15 +495,15 @@ Let's print out the covariance matrix $\Sigma_y$ for a time series $y$
 ```{code-cell} ipython3
 my_process = population_moments(alpha0=0, alpha1=.8, alpha2=0, T=6, y_1=0., y0=0., sigma_u=1)
-
+
 mu_y, Sigma_y = my_process.get_moments()
 print("mu_y = ", mu_y)
 print("Sigma_y = ", Sigma_y)
 ```

 Notice that the covariances between $y_t$ and $y_{t-1}$ -- the elements on the superdiagonal -- are **not** identical.

-This is an indication that the time series represented by our $y$ vector is not **stationary**.
+This is an indication that the time series represented by our $y$ vector is not **stationary**.

 To make it stationary, we'd have to alter our system so that our **initial conditions** $(y_1, y_0)$ are not fixed numbers but instead a jointly normally distributed random vector with a particular mean and covariance matrix.
````
````diff
@@ -530,7 +529,7 @@ There is a lot to be learned about the process by staring at the off diagonal el
 ## Moving Average Representation

-Let's print out $A^{-1}$ and stare at its structure
+Let's print out $A^{-1}$ and stare at its structure

 * is it triangular or almost triangular or $\ldots$ ?
@@ -546,7 +545,7 @@ with np.printoptions(precision=3, suppress=True):


-Evidently, $A^{-1}$ is a lower triangular matrix.
+Evidently, $A^{-1}$ is a lower triangular matrix.


 Let's print out the lower right hand corner of $A^{-1}$ and stare at it.
````
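The lower-triangularity observed in this hunk can also be confirmed programmatically; a sketch using an illustrative unit-lower-triangular matrix of the same shape of argument, not the lecture's $A$:

```python
import numpy as np

# A unit lower triangular matrix (ones on the diagonal, one nonzero subdiagonal)
A = np.eye(4) - 0.8 * np.eye(4, k=-1)

A_inv = np.linalg.inv(A)

# The inverse of a lower triangular matrix is again lower triangular
assert np.allclose(A_inv, np.tril(A_inv))
```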
````diff
@@ -561,13 +560,13 @@ Notice how every row ends with the previous row's pre-diagonal entries.


-
-
-Since $A^{-1}$ is lower triangular, each row represents $y_t$ for a particular $t$ as the sum of
-- a time-dependent function $A^{-1} b$ of the initial conditions incorporated in $b$, and
+
+Since $A^{-1}$ is lower triangular, each row represents $y_t$ for a particular $t$ as the sum of
+- a time-dependent function $A^{-1} b$ of the initial conditions incorporated in $b$, and
 - a weighted sum of current and past values of the IID shocks $\{u_t\}$

-Thus, let $\tilde{A}=A^{-1}$.
+Thus, let $\tilde{A}=A^{-1}$.

 Evidently, for $t\geq0$,
````
````diff
@@ -577,7 +576,7 @@

 This is a **moving average** representation with time-varying coefficients.

-Just as system {eq}`eq:eqma` constitutes a
+Just as system {eq}`eq:eqma` constitutes a
 **moving average** representation for $y$, system {eq}`eq:eqar` constitutes an **autoregressive** representation for $y$.
````
````diff
@@ -692,4 +691,3 @@ plt.legend()

 plt.show()
 ```
-
````
