Commit fbe4c82

SylviaZhaooo, jstac, and mmcky authored
[AR1] Update editorial suggestions (#458)
* [AR1] Update editorial suggestions
* Update ar1_processes.md
* misc
* misc

Co-authored-by: John Stachurski <john.stachurski@gmail.com>
Co-authored-by: Matt McKay <mmcky@users.noreply.github.com>
1 parent 2b7dd96 commit fbe4c82

File tree

1 file changed: +68 −33 lines changed


lectures/ar1_processes.md

Lines changed: 68 additions & 33 deletions
@@ -19,7 +19,7 @@ kernelspec:
 ```
 
 (ar1_processes)=
-# AR1 Processes
+# AR(1) Processes
 
 ```{index} single: Autoregressive processes
 ```
@@ -35,10 +35,8 @@ These simple models are used again and again in economic research to represent t
 * dividends
 * productivity, etc.
 
-AR(1) processes can take negative values but are easily converted into positive processes when necessary by a transformation such as exponentiation.
-
 We are going to study AR(1) processes partly because they are useful and
-partly because they help us understand important concepts.
+partly because they help us understand important concepts.
 
 Let's start with some imports:
 
@@ -48,9 +46,9 @@ import matplotlib.pyplot as plt
 plt.rcParams["figure.figsize"] = (11, 5) #set default figure size
 ```
 
-## The AR(1) Model
+## The AR(1) model
 
-The **AR(1) model** (autoregressive model of order 1) takes the form
+The *AR(1) model* (autoregressive model of order 1) takes the form
 
 ```{math}
 :label: can_ar1
@@ -60,18 +58,30 @@ X_{t+1} = a X_t + b + c W_{t+1}
 
 where $a, b, c$ are scalar-valued parameters.
 
-This law of motion generates a time series $\{ X_t\}$ as soon as we
-specify an initial condition $X_0$.
+For example, $X_t$ might be
+
+* the log of labor income for a given household, or
+* the log of money demand in a given economy.
+
+In either case, {eq}`can_ar1` shows that the current value evolves as a linear function
+of the previous value and an IID shock $W_{t+1}$.
 
-This is called the **state process** and the state space is $\mathbb R$.
+(We use $t+1$ for the subscript of $W_{t+1}$ because this random variable is not
+observed at time $t$.)
+
+The specification {eq}`can_ar1` generates a time series $\{ X_t\}$ as soon as we
+specify an initial condition $X_0$.
 
 To make things even simpler, we will assume that
 
-* the process $\{ W_t \}$ is IID and standard normal,
+* the process $\{ W_t \}$ is {ref}`IID <iid-theorem>` and standard normal,
 * the initial condition $X_0$ is drawn from the normal distribution $N(\mu_0, v_0)$ and
 * the initial condition $X_0$ is independent of $\{ W_t \}$.
 
-### Moving Average Representation
+
+
+
+### Moving average representation
 
 Iterating backwards from time $t$, we obtain
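The law of motion {eq}`can_ar1` under the assumptions added above is straightforward to simulate. A minimal sketch (the function name, parameter values, and seed are illustrative, not from the lecture):

```python
import numpy as np

def simulate_ar1(a=0.9, b=0.1, c=0.5, x0=0.0, T=100, seed=0):
    """Simulate X_{t+1} = a X_t + b + c W_{t+1} with IID standard normal W."""
    rng = np.random.default_rng(seed)
    x = np.empty(T + 1)
    x[0] = x0                       # initial condition X_0
    for t in range(T):
        x[t + 1] = a * x[t] + b + c * rng.standard_normal()
    return x

path = simulate_ar1()
```

Each draw of `rng.standard_normal()` plays the role of $W_{t+1}$, unobserved until period $t+1$.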

@@ -99,7 +109,7 @@ Equation {eq}`ar1_ma` shows that $X_t$ is a well defined random variable, the va
 Throughout, the symbol $\psi_t$ will be used to refer to the
 density of this random variable $X_t$.
 
-### Distribution Dynamics
+### Distribution dynamics
 
 One of the nice things about this model is that it's so easy to trace out the sequence of distributions $\{ \psi_t \}$ corresponding to the time
 series $\{ X_t\}$.
@@ -110,10 +120,9 @@ This is immediate from {eq}`ar1_ma`, since linear combinations of independent
 normal random variables are normal.
 
 Given that $X_t$ is normally distributed, we will know the full distribution
-$\psi_t$ if we can pin down its first two moments.
+$\psi_t$ if we can pin down its first two [moments](https://en.wikipedia.org/wiki/Moment_(mathematics)).
 
-Let $\mu_t$ and $v_t$ denote the mean and variance
-of $X_t$ respectively.
+Let $\mu_t$ and $v_t$ denote the mean and variance of $X_t$ respectively.
 
 We can pin down these values from {eq}`ar1_ma` or we can use the following
 recursive expressions:
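The recursive expressions themselves fall outside this hunk; from {eq}`can_ar1` they are $\mu_{t+1} = a \mu_t + b$ and $v_{t+1} = a^2 v_t + c^2$, standard facts for this model. A sketch of iterating them (function name and defaults are illustrative):

```python
import numpy as np

def moment_sequence(a=0.9, b=0.1, c=0.5, mu0=0.0, v0=1.0, T=50):
    """Iterate the mean and variance recursions for X_{t+1} = a X_t + b + c W_{t+1}."""
    mu = np.empty(T + 1)
    v = np.empty(T + 1)
    mu[0], v[0] = mu0, v0
    for t in range(T):
        mu[t + 1] = a * mu[t] + b          # mean recursion
        v[t + 1] = a**2 * v[t] + c**2      # variance recursion
    return mu, v
```

Since $\psi_t = N(\mu_t, v_t)$, these two sequences pin down the whole distribution sequence.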
@@ -140,8 +149,7 @@ $$
 \psi_t = N(\mu_t, v_t)
 $$
 
-The following code uses these facts to track the sequence of marginal
-distributions $\{ \psi_t \}$.
+The following code uses these facts to track the sequence of marginal distributions $\{ \psi_t \}$.
 
 The parameters are
 
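The plotting code referred to here is unchanged by the commit and so does not appear in the diff; a hedged sketch of what tracking $\psi_t = N(\mu_t, v_t)$ can look like (the parameter values and grid are illustrative, not the lecture's):

```python
import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt

a, b, c = 0.9, 0.1, 0.5
mu, v = -3.0, 0.6                 # illustrative initial N(mu_0, v_0)
grid = np.linspace(-10, 10, 200)

fig, ax = plt.subplots()
for t in range(10):
    # psi_t = N(mu_t, v_t), so each marginal density is available in closed form
    ax.plot(grid, norm.pdf(grid, loc=mu, scale=np.sqrt(v)), alpha=0.6)
    mu = a * mu + b               # mean recursion
    v = a**2 * v + c**2           # variance recursion
plt.show()
```

Each pass through the loop advances the moments one period and plots the resulting normal density.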
@@ -173,9 +181,21 @@ ax.legend(bbox_to_anchor=[1.05,1],loc=2,borderaxespad=1)
 plt.show()
 ```
 
-## Stationarity and Asymptotic Stability
 
-Notice that, in the figure above, the sequence $\{ \psi_t \}$ seems to be converging to a limiting distribution.
+
+## Stationarity and asymptotic stability
+
+When we use models to study the real world, it is generally preferable that our
+models have clear, sharp predictions.
+
+For dynamic problems, sharp predictions are related to stability.
+
+For example, if a dynamic model predicts that inflation always converges to some
+kind of steady state, then the model gives a sharp prediction.
+
+(The prediction might be wrong, but even this is helpful, because we can judge the quality of the model.)
+
+Notice that, in the figure above, the sequence $\{ \psi_t \}$ seems to be converging to a limiting distribution, suggesting some kind of stability.
 
 This is even clearer if we project forward further into the future:

@@ -248,16 +268,21 @@ plt.show()
 
 As claimed, the sequence $\{ \psi_t \}$ converges to $\psi^*$.
 
-### Stationary Distributions
+We see that, at least for these parameters, the AR(1) model has strong stability
+properties.
+
+
+
 
-A stationary distribution is a distribution that is a fixed
-point of the update rule for distributions.
+### Stationary distributions
 
-In other words, if $\psi_t$ is stationary, then $\psi_{t+j} =
-\psi_t$ for all $j$ in $\mathbb N$.
+Let's try to better understand the limiting distribution $\psi^*$.
 
-A different way to put this, specialized to the current setting, is as follows: a
-density $\psi$ on $\mathbb R$ is **stationary** for the AR(1) process if
+A stationary distribution is a distribution that is a "fixed point" of the update rule for the AR(1) process.
+
+In other words, if $\psi_t$ is stationary, then $\psi_{t+j} = \psi_t$ for all $j$ in $\mathbb N$.
+
+A different way to put this, specialized to the current setting, is as follows: a density $\psi$ on $\mathbb R$ is **stationary** for the AR(1) process if
 
 $$
 X_t \sim \psi
@@ -279,8 +304,8 @@ Thus, when $|a| < 1$, the AR(1) model has exactly one stationary density and tha
 
 The concept of ergodicity is used in different ways by different authors.
 
-One way to understand it in the present setting is that a version of the Law
-of Large Numbers is valid for $\{X_t\}$, even though it is not IID.
+One way to understand it in the present setting is that a version of the law
+of large numbers is valid for $\{X_t\}$, even though it is not IID.
 
 In particular, averages over time series converge to expectations under the
 stationary distribution.
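For this Gaussian AR(1) model the stationary distribution is itself normal, with mean and variance that solve the fixed-point versions of the moment recursions (a standard fact; parameter values below are illustrative):

```python
import numpy as np

a, b, c = 0.9, 0.1, 0.5            # any |a| < 1 gives a unique stationary density

# Stationary moments solve mu = a*mu + b and v = a**2*v + c**2
mu_star = b / (1 - a)
v_star = c**2 / (1 - a**2)

# Stationarity means the update rule leaves the distribution unchanged
assert np.isclose(a * mu_star + b, mu_star)
assert np.isclose(a**2 * v_star + c**2, v_star)
```

So $\psi^* = N(\mu^*, v^*)$ with $\mu^* = b/(1-a)$ and $v^* = c^2/(1-a^2)$ is the fixed point that the figures suggest the sequence $\{\psi_t\}$ converges to.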
@@ -310,11 +335,21 @@ $$
 \quad \text{as } m \to \infty
 $$
 
-In other words, the time series sample mean converges to the mean of the
-stationary distribution.
+In other words, the time series sample mean converges to the mean of the stationary distribution.
+
+
+Ergodicity is important for a range of reasons.
+
+For example, {eq}`ar1_ergo` can be used to test theory.
+
+In this equation, we can use observed data to evaluate the left hand side of {eq}`ar1_ergo`.
+
+And we can use a theoretical AR(1) model to calculate the right hand side.
+
+If $\frac{1}{m} \sum_{t = 1}^m X_t$ is not close to $\int x \, \psi^*(x) \, dx$, even for many
+observations, then our theory seems to be incorrect and we will need to revise
+it.
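The theory-testing idea in the added lines can be sketched in code: simulate a long path and compare the sample mean with the stationary mean $b/(1-a)$ (parameters, path length, and tolerance are illustrative):

```python
import numpy as np

a, b, c = 0.9, 0.1, 0.5
rng = np.random.default_rng(1234)

# A long simulated path of X_{t+1} = a X_t + b + c W_{t+1}
m = 200_000
x = np.empty(m)
x[0] = 0.0
for t in range(m - 1):
    x[t + 1] = a * x[t] + b + c * rng.standard_normal()

sample_mean = x.mean()             # left hand side of the ergodicity statement, with h(x) = x
mu_star = b / (1 - a)              # mean of the stationary distribution
```

For large `m`, ergodicity says the two numbers should be close; a large gap would cast doubt on the AR(1) specification.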
 
-As will become clear over the next few lectures, ergodicity is a very
-important concept for statistics and simulation.
 
 ## Exercises
 
@@ -339,7 +374,7 @@ M_k =
 \end{cases}
 $$
 
-Here $n!!$ is the double factorial.
+Here $n!!$ is the [double factorial](https://en.wikipedia.org/wiki/Double_factorial).
 
 According to {eq}`ar1_ergo`, we should have, for any $k \in \mathbb N$,
