lectures/ar1_processes.md
Lines changed: 25 additions & 14 deletions
@@ -194,9 +194,9 @@ For dynamic problems, sharp predictions are related to stability.
For example, if a dynamic model predicts that inflation always converges to some
kind of steady state, then the model gives a sharp prediction.
-(The prediction might be wrong, but even this is helpful, because we can judge
+(The prediction might be wrong, but even this is helpful, because we can judge the quality of the model.)

-Notice that, in the figure above, the sequence $\{ \psi_t \}$ seems to be converging to a limiting distribution.
+Notice that, in the figure above, the sequence $\{ \psi_t \}$ seems to be converging to a limiting distribution, suggesting some kind of stability.
This is even clearer if we project forward further into the future:
@@ -269,16 +269,21 @@ plt.show()
As claimed, the sequence $\{ \psi_t \}$ converges to $\psi^*$.
+We see that, at least for these parameters, the AR(1) model has strong stability
+properties.
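To make the claim about strong stability a little more concrete, here is a minimal sketch (not part of the lecture's own code). It assumes the Gaussian AR(1) setting $X_{t+1} = a X_t + b + c W_{t+1}$ with $\{W_t\}$ IID standard normal and a Gaussian $\psi_0$, in which case each $\psi_t$ is Gaussian and its mean and variance follow simple recursions; the parameter values are illustrative.

```python
# Illustrative parameters (hypothetical values); stability needs |a| < 1
a, b, c = 0.9, 0.1, 0.5
mu, v = -3.0, 0.6          # mean and variance of a Gaussian psi_0

# With X_{t+1} = a X_t + b + c W_{t+1}, W ~ N(0, 1), and Gaussian psi_0,
# each psi_t is N(mu_t, v_t), where mu_t and v_t follow these recursions
for t in range(200):
    mu = a * mu + b
    v = a**2 * v + c**2

# Mean and variance of the limiting distribution psi^*
mu_star, v_star = b / (1 - a), c**2 / (1 - a**2)

print(f"mu_200 = {mu:.8f}   vs   mu* = {mu_star:.8f}")
print(f"v_200  = {v:.8f}   vs   v*  = {v_star:.8f}")
```

Whenever $|a| < 1$, the iterates settle on $\mu^* = b/(1-a)$ and $v^* = c^2/(1-a^2)$ regardless of the (Gaussian) initial condition, which is the stability property referred to above.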
### Stationary distributions
-A stationary distribution is a distribution that is a fixed
-point of the update rule for distributions.
+Let's try to better understand the limiting distribution $\psi^*$.
+
+A stationary distribution is a distribution that is a "fixed point" of the update rule for the AR(1) process.

-In other words, if $\psi_t$ is stationary, then $\psi_{t+j} =
-\psi_t$ for all $j$ in $\mathbb N$.
+In other words, if $\psi_t$ is stationary, then $\psi_{t+j} = \psi_t$ for all $j$ in $\mathbb N$.

-A different way to put this, specialized to the current setting, is as follows: a
-density $\psi$ on $\mathbb R$ is **stationary** for the AR(1) process if
+A different way to put this, specialized to the current setting, is as follows: a density $\psi$ on $\mathbb R$ is **stationary** for the AR(1) process if
$$
X_t \sim \psi
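For the Gaussian case, this fixed-point property can be checked numerically. The sketch below is illustrative only: it assumes the AR(1) law of motion $X_{t+1} = a X_t + b + c W_{t+1}$ with $\{W_t\}$ IID standard normal (the setup this definition refers to) and hypothetical parameter values. The candidate $\psi^* = N\!\left(\frac{b}{1-a}, \frac{c^2}{1-a^2}\right)$ should be left unchanged by one application of the update rule.

```python
import numpy as np

a, b, c = 0.9, 0.1, 0.5                  # illustrative parameters, |a| < 1

# Candidate stationary density: psi^* = N(mu_star, v_star)
mu_star = b / (1 - a)
v_star = c**2 / (1 - a**2)

rng = np.random.default_rng(42)
n = 1_000_000

# Draw X_t from the candidate psi^*, then push it through one AR(1) update
X = rng.normal(mu_star, np.sqrt(v_star), size=n)
X_next = a * X + b + c * rng.normal(size=n)

# If psi^* is a fixed point, X_next has (up to sampling error) the same
# mean and variance as X
print(mu_star, X_next.mean())            # both approximately b / (1 - a)
print(v_star, X_next.var())              # both approximately c^2 / (1 - a^2)
```

Up to Monte Carlo error the printed pairs agree, which is exactly the stationarity condition stated here.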
@@ -331,15 +336,21 @@ $$
\quad \text{as } m \to \infty
$$
-In other words, the time series sample mean converges to the mean of the
-stationary distribution.
+In other words, the time series sample mean converges to the mean of the stationary distribution.
+Ergodicity is important for a range of reasons.
+
+For example, {eq}`ar1_ergo` can be used to test theory.
+
+We can use observed data to evaluate the left hand side of {eq}`ar1_ergo`.

-In reality, if an economy is ergodic, its long-term average growth rate is stable. For example, observing an economy's behavior over time can give a reliable estimate of its long-term growth potential.
+And we can use a theoretical AR(1) model to calculate the right hand side.

-However, ergodicity fails when persistent shocks or structural changes affect growth dynamics, making past observations unreliable for predicting future growth.
+If $\frac{1}{m} \sum_{t = 1}^m X_t$ is not close to $\int x \, \psi^*(x) \, dx$, even for many
+observations, then our theory seems to be incorrect and we will need to revise
+it.
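As a rough sketch of how such a test might look in code, with simulated data standing in for observations (the law of motion $X_{t+1} = a X_t + b + c W_{t+1}$ and the parameter values are assumptions made for illustration):

```python
import numpy as np

a, b, c = 0.9, 0.1, 0.5                  # hypothetical "true" parameters
rng = np.random.default_rng(0)

# Simulate a long time series to stand in for observed data
m = 500_000
X = np.empty(m)
X[0] = 0.0
for t in range(m - 1):
    X[t + 1] = a * X[t] + b + c * rng.normal()

lhs = X.mean()                           # left hand side: time series sample mean
rhs = b / (1 - a)                        # right hand side: mean of psi^*
print(lhs, rhs)                          # close agreement supports the model
```

In an actual application the series `X` would come from data, and a large gap between the two printed numbers would be evidence against the fitted AR(1) model.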
-As will become clear over the next few lectures, ergodicity is a very