lectures/markov_chains_I.md
20 additions & 16 deletions
@@ -27,20 +27,24 @@ In addition to what's in Anaconda, this lecture will need the following libraries

## Overview

-Markov chains are a standard way to model time series with some dependence
-between observations.
+Markov chains provide a way to model situations in which the past casts shadows on the future.
+
+By this we mean that observing measurements about a present situation can help us forecast future situations.
+
+This can happen when there are statistical dependencies among measurements taken at different points in time.

For example,

-* inflation next year depends on inflation this year
-* unemployment next month depends on unemployment this month
+* inflation next year might co-vary with inflation this year
+* unemployment next month might co-vary with unemployment this month
+

-Markov chains are one of the workhorse models of economics and finance.
+Markov chains are a workhorse for economics and finance.

The theory of Markov chains is beautiful and provides many insights into
probability and dynamics.

-In this introductory lecture, we will
+In this lecture, we will

* review some of the key ideas from the theory of Markov chains and
* show how Markov chains appear in some economic applications.
@@ -58,7 +62,7 @@ import matplotlib as mpl

## Definitions and examples

-In this section we provide the basic definitions and some elementary examples.
+In this section we provide some definitions and elementary examples.

(finite_dp_stoch_mat)=
### Stochastic matrices
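The body of this section is untouched by the diff and so doesn't appear here; recall that a stochastic matrix is a nonnegative square matrix whose rows each sum to one, so that every row is a probability distribution over states. The NumPy sketch below is not part of the lecture (the matrix entries are made up) and simply checks both properties:

```python
import numpy as np

# A hypothetical 2 x 2 transition matrix; the entries are illustrative only
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# A stochastic matrix has nonnegative entries...
assert np.all(P >= 0)
# ...and each of its rows sums to one, so each row can be read as a
# probability distribution over the state space
assert np.allclose(P.sum(axis=1), 1.0)
```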
@@ -82,13 +86,11 @@ Checking this in {ref}`the first exercises <mc1_ex_3>` below.

### Markov chains
-
Now we can introduce Markov chains.

-First we will give some examples and then we will define them more carefully.
+Before defining a Markov chain rigorously, we'll give some examples.

-At that time, the connection between stochastic matrices and Markov chains
-will become clear.
+(Among other things, defining a Markov chain will clarify a connection between **stochastic matrices** and **Markov chains**.)

(mc_eg2)=
@@ -292,8 +294,10 @@ We can also find a higher probability from collapse to growth in democratic regimes

### Defining Markov chains

-So far we've given examples of Markov chains but now let's define them more
-carefully.
+
+So far we've given examples of Markov chains but we haven't defined them.
+
+Let's do that now.

To begin, let $S$ be a finite set $\{x_1, \ldots, x_n\}$ with $n$ elements.

@@ -313,9 +317,9 @@ This means that, for any date $t$ and any state $y \in S$,
= \mathbb P \{ X_{t+1} = y \,|\, X_t, X_{t-1}, \ldots \}
```

-In other words, knowing the current state is enough to know probabilities for the future states.
+This means that once we know the current state $X_t$, adding knowledge of earlier states $X_{t-1}, X_{t-2}, \ldots$ provides no additional information about probabilities of **future** states.
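As a rough numerical illustration of this property (a sketch using a made-up transition matrix, not code from the lecture), we can simulate a long path and check that conditioning on the previous state in addition to the current one barely moves the estimated transition probability:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical transition matrix over the states {0, 1}
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# Simulate a long sample path, drawing each new state from the
# row of P corresponding to the current state
T = 200_000
X = np.empty(T, dtype=int)
X[0] = 0
for t in range(T - 1):
    X[t + 1] = rng.choice(2, p=P[X[t]])

# Estimate P{X_{t+1} = 1 | X_t = 0} from the simulated data
cur = X[1:-1] == 0
print(X[2:][cur].mean())

# Estimate the same probability while also conditioning on X_{t-1} = 1;
# under the Markov property the extra conditioning makes no difference,
# so both printed estimates should be close to P[0, 1] = 0.1
hist = (X[1:-1] == 0) & (X[:-2] == 1)
print(X[2:][hist].mean())
```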
-In particular, the dynamics of a Markov chain are fully determined by the set of values
+Thus, the dynamics of a Markov chain are fully determined by the set of **conditional probabilities**

```{math}
:label: mpp

@@ -352,7 +356,7 @@ By construction, the resulting process satisfies {eq}`mpp`.
```{index} single: Markov Chains; Simulation
```

-One natural way to answer questions about Markov chains is to simulate them.
+A good way to study a Markov chain is to simulate it.

Let's start by doing this ourselves and then look at libraries that can help
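A minimal self-contained sketch of such a hand-rolled simulation might look as follows; the function name `sample_path` and the matrix entries are illustrative assumptions, not taken from the lecture:

```python
import numpy as np

def sample_path(P, init=0, sample_size=1_000, seed=None):
    """Simulate a path of a finite Markov chain with transition matrix P.

    P is assumed to be stochastic (nonnegative rows summing to one)
    and init is the index of the initial state.
    """
    P = np.asarray(P)
    rng = np.random.default_rng(seed)
    X = np.empty(sample_size, dtype=int)
    X[0] = init
    for t in range(sample_size - 1):
        # The next state is drawn from the distribution in row X[t] of P
        X[t + 1] = rng.choice(P.shape[0], p=P[X[t]])
    return X

# With this made-up matrix, the long-run fraction of time spent in
# state 0 should settle near 0.4 / (0.1 + 0.4) = 0.8
P = [[0.9, 0.1],
     [0.4, 0.6]]
X = sample_path(P, init=0, sample_size=100_000, seed=1)
print(np.mean(X == 0))
```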