
Commit 443c0c2

fix a few typos (#187)
* fix a few typos
* fix format errata
1 parent 22ff517 commit 443c0c2

5 files changed: 10 additions, 4 deletions

Errata.md

Lines changed: 6 additions & 0 deletions
@@ -5,11 +5,15 @@
 
 | Page | Printed text | Correct text | Note |
 |---|---|---|---|
+|xvi| If you have a good understanding of statistics, either by practice or formal training, but you have never being ... | If you have a good understanding of statistics, either by practice or formal training, but you have never **been**... | Thanks Behrouz B. |
 | xvi | ...and may require a couple **read troughs** | ...and may require a couple **read-throughs** | Thanks John M. Shea |
 | xvi | For a reference on Python, or how to setup the computation environment needed for this book, go to README.md in Github to understand how to setup a code environment | For a reference on how to setup the computation environment needed for this book, go to README.md in GitHub. | |
 | 1 | ...we introduce these concepts and methods, many, which... | ...we introduce these concepts and methods, many **of** which... | Thanks Thomas Ogden |
 | 2 | ...though this is not a **guaranteed** of any Bayesian model. | ...though this is not a **guarantee** of any Bayesian model. | Thanks Guilherme Costa |
+|7| ...(conceptually it means it is equally likely **are** we are... | ...(conceptually it means it is equally likely we are... | Thanks Behrouz B. |
+|8| At line **20**... | At line **14**... | Thanks Behrouz B. |
 | 8 | ...it will **depends** on the result... | ...it will **depend** on the result... | Thanks Ero Carrera |
+|9| Some people make the distinction that a sample is made up by a collection of draws, **other**... | Some people make the distinction that a sample is made up by a collection of draws, **others**... | Thanks Behrouz B. |
 | 9 | ...or **simple** the posterior. | ...or **simply** the posterior. | Thanks Ero Carrera |
 | 23 | An absolute value mean... | An absolute deviation to the mean... | Thanks Zhengchen Cai |
 | 24 | One is what could called... | One is what could **be** called... | Thanks Sebastian |
@@ -31,6 +35,7 @@
 | 58 | ...comparing p_loo to the number of parameters $p$ can **provides** us with... | ...comparing p_loo to the number of parameters $p$ can **provide** us with... | Thanks Ero Carrera |
 | 59 | ...which is transformation in 1D where we can... | ...which is **a** transformation in 1D where we can... | Thanks Ero Carrera |
 | 61 | When using a logarithmic scoring rule this is **equivalently** to **compute**: | When using a logarithmic scoring rule this is **equivalent** to **computing**: | Thanks Ero Carrera |
+|61| $\max_{n} \frac{1}{n} \sum_{i=1}^{n}log\sum_{j=1}^{k} w_j p(y_i \mid y_{-i}, M_j)$ | $\max_{w} \frac{1}{n} \sum_{i=1}^{n}log\sum_{j=1}^{k} w_j p(y_i \mid y_{-i}, M_j)$ | Thanks Ikaro Silva |
 | 61 | ...the computation of the weights **take** into account all models together. | ...the computation of the weights **takes** into account all models together. | Thanks Ero Carrera |
 | 61 | ...the weights computed with `az.compare(., method="stacking")`~~**,**~~ makes a lot of sense. | ...the weights computed with `az.compare(., method="stacking")` makes a lot of sense. | Thanks Ero Carrera |
 | 62 | ...Reproduce Figure 2.7, but using **az.plot_loo(ecdf=True)**... | ...Reproduce Figure 2.7, but using **az.plot_loo_pit(ecdf=True)**... | Thanks Alihan Zihna |
@@ -63,6 +68,7 @@
 | 189 | Equation (6.9) $y_t = \alpha + \sum_{i=1}^{p}\phi_i y_{t-period-i} + \sum_{j=1}^{q}\theta_j \epsilon_{t-period-j} + \epsilon_t$ | $y_t = \alpha + \sum_{i=1}^{p}\phi_i y_{t-period \cdot i} + \sum_{j=1}^{q}\theta_j \epsilon_{t-period \cdot j} + \epsilon_t$ | Thanks Marcin Elantkowski |
 | 191 | (footnote) The Stan implementation of SARIMA can be found in https://github.com/asael697/**varstan**. | The Stan implementation of SARIMA can be found in **e.g.,** https://github.com/asael697/**bayesforecast**. | |
 | 197 | we can apply the Kalman filter to **to** obtain**s** the posterior | we can apply the Kalman filter to obtain the posterior | |
+|227| Only the first 2 independent variables are **unrelated**... | Only the first 2 independent variables are **related**... | Thanks icfly2 |
 | 261 | Some **commons** elements to all Bayesian analyses, | Some **common** elements to all Bayesian analyses, | Thanks Ero Carrera |
 | 262 | (In Figure 9.1.) Model **Compasion** | Model **Comparison** | Thanks Ben Vincent |
 | 262 | ...averaging some **of** all of them, or even presenting all the models and discussing their **strength** and... | ...averaging some **or** all of them, or even presenting all the models and discussing their **strengths** and... | Thanks Ero Carrera |
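
For readers who want to check the corrected entries above, the stacking weights and the LOO-PIT plot can be exercised directly in ArviZ. The sketch below uses ArviZ's bundled eight-schools example fits rather than the book's models, purely to show the corrected calls (`az.compare(..., method="stacking")` and `az.plot_loo_pit(ecdf=True)`) in a runnable context:

```python
import arviz as az
import matplotlib.pyplot as plt

# Illustrative data only: two bundled ArviZ example fits of the eight-schools
# model. Any dict of InferenceData objects with log_likelihood groups works.
models = {
    "centered": az.load_arviz_data("centered_eight"),
    "non_centered": az.load_arviz_data("non_centered_eight"),
}

# method="stacking" maximizes the corrected objective over the weights w
# (the misprint had the maximization over n).
cmp = az.compare(models, method="stacking")
print(cmp["weight"])

# Page 62 erratum: the corrected call is az.plot_loo_pit(ecdf=True),
# not az.plot_loo(ecdf=True).
az.plot_loo_pit(models["centered"], y="obs", ecdf=True)
plt.show()
```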

markdown/chp_01.md

Lines changed: 1 addition & 1 deletion
@@ -515,7 +515,7 @@ When we use Markov chain Monte Carlo Methods to do Bayesian inference,
 we typically refer to them as MCMC samplers. At each iteration we draw a
 random sample from the sampler, so naturally we refer to the output from
 MCMC as *samples* or *draws*. Some people make the distinction that a
-sample is made up by a collection of draws, other treat samples and
+sample is made up by a collection of draws, others treat samples and
 draws as interchangeably.
 
 Since MCMC draws samples sequentially we also say we get a *chain* of
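
As a small aside on the terminology in this hunk, MCMC output stored as an ArviZ `InferenceData` keeps explicit `chain` and `draw` dimensions, which makes the draws/chains vocabulary concrete. A minimal sketch using a bundled example trace (an illustration, not the chapter's own code):

```python
import arviz as az

# Any MCMC result stored as InferenceData exposes "chain" and "draw"
# dimensions; here a bundled example trace stands in for a real fit.
idata = az.load_arviz_data("centered_eight")
posterior = idata.posterior

# Each chain is a sequence of draws; "samples" usually refers to the
# whole collection of draws across chains.
print(posterior.sizes["chain"], posterior.sizes["draw"])
```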

markdown/chp_02.md

Lines changed: 1 addition & 1 deletion
@@ -1614,7 +1614,7 @@ logarithmic scoring rule this is equivalent to computing:
 
 ```{math}
 :label: eq_stacking
-\max_{n} \frac{1}{n} \sum_{i=1}^{n}log\sum_{j=1}^{k} w_j p(y_i \mid y_{-i}, M_j)
+\max_{w} \frac{1}{n} \sum_{i=1}^{n}log\sum_{j=1}^{k} w_j p(y_i \mid y_{-i}, M_j)
 
 ```
 
markdown/chp_07.md

Lines changed: 1 addition & 1 deletion
@@ -792,7 +792,7 @@ different datasets from known generative processes.
 - $Y \sim \mathcal{N}(0, 1)$ $X_{0} \sim \mathcal{N}(Y, 0.1)$ and
 $X_{1} \sim \mathcal{N}(Y, 0.2)$
 $\boldsymbol{X}_{2:9} \sim \mathcal{N}(0, 1)$. Only the first 2
-independent variables are unrelated to the predictor, and the first
+independent variables are related to the predictor, and the first
 is more related than the second.
 
 - $Y = 10 \sin(\pi X_0 X_1 ) + 20(X_2 - 0.5)^2 + 10X_3 + 5X_4 + \epsilon$
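
To make the corrected sentence concrete, here is a minimal NumPy sketch of the first generative process listed in this hunk; the sample size and seed are arbitrary illustration choices, not values from the book:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200  # arbitrary sample size for illustration

# Y drives X0 and X1 (X0 more tightly than X1, via the smaller scale),
# while X2..X9 are independent noise, unrelated to Y.
Y = rng.normal(0.0, 1.0, size=n)
X0 = rng.normal(Y, 0.1)
X1 = rng.normal(Y, 0.2)
X_rest = rng.normal(0.0, 1.0, size=(n, 8))
X = np.column_stack([X0, X1, X_rest])  # shape (n, 10)
```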

markdown/preface.md

Lines changed: 1 addition & 1 deletion
@@ -85,7 +85,7 @@ Probability {cite:p}`blitzstein_2019`. The latter is a little bit more
 theoretical, but both keep application in mind.
 
 If you have a good understanding of statistics, either by practice or
-formal training, but you have never being exposed to Bayesian
+formal training, but you have never been exposed to Bayesian
 statistics, you may still use this book as an introduction to the
 subject, the pace at the start (mostly the first two chapters) will be a
 bit rapid, and may require a couple read-throughs.
