Commit 02c67ca

Fix typos (#266)

* Update simple_linear_regression.md
* Update time_series_with_matrices.md
* Update input_output.md
1 parent 5127a43 commit 02c67ca

File tree: 3 files changed (+21 −21 lines)
lectures/input_output.md

Lines changed: 5 additions & 5 deletions
@@ -118,7 +118,7 @@ A basic framework for their analysis is
-After introducing the input-ouput model, we describe some of its connections to {doc}`linear programming lecture <lp_intro>`.
+After introducing the input-output model, we describe some of its connections to {doc}`linear programming lecture <lp_intro>`.
 ## Input output analysis
@@ -307,7 +307,7 @@ L
 ```{code-cell} ipython3
-x = L @ d # solving for gross ouput
+x = L @ d # solving for gross output
 x
 ```
@@ -434,7 +434,7 @@ $$
 The primal problem chooses a feasible production plan to minimize costs for delivering a pre-assigned vector of final goods consumption $d$.
-The dual problem chooses prices to maxmize the value of a pre-assigned vector of final goods $d$ subject to prices covering costs of production.
+The dual problem chooses prices to maximize the value of a pre-assigned vector of final goods $d$ subject to prices covering costs of production.
 By the [strong duality theorem](https://en.wikipedia.org/wiki/Dual_linear_program#Strong_duality),
 optimal value of the primal and dual problems coincide:
@@ -482,7 +482,7 @@ plt.show()
 ## Leontief inverse
-We have discussed that gross ouput $x$ is given by {eq}`eq:inout_2`, where $L$ is called the Leontief Inverse.
+We have discussed that gross output $x$ is given by {eq}`eq:inout_2`, where $L$ is called the Leontief Inverse.
 Recall the {doc}`Neumann Series Lemma <eigen_II>` which states that $L$ exists if the spectral radius $r(A)<1$.
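The hunks above concern solving for gross output via the Leontief inverse. A minimal, self-contained sketch of that computation, assuming a hypothetical 2-sector coefficient matrix `A` and final demand `d` (not the lecture's data):

```python
import numpy as np

# Hypothetical 2-sector input-output coefficients (illustration only)
A = np.array([[0.3, 0.2],
              [0.1, 0.4]])
d = np.array([50.0, 60.0])          # final demand

# Neumann Series Lemma: L = (I - A)^{-1} exists when the spectral radius r(A) < 1
r = max(abs(np.linalg.eigvals(A)))
assert r < 1

L = np.linalg.inv(np.eye(2) - A)    # Leontief inverse
x = L @ d                           # solving for gross output

# Consistency check: gross output covers intermediate use plus final demand
assert np.allclose(A @ x + d, x)
print(x)
```

For this made-up `A` and `d`, the gross-output vector satisfies the accounting identity $x = Ax + d$ exactly, which is the point of the `L @ d` line the commit corrects.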
@@ -551,7 +551,7 @@ The above figure indicates that manufacturing is the most dominant sector in the
 ### Output multipliers
-Another way to rank sectors in input output networks is via outuput multipliers.
+Another way to rank sectors in input output networks is via output multipliers.
 The **output multiplier** of sector $j$ denoted by $\mu_j$ is usually defined as the
 total sector-wide impact of a unit change of demand in sector $j$.
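Under the usual definition the corrected text refers to, sector $j$'s output multiplier $\mu_j$ is the $j$-th column sum of the Leontief inverse. A hedged sketch with a made-up coefficient matrix:

```python
import numpy as np

# Hypothetical coefficient matrix (not the lecture's data)
A = np.array([[0.3, 0.2],
              [0.1, 0.4]])
L = np.linalg.inv(np.eye(2) - A)  # Leontief inverse

# mu_j: total sector-wide impact of a unit change of demand in sector j,
# i.e. the j-th column sum of L
mu = L.sum(axis=0)
print(mu)
```

Ranking sectors by `mu` then reproduces the kind of "dominant sector" comparison the surrounding text describes.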

lectures/simple_linear_regression.md

Lines changed: 15 additions & 15 deletions
@@ -297,7 +297,7 @@ Calculating $\beta$
 ```{code-cell} ipython3
 df = df[['X','Y']].copy() # Original Data
-# Calcuate the sample means
+# Calculate the sample means
 x_bar = df['X'].mean()
 y_bar = df['Y'].mean()
 ```
@@ -393,7 +393,7 @@ df
 Sometimes it can be useful to rename your columns to make it easier to work with in the DataFrame
 ```{code-cell} ipython3
-df.columns = ["cntry", "year", "life_expectency", "gdppc"]
+df.columns = ["cntry", "year", "life_expectancy", "gdppc"]
 df
 ```
@@ -415,10 +415,10 @@ It is always a good idea to spend a bit of time understanding what data you actu
 For example, you may want to explore this data to see if there is consistent reporting for all countries across years
-Let's first look at the Life Expectency Data
+Let's first look at the Life Expectancy Data
 ```{code-cell} ipython3
-le_years = df[['cntry', 'year', 'life_expectency']].set_index(['cntry', 'year']).unstack()['life_expectency']
+le_years = df[['cntry', 'year', 'life_expectancy']].set_index(['cntry', 'year']).unstack()['life_expectancy']
 le_years
 ```
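The corrected `unstack` line reshapes long-format panel data into a country-by-year table. A tiny self-contained sketch of the same move, with made-up values (the column names mirror the lecture's, the data do not):

```python
import pandas as pd

# Hypothetical long-format panel (values made up for illustration)
df = pd.DataFrame({
    'cntry': ['AUS', 'AUS', 'USA', 'USA'],
    'year': [2017, 2018, 2017, 2018],
    'life_expectancy': [82.5, 82.7, 78.6, 78.7],
})

# Move (cntry, year) into the index, then pivot years out into columns
le_years = (df[['cntry', 'year', 'life_expectancy']]
            .set_index(['cntry', 'year'])
            .unstack()['life_expectancy'])

print(le_years)  # rows: cntry, columns: year, one cell per observation
```

A table like this makes gaps in reporting visible as `NaN` cells, which is the consistency check the surrounding text asks for.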
@@ -453,13 +453,13 @@ df = df[df.year == 2018].reset_index(drop=True).copy()
 ```
 ```{code-cell} ipython3
-df.plot(x='gdppc', y='life_expectency', kind='scatter', xlabel="GDP per capita", ylabel="Life Expectency (Years)",);
+df.plot(x='gdppc', y='life_expectancy', kind='scatter', xlabel="GDP per capita", ylabel="Life Expectancy (Years)",);
 ```
 This data shows a couple of interesting relationships.
 1. there are a number of countries with similar GDP per capita levels but a wide range in Life Expectancy
-2. there appears to be a positive relationship between GDP per capita and life expectancy. Countries with higher GDP per capita tend to have higher life expectency outcomes
+2. there appears to be a positive relationship between GDP per capita and life expectancy. Countries with higher GDP per capita tend to have higher life expectancy outcomes
 Even though OLS is solving linear equations -- one option we have is to transform the variables, such as through a log transform, and then use OLS to estimate the transformed variables
@@ -470,7 +470,7 @@ ln -> ln == elasticities
 By specifying `logx` you can plot the GDP per Capita data on a log scale
 ```{code-cell} ipython3
-df.plot(x='gdppc', y='life_expectency', kind='scatter', xlabel="GDP per capita", ylabel="Life Expectancy (Years)", logx=True);
+df.plot(x='gdppc', y='life_expectancy', kind='scatter', xlabel="GDP per capita", ylabel="Life Expectancy (Years)", logx=True);
 ```
 As you can see from this transformation -- a linear model fits the shape of the data more closely.
@@ -486,11 +486,11 @@ df
 **Q4:** Use {eq}`eq:optimal-alpha` and {eq}`eq:optimal-beta` to compute optimal values for $\alpha$ and $\beta$
 ```{code-cell} ipython3
-data = df[['log_gdppc', 'life_expectency']].copy() # Get Data from DataFrame
+data = df[['log_gdppc', 'life_expectancy']].copy() # Get Data from DataFrame
 # Calculate the sample means
 x_bar = data['log_gdppc'].mean()
-y_bar = data['life_expectency'].mean()
+y_bar = data['life_expectancy'].mean()
 ```
@@ -499,7 +499,7 @@ data
 ```{code-cell} ipython3
 # Compute the Sums
-data['num'] = data['log_gdppc'] * data['life_expectency'] - y_bar * data['log_gdppc']
+data['num'] = data['log_gdppc'] * data['life_expectancy'] - y_bar * data['log_gdppc']
 data['den'] = pow(data['log_gdppc'],2) - x_bar * data['log_gdppc']
 β = data['num'].sum() / data['den'].sum()
 print(β)
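The corrected line computes the numerator of the lecture's OLS slope formula. A self-contained sketch of the same calculation on hypothetical data, cross-checked against `numpy.polyfit` (the variable names follow the lecture; the numbers are made up):

```python
import numpy as np
import pandas as pd

# Hypothetical data (illustration only)
data = pd.DataFrame({'log_gdppc': [1.0, 2.0, 3.0, 4.0],
                     'life_expectancy': [60.0, 65.0, 72.0, 75.0]})

x_bar = data['log_gdppc'].mean()
y_bar = data['life_expectancy'].mean()

# β = Σ(x_i*y_i − ȳ*x_i) / Σ(x_i² − x̄*x_i), as in the lecture's formulas
data['num'] = data['log_gdppc'] * data['life_expectancy'] - y_bar * data['log_gdppc']
data['den'] = pow(data['log_gdppc'], 2) - x_bar * data['log_gdppc']
β = data['num'].sum() / data['den'].sum()
α = y_bar - β * x_bar

# Cross-check against numpy's least-squares fit of a degree-1 polynomial
b_np, a_np = np.polyfit(data['log_gdppc'], data['life_expectancy'], 1)
assert np.isclose(β, b_np) and np.isclose(α, a_np)
print(β, α)
```

Both routes give the same slope and intercept, which is a quick sanity check that the hand-rolled sums implement {eq}`eq:optimal-beta` correctly.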
@@ -513,13 +513,13 @@ print(α)
 **Q5:** Plot the line of best fit found using OLS
 ```{code-cell} ipython3
-data['life_expectency_hat'] = α + β * df['log_gdppc']
-data['error'] = data['life_expectency_hat'] - data['life_expectency']
+data['life_expectancy_hat'] = α + β * df['log_gdppc']
+data['error'] = data['life_expectancy_hat'] - data['life_expectancy']
 fig, ax = plt.subplots()
-data.plot(x='log_gdppc',y='life_expectency', kind='scatter', ax=ax)
-data.plot(x='log_gdppc',y='life_expectency_hat', kind='line', ax=ax, color='g')
-plt.vlines(data['log_gdppc'], data['life_expectency_hat'], data['life_expectency'], color='r')
+data.plot(x='log_gdppc',y='life_expectancy', kind='scatter', ax=ax)
+data.plot(x='log_gdppc',y='life_expectancy_hat', kind='line', ax=ax, color='g')
+plt.vlines(data['log_gdppc'], data['life_expectancy_hat'], data['life_expectancy'], color='r')
 ```
 :::{solution-end}

lectures/time_series_with_matrices.md

Lines changed: 1 addition & 1 deletion
@@ -504,7 +504,7 @@ print("Sigma_y = ", Sigma_y)
 Notice that the covariance between $y_t$ and $y_{t-1}$ -- the elements on the superdiagonal -- are **not** identical.
-This is is an indication that the time series respresented by our $y$ vector is not **stationary**.
+This is is an indication that the time series represented by our $y$ vector is not **stationary**.
 To make it stationary, we'd have to alter our system so that our **initial conditions** $(y_1, y_0)$ are not fixed numbers but instead a jointly normally distributed random vector with a particular mean and covariance matrix.
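The corrected sentence describes a real feature of the model: with fixed initial conditions the superdiagonal covariances of $\Sigma_y$ differ, and drawing the initial conditions from the stationary distribution removes the discrepancy. A scalar AR(1) sketch of the same phenomenon (the parameters `a` and `s2` are made up; the lecture works with the full matrix system):

```python
import numpy as np

# y_t = a*y_{t-1} + u_t, Var(u_t) = s2, with y_0 a fixed number
a, s2, T = 0.9, 1.0, 6

# With y_0 fixed, Var(y_t) = s2*(1 - a**(2t))/(1 - a**2) grows with t...
var = np.array([s2 * (1 - a**(2 * t)) / (1 - a**2) for t in range(1, T + 1)])
# ...so the superdiagonal covariances Cov(y_t, y_{t-1}) = a*Var(y_{t-1}) differ:
cov_super = a * var[:-1]
assert not np.allclose(cov_super, cov_super[0])   # not stationary

# Drawing y_0 from the stationary distribution instead keeps the variance
# constant: v = a**2 * v + s2 has fixed point v = s2/(1 - a**2)
v = s2 / (1 - a**2)
assert np.isclose(a**2 * v + s2, v)               # stationary
```

The second assertion is exactly the "alter our initial conditions" remedy the text proposes, specialized to the scalar case.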
