Fix typos #266

New issue

Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.

By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.

Already on GitHub? Sign in to your account

Merged
merged 3 commits into from
Jul 13, 2023
10 changes: 5 additions & 5 deletions lectures/input_output.md
@@ -118,7 +118,7 @@ A basic framework for their analysis is



- After introducing the input-ouput model, we describe some of its connections to {doc}`linear programming lecture <lp_intro>`.
+ After introducing the input-output model, we describe some of its connections to {doc}`linear programming lecture <lp_intro>`.


## Input output analysis
@@ -307,7 +307,7 @@ L
```

```{code-cell} ipython3
- x = L @ d # solving for gross ouput
+ x = L @ d # solving for gross output
x
```
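For context on the cell above: it assumes `L` has already been formed from the input-output matrix. A minimal self-contained sketch of the whole computation, with illustrative values for `A` and `d` (not the lecture's data):

```python
import numpy as np

# Illustrative 3-sector input-output matrix and final demand
# (hypothetical values, not the ones used in the lecture)
A = np.array([[0.3, 0.2, 0.1],
              [0.1, 0.4, 0.2],
              [0.2, 0.1, 0.3]])
d = np.array([10.0, 5.0, 8.0])

L = np.linalg.inv(np.identity(3) - A)   # Leontief inverse
x = L @ d                               # solving for gross output

# Gross output satisfies the accounting identity x = A x + d
print(np.allclose(x, A @ x + d))        # True
```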

@@ -434,7 +434,7 @@ $$

The primal problem chooses a feasible production plan to minimize costs for delivering a pre-assigned vector of final goods consumption $d$.

- The dual problem chooses prices to maxmize the value of a pre-assigned vector of final goods $d$ subject to prices covering costs of production.
+ The dual problem chooses prices to maximize the value of a pre-assigned vector of final goods $d$ subject to prices covering costs of production.

By the [strong duality theorem](https://en.wikipedia.org/wiki/Dual_linear_program#Strong_duality),
the optimal values of the primal and dual problems coincide:
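For the Leontief economy, strong duality can be checked directly with NumPy, since both problems are solved at the binding constraints: the primal at $x^* = Ld$ and the dual at $p^* = L^\top a$. Here `a` is a stand-in for the unit cost vector and all values are illustrative, not the lecture's:

```python
import numpy as np

A = np.array([[0.3, 0.2],
              [0.1, 0.4]])      # hypothetical input-output matrix
a = np.array([2.0, 1.0])        # stand-in for the unit cost vector
d = np.array([5.0, 3.0])        # final demand

L = np.linalg.inv(np.identity(2) - A)

x_star = L @ d                  # primal solution (gross output)
p_star = L.T @ a                # dual solution (prices)

# Strong duality: minimized cost equals maximized value of final goods
print(np.isclose(a @ x_star, d @ p_star))   # True
```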
@@ -482,7 +482,7 @@ plt.show()

## Leontief inverse

- We have discussed that gross ouput $x$ is given by {eq}`eq:inout_2`, where $L$ is called the Leontief Inverse.
+ We have discussed that gross output $x$ is given by {eq}`eq:inout_2`, where $L$ is called the Leontief Inverse.

Recall the {doc}`Neumann Series Lemma <eigen_II>` which states that $L$ exists if the spectral radius $r(A)<1$.

@@ -551,7 +551,7 @@ The above figure indicates that manufacturing is the most dominant sector in the

### Output multipliers

- Another way to rank sectors in input output networks is via outuput multipliers.
+ Another way to rank sectors in input output networks is via output multipliers.

The **output multiplier** of sector $j$ denoted by $\mu_j$ is usually defined as the
total sector-wide impact of a unit change of demand in sector $j$.
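One common formalization of this definition (an assumption here, since the lecture's precise formula is outside this diff) takes $\mu_j$ to be the $j$-th column sum of the Leontief inverse:

```python
import numpy as np

A = np.array([[0.3, 0.2, 0.1],
              [0.1, 0.4, 0.2],
              [0.2, 0.1, 0.3]])   # illustrative values

L = np.linalg.inv(np.identity(3) - A)

# mu_j: economy-wide gross output generated by one extra unit
# of final demand in sector j -- the j-th column sum of L
mu = L.sum(axis=0)
print((mu > 1).all())   # True: each multiplier exceeds the direct effect
```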
30 changes: 15 additions & 15 deletions lectures/simple_linear_regression.md
@@ -297,7 +297,7 @@ Calculating $\beta$
```{code-cell} ipython3
df = df[['X','Y']].copy() # Original Data

- # Calcuate the sample means
+ # Calculate the sample means
x_bar = df['X'].mean()
y_bar = df['Y'].mean()
```
@@ -393,7 +393,7 @@ df
Sometimes it can be useful to rename your columns to make it easier to work with in the DataFrame

```{code-cell} ipython3
- df.columns = ["cntry", "year", "life_expectency", "gdppc"]
+ df.columns = ["cntry", "year", "life_expectancy", "gdppc"]
df
```

@@ -415,10 +415,10 @@ It is always a good idea to spend a bit of time understanding what data you actu

For example, you may want to explore this data to see if there is consistent reporting for all countries across years

- Let's first look at the Life Expectency Data
+ Let's first look at the Life Expectancy Data

```{code-cell} ipython3
- le_years = df[['cntry', 'year', 'life_expectency']].set_index(['cntry', 'year']).unstack()['life_expectency']
+ le_years = df[['cntry', 'year', 'life_expectancy']].set_index(['cntry', 'year']).unstack()['life_expectancy']
le_years
```
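The `set_index` + `unstack` pattern in the cell above pivots the year level into columns; a minimal sketch with hypothetical data:

```python
import pandas as pd

df = pd.DataFrame({'cntry': ['AUS', 'AUS', 'USA', 'USA'],
                   'year': [2017, 2018, 2017, 2018],
                   'life_expectancy': [82.5, 82.7, 78.6, 78.7]})

# One row per country, one column per year
le_years = (df.set_index(['cntry', 'year'])
              .unstack()['life_expectancy'])
print(le_years.shape)   # (2, 2)
```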

@@ -453,13 +453,13 @@ df = df[df.year == 2018].reset_index(drop=True).copy()
```

```{code-cell} ipython3
- df.plot(x='gdppc', y='life_expectency', kind='scatter', xlabel="GDP per capita", ylabel="Life Expectency (Years)",);
+ df.plot(x='gdppc', y='life_expectancy', kind='scatter', xlabel="GDP per capita", ylabel="Life Expectancy (Years)",);
```

This data shows a couple of interesting relationships.

1. there are a number of countries with similar GDP per capita levels but a wide range in Life Expectancy
- 2. there appears to be a positive relationship between GDP per capita and life expectancy. Countries with higher GDP per capita tend to have higher life expectency outcomes
+ 2. there appears to be a positive relationship between GDP per capita and life expectancy. Countries with higher GDP per capita tend to have higher life expectancy outcomes

Even though OLS is solving linear equations -- one option we have is to transform the variables, such as through a log transform, and then use OLS to estimate the transformed variables
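A sketch of that transform step, with hypothetical values standing in for the lecture's DataFrame:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'gdppc': [1_000.0, 10_000.0, 50_000.0],
                   'life_expectancy': [60.0, 72.0, 80.0]})

# Log-transform the regressor, then run OLS on (log_gdppc, life_expectancy)
df['log_gdppc'] = np.log(df['gdppc'])
print(df['log_gdppc'].round(2).tolist())   # [6.91, 9.21, 10.82]
```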

@@ -470,7 +470,7 @@ ln -> ln == elasticities
By specifying `logx` you can plot the GDP per Capita data on a log scale

```{code-cell} ipython3
- df.plot(x='gdppc', y='life_expectency', kind='scatter', xlabel="GDP per capita", ylabel="Life Expectancy (Years)", logx=True);
+ df.plot(x='gdppc', y='life_expectancy', kind='scatter', xlabel="GDP per capita", ylabel="Life Expectancy (Years)", logx=True);
```

As you can see from this transformation -- a linear model fits the shape of the data more closely.
@@ -486,11 +486,11 @@ df
**Q4:** Use {eq}`eq:optimal-alpha` and {eq}`eq:optimal-beta` to compute optimal values for $\alpha$ and $\beta$

```{code-cell} ipython3
- data = df[['log_gdppc', 'life_expectency']].copy() # Get Data from DataFrame
+ data = df[['log_gdppc', 'life_expectancy']].copy() # Get Data from DataFrame

# Calculate the sample means
x_bar = data['log_gdppc'].mean()
- y_bar = data['life_expectency'].mean()
+ y_bar = data['life_expectancy'].mean()
```

```{code-cell} ipython3
@@ -499,7 +499,7 @@ data

```{code-cell} ipython3
# Compute the Sums
- data['num'] = data['log_gdppc'] * data['life_expectency'] - y_bar * data['log_gdppc']
+ data['num'] = data['log_gdppc'] * data['life_expectancy'] - y_bar * data['log_gdppc']
data['den'] = pow(data['log_gdppc'],2) - x_bar * data['log_gdppc']
β = data['num'].sum() / data['den'].sum()
print(β)
@@ -513,13 +513,13 @@ print(α)
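The hand-computed $\beta$ and $\alpha$ from the cells above can be sanity-checked against `np.polyfit`, which fits the same least-squares line (synthetic data here, not the lecture's dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(6, 11, size=50)               # stand-in for log_gdppc
y = 20 + 5 * x + rng.normal(0, 1, size=50)    # stand-in for life_expectancy

x_bar, y_bar = x.mean(), y.mean()
beta = (x * y - y_bar * x).sum() / (x**2 - x_bar * x).sum()
alpha = y_bar - beta * x_bar

# np.polyfit with deg=1 returns (slope, intercept) of the least-squares line
b_check, a_check = np.polyfit(x, y, 1)
print(np.isclose(beta, b_check), np.isclose(alpha, a_check))   # True True
```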
**Q5:** Plot the line of best fit found using OLS

```{code-cell} ipython3
- data['life_expectency_hat'] = α + β * df['log_gdppc']
- data['error'] = data['life_expectency_hat'] - data['life_expectency']
+ data['life_expectancy_hat'] = α + β * df['log_gdppc']
+ data['error'] = data['life_expectancy_hat'] - data['life_expectancy']

fig, ax = plt.subplots()
- data.plot(x='log_gdppc',y='life_expectency', kind='scatter', ax=ax)
- data.plot(x='log_gdppc',y='life_expectency_hat', kind='line', ax=ax, color='g')
- plt.vlines(data['log_gdppc'], data['life_expectency_hat'], data['life_expectency'], color='r')
+ data.plot(x='log_gdppc',y='life_expectancy', kind='scatter', ax=ax)
+ data.plot(x='log_gdppc',y='life_expectancy_hat', kind='line', ax=ax, color='g')
+ plt.vlines(data['log_gdppc'], data['life_expectancy_hat'], data['life_expectancy'], color='r')
```

:::{solution-end}
2 changes: 1 addition & 1 deletion lectures/time_series_with_matrices.md
@@ -504,7 +504,7 @@ print("Sigma_y = ", Sigma_y)

Notice that the covariances between $y_t$ and $y_{t-1}$ -- the elements on the superdiagonal -- are **not** identical.

- This is is an indication that the time series respresented by our $y$ vector is not **stationary**.
+ This is an indication that the time series represented by our $y$ vector is not **stationary**.

To make it stationary, we'd have to alter our system so that our **initial conditions** $(y_1, y_0)$ are not fixed numbers but instead a jointly normally distributed random vector with a particular mean and covariance matrix.
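A sketch of that modification: draw the initial conditions from a joint normal distribution instead of fixing them. The mean and covariance below are placeholders; the real ones would come from the model's stationary distribution.

```python
import numpy as np

mu_0 = np.array([10.0, 10.0])        # placeholder stationary mean
Sigma_0 = np.array([[2.0, 1.2],
                    [1.2, 2.0]])     # placeholder stationary covariance

rng = np.random.default_rng(1234)
y_init = rng.multivariate_normal(mu_0, Sigma_0)   # random (y_0, y_1)
print(y_init.shape)   # (2,)
```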
