New chapter "Metropolis" with Python Implementation #929
Merged. leios merged 82 commits into algorithm-archivists:main from shudipto-amin:metropolis_in_python on Dec 3, 2021.

Changes shown below are from 41 of the 82 commits.

Commits
- f60614c Add new chapter: metropolis_hastings; in python (kazi-shudipto-amin)
- dcd5e13 Add markdown and python files for metropolis (kazi-shudipto-amin)
- 7af56cb Add image for metropolis (kazi-shudipto-amin)
- 640cd9e Merge branch 'master' into metropolis_hastings_in_python (kazi-shudipto-amin)
- 701cbbc Update .gitignore, SUMMARY.md, and metropolis (kazi-shudipto-amin)
- 32ae451 Untrack .ipynb_checkpoints (kazi-shudipto-amin)
- 37f40db Untrack ipynb_checkpoints (kazi-shudipto-amin)
- 9420dc0 Fix algorithm steps list (kazi-shudipto-amin)
- 5c894c2 Really fix markdown list (kazi-shudipto-amin)
- 794c177 Add plot of P to chapter (kazi-shudipto-amin)
- 39758e9 Add metropolis animation and update plot of P(x) (kazi-shudipto-amin)
- 3252fb0 Add minor update to chapter text (kazi-shudipto-amin)
- 088dbb4 Generate gif and mp4 for random walk (kazi-shudipto-amin)
- d20b8bf Complete first draft! (kazi-shudipto-amin)
- 4379822 Final version before Pull Request (kazi-shudipto-amin)
- 8372746 Merge branch 'master' into metropolis_hastings_in_python (kazi-shudipto-amin)
- 30ecff7 Add metropolis citation (kazi-shudipto-amin)
- fddfcf1 Remove unnecessary lines from code and bib file (kazi-shudipto-amin)
- 2bc76fc Fix display of code in md (kazi-shudipto-amin)
- b49514f Fix random walk capitalization. (kazi-shudipto-amin)
- 61698a4 Apply Amaras' suggestions from code review (shudipto-amin)
- 00c17ec Merge 'metropolis_in_python' from origin (kazi-shudipto-amin)
- 63750c3 Fix the code import lines in md file. (kazi-shudipto-amin)
- cb8b905 Change to in metropolis.py (kazi-shudipto-amin)
- a74aa56 Add probability section to metropolis.md (kazi-shudipto-amin)
- 4517bd8 Move Probability section in metropolis to own chapter. (kazi-shudipto-amin)
- 5947e98 Fix SUMMARY.md spelling mistake from previous commit (kazi-shudipto-amin)
- 0346e3d Add figures for probability distribution chapter (kazi-shudipto-amin)
- 3458cc7 Update image of normal distribution (kazi-shudipto-amin)
- 09bc82c Finish first draft of probability chapter (kazi-shudipto-amin)
- 06a841e Minor edits to distributions.md. (kazi-shudipto-amin)
- 88f87d4 "Minor changes to distributions.md" (kazi-shudipto-amin)
- 7c4f3a4 Complete the Example/Application section of metropolis. (kazi-shudipto-amin)
- 2f64976 Address most issues in review of PR#929 (kazi-shudipto-amin)
- 615e839 Add image of 1D_particles (kazi-shudipto-amin)
- d81065e Update citations in md file (kazi-shudipto-amin)
- 4fdc98f Add testing function to metropolis python code. (kazi-shudipto-amin)
- 59f42ff Numpyfy metropolis f function and fix errors. (kazi-shudipto-amin)
- a88b3bc Implement generator to iterate. (kazi-shudipto-amin)
- cda5941 Reformat output of test and nrmsd error reporting. (kazi-shudipto-amin)
- 5acde92 Add description of video and fix code display. (kazi-shudipto-amin)
- f4ff116 Update contents/probability/distributions/distributions.md (shudipto-amin)
- e29552f Update contents/probability/distributions/distributions.md (shudipto-amin)
- c7bc496 Update contents/probability/distributions/distributions.md (shudipto-amin)
- 224f9e5 Update contents/probability/distributions/distributions.md (shudipto-amin)
- 84825be Update contents/probability/distributions/distributions.md (shudipto-amin)
- a6d3be1 Update contents/probability/distributions/distributions.md (shudipto-amin)
- 0d2a9b5 Update contents/probability/distributions/distributions.md (shudipto-amin)
- 7a8c3ed Update contents/probability/distributions/distributions.md (shudipto-amin)
- 97dea27 Update contents/probability/distributions/distributions.md (shudipto-amin)
- ddb5ee6 Put sentences in `metropolis.md` on separate lines. (kazi-shudipto-amin)
- e531c8a Put sentences in distributions.md chapter on separate lines. (kazi-shudipto-amin)
- ac0b860 Addresses issues raised by Leios' 1st review of probability chapter. (kazi-shudipto-amin)
- fead1d3 Add minor edits. (kazi-shudipto-amin)
- 6d279d9 Update contents/metropolis/metropolis.md title (shudipto-amin)
- 91408c9 Simplify intro to contents/metropolis/metropolis.md (shudipto-amin)
- 51d51b3 Minor formatting to contents/metropolis/metropolis.md (shudipto-amin)
- 82649af Fix spelling contents/metropolis/metropolis.md (shudipto-amin)
- ac87374 Simplify line 71 of contents/metropolis/metropolis.md (shudipto-amin)
- 9ae11ef Update contents/metropolis/metropolis.md (shudipto-amin)
- 070418e Update example application in metropolis according to Leios comments. (kazi-shudipto-amin)
- e5a52f7 Minor edit: contents/probability/distributions/distributions.md (shudipto-amin)
- 34f5938 Minor edit: contents/probability/distributions/distributions.md (shudipto-amin)
- bbd57fc Minor edit: contents/probability/distributions/distributions.md (shudipto-amin)
- b15e968 Minor edit: contents/probability/distributions/distributions.md (shudipto-amin)
- 0b7ad1f Minor edit: contents/probability/distributions/distributions.md (shudipto-amin)
- d4860e4 Apply minor suggestions from Leios' code review (shudipto-amin)
- befed18 Add minor edits from code Leios' review (shudipto-amin)
- 1cd4548 Add minor edit (kazi-shudipto-amin)
- 3fea175 Apply minor edit suggestions from Leios (shudipto-amin)
- 8bd7837 Add minor formatting, such as periods after equations. (kazi-shudipto-amin)
- 3473e53 Fix integer interval notation using hack. (kazi-shudipto-amin)
- d7ef99c Add minor edits missed previously in probability chapter. (kazi-shudipto-amin)
- bd7fb4c Add minor edits to metropolis, mostly punctuation. (kazi-shudipto-amin)
- 19160b5 Apply minor edits from Leios (shudipto-amin)
- 3e20269 Add intro line to Algorithm section of metropolis. (kazi-shudipto-amin)
- edd0f2a Merge branch 'master' into metropolis_in_python (shudipto-amin)
- d547ef9 Add name to contributor.md (kazi-shudipto-amin)
- 581d1f7 Merge branch 'metropolis_in_python' of github.com:shudipto-amin/algor… (kazi-shudipto-amin)
- 18c4d44 Apply suggestions from Leios (shudipto-amin)
- 25dd460 Apply suggestions from code review (leios)
- 51823be Merge branch 'main' into metropolis_in_python (leios)
New file: metropolis.py (90 lines added)
import numpy as np


def f(x, normalize=False):
    '''
    Function proportional to target distribution, a sum of Gaussians.
    For testing, set normalize to True, to get target distribution exactly.
    '''
    # Gaussian heights, width parameters, and mean positions respectively:
    a = np.array([10., 3., 1.]).reshape(3, 1)
    b = np.array([4., 0.2, 2.]).reshape(3, 1)
    xs = np.array([-4., -1., 5.]).reshape(3, 1)

    if normalize:
        norm = (np.sqrt(np.pi) * (a / np.sqrt(b))).sum()
        a /= norm

    return (a * np.exp(-b * (x - xs)**2)).sum(axis=0)


def g():
    '''Random step, drawn uniformly from [-1, 1].'''
    return np.random.uniform(-1, 1)


def metropolis_step(x, f=f, g=g):
    '''Perform one full iteration and return new position.'''

    x_proposed = x + g()
    # Acceptance probability: always accept moves to higher f, sometimes to lower f
    a = min(1, (f(x_proposed) / f(x)).item())

    x_new = np.random.choice([x_proposed, x], p=[a, 1 - a])

    return x_new


def metropolis_iterate(x0, num_steps):
    '''Iterate the Metropolis algorithm for num_steps using initial position x0.'''

    for n in range(num_steps):
        if n == 0:
            x = x0
        else:
            x = metropolis_step(x)
        yield x


def test_metropolis_iterate(num_steps, xmin, xmax, x0):
    '''
    Calculate error in normalized density histogram of data
    generated by metropolis_iterate() by using the
    normalized-root-mean-square-deviation metric.
    '''

    bin_width = 0.25
    bins = np.arange(xmin, xmax + bin_width / 2, bin_width)
    centers = np.arange(xmin + bin_width / 2, xmax, bin_width)

    true_values = f(centers, normalize=True)
    mean_value = np.mean(true_values - min(true_values))

    x_dat = list(metropolis_iterate(x0, num_steps))
    heights, _ = np.histogram(x_dat, bins=bins, density=True)

    nmsd = np.average((heights - true_values)**2 / mean_value)
    nrmsd = np.sqrt(nmsd)

    return nrmsd


if __name__ == "__main__":
    xmin, xmax = -7, 7
    x0 = np.random.uniform(xmin, xmax)

    num_steps = 50_000

    x_dat = list(metropolis_iterate(x0, num_steps))

    # Write data to file
    output_string = "\n".join(str(x) for x in x_dat)

    with open("output.dat", "w") as out:
        out.write(output_string)
        out.write("\n")

    # Testing
    print(f"Testing with x0 = {x0:5.2f}")
    print(f"{'num_steps':>10s} {'NRMSD':10s}")
    for num_steps in (500, 5_000, 50_000):
        nrmsd = test_metropolis_iterate(num_steps, xmin, xmax, x0)
        print(f"{num_steps:10d} {nrmsd:5.1%}")
New file: contents/probability/distributions/distributions.md (152 lines added)
# What's a probability distribution?

## Discrete Probability Distributions

It's intuitive for us to understand what a __discrete__ probability distribution is - for example, we understand the outcomes of a coin toss very well, and also that of a dice roll. For a single coin toss, the probability distribution can be formally written as,

$$
P(n) = \begin{cases}
\frac 1 2 & n \in [H,T] \\
0 & n \notin [H,T]
\end{cases}
$$

which is basically saying that the probability that the outcome $$n$$ takes on any specific value is 0.5, if those specific values are heads (H) or tails (T). The second line states that the probability of any other possibility is zero. We can usually ignore this line, as it is quite trivial, and it is understood that anything outside of heads or tails is impossible.

One important thing to always take note of for a probability distribution is the set of possibilities, or the __domain__ of the distribution. Here, $$[H,T]$$ is the domain of $$P(n)$$, telling us that $$n$$ can only be $$H$$ or $$T$$.

The outcome $$n$$ can also be a number. For example, the outcome of a __dice roll__ has the probability distribution,

$$
P(n) = \begin{matrix}
\displaystyle\frac 1 6 &;& n \in [1..6]
\end{matrix}
$$

which is saying that the probability of $$n$$ being a whole number between 1 and 6 is $$1/6$$, and we assume that the probability of getting any other $$n$$ is 0. This is a discrete probability function because $$n$$ is an integer, and thus only takes discrete values.
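As a quick sanity check (not part of the chapter text), we can simulate a large number of die rolls and confirm that each outcome indeed occurs with probability close to $$1/6$$:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
rolls = rng.integers(1, 7, size=60_000)  # 60,000 rolls of a fair six-sided die

for n in range(1, 7):
    print(n, np.mean(rolls == n))  # each fraction should be close to 1/6 ~ 0.167
```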
Both of the above examples are rather boring, because the value of $$P(n)$$ is the same for all $$n$$. An example of a discrete probability function where the probability actually depends on $$n$$ is when $$n$$ is the sum of numbers on a __roll of two dice__. In this case, $$P(n)$$ is different for each $$n$$, as some possibilities like $$n=2$$ can happen in only one way (by getting a 1 on both dice), whereas $$n=4$$ can happen in 3 ways (1 and 3; or 2 and 2; or 3 and 1).

Rolling two dice is a great case study for how we can construct a probability distribution, since the probability varies and it is not immediately obvious how it varies. So let's go ahead and construct it!

Let's first define the domain of our target $$P(n)$$. We know that the lowest sum of two dice is 2 (a 1 on both dice), so $$n \geq 2$$ for sure. Similarly, the maximum is the sum of two sixes, or 12, so $$n \leq 12$$ also.

So now we have the domain of possibilities, i.e., $$n \in [2..12]$$. Next, we take a very common approach - we count up the number of different ways each of the possible values of $$n$$ can occur. Let's call this the frequency, $$f(n)$$, of each possible $$n$$. We already know that $$f(2)=1$$, as there is only one way to get a pair of 1s. For $$n=3$$, we see that there are two possible ways: a $$1$$ and $$2$$, or a $$2$$ and $$1$$, so $$f(3)=2$$. If you continue doing this for all $$n$$, you may see a pattern (homework for the reader!). Once you have all the $$f(n)$$, we can visualize it in a plot,

<p>
<img class="center" src="res/double_die_frequencies.png" alt="<FIG> Die Roll" style="width:80%"/>
</p>

So the most common sum of two dice is a $$7$$, and the further away from $$7$$ you get, the less likely the outcome. Good to know, for a prospective gambler!

### Normalization

The $$f(n)$$ plotted above is technically NOT the probability $$P(n)$$ - because we know that the sum of all probabilities should be 1, which clearly isn't the case for $$f(n)$$. But we can just get that by dividing $$f(n)$$ by the _total_ number of possibilities, $$N$$. For two dice, that is $$N = 6 \times 6 = 36$$, but we could also express it as the _sum of all frequencies_,

$$
N = \sum_n f(n)
$$

which also equals 36 in this case. So, by dividing $$f(n)$$ by $$\sum_n f(n)$$ we get our target probability distribution, $$P(n)$$. This process is called __normalization__ and is crucial for determining almost any probability distribution. So in general, if we have the function $$f(n)$$, we can get the probability as

$$
P(n) = \frac{f(n)}{\displaystyle\sum_{n} f(n)}
$$

Note that $$f(n)$$ does not necessarily have to be the frequency of $$n$$ - it could really be any function which is _proportional_ to $$P(n)$$, and the above definition of $$P(n)$$ would still hold. And it's easy to check that the sum is now equal to 1, since

$$
\sum_n P(n) = \frac{\displaystyle\sum_{n}f(n)}{\displaystyle\sum_{n} f(n)} = 1
$$

Once we have the probability function $$P(n)$$, we can calculate all sorts of probabilities. For example, let's say we want to find the probability that $$n$$ will be between two integers $$a$$ and $$b$$ inclusive. For brevity, we will use the notation $$\mathbb{P}(a \leq n \leq b)$$ to denote this probability. And to calculate it, we simply have to sum up all the probabilities for each value of $$n$$ in that range, i.e.,

$$
\mathbb{P}(a \leq n \leq b) = \sum_{n=a}^{b} P(n)
$$
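To tie the pieces above together, here is a small sketch (not in the chapter) that counts the frequencies $$f(n)$$ for the two-dice example, normalizes them into $$P(n)$$, and evaluates a range probability $$\mathbb{P}(a \leq n \leq b)$$:

```python
# Count f(n): the number of (d1, d2) pairs whose sum is n, for n = 2..12
f = {n: 0 for n in range(2, 13)}
for d1 in range(1, 7):
    for d2 in range(1, 7):
        f[d1 + d2] += 1

N = sum(f.values())               # total number of outcomes: 36
P = {n: f[n] / N for n in f}      # normalization: P(n) = f(n) / N

print(P[7])                            # most likely sum: 6/36 ~ 0.167
print(sum(P[n] for n in range(4, 7)))  # P(4 <= n <= 6) = (3 + 4 + 5)/36 ~ 0.333
```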
## Probability Density Functions

What if instead of a discrete variable $$n$$, we had a continuous variable $$x$$, like temperature or weight? In that case, it doesn't make sense to ask what the probability is of $$x$$ being _exactly_ a particular number - there are infinitely many possible real numbers, after all, so the probability of $$x$$ being exactly any one of them is essentially zero! But it _does_ make sense to ask what the probability is that $$x$$ will be _between_ a certain range of values. For example, one might say that there is a 50% chance that the temperature tomorrow noon will be between 5 and 15, or a 5% chance that it will be between 16 and 16.5. But how do we put all that information, for every possible range, in a single function? The answer is to use a __probability density function__.

What does that mean? Well, suppose $$x$$ is a continuous quantity, and we have a probability density function, $$P(x)$$, which looks like

<p>
<img class="center" src="res/normal_distribution.png" alt="<FIG> probability density" style="width:100%"/>
</p>

Now, if we are interested in the probability of the range of values that lie between $$x_0$$ and $$x_0 + dx$$, all we have to do is calculate the _area_ of the green sliver above. This is the defining feature of a probability density function:

> the probability of a range of values is the _area_ of the region under the probability density curve which is within that range.

But how do we quantify this area? Imagine that the green sliver in the diagram is really, really thin - infinitesimally thin, to be precise, with the width $$dx$$ almost vanishing to zero. In that case, the area of the green sliver is approximated by a rectangle of height $$P(x)$$ and width $$dx$$. So the area will be $$P(x)dx$$, and thus

$$
\mathbb{P}(x_0 \leq x \leq x_0 + dx) = P(x)dx
$$

So strictly speaking, $$P(x)$$ itself is NOT a probability, but rather the probability is the quantity $$P(x)dx$$, or any area under the curve. That is why we call $$P(x)$$ the probability _density_ at $$x$$, while the actual probability is only defined for ranges of $$x$$.

But what about large ranges of $$x$$, which are not infinitesimally thin? We do exactly what we did for the discrete case - sum up the probabilities of each and every distinct range of values, each with an infinitesimal width $$dx$$. And what do we call such a sum over a continuous variable? Why, an integral, of course! Who knew calculus would come in handy one day? And so we have,

$$
\mathbb{P}(a \leq x \leq b ) = \int_a^b P(x)dx
$$

And the fact that all probabilities must sum to 1 translates to

$$
\int_D P(x)dx = 1
$$

where $$D$$ denotes the __domain__ of $$P(x)$$, i.e., the entire range of possible values of $$x$$ for which $$P(x)$$ is defined.
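To make the integral concrete, here is a small numerical sketch (not in the chapter). It uses a uniform density on the domain $$[0, 10]$$ and recovers a range probability by summing up slivers of area $$P(x)dx$$:

```python
import numpy as np

# Uniform density on the domain D = [0, 10]: P(x) = 1/10 everywhere in D
dx = 0.001
x = np.arange(0, 10, dx)
P = np.full_like(x, 1 / 10)

# P(2 <= x <= 5): add up P(x)*dx over the slivers inside the range
in_range = (x >= 2) & (x < 5)
print(np.sum(P[in_range]) * dx)  # ~ (5 - 2)/10 = 0.3

# The probability over the whole domain D is 1
print(np.sum(P) * dx)            # ~ 1.0
```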
### Normalization of a Density Function

Just like in the discrete case, we often first calculate some density or frequency function $$f(x)$$, which is NOT $$P(x)$$, but proportional to it. We can get the probability density function by normalizing it in a similar way, except that we integrate instead of sum:

$$
P(\mathbf{x}) = \frac{f(\mathbf{x})}{\int_D f(\mathbf{x})d\mathbf{x}}
$$

For example, consider the __normal distribution function__,

$$
f(x) = e^{-x^2}
$$

which is defined for all real numbers $$x$$. We first integrate it (or do a quick Google search, as it is rather tricky) to get

$$
N = \int_{-\infty}^{\infty} e^{-x^2} dx = \sqrt{\pi}
$$

(yes, we get $$\pi$$ out of nowhere, which is an interesting topic for another chapter!) and so we have

$$
P(x) = \frac{1}{N} e^{-x^2} = \frac{1}{\sqrt{\pi}} e^{-x^2}
$$

In general, normalization can allow us to create a probability distribution out of almost any function $$f(x)$$. There are really only two rules that $$f(\mathbf{x})$$ must satisfy to be a candidate for a probability density distribution:
1. $$\int_{S}f(\mathbf{x})d\mathbf{x}$$ is non-negative for any subdomain $$S$$ of $$D$$.
2. $$\int_D f(\mathbf{x})d\mathbf{x}$$ must be finite.
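As a numerical check of the example above (not part of the chapter), we can approximate the normalization integral for $$f(x) = e^{-x^2}$$ on a grid, compare it with $$\sqrt{\pi}$$, and confirm that the normalized density integrates to 1:

```python
import numpy as np

dx = 0.001
x = np.arange(-10, 10, dx)   # wide enough that the tails of e^(-x^2) are negligible
f = np.exp(-x**2)

N = np.sum(f) * dx           # numerical approximation of the normalization integral
print(N, np.sqrt(np.pi))     # both ~ 1.7725

P = f / N
print(np.sum(P) * dx)        # total probability ~ 1.0
```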
## License

##### Images/Graphics

- The image "[Frequency distribution of a double die roll](res/double_die_frequencies.png)" was created by [K. Shudipto Amin](https://github.com/shudipto-amin) and is licensed under the [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/legalcode).

- The image "[Probability Density](res/normal_distribution.png)" was created by [K. Shudipto Amin](https://github.com/shudipto-amin) and is licensed under the [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/legalcode).

##### Text

The text of this chapter was written by [K. Shudipto Amin](https://github.com/shudipto-amin) and is licensed under the [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/legalcode).

[<p><img class="center" src="../cc/CC-BY-SA_icon.svg" /></p>](https://creativecommons.org/licenses/by-sa/4.0/)

##### Pull Requests

After initial licensing ([#560](https://github.com/algorithm-archivists/algorithm-archive/pull/560)), the following pull requests have modified the text or graphics of this chapter:
- none
Review comment: Didn't we say in the chapter that our domain was from -10 to 10?