Fixing 'Example Code' #245

Merged · 8 commits · Jul 12, 2018
@@ -48,7 +48,7 @@ In the end, the code should look something like this:

{% references %} {% endreferences %}

-### Example Code
+## Example Code

{% method %}
{% sample lang="jl" %}
@@ -61,6 +61,7 @@ In the end, the code should look something like this:
### C
[import, lang:"c_cpp"](code/c/graham.c)
{% sample lang="js" %}
+### Javascript
[import, lang:"javascript"](code/javascript/graham-scan.js)
{% endmethod %}

@@ -20,7 +20,7 @@ Since this algorithm, there have been many other algorithms that have advanced t

{% references %} {% endreferences %}

-### Example Code
+## Example Code

{% method %}
{% sample lang="cs" %}
2 changes: 1 addition & 1 deletion chapters/data_compression/huffman/huffman.md
@@ -47,7 +47,7 @@ and `bibbity_bobbity` becomes `01000010010111011110111000100101110`.
As mentioned this uses the minimum number of bits possible for encoding.
The fact that this algorithm is both conceptually simple and provably useful is rather extraordinary to me and is why Huffman encoding will always hold a special place in my heart.

-# Example Code
+## Example Code
In code, this can be a little tricky. It requires a way to keep the nodes sorted as more and more of them are added to the system.
The most straightforward way to do this is with a priority queue, though how natural that is depends on the language.
In addition, to read the tree backwards, some sort of [Depth First Search](../../tree_traversal/tree_traversal.md) needs to be implemented.
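The priority-queue bookkeeping described above can be sketched in a few lines of Python, using the standard `heapq` module; this is an illustrative sketch, not the Archive's own implementation. The depth-first walk at the end reads the codes back out of the tree:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    # Priority queue of (frequency, tiebreak, node) entries.
    # A node is either a leaf character or a (left, right) pair.
    heap = [(freq, i, ch) for i, (ch, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (left, right)))
        count += 1
    # Depth-first walk: 0 for left branches, 1 for right branches.
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix or "0"  # single-symbol edge case
    _, _, root = heap[0]
    walk(root, "")
    return codes
```

On `bibbity_bobbity` this reproduces the 35-bit total mentioned above, though the individual codewords depend on how frequency ties are broken.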
2 changes: 2 additions & 0 deletions chapters/decision_problems/stable_marriage/stable_marriage.md
@@ -18,6 +18,8 @@ To be clear, even though this algorithm seems conceptually simple, it is rather
I do not at all claim that the code provided here is efficient, and we will definitely be coming back to this problem in the future when we have more tools under our belt.
I am incredibly interested to see what you guys do and how you implement the algorithm.
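One classic way to attack this problem is the Gale-Shapley deferred-acceptance procedure, which can be sketched in Python as follows; the data layout and function name here are illustrative, and this is not necessarily the code linked in the chapter:

```python
def stable_marriage(men_prefs, women_prefs):
    """Gale-Shapley: men propose in preference order, women tentatively
    accept the best offer they have seen so far."""
    # rank[w][m] = how much woman w likes man m (lower is better)
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    free = list(men_prefs)                   # men with no partner yet
    next_choice = {m: 0 for m in men_prefs}  # next woman each man proposes to
    engaged = {}                             # woman -> man
    while free:
        m = free.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])  # she trades up; her old partner is free
            engaged[w] = m
        else:
            free.append(m)           # rejected; m tries his next choice later
    return {m: w for w, m in engaged.items()}
```

The loop always terminates because each man proposes to each woman at most once, and the resulting matching is stable by construction.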

+## Example Code
+
{% method %}
{% sample lang="jl" %}
### Julia
2 changes: 1 addition & 1 deletion chapters/differential_equations/euler/euler.md
@@ -91,7 +91,7 @@ Verlet integration has a distinct advantage over the forward Euler method in bot
That said, due to the instability of the forward Euler method and its error with larger timesteps, this method is rarely used in practice.
Still, variations of this method *are* certainly used (for example, Crank-Nicolson and [Runge-Kutta](../runge_kutta/runge_kutta.md)), so the time spent reading this chapter is not a total waste!

-### Example Code
+## Example Code

Like in the case of [Verlet Integration](../../physics_solvers/verlet/verlet.md), the easiest way to check whether this method works is to try it on a simple test case.
Here, the most obvious test case would be dropping a ball from 5 meters, which is my favorite example, but proved itself to be slightly less enlightening than I would have thought.
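That dropped-ball test can be sketched in a few lines of Python, assuming the standard forward Euler update and g = 9.81 m/s^2 (an illustrative sketch, not the chapter's linked code):

```python
def forward_euler(height, dt=0.001, g=9.81):
    """Drop a ball from `height` meters and integrate y' = v, v' = -g
    with the forward Euler method until the ball hits the ground.
    Returns the simulated fall time in seconds."""
    y, v, t = height, 0.0, 0.0
    while y > 0:
        y += v * dt   # update position with the *old* velocity
        v -= g * dt   # then update velocity
        t += dt
    return t
```

The result can be checked against the analytic fall time sqrt(2h/g), which is about 1.01 s for a 5 meter drop.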
2 changes: 1 addition & 1 deletion chapters/euclidean_algorithm/euclidean.md
@@ -74,7 +74,7 @@ Here, we set `b` to be the remainder of `a%b` and `a` to be whatever `b` was las

The Euclidean Algorithm is truly fundamental to many other algorithms throughout the history of computer science and will definitely be used again later. At least to me, it's amazing how such an ancient algorithm can still have modern use and appeal. That said, there are still other algorithms out there that can find the greatest common divisor of two numbers that are arguably better in certain cases than the Euclidean algorithm, but the fact that we are discussing Euclid two millennia after his death shows how timeless and universal mathematics truly is. I think that's pretty cool.
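The mod-based swap described above is short enough to sketch directly (illustrative Python, not the chapter's imported code):

```python
def euclid_mod(a, b):
    """Euclidean algorithm: repeatedly replace (a, b) with (b, a % b)
    until the remainder is zero; the last nonzero value is the gcd."""
    a, b = abs(a), abs(b)
    while b != 0:
        a, b = b, a % b
    return a
```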

-# Example Code
+## Example Code

{% method %}
{% sample lang="c" %}
@@ -282,16 +282,17 @@ Even though this seems straightforward, the pseudocode might not be as simple as

Now, as for what's next... Well, we are in for a treat! The above algorithm clearly has 3 `for` loops, and will thus have a complexity of $$\sim O(n^3)$$, which is abysmal! If we can reduce the matrix to a specifically **tridiagonal** matrix, we can actually solve the system in $$\sim O(n)$$! How? Well, we can use an algorithm known as the _Tri-Diagonal Matrix Algorithm_ \(TDMA\) also known as the _Thomas Algorithm_.
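For reference, the three nested `for` loops mentioned above can be sketched in Python as follows; this is an illustrative version with partial pivoting added for numerical safety, not the chapter's own implementation:

```python
def gaussian_elimination(A, b):
    """Solve A x = b by forward elimination with partial pivoting,
    then back substitution. A is a list of rows; inputs are not modified."""
    n = len(A)
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]  # augmented matrix
    for col in range(n):
        # Partial pivoting: bring up the row with the largest entry here.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for row in range(col + 1, n):
            factor = M[row][col] / M[col][col]
            for k in range(col, n + 1):
                M[row][k] -= factor * M[col][k]
    # Back substitution on the upper-triangular system.
    x = [0.0] * n
    for row in range(n - 1, -1, -1):
        tail = sum(M[row][k] * x[k] for k in range(row + 1, n))
        x[row] = (M[row][n] - tail) / M[row][row]
    return x
```

The three nested loops in the elimination phase are exactly where the O(n^3) cost comes from.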

-### Example Code
-
-The full code can be seen here:
+## Example Code

{% method %}
{% sample lang="jl" %}
### Julia
[import, lang:"julia"](code/julia/gaussian_elimination.jl)
{% sample lang="c" %}
### C
[import, lang:"c_cpp"](code/c/gaussian_elimination.c)
+{% sample lang="rs" %}
+### Rust
+[import, lang:"rust"](code/rust/gaussian_elimination.rs)
{% endmethod %}

5 changes: 4 additions & 1 deletion chapters/matrix_methods/thomas/thomas.md
@@ -35,14 +35,17 @@ d'_0 = \frac{d_0}{b_0}
\end{align}
$$

-In code, this will look like this:
+## Example Code

{% method %}
{% sample lang="jl" %}
### Julia
[import, lang:"julia"](code/julia/thomas.jl)
{% sample lang="c" %}
### C
[import, lang:"c_cpp"](code/c/thomas.c)
+{% sample lang="py" %}
+### Python
+[import, lang:"python"](code/python/thomas.py)
{% endmethod %}
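The forward sweep starting from d'_0 = d_0 / b_0, followed by back substitution, can be sketched as follows (illustrative Python; conventions for the unused `a[0]` and `c[-1]` entries vary between implementations):

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system in O(n): sub-diagonal a, diagonal b,
    super-diagonal c, right-hand side d. a[0] and c[-1] are unused."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]          # d'_0 = d_0 / b_0, as above
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    # Back substitution: the last unknown is already solved.
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Each index is visited a constant number of times, which is where the O(n) cost claimed above comes from.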

2 changes: 1 addition & 1 deletion chapters/monte_carlo/monte_carlo.md
@@ -78,7 +78,7 @@ As long as you can write some function to tell whether the provided point is ins
This is obviously an incredibly powerful tool and has been used time and time again for many different areas of physics and engineering.
I can guarantee that we will see similar methods crop up all over the place in the future!

-# Example Code
+## Example Code
Monte Carlo methods are famous for their simplicity.
It doesn't take too many lines to get something simple going.
Here, we are just integrating a circle, like we described above; however, there is a small twist.
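As a point of comparison, a bare-bones version of the circle integration might look like this in Python (a hypothetical sketch using the standard `random` module; the chapter's own twist is not reproduced here):

```python
import random

def monte_carlo_pi(n_samples, seed=42):
    """Estimate pi by sampling points in the unit square and counting
    how many land inside the quarter circle x^2 + y^2 <= 1."""
    rng = random.Random(seed)  # seeded for reproducibility
    in_circle = sum(
        1 for _ in range(n_samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    # (area of quarter circle) / (area of square) = pi / 4
    return 4.0 * in_circle / n_samples
```

The error of the estimate shrinks like 1/sqrt(n), so 100,000 samples typically land within a few thousandths of pi.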
2 changes: 1 addition & 1 deletion chapters/physics_solvers/quantum/split-op/split-op.md
@@ -81,7 +81,7 @@ And finally go step-by-step through the simulation:
And that's it.
The Split-Operator method is one of the most commonly used quantum simulation algorithms because of how straightforward it is to code and how quickly you can start really digging into the physics of the simulation results!

-# Example Code
+## Example Code
{% method %}
{% sample lang="jl" %}
### Julia
2 changes: 1 addition & 1 deletion chapters/physics_solvers/verlet/verlet.md
@@ -164,7 +164,7 @@ Unfortunately, this has not yet been implemented in LabVIEW, so here's Julia cod

Even though this method is used more than the simple Verlet method mentioned above, it unfortunately has an error term of $$\mathcal{O}(\Delta t^2)$$, which is two orders of magnitude worse. That said, if you want to have a simulation with many objects that depend on one another --- like a gravity simulation --- the Velocity Verlet algorithm is a handy choice; however, you may have to play further tricks to allow everything to scale appropriately. These types of simulations are sometimes called *n-body* simulations and one such trick is the [Barnes-Hut](barnes_hut.md) algorithm, which cuts the complexity of n-body simulations from $$\sim \mathcal{O}(n^2)$$ to $$\sim \mathcal{O}(n\log(n))$$.

-# Example Code
+## Example Code

Both of these methods work simply by iterating timestep-by-timestep and can be written straightforwardly in any language. For reference, here are snippets of code that use both the classic and velocity Verlet methods to find the time it takes for a ball to hit the ground after being dropped from a given height.
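Both updates can be sketched on that same dropped-ball test (illustrative Python, assuming g = 9.81 m/s^2; not the snippets imported below):

```python
def verlet_drop(height, dt=0.001, g=9.81):
    """Classic (position) Verlet: track the current and previous positions,
    with no explicit velocity. Returns the time to fall from `height`."""
    pos, prev_pos, t = height, height, 0.0  # prev = pos means starting at rest
    while pos > 0:
        t += dt
        next_pos = 2 * pos - prev_pos - g * dt * dt
        prev_pos, pos = pos, next_pos
    return t

def velocity_verlet_drop(height, dt=0.001, g=9.81):
    """Velocity Verlet: update position with the half-step term,
    then update velocity. Acceleration is constant here, so the two
    velocity half-updates collapse into one."""
    pos, vel, t = height, 0.0, 0.0
    while pos > 0:
        t += dt
        pos += vel * dt - 0.5 * g * dt * dt
        vel -= g * dt
    return t
```

Both should land within a timestep or so of the analytic fall time sqrt(2h/g).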

2 changes: 1 addition & 1 deletion chapters/tree_traversal/tree_traversal.md
@@ -195,7 +195,7 @@ This has not been implemented in your chosen language, so here is the Julia code
[import:17-20, lang:"haskell"](code/haskell/TreeTraversal.hs)
{% endmethod %}

-# Example Code
+## Example Code
{% method %}
{% sample lang="jl" %}
### Julia