
Commit 8cb3ae7

Merge branch 'master' of github.com:leios/algorithm-archive
2 parents 7d83a9d + 2927ab0 commit 8cb3ae7

8 files changed: +351 additions, -4 deletions

SUMMARY.md

Lines changed: 2 additions & 0 deletions
@@ -18,6 +18,8 @@
* [FFT](chapters/computational_mathematics/cooley_tukey.md)
* [Computational Physics](chapters/computational_physics/computational_physics.md)
* [Euler Methods](chapters/computational_physics/euler.md)
* [Verlet Integration](chapters/computational_physics/verlet.md)
* [Barnes-Hut](chapters/computational_physics/barnes_hut.md)
* [Computational Biology](chapters/computational_biology/computational_biology.md)
* [Computational Creativity](chapters/computational_creativity/computational_creativity.md)
* [Miscellaneous Algorithms](chapters/miscellaneous_algorithms/miscellaneous_algorithms.md)

chapters/computational_mathematics/cooley_tukey.md

Lines changed: 11 additions & 3 deletions
@@ -33,15 +33,23 @@ Simply put, the Fourier Transform is a beautiful application of complex number s
To an outsider, the Fourier Transform looks like a mathematical mess -- certainly a far cry from the heroic portal between two domains I have depicted it to be; however, like most things, it's not as bad as it initially appears to be. So, here it is in all its glory!

$$F(\xi) = \int_{-\infty} ^\infty f(x) e^{-2 \pi i x \xi} dx$$

and

$$f(x) = \int_{-\infty} ^\infty F(\xi) e^{2 \pi i \xi x} d\xi$$

Where $$F(\xi)$$ represents a function in frequency space and $$\xi$$ represents any number on the frequency plane, and $$f(x)$$ represents a function in real space and $$x$$ represents any value on the real plane. Note here that the only difference between the two exponential terms is a minus sign in the transformation to frequency space. As I mentioned, this is not intuitive syntax, so please allow me to explain a bit.

If we take a sinusoidal function like $$\sin(\omega t)$$ or $$\cos(\omega t)$$, we find a curve that goes from $$\pm 1$$, shown in FIGURE1a. Both of these curves can be described by their corresponding frequencies, $$\omega$$. So, instead of representing these curves as seen in FIGURE1a, we could instead describe them as shown in FIGURE1b. Here, FIGURE1a is in real space and FIGURE1b is in frequency space.

![Complicated Sinusoidal Function](sinusoid.png)

*FIGURE1a: Complicated Sinusoidal Function*

![FFT of FIGURE1a](fft.png)

*FIGURE1b: abs(fft(FIGURE1a))*

Now, how does this relate to the transformations above? Well, the easiest way is to substitute in the following relation:

@@ -97,7 +105,7 @@ Recursion!

### The Cooley-Tukey Algorithm

In some sense, I may have oversold this algorithm. In fact, I definitely have. It's like I have already given you the punchline to a joke and am now getting around to explaining it. Oh well. The problem with using a standard DFT is that it requires a large matrix multiplication, which is a prohibitively complex operation. The trick to the Cooley-Tukey algorithm is recursion. In particular, we split the matrix we wish to perform the FFT on into two parts: one for all elements with even indices and another for all odd indices. We then proceed to split the array again and again until we have a manageable array size to perform a DFT (or similar FFT) on. With recursion, we can reduce the complexity to $$\sim \mathcal{O}(n \log n)$$, which is a feasible operation.

For me, it is usually easiest to think of the Cooley-Tukey algorithm as a method to circumvent a complicated matrix multiplication rather than a method to perform a Fourier Transform; however, this is only because Fourier Transforms seem like mathematical magic. Matrix multiplications do not.

Lines changed: 32 additions & 0 deletions
@@ -0,0 +1,32 @@
<script>
MathJax.Hub.Queue(["Typeset",MathJax.Hub]);
</script>
$$
\newcommand{\d}{\mathrm{d}}
\newcommand{\bff}{\boldsymbol{f}}
\newcommand{\bfg}{\boldsymbol{g}}
\newcommand{\bfp}{\boldsymbol{p}}
\newcommand{\bfq}{\boldsymbol{q}}
\newcommand{\bfx}{\boldsymbol{x}}
\newcommand{\bfu}{\boldsymbol{u}}
\newcommand{\bfv}{\boldsymbol{v}}
\newcommand{\bfA}{\boldsymbol{A}}
\newcommand{\bfB}{\boldsymbol{B}}
\newcommand{\bfC}{\boldsymbol{C}}
\newcommand{\bfM}{\boldsymbol{M}}
\newcommand{\bfJ}{\boldsymbol{J}}
\newcommand{\bfR}{\boldsymbol{R}}
\newcommand{\bfT}{\boldsymbol{T}}
\newcommand{\bfomega}{\boldsymbol{\omega}}
\newcommand{\bftau}{\boldsymbol{\tau}}
$$

##### Dependencies
* [Verlet Integration](verlet.md)

# Barnes-Hut Algorithm

The Barnes-Hut algorithm divides space into an octree in order to cut computational time without losing much precision in n-body simulations. If you want to learn more soon, bug me about it!

TODO

chapters/computational_physics/euler.md

Lines changed: 46 additions & 1 deletion
@@ -23,11 +23,56 @@ $$

##### Dependencies
* [Differential Equations](../mathematical_background/differential_equations.md)
* [Taylor Series Expansions](../mathematical_background/taylor_series.md)
* [Thomas Algorithm](../computational_mathematics/matrix_methods/thomas.md)
* [Gaussian Elimination](../computational_mathematics/matrix_methods/gaussian_elimination.md)

# Euler Method

The Euler methods are some of the simplest methods to solve ordinary differential equations numerically. These methods introduce a new set of methods called the [Runge Kutta](runge_kutta.md) methods, which will be discussed in the near future!

As a physicist, I tend to understand things through methods that I have learned before. In this case, it makes sense for me to see Euler methods as extensions of the [Taylor Series Expansion](../mathematical_background/taylor_series.md). These expansions basically approximate functions based on their derivatives, like so:

$$
f(x) \simeq f(a) + \frac{1}{1!}\frac{df(a)}{da}(x-a)
+ \frac{1}{2!}\frac{d^2f(a)}{da^2}(x-a)^2
+ \frac{1}{3!}\frac{d^3f(a)}{da^3}(x-a)^3 + \cdots
$$

Like before, $$f(x)$$ is some function along real or complex space, $$a$$ is the point that we are expanding from, and $$f^{(n)}(x)$$ denotes the $$n^{\text{th}}$$ derivative of $$f(x)$$.

So, what does this mean? Well, as mentioned, we can think of this similarly to the kinematic equation:

$$
x = x_0 + vt + \frac{1}{2}at^2
$$

where $$x$$ is position, $$v$$ is velocity, and $$a$$ is acceleration.
This equation allows us to find the position of an object based on its previous position ($$x_0$$), the derivative of its position with respect to time ($$\frac{dx}{dt} = v_0$$), and one derivative on top of that ($$\frac{d^2x}{dt^2} = a$$). As stated in the Taylor Series Expansion, the acceleration term must also have $$\frac{1}{2!}$$ in front of it.

Now, how does this relate to the Euler methods? Well, with these methods, we assume that we are looking for a position in some space, usually denoted as $$y(t)$$, but we can use any variable. The methods assume that we have some function to evaluate the derivative of $$y(t)$$. In other words, we know that $$\frac{dy(t)}{dt} = f(t,y(t))$$. For the kinematic equation, we know what this is!

$$
\frac{dy(t)}{dt} = v = f(t,y(t)) = v_0 + at
$$

So, we can iteratively solve for position by first solving for velocity. By following the kinematic equation (or Taylor Series Expansion), we find that

$$
y_{n+1} = y_n + dt f(t_n, y_n)
$$

for any timestep $$dt$$. This means that if we are solving the kinematic equation, we simply have the following equations:

$$
\begin{align}
x_{n+1} &= x_n + v_n dt \\
v_{n+1} &= v_n + a dt
\end{align}
$$

Now, solving this set of equations in this way is known as the *forward* Euler Method. In fact, there is another method known as the *backward* Euler Method, which will be discussed in the following section. For now, it is important to note that the stability of these methods depends on the timestep chosen. For example, in FIGURE we see dramatically different results for different timesteps.
### Backward Euler Method

Unlike the forward Euler Method above, the backward Euler Method is an *implicit method*, which means that it results in a system of equations to solve. Luckily, we know how to solve systems of equations (*hint*: [Thomas Algorithm](../computational_mathematics/matrix_methods/thomas.md), [Gaussian Elimination](../computational_mathematics/matrix_methods/gaussian_elimination.md)).
Lines changed: 6 additions & 0 deletions
@@ -0,0 +1,6 @@
##### Dependencies
* [Euler Methods](euler.md)

# Runge Kutta Methods

COMING SOON!
Lines changed: 254 additions & 0 deletions
@@ -0,0 +1,254 @@
<script>
MathJax.Hub.Queue(["Typeset",MathJax.Hub]);
</script>
$$
\newcommand{\d}{\mathrm{d}}
\newcommand{\bff}{\boldsymbol{f}}
\newcommand{\bfg}{\boldsymbol{g}}
\newcommand{\bfp}{\boldsymbol{p}}
\newcommand{\bfq}{\boldsymbol{q}}
\newcommand{\bfx}{\boldsymbol{x}}
\newcommand{\bfu}{\boldsymbol{u}}
\newcommand{\bfv}{\boldsymbol{v}}
\newcommand{\bfA}{\boldsymbol{A}}
\newcommand{\bfB}{\boldsymbol{B}}
\newcommand{\bfC}{\boldsymbol{C}}
\newcommand{\bfM}{\boldsymbol{M}}
\newcommand{\bfJ}{\boldsymbol{J}}
\newcommand{\bfR}{\boldsymbol{R}}
\newcommand{\bfT}{\boldsymbol{T}}
\newcommand{\bfomega}{\boldsymbol{\omega}}
\newcommand{\bftau}{\boldsymbol{\tau}}
$$
##### Dependencies

* [Taylor Series Expansions](../mathematical_background/taylor_series.md)

# Verlet Integration

Verlet Integration is essentially a solution to the kinematic equation for the motion of any object, which can be found with a [Taylor Series Expansion](../mathematical_background/taylor_series.md) to be

$$
x(t) = x(0) + v(0)t + \frac{1}{2}at^2 + \frac{1}{6}bt^3 + \cdots
$$

Where $$x$$ is the position, $$v$$ is the velocity, $$a$$ is the acceleration, $$b$$ is the often forgotten jerk term, and $$t$$ is time. This equation is a central equation to almost every Newtonian physics solver and brings up a class of algorithms known as *force integrators*. One of the first force integrators to work with is *Verlet Integration*.

So, let's say we want to solve for the next timestep in $$x$$. To a close approximation, that might look like this:

$$
x(t+\Delta t) = x(t) + v(t)\Delta t + \frac{1}{2}a\Delta t^2 + \frac{1}{6}b \Delta t^3 + \mathcal{O}(\Delta t^4)
$$

This means that if we need to find the next $$x$$, we need the current $$x$$, $$v$$, $$a$$, etc. However, because few people calculate the jerk term, our error is typically $$\mathcal{O}(\Delta t^3)$$. That said, we can calculate $$x$$ with less knowledge and higher accuracy if we play a trick! Let's say we want to calculate $$x$$ of the *previous* timestep. Again, to a close approximation, that might look like this:

$$
x(t-\Delta t) = x(t) - v(t)\Delta t + \frac{1}{2}a\Delta t^2 - \frac{1}{6}b \Delta t^3 + \mathcal{O}(\Delta t^4)
$$

Now, we have two equations to solve for two different timesteps in $$x$$, one of which we already have. If we add the two equations together and solve for $$x(t+\Delta t)$$, we find

$$
x(t+ \Delta t) = 2x(t) - x(t-\Delta t) + a\Delta t^2 + \mathcal{O}(\Delta t^4)
$$

So, this means, we can find our next $$x$$ simply by knowing our current $$x$$, the $$x$$ before that, and the acceleration! No velocity necessary! In addition, this drops the error to $$\mathcal{O}(\Delta t^4)$$, which is great!

Now, obviously this poses a problem: what if we want to calculate a term that requires velocity, like the kinetic energy, $$\frac{1}{2}mv^2$$? In this case, we certainly cannot get rid of the velocity! Well, we can find the velocity to $$\mathcal{O}(\Delta t^2)$$ accuracy by using the Störmer-Verlet method, which is the same as before, but we calculate velocity like so

$$
v(t) = \frac{x(t+\Delta t) - x(t-\Delta t)}{2\Delta t} + \mathcal{O}(\Delta t^2)
$$

Note that the 2 in the denominator appears because we are going over 2 timesteps. It's essentially solving $$v=\frac{\Delta x}{\Delta t}$$. In addition, we can calculate the velocity of the next timestep like so

$$
v(t+\Delta t) = \frac{x(t+\Delta t) - x(t)}{\Delta t} + \mathcal{O}(\Delta t)
$$

However, the error for this is $$\mathcal{O}(\Delta t)$$, which is quite poor, but gets the job done in a pinch.

Now, let's say we actually need the velocity to calculate our next timestep. Well, in this case, we simply cannot use the above approximation and instead need to use the *Velocity Verlet* algorithm.

# Velocity Verlet

In some ways, this algorithm is even simpler than above. We can calculate everything like so

$$
\begin{align}
x(t+\Delta t) &=x(t) + v(t)\Delta t + \frac{1}{2}a(t)\Delta t^2 \\
a(t+\Delta t) &= f(x(t+\Delta t)) \\
v(t+\Delta t) &= v(t) + \frac{1}{2}(a(t) + a(t+\Delta t))\Delta t
\end{align}
$$

Which is literally the kinematic equation above, solving for $$x$$, $$v$$, and $$a$$ every timestep. You can also split up the equations like so

$$
\begin{align}
v(t+\frac{1}{2}\Delta t) &= v(t) + \frac{1}{2}a(t)\Delta t \\
x(t+\Delta t) &=x(t) + v(t+\frac{1}{2}\Delta t)\Delta t \\
a(t+\Delta t) &= f(x(t+\Delta t)) \\
v(t+\Delta t) &= v(t+\frac{1}{2}\Delta t) + \frac{1}{2}a(t+\Delta t)\Delta t
\end{align}
$$

Even though this method is used more often than the simple Verlet method mentioned above, it unfortunately has an error term of $$\mathcal{O}(\Delta t^2)$$, which is two orders of magnitude worse. That said, if you want to have a simulation with many objects that depend on one another --- like a gravity simulation --- the Velocity Verlet algorithm is a handy choice; however, you may have to play further tricks to allow everything to scale appropriately. These types of simulations are sometimes called *n-body* simulations, and one such trick is the [Barnes-Hut](barnes_hut.md) algorithm, which cuts the complexity of n-body simulations from $$\sim \mathcal{O}(n^2)$$ to $$\sim \mathcal{O}(n\log(n))$$.

# Example Code

Both of these methods work simply by iterating timestep-by-timestep and can be written straightforwardly in any language. For reference, here are snippets of code that use both the classic and velocity Verlet methods to find the time it takes for a ball to hit the ground after being dropped from a given height.

#### C++
```c++
/*------------verlet.cpp------------------------------------------------------//
*
* Purpose: Simple demonstration of verlet algorithm
*
*-----------------------------------------------------------------------------*/

#include <iostream>

// Simple function for the classic verlet method
void verlet(double pos, double acc, double dt){

    // Note that we are using a temp variable for the previous position
    double prev_pos, temp_pos, time;
    prev_pos = pos;
    time = 0;

    while (pos > 0){
        time += dt;
        temp_pos = pos;
        pos = pos*2 - prev_pos + acc * dt * dt;
        prev_pos = temp_pos;
    }

    std::cout << time << '\n';

}

// Simple function for stormer-verlet
void stormer_verlet(double pos, double acc, double dt){

    // Note that we are using a temp variable for the previous position
    double prev_pos, temp_pos, time, vel;
    prev_pos = pos;
    vel = 0;
    time = 0;
    while (pos > 0){
        time += dt;
        temp_pos = pos;
        pos = pos*2 - prev_pos + acc * dt * dt;
        prev_pos = temp_pos;

        // The acceleration is constant, so the velocity is straightforward
        vel += acc*dt;
    }

    std::cout << time << '\n';

}

// Simple function for velocity-verlet
void velocity_verlet(double pos, double acc, double dt){

    double time, vel;
    vel = 0;
    time = 0;
    while (pos > 0){
        time += dt;
        pos += vel*dt + 0.5*acc * dt * dt;
        vel += acc*dt;
    }

    std::cout << time << '\n';

}

int main(){

    // Note that depending on the simulation, you might want to have the verlet
    // loop outside.

    // For example, if your acceleration changes as a function of time, you might
    // need to also change the acceleration to be read into each of these
    // functions
    verlet(5.0, -10, 0.01);
    stormer_verlet(5.0, -10, 0.01);
    velocity_verlet(5.0, -10, 0.01);

}
```

#### Java
```java

// Submitted by lolatomroflsinnlos
// Thanks a lot!

// Simple function for the classic verlet method
static void verlet(double pos, double acc, double dt){

    // Note that we are using a temp variable for the previous position
    double prev_pos, temp_pos, time;
    prev_pos = pos;
    time = 0;

    while (pos > 0){
        time += dt;
        temp_pos = pos;
        pos = pos*2 - prev_pos + acc * dt * dt;
        prev_pos = temp_pos;
    }

    System.out.println(time);

}

// Simple function for stormer-verlet
static void stormer_verlet(double pos, double acc, double dt){

    // Note that we are using a temp variable for the previous position
    double prev_pos, temp_pos, time, vel;
    prev_pos = pos;
    vel = 0;
    time = 0;
    while (pos > 0){
        time += dt;
        temp_pos = pos;
        pos = pos*2 - prev_pos + acc * dt * dt;
        prev_pos = temp_pos;

        // The acceleration is constant, so the velocity is straightforward
        vel += acc*dt;
    }

    System.out.println(time);

}

// Simple function for velocity-verlet
static void velocity_verlet(double pos, double acc, double dt){

    double time, vel;
    vel = 0;
    time = 0;
    while (pos > 0){
        time += dt;
        pos += vel*dt + 0.5*acc * dt * dt;
        vel += acc*dt;
    }

    System.out.println(time);

}

public static void main(String[] args) {

    verlet(5.0, -10, 0.01);
    stormer_verlet(5.0, -10, 0.01);
    velocity_verlet(5.0, -10, 0.01);

}
```
