
Commit 28fa404

adding Thomas algorithm section.
1 parent 04d9205 commit 28fa404

3 files changed: +72 -3 lines changed
Lines changed: 63 additions & 0 deletions

@@ -1 +1,64 @@
# Thomas Algorithm

As alluded to in the Gaussian Elimination chapter, the Thomas Algorithm (or TDMA, the Tri-Diagonal Matrix Algorithm) allows programmers to **massively** cut the computational cost of their code from $$\sim O(n^3)$$ down to $$\sim O(n)$$! This is done by exploiting a special case of Gaussian Elimination: the case where our matrix looks like:
$$
\left[
    \begin{array}{ccccc|c}
        b_0 & c_0 & & & & d_0 \\
        a_1 & b_1 & c_1 & & & d_1 \\
        & a_2 & \ddots & \ddots & & \vdots \\
        & & \ddots & \ddots & c_{n-1} & d_{n-1} \\
        & & & a_n & b_n & d_n
    \end{array}
\right]
$$

By this, I mean that our matrix is *Tri-Diagonal* (excluding the right-hand side of our system of equations, of course!). Now, at first, it might not be obvious how this helps; however, we may divide this array into separate vectors corresponding to $$a$$, $$b$$, $$c$$, and $$d$$ and then solve for $$x$$ with back-substitution, like before.

In particular, we need to find an optimal scale factor for each row and use that. What is the scale factor? Well, for each row it is the diagonal element minus the product of the lower off-diagonal element and the updated $$c'$$ from the row above, $$b_i - a_i \times c'_{i-1}$$. In the end, we will update $$c$$ and $$d$$ to be $$c'$$ and $$d'$$ like so:

$$
\begin{align}
c'_i &= \frac{c_i}{b_i - a_i \times c'_{i-1}} \\
d'_i &= \frac{d_i - a_i \times d'_{i-1}}{b_i - a_i \times c'_{i-1}}
\end{align}
$$

Of course, the initial elements will need to be specifically defined as

$$
\begin{align}
c'_0 &= \frac{c_0}{b_0} \\
d'_0 &= \frac{d_0}{b_0}
\end{align}
$$

In code, this will look like this:

```julia
function thomas(a::Vector{Float64}, b::Vector{Float64}, c::Vector{Float64},
                d::Vector{Float64}, soln::Vector{Float64})
    n = length(d)

    # Setting initial elements (Julia arrays are 1-indexed)
    c[1] = c[1] / b[1]
    d[1] = d[1] / b[1]

    # Forward sweep: scale c and d row by row
    for i = 2:n
        scale = 1 / (b[i] - c[i-1] * a[i])
        c[i] = c[i] * scale
        d[i] = (d[i] - a[i] * d[i-1]) * scale
    end

    # Set the last solution for back-substitution
    soln[n] = d[n]

    # Back-substitution
    for i = n-1:-1:1
        soln[i] = d[i] - c[i] * soln[i+1]
    end
end
```

This is a much simpler implementation than Gaussian Elimination; it performs only a single forward sweep before back-substitution, which is why its complexity is linear in $$n$$.
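
To see the routine in action, here is a self-contained usage sketch. The wrapper name `thomas` and the small test system are my own illustrative additions, not part of the original chapter:

```julia
# Sketch of the chapter's routine, wrapped in a (hypothetical) named function.
function thomas(a::Vector{Float64}, b::Vector{Float64}, c::Vector{Float64},
                d::Vector{Float64}, soln::Vector{Float64})
    n = length(d)
    c[1] /= b[1]
    d[1] /= b[1]
    for i = 2:n
        scale = 1 / (b[i] - c[i-1] * a[i])
        c[i] *= scale
        d[i] = (d[i] - a[i] * d[i-1]) * scale
    end
    soln[n] = d[n]
    for i = n-1:-1:1
        soln[i] = d[i] - c[i] * soln[i+1]
    end
end

# Illustrative tridiagonal system:
#  [ 2 -1  0 | 1 ]
#  [-1  2 -1 | 0 ]
#  [ 0 -1  2 | 1 ]
a = [0.0, -1.0, -1.0]   # sub-diagonal (first entry unused)
b = [2.0, 2.0, 2.0]     # main diagonal
c = [-1.0, -1.0, 0.0]   # super-diagonal (last entry unused)
d = [1.0, 0.0, 1.0]     # right-hand side
soln = zeros(3)

thomas(a, b, c, d, soln)
println(soln)   # expected: [1.0, 1.0, 1.0]
```

Note that the routine overwrites `c` and `d` in place; copy them first if they are needed afterwards.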

chapters/mathematical_background/taylor_series.md

Lines changed: 8 additions & 2 deletions
@@ -4,7 +4,13 @@ I have been formally trained as a physicist. In my mind, there are several mathe

On the one hand, I can see how the expansion could be considered purely mathematical. I mean, here is the definition:

$$
f(x) \simeq \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}(x-a)^n
$$

It looks like it is just a bunch of derivatives strung together! Where's the physics? Well, let's expand this series for the first few terms:

$$
f(x) \simeq f(a) + \frac{df(a)}{da}(x-a) + \frac{1}{2}\frac{d^2f(a)}{da^2}(x-a)^2
$$

If we substitute
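
A truncated second-order expansion like the one above can be sanity-checked numerically. Here is a quick sketch; the choice of $$f(x) = e^x$$ expanded about $$a = 0$$ is my own illustration, not from the chapter:

```julia
# Second-order Taylor expansion of exp about a = 0.
# Every derivative of exp is exp, so each coefficient uses exp(a).
f(x) = exp(x)
taylor2(x, a) = exp(a) + exp(a) * (x - a) + 0.5 * exp(a) * (x - a)^2

x = 0.1
println(taylor2(x, 0.0))               # 1.105
println(abs(taylor2(x, 0.0) - f(x)))   # small truncation error, roughly O(x^3)
```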

chapters/principles_of_code/building_blocks.md

Lines changed: 1 addition & 1 deletion
@@ -69,7 +69,7 @@ end

Syntactically, they are a little different, but the content is identical.

### Recursion

Simply put, recursion is the process of putting a function inside itself. Now, I know what you are thinking, "Why does this deserve a special name?" That is a very good question and one I have never been able to understand. That said, it is an incredibly good trick to have up your sleeve to speed up certain computations.
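
To make this concrete, here is a classic illustrative example of a function calling itself, written in the Julia used elsewhere in the book (the example itself is not from the original text):

```julia
# Recursive factorial: the function calls itself on a smaller input
# until it reaches the base case.
function fact(n::Int)
    if n <= 1
        return 1             # base case: stop recursing
    end
    return n * fact(n - 1)   # recursive call on a smaller problem
end

println(fact(5))   # prints 120
```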

### Classes and Structs
