
Commit 5edf3f7

jiegillet authored and leios committed

Re-wrote Thomas Algorithm chapter to include more details (#329)

* Re-wrote Thomas Algorithm chapter to include more details
* Included Leios' suggestions

1 parent f110a47 commit 5edf3f7

File tree

1 file changed: +67 −11 lines changed

contents/thomas_algorithm/thomas_algorithm.md

Lines changed: 67 additions & 11 deletions
@@ -1,6 +1,7 @@
# Thomas Algorithm

As alluded to in the [Gaussian Elimination chapter](../gaussian_elimination/gaussian_elimination.md), the Thomas Algorithm (or TDMA, the Tri-Diagonal Matrix Algorithm) allows programmers to **massively** cut the computational cost of their code from $$O(n^3)$$ to $$O(n)$$ in certain cases!

This is done by exploiting a particular case of Gaussian Elimination where the matrix looks like this:

$$
\left[
@@ -14,27 +15,84 @@ $$
\right]
$$

This matrix shape is called *Tri-Diagonal* (excluding the right-hand side of our system of equations, of course!).
Now, at first, it might not be obvious how this helps. Well, firstly, it makes the system easier to encode: we may divide it into four separate vectors corresponding to $$a$$, $$b$$, $$c$$, and $$d$$ (in some implementations, you will see the missing $$a_0$$ and $$c_n$$ set to zero to get four vectors of the same size).
Secondly, and most importantly, equations this short and regular are easy to solve analytically.
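
To make that encoding concrete, here is a small $$3 \times 3$$ system written out as four equal-length vectors in Haskell, with the missing $$a_0$$ and $$c_n$$ stored as zeros (the names and layout here are our own illustration, not the chapter's code):

```haskell
-- The augmented system   [ 2  1  0 | 3 ]
--                        [ 1  2  1 | 4 ]
--                        [ 0  1  2 | 3 ]
-- encoded as four vectors of the same size (a_0 and c_2 padded with 0).
a, b, c, d :: [Double]
a = [0, 1, 1]  -- sub-diagonal
b = [2, 2, 2]  -- main diagonal
c = [1, 1, 0]  -- super-diagonal
d = [3, 4, 3]  -- right-hand side
```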

We'll start by applying mechanisms familiar to those who have read the [Gaussian Elimination](../gaussian_elimination/gaussian_elimination.md) chapter.
Our first goal is to eliminate the $$a_i$$ terms and set the diagonal values $$b_i$$ to $$1$$. The $$c_i$$ and $$d_i$$ terms will be transformed into $$c'_i$$ and $$d'_i$$.
Since there is no $$a_0$$, the first row is particularly easy to transform; we simply divide it by $$b_0$$:

$$
\left\{
\begin{align}
c'_0 &= \frac{c_0}{b_0} \\
d'_0 &= \frac{d_0}{b_0}
\end{align}
\right.
$$
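
For the sample system encoded above, this gives $$c'_0 = \frac{1}{2}$$ and $$d'_0 = \frac{3}{2}$$.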

Let's assume that we found a way to transform the first $$i-1$$ rows. How would we transform the next one? We have

$$
\begin{array}{ccccccc|c}
& & \ddots & & & & & \\
(i-1) & & 0 & 1 & c'_{i-1} & & & d'_{i-1} \\
(i) & & & a_i & b_i & c_i & & d_i \\
& & & & & \ddots & &
\end{array}
$$

Let's transform row $$(i)$$ in two steps.

**Step one**: eliminate $$a_i$$ with the transformation $$(i)^* = (i) - a_i \times (i-1)$$:

$$
\left\{
\begin{align}
a^*_i &= 0 \\
b^*_i &= b_i - a_i \times c'_{i-1} \\
c^*_i &= c_i \\
d^*_i &= d_i - a_i \times d'_{i-1}
\end{align}
\right.
$$

**Step two**: get $$b'_i = 1$$ with the transformation $$(i)' = (i)^* / b^*_i$$:

$$
\left\{
\begin{align}
a'_i &= 0 \\
b'_i &= 1 \\
c'_i &= \frac{c_i}{b_i - a_i \times c'_{i-1}} \\
d'_i &= \frac{d_i - a_i \times d'_{i-1}}{b_i - a_i \times c'_{i-1}}
\end{align}
\right.
$$

Brilliant! With the last two formulas, we can calculate all the $$c'_i$$ and $$d'_i$$ in a single pass, starting from row $$1$$, since we already know the values of $$c'_0$$ and $$d'_0$$.
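
In code, this forward sweep is a single scan over the rows. Here is a minimal Haskell sketch of it; the function name `forwardSweep` and the padded-vector representation are our own assumptions, not the `thomas.hs` imported below:

```haskell
import Data.List (zip4)

-- Forward sweep: compute every c'_i and d'_i in one pass.
-- Takes the four padded vectors (a_0 and c_n are unused zeros).
forwardSweep :: [Double] -> [Double] -> [Double] -> [Double]
             -> ([Double], [Double])
forwardSweep (_ : as) (b0 : bs) (c0 : cs) (d0 : ds) =
    unzip (scanl step (c0 / b0, d0 / b0) (zip4 as bs cs ds))
  where
    -- One row update: steps one and two combined.
    step (c', d') (a, b, c, d) =
        let denom = b - a * c'  -- this is b*_i from step one
        in  (c / denom, (d - a * d') / denom)
forwardSweep _ _ _ _ = error "forwardSweep: empty system"
```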

Of course, what we really need are the solutions $$x_i$$. It's back substitution time!

If we express our system in terms of equations instead of a matrix, we get

$$
x_i + c'_i \times x_{i+1} = d'_i
$$

plus the last row, which is even simpler: $$x_n = d'_n$$. One solution for free!
Maybe we can backtrack from the last solution? Let's (barely) transform the above equation:

$$
x_i = d'_i - c'_i \times x_{i+1}
$$

and that's all there is to it. We can calculate all the $$x_i$$ in a single pass, starting from the end.
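
The backward pass can likewise be sketched as a right fold, seeded with the free solution $$x_n = d'_n$$ (again, `backSubstitution` is our own hypothetical name):

```haskell
-- Back substitution: walk the transformed rows from the end.
backSubstitution :: [Double] -> [Double] -> [Double]
backSubstitution c' d' = foldr step [] (zip c' d')
  where
    step (_, dn) []            = [dn]                 -- x_n = d'_n
    step (c, d) xs@(next : _) = (d - c * next) : xs   -- x_i = d'_i - c'_i * x_(i+1)
```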

Overall, we only need two passes, and that's why our algorithm is $$O(n)$$!
The transformations are quite easy, too. Isn't that neat?
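
Gluing the two passes together and running them on the sample system from earlier gives a small end-to-end check (a hypothetical wrapper, not the implementations imported below):

```haskell
-- Solve a tri-diagonal system with one forward and one backward pass.
thomas :: [Double] -> [Double] -> [Double] -> [Double] -> [Double]
thomas a b c d = backSubstitution c' d'
  where
    (c', d') = forwardSweep a b c d

main :: IO ()
main = print (thomas [0, 1, 1] [2, 2, 2] [1, 1, 0] [3, 4, 3])
-- prints [1.0, 1.0, 1.0] (up to floating-point rounding)
```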

## Example Code

{% method %}
@@ -48,8 +106,6 @@ $$
[import, lang:"haskell"](code/haskell/thomas.hs)
{% endmethod %}

<script>
MathJax.Hub.Queue(["Typeset",MathJax.Hub]);
</script>
