# Thomas Algorithm

As alluded to in the [Gaussian Elimination chapter](../gaussian_elimination/gaussian_elimination.md), the Thomas Algorithm (or TDMA, the Tri-Diagonal Matrix Algorithm) allows programmers to **massively** cut the computational cost of their code from $$O(n^3)$$ to $$O(n)$$ in certain cases!
This is done by exploiting a particular case of Gaussian Elimination where the matrix looks like this:

$$
\left[
    \begin{array}{ccccc|c}
        b_0 & c_0 & & & & d_0 \\
        a_1 & b_1 & c_1 & & & d_1 \\
        & a_2 & \ddots & \ddots & & \vdots \\
        & & \ddots & \ddots & c_{n-1} & d_{n-1} \\
        & & & a_n & b_n & d_n
    \end{array}
\right]
$$

This matrix shape is called *Tri-Diagonal* (excluding the right-hand side of our system of equations, of course!).
Now, at first, it might not be obvious how this helps. Well, firstly, it makes the system easier to encode: we may divide it into four separate vectors corresponding to $$a$$, $$b$$, $$c$$, and $$d$$ (in some implementations, you will see the missing $$a_0$$ and $$c_n$$ set to zero to get four vectors of the same size).
Secondly, and most importantly, equations this short and regular are easy to solve analytically.
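
For example, with three rows, the system

$$
\left[
    \begin{array}{ccc|c}
        2 & 3 & 0 & 7 \\
        1 & 2 & 3 & 10 \\
        0 & 1 & 2 & 5
    \end{array}
\right]
$$

would be encoded (with the zero-padding convention) as $$a = (0, 1, 1)$$, $$b = (2, 2, 2)$$, $$c = (3, 3, 0)$$, and $$d = (7, 10, 5)$$. These numbers are ours, chosen purely for illustration; we will reuse them to check our work below.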

We'll start by applying mechanisms familiar to those who have read the [Gaussian Elimination](../gaussian_elimination/gaussian_elimination.md) chapter.
Our first goal is to eliminate the $$a_i$$ terms and set the diagonal values $$b_i$$ to $$1$$. The $$c_i$$ and $$d_i$$ terms will be transformed into $$c'_i$$ and $$d'_i$$.
The first row is particularly easy to transform: since there is no $$a_0$$, we simply need to divide the row by $$b_0$$:

$$
\left\{
\begin{align}
c'_0 &= \frac{c_0}{b_0} \\
d'_0 &= \frac{d_0}{b_0}
\end{align}
\right.
$$

Let's assume that we have found a way to transform the first $$i-1$$ rows. How would we transform the next one? We have

$$
\begin{array}{ccccccc|c}
 & & \ddots & & & & & \\
(i-1) & & 0 & 1 & c'_{i-1} & & & d'_{i-1} \\
(i) & & & a_i & b_i & c_i & & d_i \\
 & & & & & \ddots & &
\end{array}
$$

Let's transform row $$(i)$$ in two steps.

**Step one**: eliminate $$a_i$$ with the transformation $$(i)^* = (i) - a_i \times (i-1)$$:

$$
\left\{
\begin{align}
a^*_i &= 0 \\
b^*_i &= b_i - a_i \times c'_{i-1} \\
c^*_i &= c_i \\
d^*_i &= d_i - a_i \times d'_{i-1}
\end{align}
\right.
$$

**Step two**: get $$b'_i = 1$$ with the transformation $$(i)' = (i)^* / b^*_i$$:

$$
\left\{
\begin{align}
a'_i &= 0 \\
b'_i &= 1 \\
c'_i &= \frac{c^*_i}{b^*_i} = \frac{c_i}{b_i - a_i \times c'_{i-1}} \\
d'_i &= \frac{d^*_i}{b^*_i} = \frac{d_i - a_i \times d'_{i-1}}{b_i - a_i \times c'_{i-1}}
\end{align}
\right.
$$

Brilliant! With these last two formulas, we can calculate all the $$c'_i$$ and $$d'_i$$ in a single pass, starting from row $$1$$, since we already know the values of $$c'_0$$ and $$d'_0$$.
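
To make this concrete, here is a minimal sketch of the forward sweep in Haskell. The names (`forwardSweep` and friends) are our own, not the archive's, and we assume the zero-padding convention from above so that all four input lists have the same length:

```haskell
import Data.List (zip4)

-- Forward sweep: compute every c'_i and d'_i in a single pass.
-- The accumulator starts at (c'_0, d'_0) = (c_0 / b_0, d_0 / b_0),
-- and each step applies the update formulas derived above.
forwardSweep :: [Double] -> [Double] -> [Double] -> [Double]
             -> ([Double], [Double])
forwardSweep a b c d =
    unzip (scanl step (head c / head b, head d / head b) rows)
  where
    rows = zip4 (tail a) (tail b) (tail c) (tail d)
    step (c', d') (ai, bi, ci, di) =
        let denom = bi - ai * c'
        in  (ci / denom, (di - ai * d') / denom)
```

On the example system from earlier, `forwardSweep [0,1,1] [2,2,2] [3,3,0] [7,10,5]` yields $$c' = (1.5, 6, 0)$$ and $$d' = (3.5, 13, 2)$$ (the last $$c'$$ entry is never used).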

Of course, what we really need are the solutions $$x_i$$. It's back substitution time!

If we express our system in terms of equations instead of a matrix, we get

$$
x_i + c'_i \times x_{i+1} = d'_i
$$

plus the last row, which is even simpler: $$x_n = d'_n$$. One solution for free!
Maybe we can backtrack from the last solution? Let's (barely) transform the above equation:

$$
x_i = d'_i - c'_i \times x_{i+1}
$$

and that's all there is to it. We can calculate all the $$x_i$$ in a single pass, starting from the end.
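
A matching back-substitution sketch (again with illustrative names) is a short fold from the right:

```haskell
-- Back substitution: start from x_n = d'_n and walk backwards with
-- x_i = d'_i - c'_i * x_{i+1}.
backSub :: [Double] -> [Double] -> [Double]
backSub c' d' = scanr step (last d') (zip (init c') (init d'))
  where
    step (ci, di) xNext = di - ci * xNext
```

With the swept values from our running example, `backSub [1.5, 6, 0] [3.5, 13, 2]` returns `[2.0, 1.0, 2.0]`, which indeed solves the original system; chaining `backSub` after `forwardSweep` gives the complete solver.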

Overall, we only need two passes, and that's why our algorithm is $$O(n)$$!
The transformations are quite easy too, isn't that neat?
## Example Code

{% method %}
[import, lang:"haskell"](code/haskell/thomas.hs)
{% endmethod %}

<script>
MathJax.Hub.Queue(["Typeset",MathJax.Hub]);
</script>