# Thomas Algorithm

As alluded to in the [Gaussian Elimination chapter](../gaussian_elimination/gaussian_elimination.md), the Thomas Algorithm (or TDMA, the Tri-Diagonal Matrix Algorithm) allows programmers to **massively** cut the computational cost of their code from $$O(n^3)$$ to $$O(n)$$ in certain cases! This is done by exploiting a particular case of Gaussian Elimination, namely the case where our matrix looks like:

$$
\left[
    \begin{array}{ccccc|c}
        b_0 & c_0 & & & & d_0 \\
        a_1 & b_1 & c_1 & & & d_1 \\
        & a_2 & \ddots & \ddots & & \vdots \\
        & & \ddots & \ddots & c_{n-1} & d_{n-1} \\
        & & & a_n & b_n & d_n
    \end{array}
\right]
$$

This matrix shape is called *Tri-Diagonal* (excluding the right-hand side of our system of equations, of course!).
Now, at first, it might not be obvious how this helps. Well, first of all, the system becomes easier to encode: we may divide it into four separate vectors corresponding to $$a$$, $$b$$, $$c$$, and $$d$$ (in some implementations, you will see the missing $$a_0$$ and $$c_n$$ set to zero to get four vectors of the same size).
Secondly, equations this short are easy to solve analytically, so let's see how.
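
For example, here is a small system made up just for illustration (its solution happens to be $$x = (1, 1, 1, 1)$$, which will be handy as a check later):

$$
\left[
\begin{array}{cccc|c}
2 & 1 & 0 & 0 & 3 \\
1 & 2 & 1 & 0 & 4 \\
0 & 1 & 2 & 1 & 4 \\
0 & 0 & 1 & 2 & 3
\end{array}
\right]
$$

It can be stored as the four vectors

$$
a = (1, 1, 1), \quad
b = (2, 2, 2, 2), \quad
c = (1, 1, 1), \quad
d = (3, 4, 4, 3).
$$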

We'll apply the mechanisms introduced in the Gaussian Elimination chapter.
Our first goal is to eliminate the $$a_i$$ terms and set the diagonal values $$b_i$$ to $$1$$. The $$c_i$$ and $$d_i$$ terms will be transformed into $$c'_i$$ and $$d'_i$$.
The first row is particularly easy to transform: since there is no $$a_0$$, we simply need to divide the row by $$b_0$$:

$$
\left\{
\begin{align}
c'_0 &= \frac{c_0}{b_0} \\
d'_0 &= \frac{d_0}{b_0}
\end{align}
\right.
$$
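
Applied to the example system above, the first row gives $$c'_0 = \frac{1}{2}$$ and $$d'_0 = \frac{3}{2}$$.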

Let's assume that we found a way to transform the first $$i-1$$ rows. How would we transform the next one? We have

$$
\begin{array}{ccccccc|c}
 & & \ddots & & & & & \\
(i-1) & & 0 & 1 & c'_{i-1} & & & d'_{i-1} \\
(i) & & & a_i & b_i & c_i & & d_i \\
 & & & & & \ddots & &
\end{array}
$$

Let's transform row $$(i)$$ in two steps.
**Step one**: eliminate $$a_i$$ with the transformation $$(i)^* = (i) - a_i \times (i-1)$$:

$$
\left\{
\begin{align}
a^*_i &= 0 \\
b^*_i &= b_i - a_i \times c'_{i-1} \\
c^*_i &= c_i \\
d^*_i &= d_i - a_i \times d'_{i-1}
\end{align}
\right.
$$

**Step two**: get $$b'_i = 1$$ with the transformation $$(i)' = (i)^* / b^*_i$$:

$$
\left\{
\begin{align}
a'_i &= 0 \\
b'_i &= 1 \\
c'_i &= \frac{c_i}{b_i - a_i \times c'_{i-1}} \\
d'_i &= \frac{d_i - a_i \times d'_{i-1}}{b_i - a_i \times c'_{i-1}}
\end{align}
\right.
$$
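
Applied to the second row of the example system ($$a_1 = 1$$, $$b_1 = 2$$, $$c_1 = 1$$, $$d_1 = 4$$), these formulas give $$c'_1 = \frac{1}{2 - 1 \times \frac{1}{2}} = \frac{2}{3}$$ and $$d'_1 = \frac{4 - 1 \times \frac{3}{2}}{2 - 1 \times \frac{1}{2}} = \frac{5}{3}$$.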

Brilliant! With the last two formulas, we can calculate all the $$c'_i$$ and $$d'_i$$ in a single pass, starting from row $$1$$, since we already know the values of $$c'_0$$ and $$d'_0$$.
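
As a quick illustration of this forward pass (not the chapter's reference code below; the name `forwardSweep` and the plain-list representation are just assumptions for this sketch), here is how it could look in Haskell:

```haskell
import Data.List (zip4)

-- Forward sweep: given the diagonals a, b, c and the right-hand side d
-- (the first entry of a and the last entry of c are padding, as described above),
-- return the transformed vectors c' and d'.
forwardSweep :: [Double] -> [Double] -> [Double] -> [Double] -> ([Double], [Double])
forwardSweep a b c d = unzip (scanl step (head c / head b, head d / head b) rest)
  where
    rest = zip4 (tail a) (tail b) (tail c) (tail d)
    step (c', d') (ai, bi, ci, di) =
      let denom = bi - ai * c'                    -- b_i - a_i * c'_{i-1}
      in (ci / denom, (di - ai * d') / denom)
```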

Of course, what we really need are the solutions $$x_i$$. It's back substitution time!

If we express our system in terms of equations instead of a matrix, we get

$$
x_i + c'_i \times x_{i+1} = d'_i
$$

plus the last row, which is even simpler: $$x_n = d'_n$$. One solution for free!
Maybe we can backtrack from the last solution? Let's (barely) transform the above equation:

$$
x_i = d'_i - c'_i \times x_{i+1}
$$

and that's all there is to it. We can calculate all the $$x_i$$ in a single pass, starting from the end.
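
The backward pass can be sketched in the same spirit (again just an illustration, with an assumed `backSubstitute` helper):

```haskell
-- Back substitution: x_n = d'_n, then x_i = d'_i - c'_i * x_{i+1},
-- filled in from the last row towards the first.
backSubstitute :: [Double] -> [Double] -> [Double]
backSubstitute c' d' = foldr step [] (zip c' d')
  where
    step (_,  di) []         = [di]               -- last row: x_n = d'_n
    step (ci, di) xs@(x : _) = (di - ci * x) : xs -- x_i = d'_i - c'_i * x_{i+1}
```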

Overall, we only need two passes, and that's why our algorithm is $$O(n)$$!
The transformations are quite easy too. Isn't that neat?
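
Putting the two sketched helpers together on the small example system from earlier gives a tiny sanity check (concatenating the three snippets above into one file makes it runnable):

```haskell
-- Solve the example system from above; its exact solution is (1, 1, 1, 1).
main :: IO ()
main = do
  let a = [0, 1, 1, 1]   -- a_0 does not exist, padded with 0
      b = [2, 2, 2, 2]
      c = [1, 1, 1, 0]   -- c_n does not exist, padded with 0
      d = [3, 4, 4, 3]
      (c', d') = forwardSweep a b c d
  print (backSubstitute c' d')   -- [1.0,1.0,1.0,1.0] up to floating-point rounding
```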

## Example Code

{% method %}
[import, lang:"haskell"](code/haskell/thomas.hs)
{% endmethod %}

<script>
MathJax.Hub.Queue(["Typeset",MathJax.Hub]);
</script>