Re-wrote Thomas Algorithm chapter to include more details #329
Conversation
A few small nitpicks, but this is a major improvement to the text currently in the AAA
@@ -1,6 +1,6 @@
 # Thomas Algorithm

-As alluded to in the [Gaussian Elimination chapter](../gaussian_elimination/gaussian_elimination.md), the Thomas Algorithm (or TDMA -- Tri-Diagonal Matrix Algorithm) allows for programmers to **massively** cut the computational cost of their code from $$\sim O(n^3) \rightarrow \sim O(n)$$! This is done by exploiting a particular case of Gaussian Elimination, particularly the case where our matrix looks like:
+As alluded to in the [Gaussian Elimination chapter](../gaussian_elimination/gaussian_elimination.md), the Thomas Algorithm (or TDMA, Tri-Diagonal Matrix Algorithm) allows for programmers to **massively** cut the computational cost of their code from $$O(n^3)$$ to $$O(n)$$ in certain cases! This is done by exploiting a particular case of Gaussian Elimination, in the case where our matrix looks like:
Last sentence:
This is done by exploiting a particular case of Gaussian Elimination where the matrix looks like this:
@@ -14,27 +14,83 @@ $$
 \right]
 $$

-By this, I mean that our matrix is *Tri-Diagonal* (excluding the right-hand side of our system of equations, of course!). Now, at first, it might not be obvious how this helps; however, we may divide this array into separate vectors corresponding to $$a$$, $$b$$, $$c$$, and $$d$$ and then solve for $$x$$ with back-substitution, like before.
+This matrix shape is called *Tri-Diagonal* (excluding the right-hand side of our system of equations, of course!).
+Now, at first, it might not be obvious how this helps. Well, first of all, the system easier to encode: we may divide it into four separate vectors corresponding to $$a$$, $$b$$, $$c$$, and $$d$$ (in some implementation, you will see the missing $$a_0$$ and $$c_n$$ set to zero to get four vectors of the same size).
"well, first of all" -> "Well, firstly, it makes..."
There was no verb in that sentence; I'm also switching "first of all" to "firstly" to match the "secondly" below.
"in some implementations" fix to plural
+Secondly, equations this short are easy to solve analytically, let's see how.
let's see how
-In particular, we need to find an optimal scale factor for each row and use that. What is the scale factor? Well, it is the diagonal $$-$$ the multiplicative sum of the off-diagonal elements.
-In the end, we will update $$c$$ and $$d$$ to be $$c'$$ and $$d'$$ like so:
+We'll apply the mechanisms introduced in the Gaussian Elimination chapter.
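The scale-factor sweep and back substitution being discussed can be sketched as follows (a minimal Python illustration with my own naming; this is not code from the PR itself):

```python
def thomas(a, b, c, d):
    """Solve a tri-diagonal system Ax = d.

    a: sub-diagonal (a[0] is unused; conventionally set to 0)
    b: main diagonal
    c: super-diagonal (c[-1] is unused; conventionally set to 0)
    d: right-hand side
    """
    n = len(d)
    c_prime = [0.0] * n
    d_prime = [0.0] * n

    # The initial elements are defined separately.
    c_prime[0] = c[0] / b[0]
    d_prime[0] = d[0] / b[0]

    # Forward sweep: divide each row by its scale factor, the diagonal
    # minus the product involving the off-diagonal elements.
    for i in range(1, n):
        scale = b[i] - a[i] * c_prime[i - 1]
        c_prime[i] = c[i] / scale
        d_prime[i] = (d[i] - a[i] * d_prime[i - 1]) / scale

    # Back substitution, as in Gaussian elimination.
    x = [0.0] * n
    x[-1] = d_prime[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d_prime[i] - c_prime[i] * x[i + 1]
    return x
```

For example, with main diagonal `[2, 2, 2]`, off-diagonals of ones, and right-hand side `[3, 4, 3]`, this returns `[1.0, 1.0, 1.0]`.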
What mechanisms, exactly? Maybe say "We'll start by applying mechanisms similar to those seen in the [Gaussian Elimination](../gaussian_elimination/gaussian_elimination.md) chapter."
-Of course, the initial elements will need to be specifically defined as
+Let's transform row $$(i)$$ in two steps.
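For reference, the update rules under discussion are the standard TDMA formulas (textbook form, not quoted from the diff):

$$
c'_0 = \frac{c_0}{b_0}, \qquad d'_0 = \frac{d_0}{b_0}
$$

$$
c'_i = \frac{c_i}{b_i - a_i c'_{i-1}}, \qquad d'_i = \frac{d_i - a_i d'_{i-1}}{b_i - a_i c'_{i-1}}
$$

with back substitution $$x_n = d'_n$$ and $$x_i = d'_i - c'_i x_{i+1}$$.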
Add a newline here?
Yup, looks good!
Let me know what you think; maybe I went into too much detail?