Added python gaussian elimination #200


Closed
@@ -0,0 +1,71 @@
import numpy as np

def gaussian_elimination(A):

    rows = len(A)
    cols = len(A[0])
    n = min(rows, cols)

    # Main loop going through all columns
    for k in range(n):

        # Step 1: finding the maximum element for each column
        max_index = np.argmax(np.abs([A[i][k] for i in range(k, n)])) + k

        # Check to make sure matrix is good!
        if A[max_index][k] == 0:
            print("matrix is singular! End!")
Member

Singular matrices have nothing to do with Gaussian elimination. If your pivot is zero, you continue with the next column. It's still a valid matrix in the echelon form.

Member

https://stackoverflow.com/questions/23985004/gaussian-elimination-of-a-singular-matrix

In my understanding, a singular matrix indicates an improper system of equations and should be underdetermined. This algorithm shouldn't work in that case (I don't think, anyway). Most example code I can find stops once the matrix is considered singular; however, I think you are right. We could also handle the general case. I am just not sure what the best version of the code here is.
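For illustration, here is a minimal sketch (hypothetical, not part of this PR) of how the elimination loop could advance to the next column when a pivot is zero, rather than bailing out:

```python
def row_echelon(A):
    # Reduce A (a list of lists) to row echelon form in place.
    # If a column has no nonzero entry below the current pivot row,
    # skip to the next column instead of aborting.
    rows, cols = len(A), len(A[0])
    pivot_row = 0
    for col in range(cols):
        if pivot_row >= rows:
            break
        # Partial pivoting: pick the row with the largest magnitude
        max_row = max(range(pivot_row, rows), key=lambda r: abs(A[r][col]))
        if A[max_row][col] == 0:
            continue  # nothing to eliminate in this column
        A[pivot_row], A[max_row] = A[max_row], A[pivot_row]
        for r in range(pivot_row + 1, rows):
            frac = A[r][col] / A[pivot_row][col]
            for c in range(col, cols):
                A[r][c] -= A[pivot_row][c] * frac
        pivot_row += 1
```

A matrix with an all-zero first column then still reduces to a valid echelon form instead of raising an error.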

            return

        # Step 2: swap row with highest value for that column to the top
        A[max_index], A[k] = A[k], A[max_index]

        # Loop for all remaining rows
        for i in range(k + 1, rows):

            # Step 3: finding fraction
            fraction = A[i][k] / A[k][k]
Member

The pivot is not always in the diagonal, in the case that a previous pivot was zero. You have to consider this case.

Member

This is actually related to the above discussion, I think. The code I have in the algorithm archive is meant to solve a system of equations, so these cases will be caught by the "singular matrix" condition.
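To make the off-diagonal pivot case concrete, here is a hypothetical two-row example where eliminating column 0 leaves a zero in column 1, so the next pivot lands at position (1, 2) rather than on the diagonal:

```python
# Augmented matrix: two equations, three unknowns (illustrative values)
A = [[1.0, 2.0, 1.0, 4.0],
     [2.0, 4.0, 3.0, 9.0]]

# Eliminate column 0 from row 1
frac = A[1][0] / A[0][0]
A[1] = [x - y * frac for x, y in zip(A[1], A[0])]

print(A[1])  # [0.0, 0.0, 1.0, 1.0] -- the pivot is in column 2, not column 1
```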


            # loop through all columns for that row
            for j in range(k + 1, cols):

                # Step 4: re-evaluate each element
                A[i][j] -= A[k][j] * fraction

            # Step 5: Set lower elements to 0
            A[i][k] = 0


def back_substitution(A):

    rows = len(A)
    cols = len(A[0])

    # Creating the solution vector
    soln = [0] * rows

    # initialize the final element
    soln[-1] = A[-1][-1] / A[-1][-2]
Member

Again here, this is not general, in case the Echelon form is not a pure upper triangular matrix.

Member

Again, this is caught by the "not singular" condition.

Currently looking at the code and seeing how we should update things.
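One possible generalization (a hypothetical sketch, not the final PR code) locates each row's pivot column instead of assuming it sits on the diagonal; free variables are simply left at zero:

```python
def general_back_substitution(A):
    # A is an augmented matrix in row echelon form; the last column
    # is the right-hand side. Each row's pivot may lie off-diagonal.
    rows, cols = len(A), len(A[0])
    soln = [0.0] * (cols - 1)
    for i in range(rows - 1, -1, -1):
        # The pivot is the first nonzero coefficient in the row
        pivot_col = next((j for j in range(cols - 1) if A[i][j] != 0), None)
        if pivot_col is None:
            continue  # an all-zero coefficient row contributes nothing
        s = sum(A[i][j] * soln[j] for j in range(pivot_col + 1, cols - 1))
        soln[pivot_col] = (A[i][-1] - s) / A[i][pivot_col]
    return soln
```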


    for i in range(rows - 2, -1, -1):
        sum = 0
        for j in range(rows - 1, -1, -1):
            sum += soln[j] * A[i][j]
        soln[i] = (A[i][-1] - sum) / A[i][i]

    return soln


def main():
    A = [[2, 3, 4, 6],
         [1, 2, 3, 4],
         [3, -4, 0, 10]]

    gaussian_elimination(A)
    print(A)
    soln = back_substitution(A)

    for element in soln:
        print(element)


if __name__ == '__main__':
    main()
@@ -257,6 +257,8 @@ In code, this looks like:
{% method %}
{% sample lang="jl" %}
[import:1-42, lang:"julia"](code/julia/gaussian_elimination.jl)
{% sample lang="py" %}
[import:3-36, lang:"python"](code/python/gaussian_elimination.py)
{% endmethod %}

As with all code, it takes time to fully absorb what is going on and why everything is happening; however, I have tried to comment the above pseudocode with the necessary steps. Let me know if anything is unclear!
@@ -270,6 +272,8 @@ Even though this seems straightforward, the pseudocode might not be as simple as
{% method %}
{% sample lang="jl" %}
[import:44-64, lang:"julia"](code/julia/gaussian_elimination.jl)
{% sample lang="py" %}
[import:39-56, lang:"python"](code/python/gaussian_elimination.py)
{% endmethod %}

Now, as for what's next... Well, we are in for a treat! The above algorithm clearly has 3 `for` loops, and will thus have a complexity of $$\sim O(n^3)$$, which is abysmal! If we can reduce the matrix to a specifically **tridiagonal** matrix, we can actually solve the system in $$\sim O(n)$$! How? Well, we can use an algorithm known as the _Tri-Diagonal Matrix Algorithm_ \(TDMA\) also known as the _Thomas Algorithm_.
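As a preview of the tridiagonal case, here is a hypothetical sketch of the Thomas algorithm (the names `a`, `b`, `c` for the sub-, main, and super-diagonals are illustrative, not the chapter's final code):

```python
def thomas(a, b, c, d):
    # Solve a tridiagonal system in O(n):
    #   a: sub-diagonal (a[0] unused), b: main diagonal,
    #   c: super-diagonal (c[-1] unused, pass 0), d: right-hand side.
    n = len(d)
    cp = [0.0] * n  # modified super-diagonal
    dp = [0.0] * n  # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]

    # Forward sweep: eliminate the sub-diagonal
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom

    # Back substitution on the resulting bidiagonal system
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

For example, the system with diagonals `b = [2, 2, 2]`, off-diagonals of ones, and right-hand side `[3, 4, 3]` has the solution `[1, 1, 1]`.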
@@ -281,6 +285,8 @@ The full code can be seen here:
{% method %}
{% sample lang="jl" %}
[import, lang:"julia"](code/julia/gaussian_elimination.jl)
{% sample lang="py" %}
[import, lang:"python"](code/python/gaussian_elimination.py)
{% sample lang="rs" %}
[import, lang:"rust"](code/rust/gaussian_elimination.rs)
{% endmethod %}