Improvements to Navier-Stokes demo #99

Merged
merged 2 commits into from
Sep 10, 2022
9 changes: 9 additions & 0 deletions .github/workflows/build-publish.yml
@@ -74,6 +74,15 @@ jobs:
mpirun -n 3 python3 convergence.py
mpirun -n 3 python3 compiler_parameters.py

- name: Upload Navier-Stokes DFG 2D 3 plots
uses: actions/upload-artifact@v3
with:
name: DFG2D-3
path: chapter2/figures
retention-days: 2
if-no-files-found: error


build-book:
# The type of runner that the job will run on
runs-on: ubuntu-latest
176 changes: 96 additions & 80 deletions chapter2/ns_code2.ipynb

Large diffs are not rendered by default.

105 changes: 54 additions & 51 deletions chapter2/ns_code2.py
@@ -21,7 +21,7 @@
# To be able to solve this problem efficiently and ensure numerical stability, we will substitute our first order backward difference scheme with a second order backward difference approximation, and use a semi-implicit Adams-Bashforth approximation of the non-linear term.
#
# ```{admonition} Computationally demanding demo
# This demo is computationally demanding, with a run-time up to 25 minutes, as it is using parameters from the DFG 2D-3 benchmark, which consists of 12800 time steps. It is adviced to download this demo and not run it in a browser. Please also see the last part of the tutorial on how to use `mpirun` to speedup the run-time of the program.
# This demo is computationally demanding, with a run-time up to 15 minutes, as it uses parameters from the DFG 2D-3 benchmark, which consists of 12800 time steps. It is advised to download this demo and not run it in a browser. The run-time of the demo can be reduced by running it with 2 or 3 MPI processes.
# ```
#
# The computational geometry we would like to use is
@@ -42,19 +42,22 @@

# +
import gmsh
import os
import numpy as np
import matplotlib.pyplot as plt
import tqdm.notebook
import tqdm.autonotebook

from mpi4py import MPI
from petsc4py import PETSc

from dolfinx.cpp.mesh import to_type, cell_entity_type
from dolfinx.fem import Constant, Function, FunctionSpace, assemble_scalar, dirichletbc, form, locate_dofs_topological, set_bc
from dolfinx.fem.petsc import apply_lifting, assemble_matrix, assemble_vector, create_vector, set_bc
from dolfinx.fem import (Constant, Function, FunctionSpace,
assemble_scalar, dirichletbc, form, locate_dofs_topological, set_bc)
from dolfinx.fem.petsc import (apply_lifting, assemble_matrix, assemble_vector,
create_vector, create_matrix, set_bc)
from dolfinx.graph import create_adjacencylist
from dolfinx.geometry import BoundingBoxTree, compute_collisions, compute_colliding_cells
from dolfinx.io import (XDMFFile, distribute_entity_data, gmshio)
from dolfinx.io import (VTXWriter, distribute_entity_data, gmshio)
from dolfinx.mesh import create_mesh, meshtags_from_entities

from ufl import (FacetNormal, FiniteElement, Identity, Measure, TestFunction, TrialFunction, VectorElement,
@@ -123,31 +126,30 @@
# LcMin -o---------/
# | | |
# Point DistMin DistMax
res_min = r / 3.7
res_max = 1.5 * r
res_min = r / 3
if mesh_comm.rank == model_rank:
gmsh.model.mesh.field.add("Distance", 1)
gmsh.model.mesh.field.setNumbers(1, "EdgesList", obstacle)
gmsh.model.mesh.field.add("Threshold", 2)
gmsh.model.mesh.field.setNumber(2, "IField", 1)
gmsh.model.mesh.field.setNumber(2, "LcMin", res_min)
gmsh.model.mesh.field.setNumber(2, "LcMax", res_max)
gmsh.model.mesh.field.setNumber(2, "DistMin", 4*r)
gmsh.model.mesh.field.setNumber(2, "DistMax", 8*r)

# We take the minimum of the two fields as the mesh size
gmsh.model.mesh.field.add("Min", 5)
gmsh.model.mesh.field.setNumbers(5, "FieldsList", [2])
gmsh.model.mesh.field.setAsBackgroundMesh(5)
distance_field = gmsh.model.mesh.field.add("Distance")
gmsh.model.mesh.field.setNumbers(distance_field, "EdgesList", obstacle)
threshold_field = gmsh.model.mesh.field.add("Threshold")
gmsh.model.mesh.field.setNumber(threshold_field, "IField", distance_field)
gmsh.model.mesh.field.setNumber(threshold_field, "LcMin", res_min)
gmsh.model.mesh.field.setNumber(threshold_field, "LcMax", 0.25 * H)
gmsh.model.mesh.field.setNumber(threshold_field, "DistMin", r)
gmsh.model.mesh.field.setNumber(threshold_field, "DistMax", 2 * H)
min_field = gmsh.model.mesh.field.add("Min")
gmsh.model.mesh.field.setNumbers(min_field, "FieldsList", [threshold_field])
gmsh.model.mesh.field.setAsBackgroundMesh(min_field)
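The `Threshold` field configured above interpolates the cell size linearly between `LcMin` and `LcMax` as the distance to the obstacle grows from `DistMin` to `DistMax`, exactly as the ASCII diagram sketches. A minimal pure-Python sketch of that interpolation (`threshold_size` is a hypothetical helper for illustration, not part of the gmsh API):

```python
def threshold_size(d, lc_min, lc_max, d_min, d_max):
    """Cell size as a function of the distance d to the obstacle,
    mimicking gmsh's Threshold field (linear ramp between the bounds)."""
    if d <= d_min:
        return lc_min
    if d >= d_max:
        return lc_max
    # linear interpolation on [d_min, d_max]
    return lc_min + (lc_max - lc_min) * (d - d_min) / (d_max - d_min)

# illustrative values: r = 0.05, H = 0.41 as in the DFG benchmark geometry
r, H = 0.05, 0.41
print(threshold_size(0.0, r / 3, 0.25 * H, r, 2 * H))  # lc_min next to the cylinder
print(threshold_size(1.0, r / 3, 0.25 * H, r, 2 * H))  # lc_max far away
```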

# ## Generating the mesh
# We are now ready to generate the mesh. However, we have to decide if our mesh should consist of triangles or quadrilaterals. In this demo, to match the DFG 2D-3 benchmark, we use quadrilateral elements. This is done by recombining the mesh, setting three gmsh options.
# We are now ready to generate the mesh. However, we have to decide if our mesh should consist of triangles or quadrilaterals. In this demo, to match the DFG 2D-3 benchmark, we use second order quadrilateral elements.

if mesh_comm.rank == model_rank:
gmsh.option.setNumber("Mesh.RecombinationAlgorithm", 8)
gmsh.option.setNumber("Mesh.RecombineAll", 2)
gmsh.option.setNumber("Mesh.Algorithm", 8)
gmsh.option.setNumber("Mesh.RecombinationAlgorithm", 2)
gmsh.option.setNumber("Mesh.RecombineAll", 1)
gmsh.option.setNumber("Mesh.SubdivisionAlgorithm", 1)
gmsh.model.mesh.generate(gdim)
gmsh.model.mesh.setOrder(2)
gmsh.model.mesh.optimize("Netgen")

# ## Loading mesh and boundary markers
@@ -161,7 +163,7 @@
# Following the DGF-2 benchmark, we define our problem specific parameters

t = 0
T = 1 #8 # Final time
T = 8 # Final time
dt = 1/1600 # Time step size
num_steps = int(T/dt)
k = Constant(mesh, PETSc.ScalarType(dt))
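With the benchmark parameters above, the number of time steps works out to the 12800 steps quoted in the admonition at the top of the demo:

```python
# time-stepping parameters from the DFG 2D-3 benchmark
T = 8          # final time
dt = 1 / 1600  # time-step size
num_steps = int(T / dt)  # 12800 time steps, as stated in the benchmark
```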
@@ -210,11 +212,11 @@ def __call__(self, x):
# -

# ## Variational form
# As opposed to [Poiseuille flow](./ns_code1.ipynb), we will use a Crank-Nicolson discretization, and an explicit Adams-Bashforth approximation.
# As opposed to [Poiseuille flow](./ns_code1.ipynb), we will use a Crank-Nicolson discretization, and a semi-implicit Adams-Bashforth approximation.
# The first step can be written as
#
# $$
# \rho\left(\frac{u^*- u^n}{\delta t} + \frac{3}{2} u^n \cdot \nabla u^n - \frac{1}{2} u^{n-1} \cdot \nabla u^{n-1}\right) - \frac{1}{2}\mu \Delta( u^*+ u^n )+ \nabla p^{n-1/2} = f^{n+\frac{1}{2}} \qquad \text{ in } \Omega
# \rho\left(\frac{u^*- u^n}{\delta t} + \left(\frac{3}{2}u^{n} - \frac{1}{2} u^{n-1}\right)\cdot \frac{1}{2}\nabla (u^*+u^n) \right) - \frac{1}{2}\mu \Delta( u^*+ u^n )+ \nabla p^{n-1/2} = f^{n+\frac{1}{2}} \qquad \text{ in } \Omega
# $$
#
# $$
@@ -263,13 +265,12 @@ def __call__(self, x):

f = Constant(mesh, PETSc.ScalarType((0,0)))
F1 = rho / k * dot(u - u_n, v) * dx
F1 += inner(1.5 * dot(u_n, nabla_grad(u_n)) - 0.5 * dot(u_n1, nabla_grad(u_n1)), v) * dx
F1 += 0.5 * mu * inner(grad(u+u_n), grad(v))*dx - dot(p_, div(v))*dx
F1 += inner(dot(1.5 * u_n - 0.5 * u_n1, 0.5 * nabla_grad(u + u_n)), v) * dx
F1 += 0.5 * mu * inner(grad(u + u_n), grad(v))*dx - dot(p_, div(v))*dx
F1 += dot(f, v) * dx
a1 = form(lhs(F1))
L1 = form(rhs(F1))
A1 = assemble_matrix(a1, bcs=bcu)
A1.assemble()
A1 = create_matrix(a1)
b1 = create_vector(L1)
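Because the convecting velocity `1.5 * u_n - 0.5 * u_n1` now enters the bilinear form, `A1` changes every time step: it is created once here (fixing its sparsity pattern) and then zeroed and re-assembled inside the time loop with `A1.zeroEntries()`, `assemble_matrix` and `A1.assemble()`. The allocate-once, refill-each-step pattern can be sketched in plain Python (a stand-in illustration, not PETSc):

```python
# allocate-once / refill-each-step pattern: the container is created a
# single time, then zeroed and refilled on every step
matrix = [[0.0] * 3 for _ in range(3)]  # "create_matrix": allocate once

def assemble(matrix, scale):
    # "assemble_matrix": fill entries that depend on the current state
    for i in range(3):
        matrix[i][i] = scale

for step in range(1, 4):
    for row in matrix:               # "zeroEntries": reset, keep storage
        for j in range(len(row)):
            row[j] = 0.0
    assemble(matrix, float(step))    # refill with this step's values
print(matrix[0][0])  # 3.0 after the last step
```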

# Next we define the second step
@@ -351,18 +352,17 @@ def __call__(self, x):
# ## Solving the time-dependent problem
# ```{admonition} Stability of the Navier-Stokes equation
# Note that the current splitting scheme has to fulfil a [Courant–Friedrichs–Lewy condition](https://en.wikipedia.org/wiki/Courant%E2%80%93Friedrichs%E2%80%93Lewy_condition). This couples the admissible time step to the spatial resolution and the inlet velocity.
# Other temporal discretization schemes such as the second order backward difference discretization or Crank-Nicholson discretization with Adams-Bashforth linearization are better behaved than our simple backward differnce scheme.```
# Other temporal discretization schemes such as the second order backward difference discretization or Crank-Nicolson discretization with Adams-Bashforth linearization are better behaved than our simple backward difference scheme.
# ```
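The CFL restriction mentioned in the admonition can be checked with a back-of-the-envelope computation. The velocity and cell-size values below are illustrative, not taken from the demo:

```python
def cfl_number(u_max, dt, h):
    # Courant number C = u_max * dt / h; an explicit treatment of the
    # convective term typically requires C to stay below order one
    return u_max * dt / h

# illustrative values: peak velocity 1.5, dt = 1/1600, smallest cell ~0.015
print(cfl_number(1.5, 1 / 1600, 0.015))  # ~0.06, comfortably below 1
```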
#
# As in the previous example, we create output files for the velocity and pressure and solve the time-dependent problem. As we are solving a time-dependent problem with many time steps, we use the `tqdm`-package to visualize the progress. This package can be installed with `pip3`.

# + tags=[]
xdmf = XDMFFile(MPI.COMM_WORLD, "dfg2D-3.xdmf", "w")
xdmf.write_mesh(mesh)
xdmf.write_function(u_, t)
xdmf.write_function(p_, t)

progress = tqdm.notebook.tqdm(desc="Solving PDE", total=num_steps)
# If running this as a python script, you should use the Progressbar command below
# progress = tqdm.tqdm(desc="Solving PDE", total=num_steps)
vtx_u = VTXWriter(mesh.comm, "dfg2D-3-u.bp", [u_])
vtx_p = VTXWriter(mesh.comm, "dfg2D-3-p.bp", [p_])
vtx_u.write(t)
vtx_p.write(t)
progress = tqdm.autonotebook.tqdm(desc="Solving PDE", total=num_steps)
for i in range(num_steps):
progress.update(1)
# Update current time step
@@ -371,7 +371,10 @@ def __call__(self, x):
inlet_velocity.t = t
u_inlet.interpolate(inlet_velocity)

# Step 1: Tentative veolcity step
# Step 1: Tentative velocity step
A1.zeroEntries()
assemble_matrix(A1, a1, bcs=bcu)
A1.assemble()
with b1.localForm() as loc:
loc.set(0)
assemble_vector(b1, L1)
@@ -403,8 +406,8 @@ def __call__(self, x):
u_.x.scatter_forward()

# Write solutions to file
xdmf.write_function(u_, t)
xdmf.write_function(p_, t)
vtx_u.write(t)
vtx_p.write(t)

# Update variable with solution from this time step
with u_.vector.localForm() as loc_, u_n.vector.localForm() as loc_n, u_n1.vector.localForm() as loc_n1:
@@ -426,7 +429,7 @@ def __call__(self, x):
p_back = mesh.comm.gather(p_back, root=0)
if mesh.comm.rank == 0:
t_u[i] = t
t_p[i] = t-dt/2
t_p[i] = t - dt / 2
C_D[i] = sum(drag_coeff)
C_L[i] = sum(lift_coeff)
# Choose first pressure that is found from the different processors
@@ -438,16 +441,17 @@ def __call__(self, x):
if pressure is not None:
p_diff[i] -= pressure[0]
break
# Close xmdf file
xdmf.close()
vtx_u.close()
vtx_p.close()
# -
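The gather-and-pick-first pattern used for the pressure probes above can be sketched without MPI. Here `gathered` is a hypothetical stand-in for the result of `mesh.comm.gather` on the root rank: one entry per rank, `None` when the probe point was not found on that rank:

```python
# stand-in for mesh.comm.gather(p_front, root=0) with 3 ranks; only the
# ranks owning the probe point contribute values (illustrative numbers)
gathered = [None, [2.47], [2.47]]

# choose the first pressure value found among the processes, as above
p_value = None
for pressure in gathered:
    if pressure is not None:
        p_value = pressure[0]
        break
print(p_value)  # 2.47
```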

# ## Verification using data from FEATFLOW
# As FEATFLOW has provided data for different discretization levels, we compare our numerical results with their data using `matplotlib`.
#

# + tags=[]
if mesh.comm.rank == 0:
if not os.path.exists("figures"):
os.mkdir("figures")
num_velocity_dofs = V.dofmap.index_map_bs * V.dofmap.index_map.size_global
num_pressure_dofs = Q.dofmap.index_map_bs * Q.dofmap.index_map.size_global

@@ -460,7 +464,7 @@ def __call__(self, x):
plt.title("Drag coefficient")
plt.grid()
plt.legend()
plt.savefig("drag_comparison.png")
plt.savefig("figures/drag_comparison.png")

fig = plt.figure(figsize=(25,8))
l1 = plt.plot(t_u, C_L, label=r"FEniCSx ({0:d} dofs)".format(
@@ -470,7 +474,7 @@ def __call__(self, x):
plt.title("Lift coefficient")
plt.grid()
plt.legend()
plt.savefig("lift_comparison.png")
plt.savefig("figures/lift_comparison.png")

fig = plt.figure(figsize=(25,8))
l1 = plt.plot(t_p, p_diff, label=r"FEniCSx ({0:d} dofs)".format(num_velocity_dofs+num_pressure_dofs),linewidth=2)
@@ -479,8 +483,7 @@ def __call__(self, x):
plt.title("Pressure difference")
plt.grid()
plt.legend()
plt.savefig("pressure_comparison.png")

plt.savefig("figures/pressure_comparison.png")
# -

# We observe an offset in amplitude. This is due to the reduced number of degrees of freedom compared to FEATFLOW. If we change the parameter `res_min` to `r/5`, we can obtain a result closer to the FEATFLOW benchmark. It is recommended to convert the notebook to a Python script using [nbconvert](https://nbconvert.readthedocs.io/en/latest/) and to run the program distributed over 4 processors with `mpirun -n 4 python3 ns_code2.py`.