The 'Distributed LDG Method' code gallery program

This program was contributed by Michael Harmon <mdh266@gmail.com>.
It comes without any warranty or support by its authors or the authors of deal.II.

This program is part of the deal.II code gallery.

Annotated version of README.md

Distributed Local Discontinuous Galerkin Methods

Introduction

This code is designed to numerically solve the Poisson equation

\begin{align} - \nabla \cdot \left(\ \nabla u \ \right)&= f(\textbf{x}) && \mbox{in} \ \Omega,\nonumber \\ -\nabla u \cdot \textbf{n} &= g_{N}(\textbf{x}) && \mbox{on} \ \partial \Omega_{N} \nonumber\\ u &= g_{D}(\textbf{x}) && \mbox{on} \ \partial \Omega_{D}. \nonumber \end{align}

in 2D and 3D using the local discontinuous Galerkin (LDG) method from scratch. The tutorial codes step-12 and step-39 use the MeshWorker interface to build discontinuous Galerkin (DG) methods. While this is very convenient, I could not use this framework for solving my research problem and needed to write the LDG method from scratch. I thought it would be helpful for others to have access to an example that goes through writing a discontinuous Galerkin method from scratch and also shows how to do it in a distributed setting using the Trilinos library. This example may also be of interest to users who wish to use the LDG method, since the method is distinctly different from the interior penalty discontinuous Galerkin (IPDG) methods and was not covered in other tutorials on DG methods. The LDG method is very useful when one is working with a differential equation and desires approximations to both the scalar unknown function and its flux. A mixed method also provides both the scalar unknown function and its flux; however, the LDG method has fewer degrees of freedom than the mixed method with the Raviart-Thomas element. Moreover, it approximates the scalar unknown function and its flux using discontinuous polynomial basis functions, which are much more suitable when one wishes to use local refinement.

Compiling and Running

To generate a makefile for this code using CMake, type the following command into the terminal from the main directory:

    cmake . -DDEAL_II_DIR=/path/to/deal.II

To compile the code in debug mode use:

    make

To compile the code in release mode use:

    make release    

Either of these commands will create the executable main; the release-mode executable will be faster.

To run the code on N processors, type the following command into the terminal from the main directory:

    mpirun -np N ./main

The output of the code will be in .vtu and .pvtu format and be written to disk in parallel. The results can be viewed using ParaView.
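For reference, the parallel output is produced with deal.II's DataOut class. A minimal sketch of what such an output routine looks like (this is illustrative, not the program's verbatim output_results() function; the data vector name and file names are placeholders):

    DataOut<dim> data_out;
    data_out.attach_dof_handler(dof_handler);
    data_out.add_data_vector(locally_relevant_solution, "solution");
    data_out.build_patches();

    // Writes one .vtu file per processor plus a .pvtu record that
    // ParaView can open directly.
    data_out.write_vtu_with_pvtu_record("./", "solution", 0,
                                        MPI_COMM_WORLD, 2);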

Local Discontinuous Galerkin Method

In this section we discuss the LDG method and first introduce some notation. Let \(\mathcal{T}_{h} = \mathcal{T}_{h}(\Omega) \, = \, \left\{ \, \Omega_{e} \, \right\}_{e=1}^{N}\) be the general triangulation of a domain \(\Omega \; \subset \; \mathbb{R}^{d}, \; d \, = \, 1, 2, 3\), into \(N\) non-overlapping elements \(\Omega_{e}\) of diameter \(h_{e}\). The maximum size of the diameters of all elements is \(h = \max( \, h_{e}\, )\). We define \(\mathcal{E}_{h}\) to be the set of all element faces and \(\mathcal{E}_{h}^{i} \) to be the set of all interior faces of elements, i.e. those which do not intersect the boundary \(\partial \Omega\). We define \(\mathcal{E}_{D}\) and \(\mathcal{E}_{N}\) to be the sets of all element faces on the Dirichlet and Neumann boundaries respectively. Let \(\partial \Omega_{e} \in \mathcal{E}_{h}^{i}\) be an interior face; we define the unit normal vector to be,

\begin{align} \textbf{n} \; = \; \text{unit normal vector to } \partial \Omega_{e} \text{ pointing from } \Omega_{e}^{-} \, \rightarrow \, \Omega_{e}^{+}. \end{align}

We take the following definition on limits of functions on element faces,

\begin{align} w^{-} (\textbf{x} ) \, \vert_{\partial \Omega_{e} } \; = \; \lim_{s \rightarrow 0^{-}} \, w(\textbf{x} + s \textbf{n}), && w^{+} (\textbf{x} ) \, \vert_{\partial \Omega_{e} } \; = \; \lim_{s \rightarrow 0^{+}} \, w(\textbf{x} + s \textbf{n}). \end{align}

We define the average and jump of a function across an element face as,

\begin{align} \{f\} \; = \; \frac{1}{2}(f^-+f^+) , \qquad \mbox{and} \qquad \left[ f \right] \; = \; f^+ \textbf{n}^+ + f^- \textbf{n}^- \; = \; (f^+ - f^-) \textbf{n}^+, \end{align}

and,

\begin{align} \{\textbf{f} \} \; = \; \frac{1}{2}(\textbf{f}^- + \textbf{f}^+), \qquad \mbox{and}\qquad \left[ \textbf{f} \right] \; = \; \textbf{f}^+ \cdot \textbf{n}^+ + \textbf{f}^- \cdot \textbf{n}^- \; = \; (\textbf{f}^+ - \textbf{f}^-) \cdot \textbf{n}^+ , \end{align}

where \(f\) is a scalar function and \(\textbf{f}\) is a vector-valued function. We note that for faces that are on the boundary of the domain we have,

\begin{align} \left[ f \right] \; = \; f \, \textbf{n} \qquad \mbox{and}\qquad \left[ \textbf{f} \right] \; = \; \textbf{f} \cdot \textbf{n}. \end{align}
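At a face quadrature point these definitions reduce to simple arithmetic on the two traces; a minimal sketch in deal.II types (the trace values and the normal below are illustrative placeholders, not quantities from the program):

    // Traces of a scalar u from the "-" and "+" sides of a face, and the
    // unit normal n^+ of the "+" side; values are placeholders.
    const double u_minus = 1.0;
    const double u_plus  = 2.0;
    Tensor<1,2>  n_plus;
    n_plus[1] = 1.0;

    const double      avg_u  = 0.5 * (u_minus + u_plus);      // {u}
    const Tensor<1,2> jump_u = (u_plus - u_minus) * n_plus;   // [u] = (u^+ - u^-) n^+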

We denote the volume integrals and surface integrals using the \(L^{2}\) inner products by \(( \, \cdot \, , \, \cdot \, )_{\Omega}\) and \(\langle \, \cdot \, , \, \cdot \, \rangle_{\partial \Omega}\) respectively.

As with the mixed finite element method with the Raviart-Thomas element, the LDG discretization requires the Poisson equation to be written as a first-order system. We do this by introducing an auxiliary variable which we call the current flux variable \(\textbf{q}\):

\begin{align} \nabla \cdot \textbf{q} \; &= \; f(\textbf{x}) && \text{in} \ \Omega, \label{eq:Primary} \\ \textbf{q} \; &= \; -\nabla u && \text{in} \ \Omega, \label{eq:Auxillary} \\ \textbf{q} \cdot \textbf{n} \; &= \; g_{N}(\textbf{x}) && \text{on} \ \partial \Omega_{N},\\ u &= g_{D}(\textbf{x}) && \mbox{on}\ \partial \Omega_{D}. \end{align}

In our numerical methods we will use approximations to scalar valued functions that reside in the finite-dimensional broken Sobolev spaces,

\begin{align} W_{h,k} \, &= \, \left\{ w \in L^{2}(\Omega) \, : \; w \vert_{\Omega_{e}} \in \mathcal{Q}_{k,k}(\Omega_{e}), \quad \forall \, \Omega_{e} \in \mathcal{T}_{h} \right\}, \end{align}

where \(\mathcal{Q}_{k,k}(\Omega_{e})\) denotes the tensor product of discontinuous polynomials of order \(k\) on the element \(\Omega_{e}\). We use approximations of vector valued functions that are in,

\begin{align} \textbf{W}_{h,k} \, &= \, \left\{ \textbf{w} \in \left(L^{2}(\Omega)\right)^{d} \, : \; \textbf{w} \vert_{\Omega_{e}} \in \left( \mathcal{Q}_{k,k}(\Omega_{e}) \right)^{d}, \quad \forall \, \Omega_{e} \in \mathcal{T}_{h} \right\} \end{align}
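In deal.II terms these spaces are realized by the FE_DGQ element; a minimal sketch of the correspondence, assuming a fixed polynomial degree (the element objects below are illustrative):

    // Polynomial degree (illustrative).
    const unsigned int k = 1;

    // W_{h,k}: scalar discontinuous tensor-product polynomials of degree k.
    FE_DGQ<dim> scalar_element(k);

    // (W_{h,k})^dim x W_{h,k}: dim copies of FE_DGQ for the flux q_h plus
    // one copy for u_h -- exactly the FESystem this program builds later.
    FESystem<dim> fe(FESystem<dim>(FE_DGQ<dim>(k), dim), 1,
                     FE_DGQ<dim>(k), 1);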

We seek approximations for densities \(u_{h} \in W_{h,k}\) and gradients \(\textbf{q}_{h}\in \textbf{W}_{h,k}\). Multiplying (6) by \(w \in W_{h,k}\) and (7) by \(\textbf{w} \in \textbf{W}_{h,k}\) and integrating the divergence terms by parts over an element \(\Omega_{e} \in \mathcal{T}_{h}\) we obtain,

\begin{align} - \left( \nabla w \, , \, \textbf{q}_{h} \right)_{\Omega_{e}} + \langle w \, , \, \textbf{q}_{h} \rangle_{\partial \Omega_{e}} \ &= \ \left( w , \, f \right)_{\Omega_{e}} , \\ \left( \textbf{w} \, , \, \textbf{q}_{h} \right)_{\Omega_{e}} - \left( \nabla \cdot \textbf{w} \, , \, u_{h} \right)_{\Omega_{e}} + \langle \textbf{w} \, , \, u_{h} \rangle_{\partial \Omega_{e}} \ &= \ 0 \end{align}

Summing over all the elements leads to the weak formulation:

Find \(u_{h} \in W_{h,k}\) and \(\textbf{q}_{h} \in \textbf{W}_{h,k} \) such that,

\begin{align} - \sum_{e} \left( \nabla w, \, \textbf{q}_{h} \right)_{\Omega_{e}} + \langle \left[ \, w \, \right] \, , \, \widehat{\textbf{q}_{h} } \rangle_{\mathcal{E}_{h}^{i} } + \langle \left[ \, w \, \right] \, , \, \widehat{\textbf{q}_{h} } \rangle_{\mathcal{E}_{D} \cup \mathcal{E}_{N}} \ &= \ \sum_{e} \left( w , \, f \right)_{\Omega_{e}} \\ \sum_{e} \left( \textbf{w} \, , \, \textbf{q}_{h} \right)_{\Omega_{e}} - \sum_{e} \left( \nabla \cdot \textbf{w} , \, u_{h} \right)_{\Omega_{e}} + \langle \, \left[ \, \textbf{w} \, \right] \, , \, \widehat{u_{h}} \rangle_{\mathcal{E}_{h}^{i}} + \langle \left[ \, \textbf{w} \, \right] \, , \, \widehat{u_{h}} \rangle_{\mathcal{E}_{D} \cup \mathcal{E}_{N}} \ &= \ 0 \end{align}

for all \((w,\textbf{w}) \in W_{h,k} \times \textbf{W}_{h,k}\).

The terms \(\widehat{\textbf{q}_{h}}\) and \(\widehat{u_{h}}\) are the numerical fluxes. The numerical fluxes are introduced to ensure consistency, stability, and enforce the boundary conditions weakly, for more info see the book: Nodal Discontinuous Galerkin Methods. The flux \(\widehat{u_{h}}\) is,

\begin{align} \widehat{u_{h}} \; = \; \left\{ \begin{array}{cl} \left\{ u_{h} \right\} \ + \ \boldsymbol \beta \cdot [ u_{h} ] \, & \ \text{in} \ \mathcal{E}_{h}^{i} \\ u_{h} & \ \text{in} \ \mathcal{E}_{N}\\ g_{D}(\textbf{x}) & \ \text{in} \ \mathcal{E}_{D} \\ \end{array} \right. \end{align}

The flux \(\widehat{\textbf{q}_{h}}\) is,

\begin{align} \widehat{\textbf{q}_{h}} \; = \; \left\{ \begin{array}{cl} \left\{ \textbf{q}_{h} \right\} \ - \ \left[ \textbf{q}_{h} \right] \, \boldsymbol \beta \ + \ \sigma \, \left[ \, u_{h} \, \right] & \ \text{in} \ \mathcal{E}_{h}^{i} \\ g_{N}(\textbf{x}) \, \textbf{n} \, & \ \text{in} \ \mathcal{E}_{N}\\ \textbf{q}_{h} \ + \ \sigma \, \left(u_{h} - g_{D}(\textbf{x}) \right) \, \textbf{n} & \ \text{in} \ \mathcal{E}_{D} \\ \end{array} \right. \end{align}

The term \(\boldsymbol \beta\) is a constant unit vector which does not lie parallel to any element face in \( \mathcal{E}_{h}^{i}\). For \(\boldsymbol \beta = 0\), \(\widehat{\textbf{q}_{h}}\) and \(\widehat{u_{h}}\) are called the central or Brezzi et al. fluxes. For \(\boldsymbol \beta \neq 0\), \(\widehat{\textbf{q}_{h}}\) and \(\widehat{u_{h}}\) are called the LDG/alternating fluxes.

The term \(\sigma\) is the penalty parameter that is defined as,

\begin{align} \sigma \; = \; \left\{ \begin{array}{cc} \tilde{\sigma} \, \min \left( h^{-1}_{e_{1}}, h^{-1}_{e_{2}} \right) & \textbf{x} \in \langle \Omega_{e_{1}}, \Omega_{e_{2}} \rangle \\ \tilde{\sigma} \, h^{-1}_{e} & \textbf{x} \in \partial \Omega_{e} \in \mathcal{E}_{D} \end{array} \right. \label{eq:Penalty} \end{align}

with \(\tilde{\sigma}\) being a positive constant. There are other choices of penalty values \(\sigma\), but the one above produces approximations that are the most accurate; see this reference for more info.
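In the assembly routines below this is implemented by scaling a constant (the member variable penalty, playing the role of \(\tilde{\sigma}\)) by the reciprocal of the local mesh size; a short sketch consistent with the code that follows:

    // On an interior face: use the smaller of the two cell diameters,
    // so that sigma = penalty * min(h_e1^{-1}, h_e2^{-1}).
    const double h     = std::min(cell->diameter(), neighbor->diameter());
    const double sigma = penalty / h;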

We can now substitute (16) and (17) into (14) and (15) to obtain the solution pair \((u_{h}, \textbf{q}_{h})\) to the LDG approximation to the Poisson equation given by:

Find \(u_{h} \in W_{h,k}\) and \(\textbf{q}_{h} \in \textbf{W}_{h,k}\) such that,

\begin{align} a(\textbf{w}, \textbf{q}_{h}) \ + \ b^{T}(\textbf{w}, u_{h}) \ &= \ G(\textbf{w}) \nonumber \\ b(w, \textbf{q}_{h}) \ + \ c(w, u_{h}) \ &= \ F(w) \label{eq:LDG_bilinear} \end{align}

for all \((w, \textbf{w}) \in W_{h,k} \times \textbf{W}_{h,k}\). This leads to the linear system,

\begin{align} \left[ \begin{matrix} A & -B^{T} \\ B & C \end{matrix} \right] \left[ \begin{matrix} \textbf{Q}\\ \textbf{U} \end{matrix} \right] \ = \ \left[ \begin{matrix} \textbf{G}\\ \textbf{F} \end{matrix} \right] \end{align}

Where \(\textbf{U}\) and \(\textbf{Q}\) are the vectors of degrees of freedom for \(u_{h}\) and \(\textbf{q}_{h}\) respectively, and \(\textbf{G}\) and \(\textbf{F}\) are the vectors corresponding to \(G(\textbf{w})\) and \(F(w)\) respectively. The matrix for the LDG system is non-singular for any \(\sigma > 0\).

The bilinear forms in (19) and the right-hand side functionals are defined as,

\begin{align} b(w, \textbf{q}_{h}) \, &= \, - \sum_{e} \left(\nabla w, \textbf{q}_{h} \right)_{\Omega_{e}} + \langle \left[ w \right], \left\{\textbf{q}_{h} \right\} - \left[ \textbf{q}_{h} \right] \boldsymbol \beta \rangle_{\mathcal{E}_{h}^{i}} + \langle w, \textbf{n} \cdot \textbf{q}_{h} \rangle_{\mathcal{E}_{D}}\\ a(\textbf{w},\textbf{q}_{h}) \, &= \, \sum_{e} \left(\textbf{w}, \textbf{q}_{h} \right)_{\Omega_{e}} \\ -b^{T}(\textbf{w}, u_{h}) \, &= \, - \sum_{e} \left(\nabla \cdot \textbf{w}, u_{h} \right)_{\Omega_{e}} + \langle \left[ \textbf{w} \right], \left\{u_{h} \right\} + \boldsymbol \beta \cdot \left[ u_{h} \right] \rangle_{\mathcal{E}_h^{i} } + \langle \left[ \textbf{w} \right], u_{h} \rangle_{\mathcal{E}_{N} } \\ c(w,u_{h}) \, &= \, \langle \left[ w \right], \sigma \left[ u_{h} \right] \rangle_{\mathcal{E}_{h}^{i}} + \langle w, \sigma u_{h} \rangle_{\mathcal{E}_{D}} \\ G(\textbf{w}) \ & = \ - \langle \textbf{w}, g_{D} \rangle_{\mathcal{E}_{D}}\\ F(w) \ & = \ \sum_{e} (w,f)_{\Omega_{e}} - \langle w, g_{N} \rangle_{\mathcal{E}_{N}} + \langle w, \sigma g_{D} \rangle_{\mathcal{E}_{D}} \end{align}

As discussed in step-20, we won't be assembling the bilinear terms explicitly; instead, we will assemble all the solid integrals and fluxes at once. We note that in order to actually build the flux terms in our local flux matrices we will substitute the flux definitions above into the bilinear terms.

Useful References

These are some useful references on the LDG and DG methods:

The Commented Code

Annotated version of Functions.cc

  /* -----------------------------------------------------------------------------
  *
  * SPDX-License-Identifier: LGPL-2.1-or-later
  * Copyright (C) 2017 by Michael Harmon
  *
  * This file is part of the deal.II code gallery.
  *
  * -----------------------------------------------------------------------------
  */
 

Functions.cc

In this file we keep the right-hand side function, the Dirichlet boundary conditions, and the true solution to our Poisson problem. Since these classes and functions have been discussed extensively in the deal.II tutorials we won't discuss them any further.

  #include <deal.II/base/function.h>
  #include <deal.II/base/tensor_function.h>
  #include <deal.II/lac/vector.h>
  #include <cmath>
 
  using namespace dealii;
 
  template <int dim>
  class RightHandSide : public Function<dim>
  {
  public:
  RightHandSide() : Function<dim>(1)
  {}
 
  virtual double value(const Point<dim> &p,
  const unsigned int component = 0 ) const override;
  };
 
  template <int dim>
  class DirichletBoundaryValues : public Function<dim>
  {
  public:
  DirichletBoundaryValues() : Function<dim>(1)
  {}
 
  virtual double value(const Point<dim> &p,
  const unsigned int component = 0 ) const override;
  };
 
  template<int dim>
  class TrueSolution : public Function<dim>
  {
  public:
  TrueSolution() : Function<dim>(dim+1)
  {}
 
  virtual void vector_value(const Point<dim> &p,
  Vector<double> &values) const override;
  };
 
  template <int dim>
  double
  RightHandSide<dim>::
  value(const Point<dim> &p,
  const unsigned int ) const
  {
  const double x = p[0];
  const double y = p[1];
  return 4*M_PI*M_PI*(cos(2*M_PI*y) - sin(2*M_PI*x));
 
  }
 
  template <int dim>
  double
  DirichletBoundaryValues<dim>::
  value(const Point<dim> &p,
  const unsigned int ) const
  {
  const double x = p[0];
  const double y = p[1];
  return cos(2*M_PI*y) -sin(2*M_PI*x) - x;
  }
 
 
  template <int dim>
  void
  TrueSolution<dim>::
  vector_value(const Point<dim> &p,
  Vector<double> &values) const
  {
  Assert(values.size() == dim+1,
  ExcDimensionMismatch(values.size(), dim+1) );
 
  double x = p[0];
  double y = p[1];
 
  values(0) = 1 + 2*M_PI*cos(2*M_PI*x);
  values(1) = 2*M_PI*sin(2*M_PI*y);
 
  values(2) = cos(2*M_PI*y) - sin(2*M_PI*x) - x;
  }

Annotated version of LDGPoisson.cc

  /* -----------------------------------------------------------------------------
  *
  * SPDX-License-Identifier: LGPL-2.1-or-later
  * Copyright (C) 2017 by Michael Harmon
  *
  * This file is part of the deal.II code gallery.
  *
  * -----------------------------------------------------------------------------
  */
 

LDGPoisson.cc

The code begins as per usual with a long list of the included files from the deal.II library.

  #include <deal.II/base/quadrature_lib.h>
  #include <deal.II/base/logstream.h>
  #include <deal.II/base/function.h>
  #include <deal.II/base/timer.h>
 
  #include <deal.II/lac/affine_constraints.h>
  #include <deal.II/lac/full_matrix.h>
  #include <deal.II/lac/sparse_matrix.h>
 
 
  #include <deal.II/grid/tria.h>
  #include <deal.II/grid/grid_generator.h>
  #include <deal.II/grid/grid_tools.h>
  #include <deal.II/grid/tria_accessor.h>
  #include <deal.II/grid/tria_iterator.h>
 
  #include <deal.II/dofs/dof_handler.h>
  #include <deal.II/dofs/dof_renumbering.h>
  #include <deal.II/dofs/dof_accessor.h>
  #include <deal.II/dofs/dof_tools.h>
 
  #include <fstream>
  #include <iostream>
 

Here's where the classes for the DG methods begin. We can use either the Lagrange polynomials,

  #include <deal.II/fe/fe_dgq.h>

or the Legendre polynomials

  #include <deal.II/fe/fe_dgp.h>

as basis functions. I'll be using the Lagrange polynomials.

  #include <deal.II/fe/fe_system.h>
  #include <deal.II/fe/fe_values.h>
 
  #include <deal.II/numerics/vector_tools.h>
  #include <deal.II/numerics/matrix_tools.h>
  #include <deal.II/numerics/data_out.h>
 
 

Now we have to load in the deal.II files that will allow us to use a distributed computing framework.

  #include <deal.II/base/utilities.h>
  #include <deal.II/base/index_set.h>
  #include <deal.II/base/conditional_ostream.h>
  #include <deal.II/lac/sparsity_tools.h>
  #include <deal.II/distributed/tria.h>
 
 

Additionally we load the files that will allow us to interact with the Trilinos library.

  #include <deal.II/lac/trilinos_sparse_matrix.h>
  #include <deal.II/lac/trilinos_vector.h>
  #include <deal.II/lac/trilinos_solver.h>
 
 

The Functions.cc file contains all the definitions of the functions we will use, i.e. the right-hand side function, the boundary conditions, and the true solution.

  #include "Functions.cc"
 
  using namespace dealii;
 
 

Here is the main class for the local discontinuous Galerkin method applied to Poisson's equation. We won't explain much of the class and method declarations, but will dive deeper into describing the functions when they are defined. The only thing I will mention about the class declaration is that this is where I labeled the different types of boundaries using enums.

  template <int dim>
  class LDGPoissonProblem
  {
 
  public:
  LDGPoissonProblem(const unsigned int degree,
  const unsigned int n_refine);
 
  ~LDGPoissonProblem();
 
  void run();
 
 
  private:
  void make_grid();
 
  void make_dofs();
 
  void assemble_system();
 
  void assemble_cell_terms(const FEValues<dim> &cell_fe,
  FullMatrix<double> &cell_matrix,
  Vector<double> &cell_vector);
 
  void assemble_Neumann_boundary_terms(const FEFaceValues<dim> &face_fe,
  FullMatrix<double> &local_matrix,
  Vector<double> &local_vector);
 
  void assemble_Dirichlet_boundary_terms(const FEFaceValues<dim> &face_fe,
  FullMatrix<double> &local_matrix,
  Vector<double> &local_vector,
  const double &h);
 
  void assemble_flux_terms(const FEFaceValuesBase<dim> &fe_face_values,
  const FEFaceValuesBase<dim> &fe_neighbor_face_values,
  FullMatrix<double> &vi_ui_matrix,
  FullMatrix<double> &vi_ue_matrix,
  FullMatrix<double> &ve_ui_matrix,
  FullMatrix<double> &ve_ue_matrix,
  const double &h);
 
  void distribute_local_flux_to_global(
  const FullMatrix<double> &vi_ui_matrix,
  const FullMatrix<double> &vi_ue_matrix,
  const FullMatrix<double> &ve_ui_matrix,
  const FullMatrix<double> &ve_ue_matrix,
  const std::vector<types::global_dof_index> &local_dof_indices,
  const std::vector<types::global_dof_index> &local_neighbor_dof_indices);
 
  void solve();
 
  void output_results() const;
 
  const unsigned int degree;
  const unsigned int n_refine;
  double penalty;
  double h_max;
  double h_min;
 
  enum
  {
  Dirichlet,
  Neumann
  };
 
  parallel::distributed::Triangulation<dim> triangulation;
 
  FESystem<dim> fe;
  DoFHandler<dim> dof_handler;
 
  AffineConstraints<double> constraints;
 
  SparsityPattern sparsity_pattern;
 
  TrilinosWrappers::SparseMatrix system_matrix;
  TrilinosWrappers::MPI::Vector locally_relevant_solution;
  TrilinosWrappers::MPI::Vector system_rhs;
 
  ConditionalOStream pcout;
  TimerOutput computing_timer;
 
  SolverControl solver_control;
  TrilinosWrappers::SolverDirect solver;
 
  const RightHandSide<dim> rhs_function;
  const DirichletBoundaryValues<dim> Dirichlet_bc_function;
  const TrueSolution<dim> true_solution;
  };
 
 

Class constructor and destructor

The constructor and destructor for this class are very much like those for step-40. The difference is that we'll be passing in an integer, degree, which tells us the maximum order of the polynomial to use, as well as n_refine, which is the number of times we globally refine our mesh. The other main difference is that we use a FESystem object for our choice of basis functions. This is reminiscent of the mixed finite element method in step-20; in our case we use a FESystem of the form,

fe( FESystem<dim>(FE_DGQ<dim>(degree), dim), 1, FE_DGQ<dim>(degree), 1)

which tells us that the basis functions contain discontinuous polynomials of order degree in each of the dim dimensions for the vector field. For the scalar unknown we use a discontinuous polynomial of order degree. The LDG method for Poisson equations solves for both the primary variable as well as its gradient, just like the mixed finite element method. However, unlike the mixed method, the LDG method uses discontinuous polynomials to approximate both variables. The other difference between our constructor and that of step-40 is that we also instantiate our linear solver in the constructor definition.

  template <int dim>
  LDGPoissonProblem<dim>::
  LDGPoissonProblem(const unsigned int degree,
  const unsigned int n_refine)
  :
  degree(degree),
  n_refine(n_refine),
  triangulation(MPI_COMM_WORLD,
  typename Triangulation<dim>::MeshSmoothing
  (Triangulation<dim>::smoothing_on_refinement |
  Triangulation<dim>::smoothing_on_coarsening)),
  fe( FESystem<dim>(FE_DGQ<dim>(degree), dim), 1,
  FE_DGQ<dim>(degree), 1),
  dof_handler(triangulation),
  pcout(std::cout,
  Utilities::MPI::this_mpi_process(MPI_COMM_WORLD) == 0),
  computing_timer(MPI_COMM_WORLD,
  pcout,
  TimerOutput::summary,
  TimerOutput::wall_times),
  solver_control(1),
  solver(solver_control),
  rhs_function(),
  Dirichlet_bc_function()
  {
  }
 
  template <int dim>
  LDGPoissonProblem<dim>::
  ~LDGPoissonProblem()
  {
  dof_handler.clear();
  }
 

Make_grid

This function shows how to make a grid using local refinement and also shows how to label the boundaries using the defined enum.

  template <int dim>
  void
  LDGPoissonProblem<dim>::
  make_grid()
  {
  GridGenerator::hyper_cube(triangulation, 0, 1);
  triangulation.refine_global(n_refine);
 
  unsigned int local_refine = 2;
  for (unsigned int i = 0; i < local_refine; ++i)
  {
  typename parallel::distributed::Triangulation<dim>::active_cell_iterator
  cell = triangulation.begin_active(),
  endc = triangulation.end();
 

We loop over all the cells in the mesh and mark the appropriate cells for refinement. In this example we only choose cells which are near \(x=0\) and \(x=1\) in the domain. This was just to show that the LDG method works with local refinement; discussions on building more realistic refinement strategies can be found elsewhere in the deal.II documentation.

  for (; cell != endc; ++cell)
  {
  if ((cell->center()[1]) > 0.9 )
  {
  if ((cell->center()[0] > 0.9) || (cell->center()[0] < 0.1))
  cell->set_refine_flag();
  }
  }

Now that we have marked all the cells that we want to refine locally we can go ahead and refine them.

  triangulation.execute_coarsening_and_refinement();
  }
 

To label the boundary faces of the mesh with their type, i.e. Dirichlet or Neumann, we loop over all the cells in the mesh and then over all the faces of each cell. We then have to figure out which faces are on the boundary and set the boundary_id of all boundary faces to Dirichlet. We remark that one could easily set more complicated conditions where there are both Dirichlet and Neumann boundaries.

  typename parallel::distributed::Triangulation<dim>::cell_iterator
  cell = triangulation.begin(),
  endc = triangulation.end();
  for (; cell != endc; ++cell)
  {
  for (unsigned int face_no=0;
  face_no < GeometryInfo<dim>::faces_per_cell;
  face_no++)
  {
  if (cell->face(face_no)->at_boundary() )
  cell->face(face_no)->set_boundary_id(Dirichlet);
  }
  }
  }
 

make_dofs

This function is responsible for distributing the degrees of freedom (dofs) to the processors and allocating memory for the global system matrix, system_matrix, and global right hand side vector, system_rhs . The dofs are the unknown coefficients for the polynomial approximation of our solution to Poisson's equation in the scalar variable and its gradient.

  template <int dim>
  void
  LDGPoissonProblem<dim>::
  make_dofs()
  {
  TimerOutput::Scope t(computing_timer, "setup");
 

The first step in setting up our linear system is to distribute the degrees of freedom (dofs) across the processors; this is done with the distribute_dofs() method of the DoFHandler. We remark that this is the exact same function call as when using deal.II on a single machine: the DoFHandler automatically knows that we are in a distributed setting because it was instantiated with a distributed triangulation!

  dof_handler.distribute_dofs(fe);
 

We now renumber the dofs so that the vector of unknown dofs that we are solving for, locally_relevant_solution, corresponds to a vector of the form,

\( \left[\begin{matrix} \textbf{Q} \\ \textbf{U} \end{matrix}\right] \)

 
  DoFRenumbering::component_wise(dof_handler);

Now we get the locally owned dofs, that is, the dofs that are local to this processor. These dofs correspond to entries in the matrix and vectors that we will write to.

  const IndexSet &locally_owned_dofs = dof_handler.locally_owned_dofs();
 

In addition to the locally owned dofs, we also need the locally relevant dofs. These are dofs that we have read access to and need in order to do computations on our processor, but that we do not have the ability to write to.

  const IndexSet locally_relevant_dofs =
  DoFTools::extract_locally_relevant_dofs(dof_handler);
 
  const std::vector<types::global_dof_index> dofs_per_component =
  DoFTools::count_dofs_per_fe_component(dof_handler);
 

Discontinuous Galerkin methods are fantastic methods in part because many of the limitations of traditional finite element methods no longer exist. Specifically, the need to use constraint matrices in order to handle hanging nodes is no longer necessary. However, we will continue to use the constraints object in order to efficiently distribute local computations to the global system, i.e. to the system_matrix and system_rhs. Therefore, we just instantiate the constraints object, then clear and close it.

  constraints.clear();
  constraints.close();
 
 

Just like step-40 we create a dynamic sparsity pattern and distribute it to the processors. Notice how we do not have to explicitly mention that we are using a FESystem for a system of variables instead of a FE_DGQ for a scalar variable, or that we are using a distributed DoFHandler. All these specifics are taken care of under the hood by the deal.II library. In order to build the sparsity pattern we use the DoFTools::make_flux_sparsity_pattern function since we are using a DG method and need to take into account the DG fluxes in the sparsity pattern.

  DynamicSparsityPattern dsp(dof_handler.n_dofs());
  DoFTools::make_flux_sparsity_pattern(dof_handler,
  dsp);
 
  SparsityTools::distribute_sparsity_pattern(dsp,
  dof_handler.locally_owned_dofs(),
  MPI_COMM_WORLD,
  locally_relevant_dofs);
 

Here is one area that I had to learn the hard way. The local discontinuous Galerkin method, like the mixed method with the Raviart-Thomas element, is written in mixed form and will lead to a block-structured matrix. In step-20 we see that we initialize the system_matrix such that we explicitly declare it to be block-structured. It turns out there are reasons to do this when you are going to be using a Schur complement method to solve the system of equations. While the LDG method will lead to a block-structured matrix, we do not have to explicitly declare our matrix to be one. I found that most of the distributed linear solvers did not accept block-structured matrices, and since I was using a distributed direct solver it was unnecessary to explicitly use a block-structured matrix.

  system_matrix.reinit(locally_owned_dofs,
  locally_owned_dofs,
  dsp,
  MPI_COMM_WORLD);
 

The final note that I will make about this subroutine is that we initialize this processor's solution and the right-hand side vector the exact same way as we did in step-40. We should note that the locally_relevant_solution solution vector includes dofs that are locally relevant to our computations, while the system_rhs right-hand side vector will only include dofs that are locally owned by this processor.

  locally_relevant_solution.reinit(locally_relevant_dofs,
  MPI_COMM_WORLD);
 
  system_rhs.reinit(locally_owned_dofs,
  locally_relevant_dofs,
  MPI_COMM_WORLD,
  true);
 
  const unsigned int n_vector_field = dim * dofs_per_component[0];
  const unsigned int n_potential = dofs_per_component[dim];
 
  pcout << "Number of active cells : "
  << triangulation.n_global_active_cells()
  << std::endl
  << "Number of degrees of freedom: "
  << dof_handler.n_dofs()
  << " (" << n_vector_field << " + " << n_potential << ")"
  << std::endl;
  }
 

assemble_system

This is the function that will assemble the global system matrix and global right hand side vector for the LDG method. It starts out like many of the deal.ii tutorial codes: declaring quadrature formulas and UpdateFlags objects, as well as vectors that will hold the dof indices for the cells we are working on in the global system.

  template <int dim>
  void
  LDGPoissonProblem<dim>::
  assemble_system()
  {
  TimerOutput::Scope t(computing_timer, "assembly");
 
  QGauss<dim> quadrature_formula(fe.degree+1);
  QGauss<dim-1> face_quadrature_formula(fe.degree+1);
 
  const UpdateFlags update_flags = update_values
  | update_gradients
  | update_quadrature_points
  | update_JxW_values;
 
  const UpdateFlags face_update_flags = update_values
  | update_normal_vectors
  | update_quadrature_points
  | update_JxW_values;
 
  const unsigned int dofs_per_cell = fe.dofs_per_cell;
  std::vector<types::global_dof_index> local_dof_indices(dofs_per_cell);
  std::vector<types::global_dof_index>
  local_neighbor_dof_indices(dofs_per_cell);
 

We first remark that we have the FEValues object for the values of our cell basis functions as was done in most other examples. Because we are using discontinuous Galerkin methods we also introduce a FEFaceValues object, fe_face_values, for evaluating the basis functions on one side of an element face, as well as another FEFaceValues object, fe_neighbor_face_values, for evaluating the basis functions on the opposite side of the face, i.e. on the neighboring element's face. In addition, we introduce a FESubfaceValues object, fe_subface_values, that will be used for dealing with faces that have multiple refinement levels, i.e. hanging nodes. When we have to evaluate the fluxes across a face that has multiple refinement levels, we need to evaluate the fluxes across all of its children's faces; we'll explain this more when the time comes.

  FEValues<dim> fe_values(fe, quadrature_formula, update_flags);
 
  FEFaceValues<dim> fe_face_values(fe,face_quadrature_formula,
  face_update_flags);
 
  FEFaceValues<dim> fe_neighbor_face_values(fe,
  face_quadrature_formula,
  face_update_flags);
 
  FESubfaceValues<dim> fe_subface_values(fe, face_quadrature_formula,
  face_update_flags);
 

Here are the local (dense) matrix and right hand side vector for the solid integrals as well as the integrals on the boundaries in the local discontinuous Galerkin method. These terms will be built for each local element in the mesh and then distributed to the global system matrix and right hand side vector.

  FullMatrix<double> local_matrix(dofs_per_cell,dofs_per_cell);
  Vector<double> local_vector(dofs_per_cell);
 

The next four matrices are used to incorporate the flux integrals across interior faces of the mesh:

  FullMatrix<double> vi_ui_matrix(dofs_per_cell, dofs_per_cell);
  FullMatrix<double> vi_ue_matrix(dofs_per_cell, dofs_per_cell);
  FullMatrix<double> ve_ui_matrix(dofs_per_cell, dofs_per_cell);
  FullMatrix<double> ve_ue_matrix(dofs_per_cell, dofs_per_cell);

As explained in the section on the LDG method, we take our test function to be v, multiply it on the left side of our differential equation in u, and perform integration by parts as explained in the introduction. Using this notation for test and solution functions, the matrices below will then stand for (see the sketch after the list):

vi_ui - Taking the value of the test function from the interior of this cell's face and the solution function from the interior of this cell.

vi_ue - Taking the value of the test function from the interior of this cell's face and the solution function from the exterior of this cell.

ve_ui - Taking the value of the test function from the exterior of this cell's face and the solution function from the interior of this cell.

ve_ue - Taking the value of the test function from the exterior of this cell's face and the solution function from the exterior of this cell.
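Concretely, the row indices of each flux matrix come from the cell supplying the test function and the column indices from the cell supplying the solution function. For example, vi_ue_matrix is distributed with this cell's dof indices as rows and the neighbor's as columns, which is exactly what distribute_local_flux_to_global() below does:

  // Rows: test function dofs (this cell); columns: solution function
  // dofs (neighbor cell).
  constraints.distribute_local_to_global(vi_ue_matrix,
                                         local_dof_indices,
                                         local_neighbor_dof_indices,
                                         system_matrix);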

Now that we have gotten preliminary orders out of the way, we loop over all the cells and assemble the local system matrix and local right hand side vector using the DoFHandler::active_cell_iterator,

  typename DoFHandler<dim>::active_cell_iterator
  cell = dof_handler.begin_active(),
  endc = dof_handler.end();
 
  for (; cell!=endc; ++cell)
  {

Now, since we are working in a distributed setting, we can only work on cells and write to dofs in the system_matrix and rhs_vector that correspond to cells that are locally owned by this processor. We note that while we can only write to locally owned dofs, we will still use information from cells that are locally relevant. This is very much the same as in step-40.

  if (cell->is_locally_owned())
  {

We now assemble the local contributions to the system matrix that includes the solid integrals in the LDG method as well as the right hand side vector. This involves resetting the local matrix and vector to contain all zeros, reinitializing the FEValues object for this cell and then building the local_matrix and local_rhs vector.

  local_matrix = 0;
  local_vector = 0;
 
  fe_values.reinit(cell);
  assemble_cell_terms(fe_values,
  local_matrix,
  local_vector);
 

We remark that we need to get the local dof indices for this cell before we begin to compute the contributions from the numerical fluxes, i.e. the boundary conditions and interior fluxes.

  cell->get_dof_indices(local_dof_indices);
 

Now is where we start to loop over all the faces of the cell and construct the local contributions from the numerical fluxes. The numerical fluxes will be due to 3 contributions: the interior faces, the faces on the Neumann boundary, and the faces on the Dirichlet boundary. We instantiate a face_iterator to loop over all the faces of this cell and first see if the face is on the boundary. Notice how we do not reinitialize the fe_face_values object for the face until we know that we are actually on a face that lies on the boundary of the domain. The reason for doing this is computational efficiency: reinitializing the FEFaceValues for each face is expensive and we do not want to do it unless we are actually going to use it for computations. After this, we test whether the face is on a Dirichlet or a Neumann segment of the boundary and call the appropriate subroutine to assemble the contributions for that boundary. Note that this assembles the flux contribution in the local_matrix as well as the boundary condition that ends up in the local_vector.

  for (unsigned int face_no=0;
  face_no< GeometryInfo<dim>::faces_per_cell;
  face_no++)
  {
  typename DoFHandler<dim>::face_iterator face =
  cell->face(face_no);
 
  if (face->at_boundary() )
  {
  fe_face_values.reinit(cell, face_no);
 
  if (face->boundary_id() == Dirichlet)
  {

Here, notice that in order to assemble the flux due to the penalty term for the Dirichlet boundary condition we need the local cell diameter, and we can get that value for this specific cell with the following,

  double h = cell->diameter();
  assemble_Dirichlet_boundary_terms(fe_face_values,
  local_matrix,
  local_vector,
  h);
  }
  else if (face->boundary_id() == Neumann)
  {
  assemble_Neumann_boundary_terms(fe_face_values,
  local_matrix,
  local_vector);
  }
  else
  Assert(false, ExcNotImplemented() );
  }
  else
  {

At this point we know that the face we are on is an interior face. We can begin to assemble the interior flux matrices, but first we want to make sure that the neighbor cell to this face is a valid cell. Once we know that the neighbor is a valid cell, we get the neighbor cell that shares this cell's face.

  Assert(cell->neighbor(face_no).state() ==
  IteratorState::valid,
  ExcInternalError());
 
  typename DoFHandler<dim>::cell_iterator neighbor =
  cell->neighbor(face_no);
 

Now that we have the two cells whose face we want to compute the numerical flux across, we need to know if the face has been refined, i.e. if it has children faces. This occurs when one of the cells has a different level of refinement than the other; in that case there is a hanging node on this face. Hanging nodes are not a problem in DG methods; the only time we have to watch out for them is at this step, and as you will see the changes we have to make are minor.

  if (face->has_children())
  {

We now need to find the face of our neighbor cell such that neighbor(neigh_face_no) = cell(face_no).

  const unsigned int neighbor_face_no =
  cell->neighbor_of_neighbor(face_no);
 

Once we do this we then have to loop over all the subfaces (children faces) of our cell's face and compute the interior fluxes across the children faces and the neighbor's face.

  for (unsigned int subface_no=0;
  subface_no < face->n_children();
  ++subface_no)
  {

We then get the neighbor cell's subface that matches our cell face's subface and the specific subface number. We assert that the parent face cannot be more than one level of refinement above the child's face. This is because the deal.II library does not allow neighboring cells to have refinement levels that differ by more than one level.

  typename DoFHandler<dim>::cell_iterator neighbor_child =
  cell->neighbor_child_on_subface(face_no,
  subface_no);
 
  Assert(!neighbor_child->has_children(),
  ExcInternalError());
 

Now that we are ready to build the local flux matrices for this face, we reset them to zero and reinitialize the fe_subface_values object on this cell's subface and the fe_neighbor_face_values object on the neighbor_child's face.

  vi_ui_matrix = 0;
  vi_ue_matrix = 0;
  ve_ui_matrix = 0;
  ve_ue_matrix = 0;
 
  fe_subface_values.reinit(cell, face_no, subface_no);
  fe_neighbor_face_values.reinit(neighbor_child,
  neighbor_face_no);
 

In addition, we get the minimum of the diameters of the two cells to include in the penalty term.

  double h = std::min(cell->diameter(),
  neighbor_child->diameter());
 

We now finally assemble the interior fluxes for the case of a face which has been refined using exactly the same subroutine as we do when both cells have the same refinement level.

  assemble_flux_terms(fe_subface_values,
  fe_neighbor_face_values,
  vi_ui_matrix,
  vi_ue_matrix,
  ve_ui_matrix,
  ve_ue_matrix,
  h);
 

Now all that is left to be done before distributing the local flux matrices to the global system is to get the neighbor child's dof indices.

  neighbor_child->get_dof_indices(local_neighbor_dof_indices);
 

Once we have this cell's dof indices and the neighboring cell's dof indices, we can use the constraints object to distribute the local flux matrices to the global system matrix. This is done through the class function distribute_local_flux_to_global().

  distribute_local_flux_to_global(
  vi_ui_matrix,
  vi_ue_matrix,
  ve_ui_matrix,
  ve_ue_matrix,
  local_dof_indices,
  local_neighbor_dof_indices);
  }
  }
  else
  {

At this point we know that this cell and the neighbor of this cell are on the same refinement level, and the work to assemble the interior flux matrices is very much the same as before. In fact it is much simpler, since we do not have to loop through the subfaces. However, we have to check that we do not compute the same contribution twice. This would happen because we are looping over all the faces of all the cells in the mesh and assembling the interior flux matrices for each face. We avoid assembling the interior flux matrices twice by only doing the computation from the cell of the pair with the lower CellId.

  if (neighbor->level() == cell->level() &&
  cell->id() < neighbor->id())
  {

Here we find the neighbor face such that neighbor(neigh_face_no) = cell(face_no). In addition, we reinitialize the FEFaceValues and the neighbor cell's FEFaceValues on their respective cells' faces, as well as get the minimum diameter of this cell and the neighbor cell and assign it to h.

  const unsigned int neighbor_face_no =
  cell->neighbor_of_neighbor(face_no);
 
  vi_ui_matrix = 0;
  vi_ue_matrix = 0;
  ve_ui_matrix = 0;
  ve_ue_matrix = 0;
 
  fe_face_values.reinit(cell, face_no);
  fe_neighbor_face_values.reinit(neighbor,
  neighbor_face_no);
 
  double h = std::min(cell->diameter(),
  neighbor->diameter());
 

Just as before we assemble the interior fluxes using the assemble_flux_terms subroutine, get the neighbor cell's face dof indices and use the constraint matrix to distribute the local flux matrices to the global system_matrix using the class function distribute_local_flux_to_global()

  assemble_flux_terms(fe_face_values,
  fe_neighbor_face_values,
  vi_ui_matrix,
  vi_ue_matrix,
  ve_ui_matrix,
  ve_ue_matrix,
  h);
 
  neighbor->get_dof_indices(local_neighbor_dof_indices);
 
  distribute_local_flux_to_global(
  vi_ui_matrix,
  vi_ue_matrix,
  ve_ui_matrix,
  ve_ue_matrix,
  local_dof_indices,
  local_neighbor_dof_indices);
 
 
  }
  }
  }
  }
 
 

Now that we have looped over all the faces of this cell and computed and distributed the local flux matrices to the system_matrix, we can finally distribute the cell's local_matrix and local_vector contributions to the global system matrix and global right-hand side vector. We remark that we have to wait until this point to distribute the local_matrix and system_rhs to the global system, because in looping over the faces, the faces on the boundary of the domain contribute to the local_matrix and system_rhs. We could distribute the local contributions for each component separately, but writing to the distributed sparse matrix and vector is expensive and we want to minimize the number of times we do so.

  constraints.distribute_local_to_global(local_matrix,
  local_dof_indices,
  system_matrix);
 
  constraints.distribute_local_to_global(local_vector,
  local_dof_indices,
  system_rhs);
 
  }
  }
 

We need to synchronize assembly of our global system matrix and global right hand side vector with all the other processors and use the compress() function to do this. This was discussed in detail in step-40.

  system_matrix.compress(VectorOperation::add);
  system_rhs.compress(VectorOperation::add);
  }
 
 

assemble_cell_terms

This function deals with constructing the local matrix due to the solid integrals over each element and is very similar to the other examples in the deal.II tutorials.

  template<int dim>
  void
  LDGPoissonProblem<dim>::
  assemble_cell_terms(
  const FEValues<dim> &cell_fe,
  FullMatrix<double> &cell_matrix,
  Vector<double> &cell_vector)
  {
  const unsigned int dofs_per_cell = cell_fe.dofs_per_cell;
  const unsigned int n_q_points = cell_fe.n_quadrature_points;
 
  const FEValuesExtractors::Vector VectorField(0);
  const FEValuesExtractors::Scalar Potential(dim);
 
  std::vector<double> rhs_values(n_q_points);
 

We first get the value of the right hand side function evaluated at the quadrature points in the cell.

  rhs_function.value_list(cell_fe.get_quadrature_points(),
  rhs_values);
 

Now, we loop over the quadrature points in the cell and then loop over the degrees of freedom and perform quadrature to approximate the integrals.

  for (unsigned int q=0; q<n_q_points; ++q)
  {
  for (unsigned int i=0; i<dofs_per_cell; ++i)
  {
  const Tensor<1, dim> psi_i_field = cell_fe[VectorField].value(i,q);
  const double div_psi_i_field = cell_fe[VectorField].divergence(i,q);
  const double psi_i_potential = cell_fe[Potential].value(i,q);
  const Tensor<1, dim> grad_psi_i_potential = cell_fe[Potential].gradient(i,q);
 
  for (unsigned int j=0; j<dofs_per_cell; ++j)
  {
  const Tensor<1, dim> psi_j_field = cell_fe[VectorField].value(j,q);
  const double psi_j_potential = cell_fe[Potential].value(j,q);
 

This computation corresponds to assembling the local system matrix for the integral over an element,

\(\int_{\Omega_{e}} \left(\textbf{w} \cdot \textbf{q} - \nabla \cdot \textbf{w} u - \nabla w \cdot \textbf{q} \right) dx \)

  cell_matrix(i,j) += ( (psi_i_field * psi_j_field)
  -
  (div_psi_i_field * psi_j_potential)
  -
  (grad_psi_i_potential * psi_j_field)
  ) * cell_fe.JxW(q);
  }
 

And this local right hand vector corresponds to the integral over the element cell,

\( \int_{\Omega_{e}} w \, f(\textbf{x}) \, dx \)

  cell_vector(i) += psi_i_potential *
  rhs_values[q] *
  cell_fe.JxW(q);
  }
  }
  }
 

assemble_Dirichlet_boundary_terms

Here we have the function that builds the local_matrix contribution and the local right-hand side vector, local_vector, for the Dirichlet boundary conditions.

  template<int dim>
  void
  LDGPoissonProblem<dim>::
  assemble_Dirichlet_boundary_terms(
  const FEFaceValues<dim> &face_fe,
  FullMatrix<double> &local_matrix,
  Vector<double> &local_vector,
  const double &h)
  {
  const unsigned int dofs_per_cell = face_fe.dofs_per_cell;
  const unsigned int n_q_points = face_fe.n_quadrature_points;
 
  const FEValuesExtractors::Vector VectorField(0);
  const FEValuesExtractors::Scalar Potential(dim);
 
  std::vector<double> Dirichlet_bc_values(n_q_points);
 

In order to evaluate the flux on the Dirichlet boundary face we first get the value of the Dirichlet boundary function on the quadrature points of the face. Then we loop over all the quadrature points and degrees of freedom and approximate the integrals on the Dirichlet boundary element faces.

  Dirichlet_bc_function.value_list(face_fe.get_quadrature_points(),
  Dirichlet_bc_values);
 
  for (unsigned int q=0; q<n_q_points; ++q)
  {
  for (unsigned int i=0; i<dofs_per_cell; ++i)
  {
  const Tensor<1, dim> psi_i_field = face_fe[VectorField].value(i,q);
  const double psi_i_potential = face_fe[Potential].value(i,q);
 
  for (unsigned int j=0; j<dofs_per_cell; ++j)
  {
  const Tensor<1, dim> psi_j_field = face_fe[VectorField].value(j,q);
  const double psi_j_potential = face_fe[Potential].value(j,q);
 

We compute contribution for the flux \(\widehat{q}\) on the Dirichlet boundary which enters our system matrix as,

\( \int_{\text{face}} w \, ( \textbf{n} \cdot \textbf{q} + \sigma u) ds \)

  local_matrix(i,j) += psi_i_potential * (
  face_fe.normal_vector(q) *
  psi_j_field
  +
  (penalty/h) *
  psi_j_potential) *
  face_fe.JxW(q);
 
  }
 

We also compute the contribution for the flux for \(\widehat{u}\) on the Dirichlet boundary which is the Dirichlet boundary condition function and enters the right hand side vector as

\(\int_{\text{face}} (-\textbf{w} \cdot \textbf{n} + \sigma w) \, u_{D} ds \)

  local_vector(i) += (-1.0 * psi_i_field *
  face_fe.normal_vector(q)
  +
  (penalty/h) *
  psi_i_potential) *
  Dirichlet_bc_values[q] *
  face_fe.JxW(q);
  }
  }
  }
 

assemble_Neumann_boundary_terms

Here we have the function that builds the local_matrix and local_vector for the Neumann boundary conditions.

  template<int dim>
  void
  LDGPoissonProblem<dim>::
  assemble_Neumann_boundary_terms(
  const FEFaceValues<dim> &face_fe,
  FullMatrix<double> &local_matrix,
  Vector<double> &local_vector)
  {
  const unsigned int dofs_per_cell = face_fe.dofs_per_cell;
  const unsigned int n_q_points = face_fe.n_quadrature_points;
 
  const FEValuesExtractors::Vector VectorField(0);
  const FEValuesExtractors::Scalar Potential(dim);
 

In order to evaluate the flux on the Neumann boundary face we first get the value of the Neumann boundary function on the quadrature points of the face. Then we loop over all the quadrature points and degrees of freedom and approximate the integrals on the Neumann boundary element faces.

  std::vector<double > Neumann_bc_values(n_q_points);
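  // Note: Neumann_bc_values is default-initialized to all zeros, so as
  // written this routine assembles a homogeneous Neumann condition. A
  // nonzero g_N would be evaluated into this vector here, analogously to
  // Dirichlet_bc_function.value_list() in the Dirichlet routine above.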
 
  for (unsigned int q=0; q<n_q_points; ++q)
  {
  for (unsigned int i=0; i<dofs_per_cell; ++i)
  {
  const Tensor<1, dim> psi_i_field = face_fe[VectorField].value(i,q);
  const double psi_i_potential = face_fe[Potential].value(i,q);
 
  for (unsigned int j=0; j<dofs_per_cell; ++j)
  {
 
  const double psi_j_potential = face_fe[Potential].value(j,q);
 

We compute contribution for the flux \(\widehat{u}\) on the Neumann boundary which enters our system matrix as,

\(\int_{\text{face}} \textbf{w} \cdot \textbf{n} \, u \, ds \)

  local_matrix(i,j) += psi_i_field *
  face_fe.normal_vector(q) *
  psi_j_potential *
  face_fe.JxW(q);
 
  }
 

We also compute the contribution for the flux for \(\widehat{q}\) on the Neumann boundary, which is the Neumann boundary condition and enters the right hand side vector as

\(\int_{\text{face}} -w \, g_{N} \, ds\)

  local_vector(i) += -psi_i_potential *
  Neumann_bc_values[q] *
  face_fe.JxW(q);
  }
  }
  }
 

assemble_flux_terms

Now we finally get to the function which builds the interior fluxes. This is a rather long function and we will describe what is going on in detail.

  template<int dim>
  void
  LDGPoissonProblem<dim>::
  assemble_flux_terms(
  const FEFaceValuesBase<dim> &fe_face_values,
  const FEFaceValuesBase<dim> &fe_neighbor_face_values,
  FullMatrix<double> &vi_ui_matrix,
  FullMatrix<double> &vi_ue_matrix,
  FullMatrix<double> &ve_ui_matrix,
  FullMatrix<double> &ve_ue_matrix,
  const double &h)
  {
  const unsigned int n_face_points = fe_face_values.n_quadrature_points;
  const unsigned int dofs_this_cell = fe_face_values.dofs_per_cell;
  const unsigned int dofs_neighbor_cell = fe_neighbor_face_values.dofs_per_cell;
 
  const FEValuesExtractors::Vector VectorField(0);
  const FEValuesExtractors::Scalar Potential(dim);
 

The first thing we do after the boilerplate is define the unit vector \(\boldsymbol \beta\) that is used in defining the LDG/alternating fluxes.

  Point<dim> beta;
  for (int i=0; i<dim; ++i)
  beta(i) = 1.0;
  beta /= sqrt(beta.square() );
 

Now we loop over all the quadrature points on the element face and loop over all the degrees of freedom and approximate the following flux integrals.

  for (unsigned int q=0; q<n_face_points; ++q)
  {
  for (unsigned int i=0; i<dofs_this_cell; ++i)
  {
  const Tensor<1,dim> psi_i_field_minus =
  fe_face_values[VectorField].value(i,q);
  const double psi_i_potential_minus =
  fe_face_values[Potential].value(i,q);
 
  for (unsigned int j=0; j<dofs_this_cell; ++j)
  {
  const Tensor<1,dim> psi_j_field_minus =
  fe_face_values[VectorField].value(j,q);
  const double psi_j_potential_minus =
  fe_face_values[Potential].value(j,q);
 

We compute the flux matrix where the test function's as well as the solution function's values are taken from the interior as,

\(\int_{\text{face}} \left( \frac{1}{2} \, \textbf{n}^{-} \cdot ( \textbf{w}^{-} u^{-} + w^{-} \textbf{q}^{-}) + \boldsymbol \beta \cdot \textbf{w}^{-} u^{-} - w^{-} \boldsymbol \beta \cdot \textbf{q}^{-} + \sigma w^{-} \, u^{-} \right) ds\)

  vi_ui_matrix(i,j) += (0.5 * (
  psi_i_field_minus *
  fe_face_values.normal_vector(q) *
  psi_j_potential_minus
  +
  psi_i_potential_minus *
  fe_face_values.normal_vector(q) *
  psi_j_field_minus )
  +
  beta *
  psi_i_field_minus *
  psi_j_potential_minus
  -
  beta *
  psi_i_potential_minus *
  psi_j_field_minus
  +
  (penalty/h) *
  psi_i_potential_minus *
  psi_j_potential_minus
  ) *
  fe_face_values.JxW(q);
  }
 
  for (unsigned int j=0; j<dofs_neighbor_cell; ++j)
  {
  const Tensor<1,dim> psi_j_field_plus =
  fe_neighbor_face_values[VectorField].value(j,q);
  const double psi_j_potential_plus =
  fe_neighbor_face_values[Potential].value(j,q);
 

We compute the flux matrix where the test function is from the interior of this element's face and the solution function is taken from the exterior. This corresponds to the computation,

\(\int_{\text{face}} \left( \frac{1}{2} \, \textbf{n}^{-} \cdot ( \textbf{w}^{-} u^{+} + w^{-} \textbf{q}^{+}) - \boldsymbol \beta \cdot \textbf{w}^{-} u^{+} + w^{-} \boldsymbol \beta \cdot \textbf{q}^{+} - \sigma w^{-} \, u^{+} \right) ds \)

  vi_ue_matrix(i,j) += ( 0.5 * (
  psi_i_field_minus *
  fe_face_values.normal_vector(q) *
  psi_j_potential_plus
  +
  psi_i_potential_minus *
  fe_face_values.normal_vector(q) *
  psi_j_field_plus )
  -
  beta *
  psi_i_field_minus *
  psi_j_potential_plus
  +
  beta *
  psi_i_potential_minus *
  psi_j_field_plus
  -
  (penalty/h) *
  psi_i_potential_minus *
  psi_j_potential_plus
  ) *
  fe_face_values.JxW(q);
  }
  }
 
  for (unsigned int i=0; i<dofs_neighbor_cell; ++i)
  {
  const Tensor<1,dim> psi_i_field_plus =
  fe_neighbor_face_values[VectorField].value(i,q);
  const double psi_i_potential_plus =
  fe_neighbor_face_values[Potential].value(i,q);
 
  for (unsigned int j=0; j<dofs_this_cell; ++j)
  {
  const Tensor<1,dim> psi_j_field_minus =
  fe_face_values[VectorField].value(j,q);
  const double psi_j_potential_minus =
  fe_face_values[Potential].value(j,q);
 
 

We compute the flux matrix where the test function is from the exterior of this element's face and the solution function is taken from the interior. This corresponds to the computation,

\( \int_{\text{face}} \left( -\frac{1}{2}\, \textbf{n}^{-} \cdot (\textbf{w}^{+} u^{-} + w^{+} \textbf{q}^{-} ) - \boldsymbol \beta \cdot \textbf{w}^{+} u^{-} + w^{+} \boldsymbol \beta \cdot \textbf{q}^{-} - \sigma w^{+} u^{-} \right) ds \)

  ve_ui_matrix(i,j) += (-0.5 * (
  psi_i_field_plus *
  fe_face_values.normal_vector(q) *
  psi_j_potential_minus
  +
  psi_i_potential_plus *
  fe_face_values.normal_vector(q) *
  psi_j_field_minus)
  -
  beta *
  psi_i_field_plus *
  psi_j_potential_minus
  +
  beta *
  psi_i_potential_plus *
  psi_j_field_minus
  -
  (penalty/h) *
  psi_i_potential_plus *
  psi_j_potential_minus
  ) *
  fe_face_values.JxW(q);
  }
 
  for (unsigned int j=0; j<dofs_neighbor_cell; ++j)
  {
  const Tensor<1,dim> psi_j_field_plus =
  fe_neighbor_face_values[VectorField].value(j,q);
  const double psi_j_potential_plus =
  fe_neighbor_face_values[Potential].value(j,q);
 

And lastly we compute the flux matrix where the test function and solution function are taken from the exterior cell to this face. This corresponds to the computation,

\(\int_{\text{face}} \left( -\frac{1}{2}\, \textbf{n}^{-} \cdot ( \textbf{w}^{+} u^{+} + w^{+} \textbf{q}^{+} ) + \boldsymbol \beta \cdot \textbf{w}^{+} u^{+} - w^{+} \boldsymbol \beta \cdot \textbf{q}^{+} + \sigma w^{+} u^{+} \right) ds \)

  ve_ue_matrix(i,j) += (-0.5 * (
  psi_i_field_plus *
  fe_face_values.normal_vector(q) *
  psi_j_potential_plus
  +
  psi_i_potential_plus *
  fe_face_values.normal_vector(q) *
  psi_j_field_plus )
  +
  beta *
  psi_i_field_plus *
  psi_j_potential_plus
  -
  beta *
  psi_i_potential_plus *
  psi_j_field_plus
  +
  (penalty/h) *
  psi_i_potential_plus *
  psi_j_potential_plus
  ) *
  fe_face_values.JxW(q);
  }
 
  }
  }
  }
 

distribute_local_flux_to_global

In this function we use the ConstraintMatrix to distribute the local flux matrices to the global system matrix. Since I have to do this twice while assembling the system matrix, I wrote a function for it rather than repeating the code. The reader should take special note of which matrices we are distributing and of the order in which we pass the dof index vectors. In distributing the first matrix, i.e. vi_ui_matrix, both the test function and the solution function values are taken from the interior of this cell, and therefore we only need local_dof_indices, since it contains the dof indices of this cell. When we distribute the second matrix, vi_ue_matrix, the test function is taken from the interior of this cell while the solution function is taken from the exterior, i.e. the neighbor cell. Notice that the order of the dof index vectors matches this pattern: first local_dof_indices, which is local to this cell, and then local_neighbor_dof_indices, which is local to the neighbor cell. The order in which we pass the dof indices for the matrices is paramount to constructing our global system matrix properly. The ordering of the last two matrices follows the same logic as the first two we discussed.
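Schematically, if \(I\) denotes the dofs in local_dof_indices and \(E\) those in local_neighbor_dof_indices, the four local matrices add the face contribution into the global matrix \(A\) according to the block pattern

\begin{align} \begin{pmatrix} \text{vi\_ui\_matrix} & \text{vi\_ue\_matrix} \\ \text{ve\_ui\_matrix} & \text{ve\_ue\_matrix} \end{pmatrix} \longrightarrow \begin{pmatrix} A(I,I) & A(I,E) \\ A(E,I) & A(E,E) \end{pmatrix}. \nonumber \end{align}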

  template<int dim>
  void
  LDGPoissonProblem<dim>::
  distribute_local_flux_to_global(
  const FullMatrix<double> &vi_ui_matrix,
  const FullMatrix<double> &vi_ue_matrix,
  const FullMatrix<double> &ve_ui_matrix,
  const FullMatrix<double> &ve_ue_matrix,
  const std::vector<types::global_dof_index> &local_dof_indices,
  const std::vector<types::global_dof_index> &local_neighbor_dof_indices)
  {
  constraints.distribute_local_to_global(vi_ui_matrix,
  local_dof_indices,
  system_matrix);
 
  constraints.distribute_local_to_global(vi_ue_matrix,
  local_dof_indices,
  local_neighbor_dof_indices,
  system_matrix);
 
  constraints.distribute_local_to_global(ve_ui_matrix,
  local_neighbor_dof_indices,
  local_dof_indices,
  system_matrix);
 
  constraints.distribute_local_to_global(ve_ue_matrix,
  local_neighbor_dof_indices,
  system_matrix);
  }
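
For context, once the four local flux matrices of an interior face have been filled in assemble_system(), this helper is invoked along the following lines; this is just a restatement of the call matching the signature above.

  // Distribute all four face matrices in one call; the dof index
  // vectors select the global rows (test functions) and the global
  // columns (solution functions) as discussed above.
  distribute_local_flux_to_global(vi_ui_matrix,
  vi_ue_matrix,
  ve_ui_matrix,
  ve_ue_matrix,
  local_dof_indices,
  local_neighbor_dof_indices);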
 
 
 

solve

As mentioned earlier, I used a direct solver to solve the linear system of equations resulting from the LDG method applied to the Poisson equation. One could also use an iterative solver, but then one needs a preconditioner, and that was something I did not want to get into. For information on preconditioners for the LDG method see this paper. The use of a direct solver here is somewhat of a limitation: the built-in distributed direct solver in Trilinos reduces everything to one processor, solves the system, and then distributes everything back out to the other processors. However, by linking to more advanced direct solvers through Trilinos one can accomplish fully distributed computations, and not much about the following function calls would change.
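As a hedged sketch only, not part of this program's actual setup: if Trilinos was configured with a distributed-memory direct solver such as SuperLU_DIST, one could select it through the AdditionalData object of TrilinosWrappers::SolverDirect, for example,

  // Sketch, assuming Trilinos was built with SuperLU_DIST; the
  // default solver type, "Amesos_Klu", gathers the whole system on
  // one processor before factoring it.
  SolverControl solver_control(1, 0);
  TrilinosWrappers::SolverDirect::AdditionalData additional_data(
  false, // do not print solver details
  "Amesos_Superludist"); // fully distributed direct solver
  TrilinosWrappers::SolverDirect solver(solver_control, additional_data);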

  template<int dim>
  void
  LDGPoissonProblem<dim>::
  solve()
  {
  TimerOutput::Scope t(computing_timer, "solve");
 

As in step-40, in order to perform a linear solve we need a solution vector with no overlap across the processors. We create one by instantiating the completely_distributed_solution vector using the copy constructor with the global right-hand side vector, system_rhs, which is itself a completely distributed vector.

  TrilinosWrappers::MPI::Vector
  completely_distributed_solution(system_rhs);
 

Now we can perform the solve with the system matrix, the completely distributed right-hand side vector, and the completely distributed solution vector.

  solver.solve(system_matrix,
  completely_distributed_solution,
  system_rhs);
 

We now distribute the constraints of our system onto the completely distributed solution vector; in our case with the LDG method, however, there are none, since the discontinuous finite element spaces require no hanging node constraints and the boundary conditions are imposed weakly through the numerical fluxes.

  constraints.distribute(completely_distributed_solution);
 

Lastly, we copy the completely distributed solution vector, completely_distributed_solution, to the solution vector that has some overlap between processors, locally_relevant_solution. We need the overlapping portions of our solution in order to do computations involving the solution later in the code or in post-processing.

  locally_relevant_solution = completely_distributed_solution;
  }
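
For reference, a ghosted vector like locally_relevant_solution is typically initialized during the setup phase from both the locally owned and the locally relevant index sets. The following sketch uses the step-40 naming convention; the index set variable names are assumptions for illustration, not code from this program.

  // Sketch, following step-40 conventions: the ghosted vector stores
  // the locally owned entries plus read-only copies of the locally
  // relevant (ghost) entries owned by neighboring processors.
  locally_relevant_solution.reinit(locally_owned_dofs,
  locally_relevant_dofs,
  MPI_COMM_WORLD);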
 

output_results

This function deals with writing the results to disk in parallel. It is almost exactly the same as in step-40 and we won't go into it in detail. It is noteworthy that in step-40 the output is only the scalar solution, while in our situation we are outputting both the scalar solution and the vector field solution. The only difference between this function and the one in step-40 is in the solution_names vector, where we have to add the gradient components. Everything else is taken care of by the deal.II library!

  template<int dim>
  void
  LDGPoissonProblem<dim>::
  output_results() const
  {
  std::vector<std::string> solution_names;
  switch (dim)
  {
  case 1:
  solution_names.push_back("u");
  solution_names.push_back("du/dx");
  break;
 
  case 2:
  solution_names.push_back("grad(u)_x");
  solution_names.push_back("grad(u)_y");
  solution_names.push_back("u");
  break;
 
  case 3:
  solution_names.push_back("grad(u)_x");
  solution_names.push_back("grad(u)_y");
  solution_names.push_back("grad(u)_z");
  solution_names.push_back("u");
  break;
 
  default:
  Assert(false, ExcNotImplemented() );
  }
 
  DataOut<dim> data_out;
  data_out.attach_dof_handler(dof_handler);
  data_out.add_data_vector(locally_relevant_solution,
  solution_names);
 
  Vector<float> subdomain(triangulation.n_active_cells());
 
  for (unsigned int i=0; i<subdomain.size(); ++i)
  subdomain(i) = triangulation.locally_owned_subdomain();
 
  data_out.add_data_vector(subdomain,"subdomain");
 
  data_out.build_patches();
 
  const std::string filename = ("solution." +
  Utilities::int_to_string(
  triangulation.locally_owned_subdomain(),4));
 
  std::ofstream output((filename + ".vtu").c_str());
  data_out.write_vtu(output);
 
  if (Utilities::MPI::this_mpi_process(MPI_COMM_WORLD) == 0 )
  {
  std::vector<std::string> filenames;
  for (unsigned int i=0;
  i < Utilities::MPI::n_mpi_processes(MPI_COMM_WORLD);
  i++)
  {
  filenames.push_back("solution." +
  Utilities::int_to_string(i,4) +
  ".vtu");
  }
  std::ofstream master_output("solution.pvtu");
  data_out.write_pvtu_record(master_output, filenames);
  }
  }
 
 

run

The only public function of this class is pretty much exactly the same as in all the other deal.II examples, except that here I set the constant in the DG penalty ( \(\tilde{\sigma}\)) to be 1.

  template<int dim>
  void
  LDGPoissonProblem<dim>::
  run()
  {
  penalty = 1.0;
  make_grid();
  make_dofs();
  assemble_system();
  solve();
  output_results();
  }
 
 

main

Here is the main function of our program. Since it is nearly exactly the same as step-40 and many of the other examples, I won't elaborate on it.

  int main(int argc, char *argv[])
  {
 
  try
  {
  using namespace dealii;
 
 
  deallog.depth_console(0);
 
  Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv,
  numbers::invalid_unsigned_int);
 
  unsigned int degree = 1;
  unsigned int n_refine = 6;
  LDGPoissonProblem<2> Poisson(degree, n_refine);
  Poisson.run();
 
  }
  catch (std::exception &exc)
  {
  std::cerr << std::endl << std::endl
  << "----------------------------------------------------"
  << std::endl;
  std::cerr << "Exception on processing: " << std::endl
  << exc.what() << std::endl
  << "Aborting!" << std::endl
  << "----------------------------------------------------"
  << std::endl;
 
  return 1;
  }
  catch (...)
  {
  std::cerr << std::endl << std::endl
  << "----------------------------------------------------"
  << std::endl;
  std::cerr << "Unknown exception!" << std::endl
  << "Aborting!" << std::endl
  << "----------------------------------------------------"
  << std::endl;
  return 1;
  }
  return 0;
  }