The step-55 tutorial program

This tutorial depends on step-40, step-22.

Table of contents
  1. Introduction
  2. The commented program
  3. Results
  4. The plain program

This program was contributed by Timo Heister. Special thanks to Sander Rhebergen for the inspiration to finally write this tutorial.

This material is based upon work partially supported by National Science Foundation grant DMS1522191 and the Computational Infrastructure in Geodynamics initiative (CIG), through the National Science Foundation under Award No. EAR-0949446 and The University of California-Davis.

The authors would like to thank the Isaac Newton Institute for Mathematical Sciences, Cambridge, for support and hospitality during the programme Melt in the Mantle where work on this tutorial was undertaken. This work was supported by EPSRC grant no EP/K032208/1.

Note
As a prerequisite of this program, you need to have PETSc or Trilinos and the p4est library installed. The installation of deal.II together with these additional libraries is described in the README file.

Introduction

Building on step-40, this tutorial shows how to solve linear PDEs with several components in parallel using MPI with PETSc or Trilinos for the linear algebra. For this, we return to the Stokes equations as discussed in step-22. The motivation for writing this tutorial is to provide an intermediate step (pun intended) between step-40 (parallel Laplace) and step-32 (parallel coupled Stokes with Boussinesq for a time dependent problem).

The learning outcomes for this tutorial are:

  • You are able to solve PDEs with several variables in parallel and can apply this to different problems.
  • You understand the concept of optimal preconditioners and are able to check this for a particular problem.
  • You are able to construct manufactured solutions using the free computer algebra system SymPy (https://sympy.org).
  • You can implement various other tasks for parallel programs: error computation, writing graphical output, etc.
  • You can visualize vector fields, stream lines, and contours of vector quantities.

We are solving for a velocity \(\textbf{u}\) and pressure \(p\) that satisfy the Stokes equation, which reads

\begin{eqnarray*} - \triangle \textbf{u} + \nabla p &=& \textbf{f}, \\ -\textrm{div}\; \textbf{u} &=& 0. \end{eqnarray*}

Optimal preconditioners

Make sure that you read (even better: try) what is described in "Block Schur complement preconditioner" in the "Possible Extensions" section of step-22. As described there, we are going to solve the block system using a Krylov method and a block preconditioner.

Our goal here is to construct a very simple (maybe the simplest?) optimal preconditioner for the linear system. A preconditioner is called "optimal" or "of optimal complexity" if the number of iterations of the preconditioned system is independent of the mesh size \(h\). You can extend that definition to also require independence of the number of processors used (we will discuss that in the results section), of the computational domain and the mesh quality, of the test case itself, of the polynomial degree of the finite element space, and more.

Why is a constant number of iterations considered to be "optimal"? Assume the discretized PDE gives a linear system with N unknowns. Because the matrix coming from the FEM discretization is sparse, a matrix-vector product can be done in O(N) time. Applying the preconditioner can also cost at best O(N) (achievable, for example, with multigrid methods). If the number of iterations required to solve the linear system is independent of \(h\) (and therefore of N), the total cost of solving the system will be O(N). It is not possible to beat this complexity, because even looking at all the entries of the right-hand side already takes O(N) time. For more information see [72], Chapter 2.5 (Multigrid).

The preconditioner described here is even simpler than the one described in step-22 and will typically require more iterations and consequently more time to solve. When considering preconditioners, optimality is not the only important metric. But an optimal, expensive preconditioner is typically more desirable than a cheaper, non-optimal one: as the mesh size becomes smaller and smaller and the linear problems become bigger and bigger, the former will eventually beat the latter.

The solver and preconditioner

We precondition the linear system

\begin{eqnarray*} \left(\begin{array}{cc} A & B^T \\ B & 0 \end{array}\right) \left(\begin{array}{c} U \\ P \end{array}\right) = \left(\begin{array}{c} F \\ 0 \end{array}\right), \end{eqnarray*}

with the block diagonal preconditioner

\begin{eqnarray*} P^{-1} = \left(\begin{array}{cc} A & 0 \\ 0 & S \end{array}\right)^{-1} = \left(\begin{array}{cc} A^{-1} & 0 \\ 0 & S^{-1} \end{array}\right), \end{eqnarray*}

where \(S=-BA^{-1} B^T\) is the Schur complement.

With this choice of \(P\), assuming that we handle \(A^{-1}\) and \(S^{-1}\) exactly (which is an "idealized" situation), the preconditioned linear system has three distinct eigenvalues independent of \(h\) and is therefore "optimal". See section 6.2.1 (especially p. 292) in [72]. For comparison, using the ideal version of the upper block-triangular preconditioner in step-22 (also used in step-56) would have all eigenvalues be equal to one.
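To see where these three eigenvalues come from, consider the idealized setting and use the sign-flipped (positive definite) Schur complement \(\widetilde S = B A^{-1} B^T\) in the preconditioner, which is what MINRES requires anyway. For an eigenpair with eigenvalue \(\lambda\) and blocks \(U\), \(P\), the two block rows of the preconditioned eigenvalue problem read \(A^{-1} B^T P = (\lambda - 1)\, U\) and \(B U = \lambda \widetilde S P\). Eliminating \(P\) gives \(\lambda^2 - \lambda - 1 = 0\) whenever \(B U \neq 0\), and \(\lambda = 1\) otherwise, so

\begin{eqnarray*} \lambda \in \left\{ 1, \ \frac{1+\sqrt{5}}{2}, \ \frac{1-\sqrt{5}}{2} \right\}, \end{eqnarray*}

independently of \(h\). (This is only a sketch of the argument made precise in [72].)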

We will use approximations of the inverse operations in \(P^{-1}\) that are (nearly) independent of \(h\). In this situation, one can again show that the eigenvalues are independent of \(h\). For the Krylov method we choose MINRES, which is attractive for the analysis (the iteration count is proven to be independent of \(h\), see the remainder of chapter 6.2.1 in [72]), great from the computational standpoint (simpler and cheaper than GMRES, for example), and applicable (matrix and preconditioner are symmetric).

For the approximations we will use a CG solve with the mass matrix in the pressure space for approximating the action of \(S^{-1}\). Note that the mass matrix is spectrally equivalent to \(S\). We can expect the number of CG iterations to be independent of \(h\), even with a simple preconditioner like ILU.
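Spectral equivalence here means that there are constants \(c_1, c_2 > 0\), independent of \(h\), such that

\begin{eqnarray*} c_1 \le \frac{q^T \widetilde S q}{q^T M_p q} \le c_2 \qquad \text{for all } q \neq 0, \end{eqnarray*}

where \(M_p\) is the pressure mass matrix and \(\widetilde S = B A^{-1} B^T\) (in the code, \(M_p\) is additionally scaled by \(1/\nu\) to account for the viscosity). This is why replacing the exact \(S^{-1}\) by a few CG iterations with the mass matrix does not spoil the \(h\)-independence of the outer iteration.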

For the approximation of the velocity block \(A\) we will perform a single AMG V-cycle. In practice this choice is not exactly independent of \(h\), which explains the slight increase in iteration counts seen in the results: the coarsest level is solved exactly, and the number of levels and the size of the coarsest matrix are not predictable.

The testcase

We will construct a manufactured solution based on the classical Kovasznay problem, see [123]. Here is an image of the solution colored by the x-velocity, including streamlines of the velocity:

We have to cheat here, though, because we are not solving the non-linear Navier-Stokes equations, but the linear Stokes system without convective term. Therefore, to recreate the exact same solution, we use the method of manufactured solutions with the solution of the Kovasznay problem. This will effectively move the convective term into the right-hand side \(f\).

The right-hand side is computed using the script "reference.py" and we use the exact solution for boundary conditions and error computation.
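For reference, the Kovasznay velocity and pressure underlying the manufactured solution are (up to an additive constant \(C\) in the pressure)

\begin{eqnarray*} \textbf{u}(x,y) &=& \left( 1 - e^{\lambda x} \cos(2\pi y), \ \frac{\lambda}{2\pi} e^{\lambda x} \sin(2\pi y) \right), \\ p(x,y) &=& -\frac{1}{2} e^{2\lambda x} + C, \qquad \lambda = \frac{1}{2\nu} - \sqrt{\frac{1}{4\nu^2} + 4\pi^2}. \end{eqnarray*}

With the viscosity \(\nu = 0.1\) used in the program this gives \(\lambda = 5 - \sqrt{25 + 4\pi^2}\), which is exactly the exponent that appears in the RightHandSide and ExactSolution classes below.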

The commented program

  #include <deal.II/base/quadrature_lib.h>
  #include <deal.II/base/function.h>
  #include <deal.II/base/timer.h>
 

The following chunk of code is identical to step-40 and allows switching between PETSc and Trilinos:

  #include <deal.II/lac/generic_linear_algebra.h>
 
  /* #define FORCE_USE_OF_TRILINOS */
 
  namespace LA
  {
  #if defined(DEAL_II_WITH_PETSC) && !defined(DEAL_II_PETSC_WITH_COMPLEX) && \
  !(defined(DEAL_II_WITH_TRILINOS) && defined(FORCE_USE_OF_TRILINOS))
  using namespace dealii::LinearAlgebraPETSc;
  # define USE_PETSC_LA
  #elif defined(DEAL_II_WITH_TRILINOS)
  using namespace dealii::LinearAlgebraTrilinos;
  #else
  # error DEAL_II_WITH_PETSC or DEAL_II_WITH_TRILINOS required
  #endif
  } // namespace LA
 
  #include <deal.II/lac/vector.h>
  #include <deal.II/lac/full_matrix.h>
  #include <deal.II/lac/solver_cg.h>
  #include <deal.II/lac/solver_gmres.h>
  #include <deal.II/lac/solver_minres.h>
  #include <deal.II/lac/affine_constraints.h>
  #include <deal.II/lac/dynamic_sparsity_pattern.h>
 
  #include <deal.II/lac/petsc_sparse_matrix.h>
  #include <deal.II/lac/petsc_vector.h>
  #include <deal.II/lac/petsc_solver.h>
  #include <deal.II/lac/petsc_precondition.h>
 
  #include <deal.II/grid/grid_generator.h>
  #include <deal.II/grid/manifold_lib.h>
  #include <deal.II/grid/grid_tools.h>
  #include <deal.II/dofs/dof_handler.h>
  #include <deal.II/dofs/dof_renumbering.h>
  #include <deal.II/dofs/dof_tools.h>
  #include <deal.II/fe/fe_values.h>
  #include <deal.II/fe/fe_q.h>
  #include <deal.II/fe/fe_system.h>
  #include <deal.II/numerics/vector_tools.h>
  #include <deal.II/numerics/data_out.h>
  #include <deal.II/numerics/error_estimator.h>
 
  #include <deal.II/base/utilities.h>
  #include <deal.II/base/conditional_ostream.h>
  #include <deal.II/base/index_set.h>
  #include <deal.II/lac/sparsity_tools.h>
  #include <deal.II/distributed/tria.h>
  #include <deal.II/distributed/grid_refinement.h>
 
  #include <cmath>
  #include <fstream>
  #include <iostream>
 
  namespace Step55
  {
  using namespace dealii;
 

Linear solvers and preconditioners

We need a few helper classes to represent our solver strategy described in the introduction.

  namespace LinearSolvers
  {

This class exposes the action of applying the inverse of a given matrix via the function InverseMatrix::vmult(). Internally, the inverse is not formed explicitly; instead, a linear solve with CG is performed each time vmult() is called. This class extends the InverseMatrix class in step-22 with an option to specify a preconditioner, and allows for different vector types in the vmult function. We use the same mechanism as in step-31 to convert a run-time exception into a failed assertion should the inner solver not converge.

  template <class Matrix, class Preconditioner>
  class InverseMatrix : public Subscriptor
  {
  public:
  InverseMatrix(const Matrix &m, const Preconditioner &preconditioner);
 
  template <typename VectorType>
  void vmult(VectorType &dst, const VectorType &src) const;
 
  private:
  const SmartPointer<const Matrix> matrix;
  const Preconditioner &preconditioner;
  };
 
 
  template <class Matrix, class Preconditioner>
  InverseMatrix<Matrix, Preconditioner>::InverseMatrix(
  const Matrix &m,
  const Preconditioner &preconditioner)
  : matrix(&m)
  , preconditioner(preconditioner)
  {}
 
 
 
  template <class Matrix, class Preconditioner>
  template <typename VectorType>
  void
  InverseMatrix<Matrix, Preconditioner>::vmult(VectorType &dst,
  const VectorType &src) const
  {
  SolverControl solver_control(src.size(), 1e-8 * src.l2_norm());
  SolverCG<VectorType> cg(solver_control);
  dst = 0;
 
  try
  {
  cg.solve(*matrix, dst, src, preconditioner);
  }
  catch (std::exception &e)
  {
  Assert(false, ExcMessage(e.what()));
  }
  }
 
 

A template class for a simple block diagonal preconditioner for 2x2 block matrices.

  template <class PreconditionerA, class PreconditionerS>
  class BlockDiagonalPreconditioner : public Subscriptor
  {
  public:
  BlockDiagonalPreconditioner(const PreconditionerA &preconditioner_A,
  const PreconditionerS &preconditioner_S);
 
  void vmult(LA::MPI::BlockVector &dst,
  const LA::MPI::BlockVector &src) const;
 
  private:
  const PreconditionerA &preconditioner_A;
  const PreconditionerS &preconditioner_S;
  };
 
  template <class PreconditionerA, class PreconditionerS>
  BlockDiagonalPreconditioner<PreconditionerA, PreconditionerS>::
  BlockDiagonalPreconditioner(const PreconditionerA &preconditioner_A,
  const PreconditionerS &preconditioner_S)
  : preconditioner_A(preconditioner_A)
  , preconditioner_S(preconditioner_S)
  {}
 
 
  template <class PreconditionerA, class PreconditionerS>
  void BlockDiagonalPreconditioner<PreconditionerA, PreconditionerS>::vmult(
  LA::MPI::BlockVector &dst,
  const LA::MPI::BlockVector &src) const
  {
  preconditioner_A.vmult(dst.block(0), src.block(0));
  preconditioner_S.vmult(dst.block(1), src.block(1));
  }
 
  } // namespace LinearSolvers
 

Problem setup

The following classes represent the right hand side and the exact solution for the test problem.

  template <int dim>
  class RightHandSide : public Function<dim>
  {
  public:
  RightHandSide()
  : Function<dim>(dim + 1)
  {}
 
  virtual void vector_value(const Point<dim> &p,
  Vector<double> &value) const override;
  };
 
 
  template <int dim>
  void RightHandSide<dim>::vector_value(const Point<dim> &p,
  Vector<double> &values) const
  {
  const double R_x = p[0];
  const double R_y = p[1];
 
  constexpr double pi = numbers::PI;
  constexpr double pi2 = numbers::PI * numbers::PI;
 

velocity

  values[0] = -1.0L / 2.0L * (-2 * std::sqrt(25.0 + 4 * pi2) + 10.0) *
  std::exp(R_x * (-2 * std::sqrt(25.0 + 4 * pi2) + 10.0)) -
  0.4 * pi2 * std::exp(R_x * (-std::sqrt(25.0 + 4 * pi2) + 5.0)) *
  std::cos(2 * R_y * pi) +
  0.1 *
  Utilities::fixed_power<2>(-std::sqrt(25.0 + 4 * pi2) + 5.0) *
  std::exp(R_x * (-std::sqrt(25.0 + 4 * pi2) + 5.0)) *
  std::cos(2 * R_y * pi);
  values[1] = 0.2 * pi * (-std::sqrt(25.0 + 4 * pi2) + 5.0) *
  std::exp(R_x * (-std::sqrt(25.0 + 4 * pi2) + 5.0)) *
  std::sin(2 * R_y * pi) -
  0.05 *
  Utilities::fixed_power<3>(-std::sqrt(25.0 + 4 * pi2) + 5.0) *
  std::exp(R_x * (-std::sqrt(25.0 + 4 * pi2) + 5.0)) *
  std::sin(2 * R_y * pi) / pi;
 

pressure

  values[dim] = 0;
  }
 
 
  template <int dim>
  class ExactSolution : public Function<dim>
  {
  public:
  ExactSolution()
  : Function<dim>(dim + 1)
  {}
 
  virtual void vector_value(const Point<dim> &p,
  Vector<double> &values) const override;
  };
 
  template <int dim>
  void ExactSolution<dim>::vector_value(const Point<dim> &p,
  Vector<double> &values) const
  {
  const double R_x = p[0];
  const double R_y = p[1];
 
  constexpr double pi = numbers::PI;
  constexpr double pi2 = numbers::PI * numbers::PI;
 

velocity

  values[0] = -std::exp(R_x * (-std::sqrt(25.0 + 4 * pi2) + 5.0)) *
  std::cos(2 * R_y * pi) +
  1;
  values[1] = (1.0L / 2.0L) * (-std::sqrt(25.0 + 4 * pi2) + 5.0) *
  std::exp(R_x * (-std::sqrt(25.0 + 4 * pi2) + 5.0)) *
  std::sin(2 * R_y * pi) / pi;
 

pressure

  values[dim] =
  -1.0L / 2.0L * std::exp(R_x * (-2 * std::sqrt(25.0 + 4 * pi2) + 10.0)) -
  2.0 *
  (-6538034.74494422 +
  0.0134758939981709 * std::exp(4 * std::sqrt(25.0 + 4 * pi2))) /
  (-80.0 * std::exp(3 * std::sqrt(25.0 + 4 * pi2)) +
  16.0 * std::sqrt(25.0 + 4 * pi2) *
  std::exp(3 * std::sqrt(25.0 + 4 * pi2))) -
  1634508.68623606 * std::exp(-3.0 * std::sqrt(25.0 + 4 * pi2)) /
  (-10.0 + 2.0 * std::sqrt(25.0 + 4 * pi2)) +
  (-0.00673794699908547 * std::exp(std::sqrt(25.0 + 4 * pi2)) +
  3269017.37247211 * std::exp(-3 * std::sqrt(25.0 + 4 * pi2))) /
  (-8 * std::sqrt(25.0 + 4 * pi2) + 40.0) +
  0.00336897349954273 * std::exp(1.0 * std::sqrt(25.0 + 4 * pi2)) /
  (-10.0 + 2.0 * std::sqrt(25.0 + 4 * pi2));
  }
 
 
 

The main program

The main class is very similar to step-40, except that matrices and vectors are now block versions, and we store a std::vector<IndexSet> for owned and relevant DoFs instead of a single IndexSet. We have exactly two IndexSets, one for all velocity unknowns and one for all pressure unknowns.

  template <int dim>
  class StokesProblem
  {
  public:
  StokesProblem(unsigned int velocity_degree);
 
  void run();
 
  private:
  void make_grid();
  void setup_system();
  void assemble_system();
  void solve();
  void refine_grid();
  void output_results(const unsigned int cycle) const;
 
  unsigned int velocity_degree;
  double viscosity;
  MPI_Comm mpi_communicator;
 
  FESystem<dim> fe;
  parallel::distributed::Triangulation<dim> triangulation;
  DoFHandler<dim> dof_handler;
 
  std::vector<IndexSet> owned_partitioning;
  std::vector<IndexSet> relevant_partitioning;
 
  AffineConstraints<double> constraints;
 
  LA::MPI::BlockSparseMatrix system_matrix;
  LA::MPI::BlockSparseMatrix preconditioner_matrix;
  LA::MPI::BlockVector locally_relevant_solution;
  LA::MPI::BlockVector system_rhs;
 
  ConditionalOStream pcout;
  TimerOutput computing_timer;
  };
 
 
 
  template <int dim>
  StokesProblem<dim>::StokesProblem(unsigned int velocity_degree)
  : velocity_degree(velocity_degree)
  , viscosity(0.1)
  , mpi_communicator(MPI_COMM_WORLD)
  , fe(FE_Q<dim>(velocity_degree) ^ dim, FE_Q<dim>(velocity_degree - 1))
  , triangulation(mpi_communicator,
  typename Triangulation<dim>::MeshSmoothing(
  Triangulation<dim>::smoothing_on_refinement |
  Triangulation<dim>::smoothing_on_coarsening))
  , dof_handler(triangulation)
  , pcout(std::cout,
  (Utilities::MPI::this_mpi_process(mpi_communicator) == 0))
  , computing_timer(mpi_communicator,
  pcout,
  TimerOutput::never,
  TimerOutput::wall_times)
  {}
 
 

The Kovasznay flow is defined on the domain [-0.5, 1.5]^2, which we create by passing the min and max values to GridGenerator::hyper_cube.

  template <int dim>
  void StokesProblem<dim>::make_grid()
  {
  GridGenerator::hyper_cube(triangulation, -0.5, 1.5);
 
  triangulation.refine_global(3);
  }
 

System Setup

The construction of the block matrices and vectors is new compared to step-40 and is different compared to serial codes like step-22, because we need to supply the set of rows that belong to our processor.

  template <int dim>
  void StokesProblem<dim>::setup_system()
  {
  TimerOutput::Scope t(computing_timer, "setup");
 
  dof_handler.distribute_dofs(fe);
 

Put all dim velocities into block 0 and the pressure into block 1, then reorder the unknowns by block. Finally count how many unknowns we have per block.

  std::vector<unsigned int> stokes_sub_blocks(dim + 1, 0);
  stokes_sub_blocks[dim] = 1;
  DoFRenumbering::component_wise(dof_handler, stokes_sub_blocks);
 
  const std::vector<types::global_dof_index> dofs_per_block =
  DoFTools::count_dofs_per_fe_block(dof_handler, stokes_sub_blocks);
 
  const unsigned int n_u = dofs_per_block[0];
  const unsigned int n_p = dofs_per_block[1];
 
  pcout << " Number of degrees of freedom: " << dof_handler.n_dofs() << " ("
  << n_u << '+' << n_p << ')' << std::endl;
 

We split up the IndexSet for locally owned and locally relevant DoFs into two IndexSets based on how we want to create the block matrices and vectors.

  const IndexSet &locally_owned_dofs = dof_handler.locally_owned_dofs();
  owned_partitioning.resize(2);
  owned_partitioning[0] = locally_owned_dofs.get_view(0, n_u);
  owned_partitioning[1] = locally_owned_dofs.get_view(n_u, n_u + n_p);
 
  const IndexSet locally_relevant_dofs =
  DoFTools::extract_locally_relevant_dofs(dof_handler);
 
  relevant_partitioning.resize(2);
  relevant_partitioning[0] = locally_relevant_dofs.get_view(0, n_u);
  relevant_partitioning[1] = locally_relevant_dofs.get_view(n_u, n_u + n_p);
 

Setting up the constraints for boundary conditions and hanging nodes is identical to step-40. Even though we don't have any hanging nodes because we only perform global refinement, it is still a good idea to put this function call in, in case adaptive refinement gets introduced later.

  {
  constraints.reinit(locally_owned_dofs, locally_relevant_dofs);
 
  const FEValuesExtractors::Vector velocities(0);
  DoFTools::make_hanging_node_constraints(dof_handler, constraints);
  VectorTools::interpolate_boundary_values(dof_handler,
  0,
  ExactSolution<dim>(),
  constraints,
  fe.component_mask(velocities));
  constraints.close();
  }
 

Now we create the system matrix based on a BlockDynamicSparsityPattern. We know that we won't have coupling between different velocity components (because we use the Laplacian and not the deformation tensor) and no coupling of the pressure with its test functions, so we use a Table to communicate this coupling information to DoFTools::make_sparsity_pattern.

  {
  system_matrix.clear();
 
  Table<2, DoFTools::Coupling> coupling(dim + 1, dim + 1);
  for (unsigned int c = 0; c < dim + 1; ++c)
  for (unsigned int d = 0; d < dim + 1; ++d)
  if (c == dim && d == dim)
  coupling[c][d] = DoFTools::none;
  else if (c == dim || d == dim || c == d)
  coupling[c][d] = DoFTools::always;
  else
  coupling[c][d] = DoFTools::none;
 
  BlockDynamicSparsityPattern dsp(relevant_partitioning);
 
  DoFTools::make_sparsity_pattern(
  dof_handler, coupling, dsp, constraints, false);
 
  SparsityTools::distribute_sparsity_pattern(
  dsp,
  dof_handler.locally_owned_dofs(),
  mpi_communicator,
  locally_relevant_dofs);
 
  system_matrix.reinit(owned_partitioning, dsp, mpi_communicator);
  }
 

The preconditioner matrix has a different coupling (we only fill in the 1,1 block with the mass matrix), otherwise this code is identical to the construction of the system_matrix above.

  {
  preconditioner_matrix.clear();
 
  Table<2, DoFTools::Coupling> coupling(dim + 1, dim + 1);
  for (unsigned int c = 0; c < dim + 1; ++c)
  for (unsigned int d = 0; d < dim + 1; ++d)
  if (c == dim && d == dim)
  coupling[c][d] = DoFTools::always;
  else
  coupling[c][d] = DoFTools::none;
 
  BlockDynamicSparsityPattern dsp(relevant_partitioning);
 
  DoFTools::make_sparsity_pattern(
  dof_handler, coupling, dsp, constraints, false);
  SparsityTools::distribute_sparsity_pattern(
  dsp,
  Utilities::MPI::all_gather(mpi_communicator,
  dof_handler.locally_owned_dofs()),
  mpi_communicator,
  locally_relevant_dofs);
  preconditioner_matrix.reinit(owned_partitioning, dsp, mpi_communicator);
  }
 

Finally, we construct the block vectors with the right sizes. The function call with two std::vector<IndexSet> will create a ghosted vector.

  locally_relevant_solution.reinit(owned_partitioning,
  relevant_partitioning,
  mpi_communicator);
  system_rhs.reinit(owned_partitioning, mpi_communicator);
  }
 
 
 

Assembly

This function assembles the system matrix, the preconditioner matrix, and the right hand side. The code is pretty standard.

  template <int dim>
  void StokesProblem<dim>::assemble_system()
  {
  TimerOutput::Scope t(computing_timer, "assembly");
 
  system_matrix = 0;
  preconditioner_matrix = 0;
  system_rhs = 0;
 
  const QGauss<dim> quadrature_formula(velocity_degree + 1);
 
  FEValues<dim> fe_values(fe,
  quadrature_formula,
  update_values | update_gradients |
  update_quadrature_points | update_JxW_values);
 
  const unsigned int dofs_per_cell = fe.n_dofs_per_cell();
  const unsigned int n_q_points = quadrature_formula.size();
 
  FullMatrix<double> cell_matrix(dofs_per_cell, dofs_per_cell);
  FullMatrix<double> cell_matrix2(dofs_per_cell, dofs_per_cell);
  Vector<double> cell_rhs(dofs_per_cell);
 
  const RightHandSide<dim> right_hand_side;
  std::vector<Vector<double>> rhs_values(n_q_points, Vector<double>(dim + 1));
 
  std::vector<Tensor<2, dim>> grad_phi_u(dofs_per_cell);
  std::vector<double> div_phi_u(dofs_per_cell);
  std::vector<double> phi_p(dofs_per_cell);
 
  std::vector<types::global_dof_index> local_dof_indices(dofs_per_cell);
  const FEValuesExtractors::Vector velocities(0);
  const FEValuesExtractors::Scalar pressure(dim);
 
  for (const auto &cell : dof_handler.active_cell_iterators())
  if (cell->is_locally_owned())
  {
  cell_matrix = 0;
  cell_matrix2 = 0;
  cell_rhs = 0;
 
  fe_values.reinit(cell);
  right_hand_side.vector_value_list(fe_values.get_quadrature_points(),
  rhs_values);
  for (unsigned int q = 0; q < n_q_points; ++q)
  {
  for (unsigned int k = 0; k < dofs_per_cell; ++k)
  {
  grad_phi_u[k] = fe_values[velocities].gradient(k, q);
  div_phi_u[k] = fe_values[velocities].divergence(k, q);
  phi_p[k] = fe_values[pressure].value(k, q);
  }
 
  for (unsigned int i = 0; i < dofs_per_cell; ++i)
  {
  for (unsigned int j = 0; j < dofs_per_cell; ++j)
  {
  cell_matrix(i, j) +=
  (viscosity *
  scalar_product(grad_phi_u[i], grad_phi_u[j]) -
  div_phi_u[i] * phi_p[j] - phi_p[i] * div_phi_u[j]) *
  fe_values.JxW(q);
 
  cell_matrix2(i, j) += 1.0 / viscosity * phi_p[i] *
  phi_p[j] * fe_values.JxW(q);
  }
 
  const unsigned int component_i =
  fe.system_to_component_index(i).first;
  cell_rhs(i) += fe_values.shape_value(i, q) *
  rhs_values[q](component_i) * fe_values.JxW(q);
  }
  }
 
 
  cell->get_dof_indices(local_dof_indices);
  constraints.distribute_local_to_global(cell_matrix,
  cell_rhs,
  local_dof_indices,
  system_matrix,
  system_rhs);
 
  constraints.distribute_local_to_global(cell_matrix2,
  local_dof_indices,
  preconditioner_matrix);
  }
 
  system_matrix.compress(VectorOperation::add);
  preconditioner_matrix.compress(VectorOperation::add);
  system_rhs.compress(VectorOperation::add);
  }
 
 
 

Solving

This function solves the linear system with MINRES with a block diagonal preconditioner and AMG for the two diagonal blocks as described in the introduction. The preconditioner applies a V-cycle to the 0,0 block and a CG solve with the mass matrix for the 1,1 block (the Schur complement).

  template <int dim>
  void StokesProblem<dim>::solve()
  {
  TimerOutput::Scope t(computing_timer, "solve");
 
  LA::MPI::PreconditionAMG prec_A;
  {
  LA::MPI::PreconditionAMG::AdditionalData data;
 
  #ifdef USE_PETSC_LA
  data.symmetric_operator = true;
  #endif
  prec_A.initialize(system_matrix.block(0, 0), data);
  }
 
  LA::MPI::PreconditionAMG prec_S;
  {
  LA::MPI::PreconditionAMG::AdditionalData data;
 
  #ifdef USE_PETSC_LA
  data.symmetric_operator = true;
  #endif
  prec_S.initialize(preconditioner_matrix.block(1, 1), data);
  }
 

An InverseMatrix object is used to solve with the mass matrix:

  using mp_inverse_t = LinearSolvers::InverseMatrix<LA::MPI::SparseMatrix,
  LA::MPI::PreconditionAMG>;
  const mp_inverse_t mp_inverse(preconditioner_matrix.block(1, 1), prec_S);
 

This constructs the block preconditioner based on the preconditioners for the individual blocks defined above.

  const LinearSolvers::BlockDiagonalPreconditioner<LA::MPI::PreconditionAMG,
  mp_inverse_t>
  preconditioner(prec_A, mp_inverse);
 

With that, we can finally set up the linear solver and solve the system:

  SolverControl solver_control(system_matrix.m(),
  1e-10 * system_rhs.l2_norm());
 
  SolverMinRes<LA::MPI::BlockVector> solver(solver_control);
 
  LA::MPI::BlockVector distributed_solution(owned_partitioning,
  mpi_communicator);
 
  constraints.set_zero(distributed_solution);
 
  solver.solve(system_matrix,
  distributed_solution,
  system_rhs,
  preconditioner);
 
  pcout << " Solved in " << solver_control.last_step() << " iterations."
  << std::endl;
 
  constraints.distribute(distributed_solution);
 

Like in step-56, we subtract the mean pressure to allow error computations against our reference solution, which has a mean value of zero.

  locally_relevant_solution = distributed_solution;
  const double mean_pressure =
  VectorTools::compute_mean_value(dof_handler,
  QGauss<dim>(velocity_degree + 2),
  locally_relevant_solution,
  dim);
  distributed_solution.block(1).add(-mean_pressure);
  locally_relevant_solution.block(1) = distributed_solution.block(1);
  }
 
 
 

The rest

The remainder of the code that deals with mesh refinement, output, and the main loop is pretty standard.

  template <int dim>
  void StokesProblem<dim>::refine_grid()
  {
  TimerOutput::Scope t(computing_timer, "refine");
 
  triangulation.refine_global();
  }
 
 
 
  template <int dim>
  void StokesProblem<dim>::output_results(const unsigned int cycle) const
  {
  {
  const ComponentSelectFunction<dim> pressure_mask(dim, dim + 1);
  const ComponentSelectFunction<dim> velocity_mask(std::make_pair(0, dim),
  dim + 1);
 
  Vector<double> cellwise_errors(triangulation.n_active_cells());
  QGauss<dim> quadrature(velocity_degree + 2);
 
  VectorTools::integrate_difference(dof_handler,
  locally_relevant_solution,
  ExactSolution<dim>(),
  cellwise_errors,
  quadrature,
  VectorTools::L2_norm,
  &velocity_mask);
 
  const double error_u_l2 =
  VectorTools::compute_global_error(triangulation,
  cellwise_errors,
  VectorTools::L2_norm);
 
  VectorTools::integrate_difference(dof_handler,
  locally_relevant_solution,
  ExactSolution<dim>(),
  cellwise_errors,
  quadrature,
  VectorTools::L2_norm,
  &pressure_mask);
 
  const double error_p_l2 =
  VectorTools::compute_global_error(triangulation,
  cellwise_errors,
  VectorTools::L2_norm);
 
  pcout << "error: u_0: " << error_u_l2 << " p_0: " << error_p_l2
  << std::endl;
  }
 
 
  std::vector<std::string> solution_names(dim, "velocity");
  solution_names.emplace_back("pressure");
  std::vector<DataComponentInterpretation::DataComponentInterpretation>
  data_component_interpretation(
  dim, DataComponentInterpretation::component_is_part_of_vector);
  data_component_interpretation.push_back(
  DataComponentInterpretation::component_is_scalar);
 
  DataOut<dim> data_out;
  data_out.attach_dof_handler(dof_handler);
  data_out.add_data_vector(locally_relevant_solution,
  solution_names,
  data_component_interpretation);
 
  LA::MPI::BlockVector interpolated;
  interpolated.reinit(owned_partitioning, MPI_COMM_WORLD);
  VectorTools::interpolate(dof_handler, ExactSolution<dim>(), interpolated);
 
  LA::MPI::BlockVector interpolated_relevant(owned_partitioning,
  relevant_partitioning,
  MPI_COMM_WORLD);
  interpolated_relevant = interpolated;
  {
  std::vector<std::string> solution_names(dim, "ref_u");
  solution_names.emplace_back("ref_p");
  data_out.add_data_vector(interpolated_relevant,
  solution_names,
  data_component_interpretation);
  }
 
 
  Vector<float> subdomain(triangulation.n_active_cells());
  for (unsigned int i = 0; i < subdomain.size(); ++i)
  subdomain(i) = triangulation.locally_owned_subdomain();
  data_out.add_data_vector(subdomain, "subdomain");
 
  data_out.build_patches();
 
  data_out.write_vtu_with_pvtu_record(
  "./", "solution", cycle, mpi_communicator, 2);
  }
 
 
 
  template <int dim>
  void StokesProblem<dim>::run()
  {
  #ifdef USE_PETSC_LA
  pcout << "Running using PETSc." << std::endl;
  #else
  pcout << "Running using Trilinos." << std::endl;
  #endif
  const unsigned int n_cycles = 5;
  for (unsigned int cycle = 0; cycle < n_cycles; ++cycle)
  {
  pcout << "Cycle " << cycle << ':' << std::endl;
 
  if (cycle == 0)
  make_grid();
  else
  refine_grid();
 
  setup_system();
 
  assemble_system();
  solve();
 
  if (Utilities::MPI::n_mpi_processes(mpi_communicator) <= 32)
  {
  TimerOutput::Scope t(computing_timer, "output");
  output_results(cycle);
  }
 
  computing_timer.print_summary();
  computing_timer.reset();
 
  pcout << std::endl;
  }
  }
  } // namespace Step55
 
 
 
  int main(int argc, char *argv[])
  {
  try
  {
  using namespace dealii;
  using namespace Step55;
 
  Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);
 
  StokesProblem<2> problem(2);
  problem.run();
  }
  catch (std::exception &exc)
  {
  std::cerr << std::endl
  << std::endl
  << "----------------------------------------------------"
  << std::endl;
  std::cerr << "Exception on processing: " << std::endl
  << exc.what() << std::endl
  << "Aborting!" << std::endl
  << "----------------------------------------------------"
  << std::endl;
 
  return 1;
  }
  catch (...)
  {
  std::cerr << std::endl
  << std::endl
  << "----------------------------------------------------"
  << std::endl;
  std::cerr << "Unknown exception!" << std::endl
  << "Aborting!" << std::endl
  << "----------------------------------------------------"
  << std::endl;
  return 1;
  }
 
  return 0;
  }

Results

As expected from the discussion above, the number of iterations is independent of the number of processors and only very slightly dependent on \(h\):

PETSc number of processors
cycle dofs 1 2 4 8 16 32 64 128
0 659 49 49 49 51 51 51 49 49
1 2467 52 52 52 52 52 54 54 53
2 9539 56 56 56 54 56 56 54 56
3 37507 57 57 57 57 57 56 57 56
4 148739 58 59 57 59 57 57 57 57
5 592387 60 60 59 59 59 59 59 59
6 2364419 62 62 61 61 61 61 61 61
Trilinos number of processors
cycle dofs 1 2 4 8 16 32 64 128
0 659 37 37 37 37 37 37 37 37
1 2467 92 89 89 82 86 81 78 78
2 9539 102 99 96 95 95 88 83 95
3 37507 107 105 104 99 100 96 96 90
4 148739 112 112 111 111 127 126 115 117
5 592387 116 115 114 112 118 120 131 130
6 2364419 130 126 120 120 121 122 121 123

While the PETSc results show a constant number of iterations, the iterations increase when using Trilinos. This is likely because of the different settings used for the AMG preconditioner. For performance reasons we do not allow coarsening below a couple thousand unknowns. As the coarse solver is an exact solve (we are using LU by default), a change in number of levels will influence the quality of a V-cycle. Therefore, a V-cycle is closer to an exact solver for smaller problem sizes.

Possibilities for extensions

Investigate Trilinos iterations

Play with the smoothers, smoothing steps, and other properties for the Trilinos AMG to achieve an optimal preconditioner.
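As a starting point, here is a sketch of the kind of settings one might experiment with when building against Trilinos. The members shown below are those of TrilinosWrappers::PreconditionAMG::AdditionalData; the particular values are illustrative guesses, not tuned for this problem, and would replace the initialization of prec_A in StokesProblem::solve():

  #ifndef USE_PETSC_LA
    TrilinosWrappers::PreconditionAMG::AdditionalData data;
    data.elliptic = true; // the velocity block is elliptic
    data.higher_order_elements = true; // we use Q2 elements for the velocity
    data.smoother_sweeps = 2; // more smoothing per level
    data.aggregation_threshold = 0.02; // coarsen more aggressively
    data.smoother_type = "Chebyshev"; // alternatives include "ILU" or "Jacobi"
    prec_A.initialize(system_matrix.block(0, 0), data);
  #endif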

Solve the Oseen problem instead of the Stokes system

This change requires changing the outer solver to GMRES or BiCGStab, because the system is no longer symmetric. A minimal sketch of the solver change is shown after the next paragraph.

You can prescribe the exact flow solution as \(b\) in the convective term \(b \cdot \nabla u\). This should give the same solution as the original problem, if you set the right hand side to zero.
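A minimal sketch of the solver change (assuming the assembly has been extended by the convective term): replace the MINRES solver in StokesProblem::solve() by GMRES and keep the rest; for PETSc one would also drop the symmetric_operator flag, and the preconditioner itself would need to be revisited since the velocity block is no longer symmetric:

  SolverControl solver_control(system_matrix.m(),
  1e-10 * system_rhs.l2_norm());
 
  SolverGMRES<LA::MPI::BlockVector> solver(solver_control);
 
  solver.solve(system_matrix,
  distributed_solution,
  system_rhs,
  preconditioner);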

Adaptive refinement

So far, this tutorial program refines the mesh globally in each step. Replacing the code in StokesProblem::refine_grid() by something like

Vector<float> estimated_error_per_cell(triangulation.n_active_cells());

const FEValuesExtractors::Vector velocities(0);
KellyErrorEstimator<dim>::estimate(
  dof_handler,
  QGauss<dim - 1>(fe.degree + 1),
  std::map<types::boundary_id, const Function<dim> *>(),
  locally_relevant_solution,
  estimated_error_per_cell,
  fe.component_mask(velocities));

parallel::distributed::GridRefinement::refine_and_coarsen_fixed_number(
  triangulation, estimated_error_per_cell, 0.3, 0.0);

triangulation.execute_coarsening_and_refinement();

makes it simple to explore adaptive mesh refinement.

The plain program

/* ------------------------------------------------------------------------
*
* SPDX-License-Identifier: LGPL-2.1-or-later
* Copyright (C) 2016 - 2024 by the deal.II authors
*
* This file is part of the deal.II library.
*
* Part of the source code is dual licensed under Apache-2.0 WITH
* LLVM-exception OR LGPL-2.1-or-later. Detailed license information
* governing the source code and code contributions can be found in
* LICENSE.md and CONTRIBUTING.md at the top level directory of deal.II.
*
* ------------------------------------------------------------------------
*
* Author: Timo Heister, Clemson University, 2016
*/
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/base/function.h>
#include <deal.II/base/timer.h>
#include <deal.II/lac/generic_linear_algebra.h>
/* #define FORCE_USE_OF_TRILINOS */
namespace LA
{
#if defined(DEAL_II_WITH_PETSC) && !defined(DEAL_II_PETSC_WITH_COMPLEX) && \
!(defined(DEAL_II_WITH_TRILINOS) && defined(FORCE_USE_OF_TRILINOS))
using namespace dealii::LinearAlgebraPETSc;
# define USE_PETSC_LA
#elif defined(DEAL_II_WITH_TRILINOS)
using namespace dealii::LinearAlgebraTrilinos;
#else
# error DEAL_II_WITH_PETSC or DEAL_II_WITH_TRILINOS required
#endif
} // namespace LA
#include <deal.II/lac/vector.h>
#include <deal.II/lac/full_matrix.h>
#include <deal.II/lac/solver_cg.h>
#include <deal.II/lac/solver_gmres.h>
#include <deal.II/lac/solver_minres.h>
#include <deal.II/lac/affine_constraints.h>
#include <deal.II/lac/dynamic_sparsity_pattern.h>
#include <deal.II/lac/petsc_sparse_matrix.h>
#include <deal.II/lac/petsc_vector.h>
#include <deal.II/lac/petsc_solver.h>
#include <deal.II/lac/petsc_precondition.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/manifold_lib.h>
#include <deal.II/grid/grid_tools.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_renumbering.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/fe/fe_values.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/fe/fe_system.h>
#include <deal.II/numerics/vector_tools.h>
#include <deal.II/numerics/data_out.h>
#include <deal.II/numerics/error_estimator.h>
#include <deal.II/base/utilities.h>
#include <deal.II/base/conditional_ostream.h>
#include <deal.II/base/index_set.h>
#include <deal.II/lac/sparsity_tools.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/distributed/grid_refinement.h>
#include <cmath>
#include <fstream>
#include <iostream>
namespace Step55
{
using namespace dealii;
namespace LinearSolvers
{
template <class Matrix, class Preconditioner>
class InverseMatrix : public Subscriptor
{
public:
InverseMatrix(const Matrix &m, const Preconditioner &preconditioner);
template <typename VectorType>
void vmult(VectorType &dst, const VectorType &src) const;
private:
const SmartPointer<const Matrix> matrix;
const Preconditioner &preconditioner;
};
template <class Matrix, class Preconditioner>
InverseMatrix<Matrix, Preconditioner>::InverseMatrix(
const Matrix &m,
const Preconditioner &preconditioner)
: matrix(&m)
, preconditioner(preconditioner)
{}
template <class Matrix, class Preconditioner>
template <typename VectorType>
void
InverseMatrix<Matrix, Preconditioner>::vmult(VectorType &dst,
const VectorType &src) const
{
SolverControl solver_control(src.size(), 1e-8 * src.l2_norm());
SolverCG<VectorType> cg(solver_control);
dst = 0;
try
{
cg.solve(*matrix, dst, src, preconditioner);
}
catch (std::exception &e)
{
Assert(false, ExcMessage(e.what()));
}
}
template <class PreconditionerA, class PreconditionerS>
class BlockDiagonalPreconditioner : public Subscriptor
{
public:
BlockDiagonalPreconditioner(const PreconditionerA &preconditioner_A,
const PreconditionerS &preconditioner_S);
void vmult(LA::MPI::BlockVector &dst,
const LA::MPI::BlockVector &src) const;
private:
const PreconditionerA &preconditioner_A;
const PreconditionerS &preconditioner_S;
};
template <class PreconditionerA, class PreconditionerS>
BlockDiagonalPreconditioner<PreconditionerA, PreconditionerS>::
BlockDiagonalPreconditioner(const PreconditionerA &preconditioner_A,
const PreconditionerS &preconditioner_S)
: preconditioner_A(preconditioner_A)
, preconditioner_S(preconditioner_S)
{}
template <class PreconditionerA, class PreconditionerS>
void BlockDiagonalPreconditioner<PreconditionerA, PreconditionerS>::vmult(
LA::MPI::BlockVector &dst,
const LA::MPI::BlockVector &src) const
{
preconditioner_A.vmult(dst.block(0), src.block(0));
preconditioner_S.vmult(dst.block(1), src.block(1));
}
} // namespace LinearSolvers
template <int dim>
class RightHandSide : public Function<dim>
{
public:
RightHandSide()
: Function<dim>(dim + 1)
{}
virtual void vector_value(const Point<dim> &p,
Vector<double> &value) const override;
};
template <int dim>
void RightHandSide<dim>::vector_value(const Point<dim> &p,
Vector<double> &values) const
{
const double R_x = p[0];
const double R_y = p[1];
constexpr double pi = numbers::PI;
constexpr double pi2 = numbers::PI * numbers::PI;
values[0] = -1.0L / 2.0L * (-2 * std::sqrt(25.0 + 4 * pi2) + 10.0) *
std::exp(R_x * (-2 * std::sqrt(25.0 + 4 * pi2) + 10.0)) -
0.4 * pi2 * std::exp(R_x * (-std::sqrt(25.0 + 4 * pi2) + 5.0)) *
std::cos(2 * R_y * pi) +
0.1 *
Utilities::fixed_power<2>(-std::sqrt(25.0 + 4 * pi2) + 5.0) *
std::exp(R_x * (-std::sqrt(25.0 + 4 * pi2) + 5.0)) *
std::cos(2 * R_y * pi);
values[1] = 0.2 * pi * (-std::sqrt(25.0 + 4 * pi2) + 5.0) *
std::exp(R_x * (-std::sqrt(25.0 + 4 * pi2) + 5.0)) *
std::sin(2 * R_y * pi) -
0.05 *
Utilities::fixed_power<3>(-std::sqrt(25.0 + 4 * pi2) + 5.0) *
std::exp(R_x * (-std::sqrt(25.0 + 4 * pi2) + 5.0)) *
std::sin(2 * R_y * pi) / pi;
values[dim] = 0;
}
template <int dim>
class ExactSolution : public Function<dim>
{
public:
ExactSolution()
: Function<dim>(dim + 1)
{}
virtual void vector_value(const Point<dim> &p,
Vector<double> &values) const override;
};
template <int dim>
void ExactSolution<dim>::vector_value(const Point<dim> &p,
Vector<double> &values) const
{
const double R_x = p[0];
const double R_y = p[1];
constexpr double pi = numbers::PI;
constexpr double pi2 = numbers::PI * numbers::PI;
values[0] = -std::exp(R_x * (-std::sqrt(25.0 + 4 * pi2) + 5.0)) *
std::cos(2 * R_y * pi) +
1;
values[1] = (1.0L / 2.0L) * (-std::sqrt(25.0 + 4 * pi2) + 5.0) *
std::exp(R_x * (-std::sqrt(25.0 + 4 * pi2) + 5.0)) *
std::sin(2 * R_y * pi) / pi;
values[dim] =
-1.0L / 2.0L * std::exp(R_x * (-2 * std::sqrt(25.0 + 4 * pi2) + 10.0)) -
2.0 *
(-6538034.74494422 +
0.0134758939981709 * std::exp(4 * std::sqrt(25.0 + 4 * pi2))) /
(-80.0 * std::exp(3 * std::sqrt(25.0 + 4 * pi2)) +
16.0 * std::sqrt(25.0 + 4 * pi2) *
std::exp(3 * std::sqrt(25.0 + 4 * pi2))) -
1634508.68623606 * std::exp(-3.0 * std::sqrt(25.0 + 4 * pi2)) /
(-10.0 + 2.0 * std::sqrt(25.0 + 4 * pi2)) +
(-0.00673794699908547 * std::exp(std::sqrt(25.0 + 4 * pi2)) +
3269017.37247211 * std::exp(-3 * std::sqrt(25.0 + 4 * pi2))) /
(-8 * std::sqrt(25.0 + 4 * pi2) + 40.0) +
0.00336897349954273 * std::exp(1.0 * std::sqrt(25.0 + 4 * pi2)) /
(-10.0 + 2.0 * std::sqrt(25.0 + 4 * pi2));
}
template <int dim>
class StokesProblem
{
public:
StokesProblem(unsigned int velocity_degree);
void run();
private:
void make_grid();
void setup_system();
void assemble_system();
void solve();
void refine_grid();
void output_results(const unsigned int cycle) const;
unsigned int velocity_degree;
double viscosity;
MPI_Comm mpi_communicator;
FESystem<dim> fe;
parallel::distributed::Triangulation<dim> triangulation;
DoFHandler<dim> dof_handler;
std::vector<IndexSet> owned_partitioning;
std::vector<IndexSet> relevant_partitioning;
AffineConstraints<double> constraints;
LA::MPI::BlockSparseMatrix system_matrix;
LA::MPI::BlockSparseMatrix preconditioner_matrix;
LA::MPI::BlockVector locally_relevant_solution;
LA::MPI::BlockVector system_rhs;
ConditionalOStream pcout;
TimerOutput computing_timer;
};
template <int dim>
StokesProblem<dim>::StokesProblem(unsigned int velocity_degree)
: velocity_degree(velocity_degree)
, viscosity(0.1)
, mpi_communicator(MPI_COMM_WORLD)
, fe(FE_Q<dim>(velocity_degree) ^ dim, FE_Q<dim>(velocity_degree - 1))
, triangulation(mpi_communicator,
typename Triangulation<dim>::MeshSmoothing(
Triangulation<dim>::smoothing_on_refinement |
Triangulation<dim>::smoothing_on_coarsening))
, dof_handler(triangulation)
, pcout(std::cout,
(Utilities::MPI::this_mpi_process(mpi_communicator) == 0))
, computing_timer(mpi_communicator,
pcout,
TimerOutput::never,
TimerOutput::wall_times)
{}
template <int dim>
void StokesProblem<dim>::make_grid()
{
GridGenerator::hyper_cube(triangulation, -0.5, 1.5);
triangulation.refine_global(3);
}
template <int dim>
void StokesProblem<dim>::setup_system()
{
TimerOutput::Scope t(computing_timer, "setup");
dof_handler.distribute_dofs(fe);
std::vector<unsigned int> stokes_sub_blocks(dim + 1, 0);
stokes_sub_blocks[dim] = 1;
DoFRenumbering::component_wise(dof_handler, stokes_sub_blocks);
const std::vector<types::global_dof_index> dofs_per_block =
DoFTools::count_dofs_per_fe_block(dof_handler, stokes_sub_blocks);
const unsigned int n_u = dofs_per_block[0];
const unsigned int n_p = dofs_per_block[1];
pcout << " Number of degrees of freedom: " << dof_handler.n_dofs() << " ("
<< n_u << '+' << n_p << ')' << std::endl;
const IndexSet &locally_owned_dofs = dof_handler.locally_owned_dofs();
owned_partitioning.resize(2);
owned_partitioning[0] = locally_owned_dofs.get_view(0, n_u);
owned_partitioning[1] = locally_owned_dofs.get_view(n_u, n_u + n_p);
const IndexSet locally_relevant_dofs =
DoFTools::extract_locally_relevant_dofs(dof_handler);
relevant_partitioning.resize(2);
relevant_partitioning[0] = locally_relevant_dofs.get_view(0, n_u);
relevant_partitioning[1] = locally_relevant_dofs.get_view(n_u, n_u + n_p);
{
constraints.reinit(locally_owned_dofs, locally_relevant_dofs);
const FEValuesExtractors::Vector velocities(0);
DoFTools::make_hanging_node_constraints(dof_handler, constraints);
VectorTools::interpolate_boundary_values(dof_handler,
0,
ExactSolution<dim>(),
constraints,
fe.component_mask(velocities));
constraints.close();
}
{
system_matrix.clear();
Table<2, DoFTools::Coupling> coupling(dim + 1, dim + 1);
for (unsigned int c = 0; c < dim + 1; ++c)
for (unsigned int d = 0; d < dim + 1; ++d)
if (c == dim && d == dim)
coupling[c][d] = DoFTools::none;
else if (c == dim || d == dim || c == d)
coupling[c][d] = DoFTools::always;
else
coupling[c][d] = DoFTools::none;
BlockDynamicSparsityPattern dsp(relevant_partitioning);
DoFTools::make_sparsity_pattern(
dof_handler, coupling, dsp, constraints, false);
SparsityTools::distribute_sparsity_pattern(
dsp,
dof_handler.locally_owned_dofs(),
mpi_communicator,
locally_relevant_dofs);
system_matrix.reinit(owned_partitioning, dsp, mpi_communicator);
}
{
preconditioner_matrix.clear();
Table<2, DoFTools::Coupling> coupling(dim + 1, dim + 1);
for (unsigned int c = 0; c < dim + 1; ++c)
for (unsigned int d = 0; d < dim + 1; ++d)
if (c == dim && d == dim)
coupling[c][d] = DoFTools::always;
else
coupling[c][d] = DoFTools::none;
BlockDynamicSparsityPattern dsp(relevant_partitioning);
DoFTools::make_sparsity_pattern(
dof_handler, coupling, dsp, constraints, false);
SparsityTools::distribute_sparsity_pattern(
dsp,
Utilities::MPI::all_gather(mpi_communicator,
dof_handler.locally_owned_dofs()),
mpi_communicator,
locally_relevant_dofs);
preconditioner_matrix.reinit(owned_partitioning, dsp, mpi_communicator);
}
locally_relevant_solution.reinit(owned_partitioning,
relevant_partitioning,
mpi_communicator);
system_rhs.reinit(owned_partitioning, mpi_communicator);
}
template <int dim>
void StokesProblem<dim>::assemble_system()
{
TimerOutput::Scope t(computing_timer, "assembly");
system_matrix = 0;
preconditioner_matrix = 0;
system_rhs = 0;
const QGauss<dim> quadrature_formula(velocity_degree + 1);
FEValues<dim> fe_values(fe,
quadrature_formula,
update_values | update_gradients |
update_quadrature_points | update_JxW_values);
const unsigned int dofs_per_cell = fe.n_dofs_per_cell();
const unsigned int n_q_points = quadrature_formula.size();
FullMatrix<double> cell_matrix(dofs_per_cell, dofs_per_cell);
FullMatrix<double> cell_matrix2(dofs_per_cell, dofs_per_cell);
Vector<double> cell_rhs(dofs_per_cell);
const RightHandSide<dim> right_hand_side;
std::vector<Vector<double>> rhs_values(n_q_points, Vector<double>(dim + 1));
std::vector<Tensor<2, dim>> grad_phi_u(dofs_per_cell);
std::vector<double> div_phi_u(dofs_per_cell);
std::vector<double> phi_p(dofs_per_cell);
std::vector<types::global_dof_index> local_dof_indices(dofs_per_cell);
const FEValuesExtractors::Vector velocities(0);
const FEValuesExtractors::Scalar pressure(dim);
for (const auto &cell : dof_handler.active_cell_iterators())
if (cell->is_locally_owned())
{
cell_matrix = 0;
cell_matrix2 = 0;
cell_rhs = 0;
fe_values.reinit(cell);
right_hand_side.vector_value_list(fe_values.get_quadrature_points(),
rhs_values);
for (unsigned int q = 0; q < n_q_points; ++q)
{
for (unsigned int k = 0; k < dofs_per_cell; ++k)
{
grad_phi_u[k] = fe_values[velocities].gradient(k, q);
div_phi_u[k] = fe_values[velocities].divergence(k, q);
phi_p[k] = fe_values[pressure].value(k, q);
}
for (unsigned int i = 0; i < dofs_per_cell; ++i)
{
for (unsigned int j = 0; j < dofs_per_cell; ++j)
{
cell_matrix(i, j) +=
(viscosity *
scalar_product(grad_phi_u[i], grad_phi_u[j]) -
div_phi_u[i] * phi_p[j] - phi_p[i] * div_phi_u[j]) *
fe_values.JxW(q);
cell_matrix2(i, j) += 1.0 / viscosity * phi_p[i] *
phi_p[j] * fe_values.JxW(q);
}
const unsigned int component_i =
fe.system_to_component_index(i).first;
cell_rhs(i) += fe_values.shape_value(i, q) *
rhs_values[q](component_i) * fe_values.JxW(q);
}
}
cell->get_dof_indices(local_dof_indices);
constraints.distribute_local_to_global(cell_matrix,
cell_rhs,
local_dof_indices,
system_matrix,
system_rhs);
constraints.distribute_local_to_global(cell_matrix2,
local_dof_indices,
preconditioner_matrix);
}
system_matrix.compress(VectorOperation::add);
preconditioner_matrix.compress(VectorOperation::add);
system_rhs.compress(VectorOperation::add);
}
template <int dim>
void StokesProblem<dim>::solve()
{
TimerOutput::Scope t(computing_timer, "solve");
LA::MPI::PreconditionAMG prec_A;
{
LA::MPI::PreconditionAMG::AdditionalData data;
#ifdef USE_PETSC_LA
data.symmetric_operator = true;
#endif
prec_A.initialize(system_matrix.block(0, 0), data);
}
LA::MPI::PreconditionAMG prec_S;
{
LA::MPI::PreconditionAMG::AdditionalData data;
#ifdef USE_PETSC_LA
data.symmetric_operator = true;
#endif
prec_S.initialize(preconditioner_matrix.block(1, 1), data);
}
using mp_inverse_t = LinearSolvers::InverseMatrix<LA::MPI::SparseMatrix,
LA::MPI::PreconditionAMG>;
const mp_inverse_t mp_inverse(preconditioner_matrix.block(1, 1), prec_S);
const LinearSolvers::BlockDiagonalPreconditioner<LA::MPI::PreconditionAMG,
mp_inverse_t>
preconditioner(prec_A, mp_inverse);
SolverControl solver_control(system_matrix.m(),
1e-10 * system_rhs.l2_norm());
SolverMinRes<LA::MPI::BlockVector> solver(solver_control);
LA::MPI::BlockVector distributed_solution(owned_partitioning,
mpi_communicator);
constraints.set_zero(distributed_solution);
solver.solve(system_matrix,
distributed_solution,
system_rhs,
preconditioner);
pcout << " Solved in " << solver_control.last_step() << " iterations."
<< std::endl;
constraints.distribute(distributed_solution);
locally_relevant_solution = distributed_solution;
const double mean_pressure =
VectorTools::compute_mean_value(dof_handler,
QGauss<dim>(velocity_degree + 2),
locally_relevant_solution,
dim);
distributed_solution.block(1).add(-mean_pressure);
locally_relevant_solution.block(1) = distributed_solution.block(1);
}
template <int dim>
void StokesProblem<dim>::refine_grid()
{
TimerOutput::Scope t(computing_timer, "refine");
triangulation.refine_global();
}
template <int dim>
void StokesProblem<dim>::output_results(const unsigned int cycle) const
{
{
const ComponentSelectFunction<dim> pressure_mask(dim, dim + 1);
const ComponentSelectFunction<dim> velocity_mask(std::make_pair(0, dim),
dim + 1);
Vector<double> cellwise_errors(triangulation.n_active_cells());
QGauss<dim> quadrature(velocity_degree + 2);
VectorTools::integrate_difference(dof_handler,
locally_relevant_solution,
ExactSolution<dim>(),
cellwise_errors,
quadrature,
VectorTools::L2_norm,
&velocity_mask);
const double error_u_l2 =
VectorTools::compute_global_error(triangulation,
cellwise_errors,
VectorTools::L2_norm);
VectorTools::integrate_difference(dof_handler,
locally_relevant_solution,
ExactSolution<dim>(),
cellwise_errors,
quadrature,
VectorTools::L2_norm,
&pressure_mask);
const double error_p_l2 =
VectorTools::compute_global_error(triangulation,
cellwise_errors,
VectorTools::L2_norm);
pcout << "error: u_0: " << error_u_l2 << " p_0: " << error_p_l2
<< std::endl;
}
std::vector<std::string> solution_names(dim, "velocity");
solution_names.emplace_back("pressure");
std::vector<DataComponentInterpretation::DataComponentInterpretation>
data_component_interpretation(
dim, DataComponentInterpretation::component_is_part_of_vector);
data_component_interpretation.push_back(
DataComponentInterpretation::component_is_scalar);
DataOut<dim> data_out;
data_out.attach_dof_handler(dof_handler);
data_out.add_data_vector(locally_relevant_solution,
solution_names,
data_component_interpretation);
LA::MPI::BlockVector interpolated;
interpolated.reinit(owned_partitioning, MPI_COMM_WORLD);
VectorTools::interpolate(dof_handler, ExactSolution<dim>(), interpolated);
LA::MPI::BlockVector interpolated_relevant(owned_partitioning,
relevant_partitioning,
MPI_COMM_WORLD);
interpolated_relevant = interpolated;
{
std::vector<std::string> solution_names(dim, "ref_u");
solution_names.emplace_back("ref_p");
data_out.add_data_vector(interpolated_relevant,
solution_names,
data_component_interpretation);
}
Vector<float> subdomain(triangulation.n_active_cells());
for (unsigned int i = 0; i < subdomain.size(); ++i)
subdomain(i) = triangulation.locally_owned_subdomain();
data_out.add_data_vector(subdomain, "subdomain");
data_out.build_patches();
"./", "solution", cycle, mpi_communicator, 2);
}
template <int dim>
void StokesProblem<dim>::run()
{
#ifdef USE_PETSC_LA
pcout << "Running using PETSc." << std::endl;
#else
pcout << "Running using Trilinos." << std::endl;
#endif
const unsigned int n_cycles = 5;
for (unsigned int cycle = 0; cycle < n_cycles; ++cycle)
{
pcout << "Cycle " << cycle << ':' << std::endl;
if (cycle == 0)
make_grid();
else
refine_grid();
setup_system();
assemble_system();
solve();
if (Utilities::MPI::n_mpi_processes(mpi_communicator) <= 32)
{
TimerOutput::Scope t(computing_timer, "output");
output_results(cycle);
}
computing_timer.print_summary();
computing_timer.reset();
pcout << std::endl;
}
}
} // namespace Step55
int main(int argc, char *argv[])
{
try
{
using namespace dealii;
using namespace Step55;
Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);
StokesProblem<2> problem(2);
problem.run();
}
catch (std::exception &exc)
{
std::cerr << std::endl
<< std::endl
<< "----------------------------------------------------"
<< std::endl;
std::cerr << "Exception on processing: " << std::endl
<< exc.what() << std::endl
<< "Aborting!" << std::endl
<< "----------------------------------------------------"
<< std::endl;
return 1;
}
catch (...)
{
std::cerr << std::endl
<< std::endl
<< "----------------------------------------------------"
<< std::endl;
std::cerr << "Unknown exception!" << std::endl
<< "Aborting!" << std::endl
<< "----------------------------------------------------"
<< std::endl;
return 1;
}
return 0;
}
void run(const Iterator &begin, const std_cxx20::type_identity_t< Iterator > &end, Worker worker, Copier copier, const ScratchData &sample_scratch_data, const CopyData &sample_copy_data, const unsigned int queue_length, const unsigned int chunk_size)