The step-3 tutorial program

This tutorial depends on step-2.

Table of contents
  1. Introduction
  2. The commented program
  3. Results
  4. The plain program

Introduction

Note
The material presented here is also discussed in video lecture 10. (All video lectures are also available here.)

The basic set up of finite element methods

This is the first example where we actually use finite elements to compute something. We will solve a simple version of Poisson's equation with zero boundary values, but a nonzero right hand side:

\begin{align*} -\Delta u &= f \qquad\qquad & \text{in}\ \Omega, \\ u &= 0 \qquad\qquad & \text{on}\ \partial\Omega. \end{align*}

We will solve this equation on the square, \(\Omega=[-1,1]^2\), for which you've already learned how to generate a mesh in step-1 and step-2. In this program, we will also only consider the particular case \(f(\mathbf x)=1\) and come back to how to implement the more general case in the next tutorial program, step-4.

If you've learned about the basics of the finite element method, you will remember the steps we need to take to approximate the solution \(u\) by a finite dimensional approximation. Specifically, we first need to derive the weak form of the equation above, which we obtain by multiplying the equation by a test function \(\varphi\) from the left (we will come back to the reason for multiplying from the left and not from the right below) and integrating over the domain \(\Omega\):

\begin{align*} -\int_\Omega \varphi \Delta u = \int_\Omega \varphi f. \end{align*}

This can be integrated by parts:

\begin{align*} \int_\Omega \nabla\varphi \cdot \nabla u - \int_{\partial\Omega} \varphi \mathbf{n}\cdot \nabla u = \int_\Omega \varphi f. \end{align*}

The test function \(\varphi\) has to satisfy the same kind of boundary conditions (in mathematical terms: it needs to come from the tangent space of the set in which we seek the solution), so on the boundary \(\varphi=0\) and consequently the weak form we are looking for reads

\begin{align*} (\nabla\varphi, \nabla u) = (\varphi, f), \end{align*}

where we have used the common notation \((a,b)=\int_\Omega a\; b\). The problem then asks for a function \(u\) for which this statement is true for all test functions \(\varphi\) from the appropriate space (which here, given the zero boundary values, is the space \(H^1_0\)).

Of course we can't find such a function on a computer in the general case, and instead we seek an approximation \(u_h(\mathbf x)=\sum_j U_j \varphi_j(\mathbf x)\), where the \(U_j\) are unknown expansion coefficients we need to determine (the "degrees of freedom" of this problem), and \(\varphi_i(\mathbf x)\) are the finite element shape functions we will use. To define these shape functions, we need the following:

  • A mesh on which to define shape functions. You have already seen how to generate and manipulate the objects that describe meshes in step-1 and step-2.
  • A finite element that describes the shape functions we want to use on the reference cell (which in deal.II is always the unit interval \([0,1]\), the unit square \([0,1]^2\) or the unit cube \([0,1]^3\), depending on which space dimension you work in). In step-2, we had already used an object of type FE_Q<2>, which denotes the usual Lagrange elements that define shape functions by interpolation on support points. The simplest one is FE_Q<2>(1), which uses polynomial degree 1. In 2d, these are often referred to as bilinear, since they are linear in each of the two coordinates of the reference cell. (In 1d, they would be linear and in 3d tri-linear; however, in the deal.II documentation, we will frequently not make this distinction and simply always call these functions "linear".)
  • A DoFHandler object that enumerates all the degrees of freedom on the mesh, taking the reference cell description the finite element object provides as the basis. You've also already seen how to do this in step-2.
  • A mapping that tells how the shape functions on the real cell are obtained from the shape functions defined by the finite element class on the reference cell. By default, unless you explicitly say otherwise, deal.II will use a (bi-, tri-)linear mapping for this, so in most cases you don't have to worry about this step.

Through these steps, we now have a set of functions \(\varphi_i\), and we can define the weak form of the discrete problem: Find a function \(u_h\), i.e., find the expansion coefficients \(U_j\) mentioned above, so that

\begin{align*} (\nabla\varphi_i, \nabla u_h) = (\varphi_i, f), \qquad\qquad i=0\ldots N-1. \end{align*}

Note that we here follow the convention that everything is counted starting at zero, as common in C and C++. This equation can be rewritten as a linear system if you insert the representation \(u_h(\mathbf x)=\sum_j U_j \varphi_j(\mathbf x)\) and then observe that

\begin{align*} (\nabla\varphi_i, \nabla u_h) &= \left(\nabla\varphi_i, \nabla \Bigl[\sum_j U_j \varphi_j\Bigr]\right) \\ &= \sum_j \left(\nabla\varphi_i, \nabla \left[U_j \varphi_j\right]\right) \\ &= \sum_j \left(\nabla\varphi_i, \nabla \varphi_j \right) U_j. \end{align*}

With this, the problem reads: Find a vector \(U\) so that

\begin{align*} A U = F, \end{align*}

where the matrix \(A\) and the right hand side \(F\) are defined as

\begin{align*} A_{ij} &= (\nabla\varphi_i, \nabla \varphi_j), \\ F_i &= (\varphi_i, f). \end{align*}

Should we multiply by a test function from the left or from the right?

Before we move on with describing how these quantities can be computed, note that if we had multiplied the original equation from the right by a test function rather than from the left, then we would have obtained a linear system of the form

\begin{align*} U^T A = F^T \end{align*}

with a row vector \(F^T\). By transposing this system, this is of course equivalent to solving

\begin{align*} A^T U = F \end{align*}

which here is the same as above since \(A=A^T\). But in general the two are not the same, and in order to avoid any sort of confusion, experience has shown that simply getting into the habit of multiplying the equation from the left rather than from the right (as is often done in the mathematical literature) avoids a common class of errors, as the matrix is then automatically correct and does not need to be transposed when comparing theory and implementation. See step-9 for the first example in this tutorial where we have a non-symmetric bilinear form for which it makes a difference whether we multiply from the right or from the left.

Assembling the matrix and right hand side vector

Now we know what we need (namely: objects that hold the matrix and vectors, as well as ways to compute \(A_{ij},F_i\)), and we can look at what it takes to make that happen:

  • The object for \(A\) is of type SparseMatrix while those for \(U\) and \(F\) are of type Vector. We will see in the program below what classes are used to solve linear systems.
  • We need a way to form the integrals. In the finite element method, this is most commonly done using quadrature, i.e. the integrals are replaced by a weighted sum over a set of quadrature points on each cell. That is, we first split the integral over \(\Omega\) into integrals over all cells,

    \begin{align*} A_{ij} &= (\nabla\varphi_i, \nabla \varphi_j) = \sum_{K \in {\mathbb T}} \int_K \nabla\varphi_i \cdot \nabla \varphi_j, \\ F_i &= (\varphi_i, f) = \sum_{K \in {\mathbb T}} \int_K \varphi_i f, \end{align*}

    and then approximate each cell's contribution by quadrature:

    \begin{align*} A^K_{ij} &= \int_K \nabla\varphi_i \cdot \nabla \varphi_j \approx \sum_q \nabla\varphi_i(\mathbf x^K_q) \cdot \nabla \varphi_j(\mathbf x^K_q) w_q^K, \\ F^K_i &= \int_K \varphi_i f \approx \sum_q \varphi_i(\mathbf x^K_q) f(\mathbf x^K_q) w^K_q, \end{align*}

    where \(\mathbb{T} \approx \Omega\) is a Triangulation approximating the domain, \(\mathbf x^K_q\) is the \(q\)th quadrature point on cell \(K\), and \(w^K_q\) the \(q\)th quadrature weight. There are different parts to what is needed in doing this, and we will discuss them in turn next.
  • First, we need a way to describe the location \(\mathbf x_q^K\) of quadrature points and their weights \(w^K_q\). They are usually mapped from the reference cell in the same way as shape functions, i.e., implicitly using the MappingQ1 class or, if you explicitly say so, through one of the other classes derived from Mapping. The locations and weights on the reference cell are described by objects derived from the Quadrature base class. Typically, one chooses a quadrature formula (i.e. a set of points and weights) so that the quadrature exactly equals the integral in the matrix; this can be achieved because all factors in the integral are polynomial, and is done by Gaussian quadrature formulas, implemented in the QGauss class.
  • We then need something that can help us evaluate \(\varphi_i(\mathbf x^K_q)\) on cell \(K\). This is what the FEValues class does: it takes a finite element object to describe \(\varphi\) on the reference cell, a quadrature object to describe the quadrature points and weights, and a mapping object (or implicitly the MappingQ1 class) and provides values and derivatives of the shape functions on the real cell \(K\), as well as all sorts of other information needed for integration, at the quadrature points located on \(K\).

The process of computing the matrix and right hand side as a sum over all cells (and then a sum over quadrature points) is usually called assembling the linear system, or assembly for short, using the meaning of the word related to assembly line, meaning "the act of putting together a set of pieces, fragments, or elements".

FEValues really is the central class in the assembly process. One way you can view it is as follows: The FiniteElement and derived classes describe shape functions, i.e., infinite dimensional objects: functions have values at every point. We need this for theoretical reasons because we want to perform our analysis with integrals over functions. However, for a computer, this is a very difficult concept, since they can in general only deal with a finite amount of information, and so we replace integrals by sums over quadrature points that we obtain by mapping (the Mapping object) using points defined on a reference cell (the Quadrature object) onto points on the real cell. In essence, we reduce the problem to one where we only need a finite amount of information, namely shape function values and derivatives, quadrature weights, normal vectors, etc, exclusively at a finite set of points. The FEValues class is the one that brings the three components together and provides this finite set of information on a particular cell \(K\). You will see it in action when we assemble the linear system below.

It is noteworthy that all of this could also be achieved if you simply created these three objects yourself in an application program, and juggled the information yourself. However, this would neither be simpler (the FEValues class provides exactly the kind of information you actually need) nor faster: the FEValues class is highly optimized to only compute on each cell the particular information you need; if anything can be re-used from the previous cell, then it will do so, and there is a lot of code in that class to make sure things are cached wherever this is advantageous.
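
To make this concrete, here is a minimal, self-contained sketch (not part of this tutorial's code; names and the small mesh are made up for illustration) of how a finite element, a quadrature formula, and the implicit MappingQ1 are brought together by FEValues to yield the finite set of values one actually computes with on each cell:

  #include <deal.II/base/quadrature_lib.h>
  #include <deal.II/base/tensor.h>
  #include <deal.II/dofs/dof_handler.h>
  #include <deal.II/fe/fe_q.h>
  #include <deal.II/fe/fe_values.h>
  #include <deal.II/grid/grid_generator.h>
  #include <deal.II/grid/tria.h>
  #include <iostream>

  int main()
  {
    using namespace dealii;

    // A small mesh on [-1,1]^2, as in this tutorial:
    Triangulation<2> triangulation;
    GridGenerator::hyper_cube(triangulation, -1, 1);
    triangulation.refine_global(2);

    // The three ingredients: finite element, quadrature, and (implicitly) the mapping:
    const FE_Q<2> fe(1);
    DoFHandler<2> dof_handler(triangulation);
    dof_handler.distribute_dofs(fe);
    const QGauss<2> quadrature(fe.degree + 1);
    FEValues<2>     fe_values(fe,
                              quadrature,
                              update_values | update_gradients | update_JxW_values);

    // On each cell, FEValues provides shape function values, gradients, and the
    // products of Jacobian determinants and quadrature weights (JxW):
    for (const auto &cell : dof_handler.active_cell_iterators())
      {
        fe_values.reinit(cell);
        for (const unsigned int q : fe_values.quadrature_point_indices())
          for (const unsigned int i : fe_values.dof_indices())
            {
              const double       value    = fe_values.shape_value(i, q);
              const Tensor<1, 2> gradient = fe_values.shape_grad(i, q);
              const double       JxW      = fe_values.JxW(q);
              (void)value; (void)gradient; (void)JxW; // these enter the sums during assembly
            }
      }

    std::cout << "Queried FEValues on " << triangulation.n_active_cells()
              << " cells." << std::endl;
  }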

The final piece of this introduction is to mention that after a linear system is obtained, it is solved using an iterative solver and then postprocessed: we create an output file using the DataOut class that can then be visualized using one of the common visualization programs.

Note
The preceding overview of all the important steps of any finite element implementation has its counterpart in deal.II: The library can naturally be grouped into a number of "topics" that cover the basic concepts just outlined. You can access these topics through the "Topics" tab at the top of this page. An overview of the most fundamental groups of concepts is also available on the front page of the deal.II manual.

Solving the linear system

For a finite element program, the linear system we end up with here is relatively small: The matrix has size \(1089 \times 1089\), owing to the fact that the mesh we use is \(32\times 32\) and so there are \(33^2=1089\) vertices in the mesh. In many of the later tutorial programs, matrix sizes in the range of tens of thousands to hundreds of thousands will not be uncommon, and with codes such as ASPECT that build on deal.II, we regularly solve problems with more than a hundred million equations (albeit using parallel computers). In any case, even for the small system here, the matrix is much larger than what one typically encounters in an undergraduate or most graduate courses, and so the question arises how we can solve such linear systems.

The first method one typically learns for solving linear systems is Gaussian elimination. The problem with this method is that it requires a number of operations that is proportional to \(N^3\), where \(N\) is the number of equations or unknowns in the linear system – more specifically, the number of operations is \(\frac 23 N^3\), give or take a few. With \(N=1089\), this means that we would have to do around \(861\) million operations. This is a number that is quite feasible and it would take modern processors less than 0.1 seconds to do this. But it is clear that this isn't going to scale: If we have twenty times as many equations in the linear system (that is, twenty times as many unknowns), then it would already take 1000-10,000 seconds or on the order of an hour. Make the linear system another ten times larger, and it is clear that we can not solve it any more on a single computer.

One can rescue the situation somewhat by realizing that only a relatively small number of entries in the matrix are nonzero – that is, the matrix is sparse. Variations of Gaussian elimination can exploit this, making the process substantially faster; we will use one such method – implemented in the SparseDirectUMFPACK class – in step-29 for the first time, among several others that come after that. These variations of Gaussian elimination might get us to problem sizes on the order of 100,000 or 200,000, but not all that much beyond that.

Instead, what we will do here is take up an idea from 1952: the Conjugate Gradient method, or in short "CG". CG is an "iterative" solver in that it forms a sequence of vectors that converge to the exact solution; in fact, after \(N\) such iterations in the absence of roundoff errors it finds the exact solution if the matrix is symmetric and positive definite. The method was originally developed as another way to solve a linear system exactly, like Gaussian elimination, but as such it had few advantages and was largely forgotten for a few decades. But, when computers became powerful enough to solve problems of a size where Gaussian elimination doesn't work well any more (sometime in the 1980s), CG was rediscovered as people realized that it is well suited for large and sparse systems like the ones we get from the finite element method. This is because (i) the vectors it computes converge to the exact solution, and consequently we do not actually have to do all \(N\) iterations to find the exact solution as long as we're happy with reasonably good approximations; and (ii) it only ever requires matrix-vector products, which is very useful for sparse matrices because a sparse matrix has, by definition, only \({\cal O}(N)\) entries and so a matrix-vector product can be done with \({\cal O}(N)\) effort whereas it costs \(N^2\) operations to do the same for dense matrices. As a consequence, we can hope to solve linear systems with at most \({\cal O}(N^2)\) operations, and in many cases substantially fewer.

Finite element codes therefore almost always use iterative solvers such as CG for the solution of the linear systems, and we will do so in this code as well. (We note that the CG method is only usable for matrices that are symmetric and positive definite; for other equations, the matrix may not have these properties and we will have to use other variations of iterative solvers such as BiCGStab or GMRES that are applicable to more general matrices.)

An important component of these iterative solvers is that we specify the tolerance with which we want to solve the linear system – in essence, a statement about the error we are willing to accept in our approximate solution. The error in an approximate solution \(\tilde x\) obtained to the exact solution \(x\) of a linear system \(Ax=b\) is defined as \(\|x-\tilde x\|\), but this is a quantity we cannot compute because we don't know the exact solution \(x\). Instead, we typically consider the residual, defined as \(\|b-A\tilde x\|=\|A(x-\tilde x)\|\), as a computable measure. We then let the iterative solver compute more and more accurate solutions \(\tilde x\), until \(\|b-A\tilde x\|\le \tau\). A practical question is what value \(\tau\) should have. In most applications, setting

\begin{align*} \tau = 10^{-6} \|b\| \end{align*}

is a reasonable choice. The fact that we make \(\tau\) proportional to the size (norm) of \(b\) makes sure that our expectations of the accuracy in the solution are relative to the size of the solution. This makes sense: If we make the right hand side \(b\) ten times larger, then the solution \(x\) of \(Ax=b\) will also be ten times larger, and so will \(\tilde x\); we want the same number of accurate digits in \(\tilde x\) as before, which means that we should also terminate when the residual \(\|b-A\tilde x\|\) is ten times the original size – which is exactly what we get if we make \(\tau\) proportional to \(\|b\|\).

All of this will be implemented in the Step3::solve() function in this program. As you will see, it is quite simple to set up linear solvers with deal.II: The whole function will have only three lines.
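
As a small preview of how this choice of \(\tau\) will look in code (the full three lines of Step3::solve() appear further down; system_rhs is the right hand side vector \(b\) of our linear system), it is encoded in a SolverControl object:

  // Stop after at most 1000 iterations, or as soon as the residual norm has
  // dropped below 1e-6 times the l2 norm of the right hand side:
  SolverControl solver_control(1000, 1e-6 * system_rhs.l2_norm());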

About the implementation

Although this is the simplest possible equation you can solve using the finite element method, this program shows the basic structure of most finite element programs and also serves as the template that almost all of the following programs will essentially follow. Specifically, the main class of this program looks like this:

class Step3
{
public:
Step3 ();
void run ();
private:
void make_grid ();
void setup_system ();
void assemble_system ();
void solve ();
void output_results () const;
Triangulation<2> triangulation;
FE_Q<2> fe;
DoFHandler<2> dof_handler;
SparsityPattern sparsity_pattern;
SparseMatrix<double> system_matrix;
Vector<double> solution;
Vector<double> system_rhs;
};

This follows the object oriented programming mantra of data encapsulation, i.e. we do our best to hide almost all internal details of this class in private members that are not accessible to the outside.

Let's start with the member variables: These follow the building blocks we have outlined above in the bullet points, namely we need a Triangulation and a DoFHandler object, and a finite element object that describes the kinds of shape functions we want to use. The second group of objects relate to the linear algebra: the system matrix and right hand side as well as the solution vector, and an object that describes the sparsity pattern of the matrix. This is all this class needs (and the essentials that any solver for a stationary PDE requires) and that needs to survive throughout the entire program. In contrast to this, the FEValues object we need for assembly is only required throughout assembly, and so we create it as a local object in the function that does that and destroy it again at its end.

Secondly, let's look at the member functions. These, as well, already form the common structure that almost all following tutorial programs will use:

  • make_grid(): This is what one could call a preprocessing function. As its name suggests, it sets up the object that stores the triangulation. In later examples, it could also deal with boundary conditions, geometries, etc.
  • setup_system(): This then is the function in which all the other data structures are set up that are needed to solve the problem. In particular, it will initialize the DoFHandler object and correctly size the various objects that have to do with the linear algebra. This function is often separated from the preprocessing function above because, in a time dependent program, it may be called at least every few time steps whenever the mesh is adaptively refined (something we will see how to do in step-6). On the other hand, setting up the mesh itself in the preprocessing function above is done only once at the beginning of the program and is, therefore, separated into its own function.
  • assemble_system(): This, then is where the contents of the matrix and right hand side are computed, as discussed at length in the introduction above. Since doing something with this linear system is conceptually very different from computing its entries, we separate it from the following function.
  • solve(): This then is the function in which we compute the solution \(U\) of the linear system \(AU=F\). In the current program, this is a simple task since the matrix is so simple, but it will become a significant part of a program's size whenever the problem is not so trivial any more (see, for example, step-20, step-22, or step-31 once you've learned a bit more about the library).
  • output_results(): Finally, when you have computed a solution, you probably want to do something with it. For example, you may want to output it in a format that can be visualized, or you may want to compute quantities you are interested in: say, heat fluxes in a heat exchanger, air friction coefficients of a wing, maximum bridge loads, or simply the value of the numerical solution at a point. This function is therefore the place for postprocessing your solution.

All of this is held together by the single public function (other than the constructor), namely the run() function. It is the one that is called from the place where an object of this type is created, and it is the one that calls all the other functions in their proper order. Encapsulating this operation into the run() function, rather than calling all the other functions from main() makes sure that you can change how the separation of concerns within this class is implemented. For example, if one of the functions becomes too big, you can split it up into two, and the only places you have to be concerned about changing as a consequence are within this very same class, and not anywhere else.

As mentioned above, you will see this general structure — sometimes with variants in spelling of the functions' names, but in essentially this order of separation of functionality — again in many of the following tutorial programs.

A note on types

deal.II defines a number of integral types via aliases in namespace types. (In the previous sentence, the word "integral" is used as the adjective that corresponds to the noun "integer". It shouldn't be confused with the noun "integral" that represents the area or volume under a curve or surface. The adjective "integral" is widely used in the C++ world in contexts such as "integral type", "integral constant", etc.) In particular, in this program you will see types::global_dof_index in a couple of places: an integer type that is used to denote the global index of a degree of freedom, i.e., the index of a particular degree of freedom within the DoFHandler object that is defined on top of a triangulation (as opposed to the index of a particular degree of freedom within a particular cell). For the current program (as well as almost all of the tutorial programs), you will have a few thousand to maybe a few million unknowns globally (and, for \(Q_1\) elements, you will have 4 locally on each cell in 2d and 8 in 3d). Consequently, a data type that can store sufficiently large numbers for global DoF indices is unsigned int, given that it can store numbers between 0 and slightly more than 4 billion (on most systems, where integers are 32-bit). In fact, this is what types::global_dof_index is.

So, why not just use unsigned int right away? deal.II used to do this until version 7.3. However, deal.II supports very large computations (via the framework discussed in step-40) that may have more than 4 billion unknowns when spread across a few thousand processors. Consequently, there are situations where unsigned int is not sufficiently large and we need a 64-bit unsigned integral type. To make this possible, we introduced types::global_dof_index which by default is defined as simply unsigned int whereas it is possible to define it as unsigned long long int if necessary, by passing a particular flag during configuration (see the ReadMe file).

This covers the technical aspect. But there is also a documentation purpose: everywhere in the library and codes that are built on it, if you see a place using the data type types::global_dof_index, you immediately know that the quantity that is being referenced is, in fact, a global dof index. No such meaning would be apparent if we had just used unsigned int (which may also be a local index, a boundary indicator, a material id, etc.). Immediately knowing what a variable refers to also helps avoid errors: it's quite clear that there must be a bug if you see an object of type types::global_dof_index being assigned to a variable of type types::subdomain_id, even though they are both represented by unsigned integers and the compiler will, consequently, not complain.
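
To illustrate that last point with a small, purely hypothetical snippet (the variable names are made up; in a default build both aliases resolve to unsigned integer types, so the compiler indeed does not complain):

  types::global_dof_index dof_index = 42;
  types::subdomain_id subdomain_id = dof_index; // compiles silently, but is almost certainly a bug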

In more practical terms what the presence of this type means is that during assembly, we create a \(4\times 4\) matrix (in 2d, using a \(Q_1\) element) of the contributions of the cell we are currently sitting on, and then we need to add the elements of this matrix to the appropriate elements of the global (system) matrix. For this, we need to get at the global indices of the degrees of freedom that are local to the current cell, for which we will always use the following piece of the code:

cell->get_dof_indices (local_dof_indices);

where local_dof_indices is declared as

std::vector<types::global_dof_index> local_dof_indices (fe.n_dofs_per_cell());

The name of this variable might be a bit of a misnomer – it stands for "the global indices of those degrees of freedom locally defined on the current cell" – but variables that hold this information are universally named this way throughout the library.

Note
types::global_dof_index is not the only type defined in this namespace. Rather, there is a whole family, including types::subdomain_id, types::boundary_id, and types::material_id. All of these are aliases for integer data types but, as explained above, they are used throughout the library so that (i) the intent of a variable becomes more easily discerned, and (ii) so that it becomes possible to change the actual type to a larger one if necessary without having to go through the entire library and figure out whether a particular use of unsigned int corresponds to, say, a material indicator.

The commented program

Many new include files

These include files are already known to you. They declare the classes which handle triangulations and enumeration of degrees of freedom:

  #include <deal.II/grid/tria.h>
  #include <deal.II/dofs/dof_handler.h>

And this is the file in which the functions are declared that create grids:

  #include <deal.II/grid/grid_generator.h>
 

This file contains the description of the Lagrange interpolation finite element:

  #include <deal.II/fe/fe_q.h>
 

And this file is needed for the creation of sparsity patterns of sparse matrices, as shown in previous examples:

  #include <deal.II/dofs/dof_tools.h>
 

The next two files are needed for assembling the matrix using quadrature on each cell. The classes declared in them will be explained below:

  #include <deal.II/fe/fe_values.h>
  #include <deal.II/base/quadrature_lib.h>
 

The following three include files we need for the treatment of boundary values:

  #include <deal.II/base/function.h>
  #include <deal.II/numerics/vector_tools.h>
  #include <deal.II/numerics/matrix_tools.h>
 

We're now almost to the end. The second to last group of include files is for the linear algebra which we employ to solve the system of equations arising from the finite element discretization of the Laplace equation. We will use vectors and full matrices for assembling the system of equations locally on each cell, and transfer the results into a sparse matrix. We will then use a Conjugate Gradient solver to solve the problem, for which we need a preconditioner (in this program, we use the identity preconditioner which does nothing, but we need to include the file anyway):

  #include <deal.II/lac/vector.h>
  #include <deal.II/lac/full_matrix.h>
  #include <deal.II/lac/sparse_matrix.h>
  #include <deal.II/lac/dynamic_sparsity_pattern.h>
  #include <deal.II/lac/solver_cg.h>
  #include <deal.II/lac/precondition.h>
 

Finally, this is for output to a file and to the console:

  #include <deal.II/numerics/data_out.h>
  #include <fstream>
  #include <iostream>
 

...and this is to import the deal.II namespace into the global scope:

  using namespace dealii;
 

The Step3 class

Instead of the procedural programming of previous examples, we encapsulate everything into a class for this program. The class consists of functions which each perform certain aspects of a finite element program, a main function which controls what is done first and what is done next, and a list of member variables.

The public part of the class is rather short: it has a constructor and a function run that is called from the outside and acts as something like the main function: it coordinates which operations of this class shall be run in which order. Everything else in the class, i.e. all the functions that actually do anything, are in the private section of the class:

  class Step3
  {
  public:
  Step3();
 
  void run();
 

Then there are the member functions that mostly do what their names suggest and which have already been discussed in the introduction. Since they do not need to be called from outside, they are made private to this class.

  private:
  void make_grid();
  void setup_system();
  void assemble_system();
  void solve();
  void output_results() const;
 

And finally we have some member variables. There are variables describing the triangulation and the global numbering of the degrees of freedom (we will specify the exact polynomial degree of the finite element in the constructor of this class)...

  Triangulation<2> triangulation;
  const FE_Q<2> fe;
  DoFHandler<2> dof_handler;
 

...variables for the sparsity pattern and values of the system matrix resulting from the discretization of the Laplace equation...

  SparsityPattern sparsity_pattern;
  SparseMatrix<double> system_matrix;
 

...and variables which will hold the right hand side and solution vectors.

  Vector<double> solution;
  Vector<double> system_rhs;
  };
 

Step3::Step3

Here comes the constructor. It does little more than first specify that we want bilinear elements (denoted by the parameter to the finite element object, which indicates the polynomial degree), and then associate the dof_handler variable with the triangulation we use. (Note that the triangulation isn't set up with a mesh at all at the present time, but the DoFHandler doesn't care: it only wants to know which triangulation it will be associated with, and it only starts to care about an actual mesh once you try to distribute degrees of freedom on the mesh using the distribute_dofs() function.) All the other member variables of the Step3 class have a default constructor which does all we want.

  Step3::Step3()
  : fe(/* polynomial degree = */ 1)
  , dof_handler(triangulation)
  {}
 
 

Step3::make_grid

Now, the first thing we've got to do is to generate the triangulation on which we would like to do our computation and number each vertex with a degree of freedom. We have seen these two steps in step-1 and step-2 before, respectively.

This function does the first part, creating the mesh. We create the grid and refine all cells five times. Since the initial grid (which is the square \([-1,1] \times [-1,1]\)) consists of only one cell, the final grid has 32 times 32 cells, for a total of 1024.

Unsure whether 1024 is the correct number? We can check that by outputting the number of cells using the n_active_cells() function on the triangulation.

  void Step3::make_grid()
  {
  GridGenerator::hyper_cube(triangulation, -1, 1);
  triangulation.refine_global(5);
 
  std::cout << "Number of active cells: " << triangulation.n_active_cells()
  << std::endl;
  }
 
Note
We call the Triangulation::n_active_cells() function, rather than Triangulation::n_cells(). Here, active means the cells that aren't refined any further. We stress the adjective "active" since there are more cells, namely the parent cells of the finest cells, their parents, etc, up to the one cell which made up the initial grid. Of course, on the next coarser level, the number of cells is one quarter that of the cells on the finest level, i.e. 256, then 64, 16, 4, and 1. If you called triangulation.n_cells() instead in the code above, you would consequently get a value of 1365 instead. On the other hand, the number of cells (as opposed to the number of active cells) is not typically of much interest, so there is no good reason to print it.
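
To verify the number quoted in this note, one can simply add up the cells on all six levels of the mesh hierarchy:

\begin{align*} 1 + 4 + 16 + 64 + 256 + 1024 = \sum_{\ell=0}^{5} 4^\ell = \frac{4^6-1}{3} = 1365. \end{align*}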

Step3::setup_system

Next we enumerate all the degrees of freedom and set up matrix and vector objects to hold the system data. Enumerating is done by using DoFHandler::distribute_dofs(), as we have seen in the step-2 example. Since we use the FE_Q class and have set the polynomial degree to 1 in the constructor, i.e. bilinear elements, this associates one degree of freedom with each vertex. While we're at generating output, let us also take a look at how many degrees of freedom are generated:

  void Step3::setup_system()
  {
  dof_handler.distribute_dofs(fe);
  std::cout << "Number of degrees of freedom: " << dof_handler.n_dofs()
  << std::endl;

There should be one DoF for each vertex. Since we have a 32 times 32 grid, the number of DoFs should be 33 times 33, or 1089.

As we have seen in the previous example, we set up a sparsity pattern by first creating a temporary structure, tagging those entries that might be nonzero, and then copying the data over to the SparsityPattern object that can then be used by the system matrix.

  DynamicSparsityPattern dsp(dof_handler.n_dofs());
  DoFTools::make_sparsity_pattern(dof_handler, dsp);
  sparsity_pattern.copy_from(dsp);
 

Note that the SparsityPattern object does not hold the values of the matrix, it only stores the places where entries are. The entries themselves are stored in objects of type SparseMatrix, of which our variable system_matrix is one.

The distinction between sparsity pattern and matrix was made to allow several matrices to use the same sparsity pattern. This may not seem relevant here, but when you consider the size which matrices can have, and that it may take some time to build the sparsity pattern, this becomes important in large-scale problems if you have to store several matrices in your program.

  system_matrix.reinit(sparsity_pattern);
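
As an aside that is not part of this program: if we ever needed a second matrix with the same nonzero structure – say, a hypothetical mass matrix – it could be initialized with the very same SparsityPattern object, so that only the matrix values, not the description of where nonzero entries live, would be stored twice:

  SparseMatrix<double> mass_matrix; // hypothetical second matrix, not used in step-3
  mass_matrix.reinit(sparsity_pattern); // re-uses the pattern built above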
 

The last thing to do in this function is to set the sizes of the right hand side vector and the solution vector to the right values:

  solution.reinit(dof_handler.n_dofs());
  system_rhs.reinit(dof_handler.n_dofs());
  }
 

Step3::assemble_system

The next step is to compute the entries of the matrix and right hand side that form the linear system from which we compute the solution. This is the central function of each finite element program and we have discussed the primary steps in the introduction already.

The general approach to assemble matrices and vectors is to loop over all cells, and on each cell compute the contribution of that cell to the global matrix and right hand side by quadrature. The point to realize now is that we need the values of the shape functions at the locations of quadrature points on the real cell. However, both the finite element shape functions as well as the quadrature points are only defined on the reference cell. They are therefore of little help to us, and we will in fact hardly ever query information about finite element shape functions or quadrature points from these objects directly.

Rather, what is required is a way to map this data from the reference cell to the real cell. Classes that can do that are derived from the Mapping class, though one again often does not have to deal with them directly: many functions in the library can take a mapping object as argument, but when it is omitted they simply resort to the standard bilinear Q1 mapping. We will go this route, and not bother with it for the moment (we come back to this in step-10, step-11, and step-12).

So what we now have is a collection of three classes to deal with: finite element, quadrature, and mapping objects. That's too much, so there is one type of class that orchestrates information exchange between these three: the FEValues class. If given one instance of each of these three objects (or two, and an implicit linear mapping), it will be able to provide you with information about values and gradients of shape functions at quadrature points on a real cell.

Using all this, we will assemble the linear system for this problem in the following function:

  void Step3::assemble_system()
  {

Ok, let's start: we need a quadrature formula for the evaluation of the integrals on each cell. Let's take a Gauss formula with two quadrature points in each direction, i.e. a total of four points since we are in 2d. This quadrature formula integrates polynomials of degrees up to three exactly (in 1d). It is easy to check that this is sufficient for the present problem:

  const QGauss<2> quadrature_formula(fe.degree + 1);
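
To check that claim for the present situation: the cells of our mesh are squares, so the mapping from the reference cell is affine and its Jacobian is constant on each cell. For bilinear shape functions, each partial derivative is constant in one coordinate direction and linear in the other, so the integrand of a matrix entry,

\begin{align*} \nabla\varphi_i \cdot \nabla\varphi_j = \frac{\partial\varphi_i}{\partial x}\frac{\partial\varphi_j}{\partial x} + \frac{\partial\varphi_i}{\partial y}\frac{\partial\varphi_j}{\partial y}, \end{align*}

is a polynomial of degree at most two in each coordinate direction – well within what a two-point Gauss formula (exact up to degree three in each direction) can integrate exactly. The right hand side integrand \(\varphi_i \cdot 1\) is of even lower degree.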

And we initialize the object which we have briefly talked about above. It needs to be told which finite element we want to use, and the quadrature points and their weights (jointly described by a Quadrature object). As mentioned, we use the implied Q1 mapping, rather than specifying one ourselves explicitly. Finally, we have to tell it what we want it to compute on each cell: we need the values of the shape functions at the quadrature points (for the right hand side \((\varphi_i,f)\)), their gradients (for the matrix entries \((\nabla \varphi_i, \nabla \varphi_j)\)), and also the weights of the quadrature points and the determinants of the Jacobian transformations from the reference cell to the real cells.

This list of what kind of information we actually need is given as a collection of flags as the third argument to the constructor of FEValues. Since these values have to be recomputed, or updated, every time we go to a new cell, all of these flags start with the prefix update_ and then indicate what it actually is that we want updated. The flag to give if we want the values of the shape functions computed is update_values; for the gradients it is update_gradients. The determinants of the Jacobians and the quadrature weights are always used together, so only the products (Jacobians times weights, or short JxW) are computed; since we need them, we have to list update_JxW_values as well:

  FEValues<2> fe_values(fe,
  quadrature_formula,
  update_values | update_gradients | update_JxW_values);

The advantage of this approach is that we can specify what kind of information we actually need on each cell. It is easily understandable that this approach can significantly speed up finite element computations, compared to approaches where everything, including second derivatives, normal vectors to cells, etc are computed on each cell, regardless of whether they are needed or not.

Note
The syntax update_values | update_gradients | update_JxW_values is not immediately obvious to anyone not used to programming bit operations in C for years already. First, operator| is the bitwise or operator, i.e., it takes two integer arguments that are interpreted as bit patterns and returns an integer in which every bit is set for which the corresponding bit is set in at least one of the two arguments. For example, consider the operation 9|10. In binary, 9=0b1001 (where the prefix 0b indicates that the number is to be interpreted as a binary number) and 10=0b1010. Going through each bit and seeing whether it is set in one of the arguments, we arrive at 0b1001|0b1010=0b1011 or, in decimal notation, 9|10=11. The second piece of information you need to know is that the various update_* flags are all integers that have exactly one bit set. For example, assume that update_values=0b00001=1, update_gradients=0b00010=2, update_JxW_values=0b10000=16. Then update_values | update_gradients | update_JxW_values = 0b10011 = 19. In other words, we obtain a number that encodes a binary mask representing all of the operations you want to happen, where each operation corresponds to exactly one bit in the integer that, if equal to one, means that a particular piece should be updated on each cell and, if it is zero, means that we need not compute it. In other words, even though operator| is the bitwise OR operation, what it really represents is "I want this AND that AND the other". Such binary masks are quite common in C programming, perhaps less so in higher level languages like C++, but they serve the current purpose quite well.
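
To see the bitwise OR at work outside of deal.II, here is a tiny stand-alone program using the same hypothetical flag values as in the note above (they are not the actual numeric values deal.II assigns to the update_* flags):

  #include <iostream>

  int main()
  {
    // Hypothetical one-bit-per-flag values, as in the note above:
    const unsigned int flag_values    = 0b00001; //  1
    const unsigned int flag_gradients = 0b00010; //  2
    const unsigned int flag_JxW       = 0b10000; // 16

    // Bitwise OR merges them into a single mask that says
    // "values AND gradients AND JxW products are needed":
    const unsigned int mask = flag_values | flag_gradients | flag_JxW;
    std::cout << mask << std::endl; // prints 19, i.e. 0b10011
  }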

For use further down below, we define a shortcut for a value that will be used very frequently. Namely, an abbreviation for the number of degrees of freedom on each cell (since we are in 2d and degrees of freedom are associated with vertices only, this number is four, but we rather want to write the definition of this variable in a way that does not preclude us from later choosing a different finite element that has a different number of degrees of freedom per cell, or work in a different space dimension).

In general, it is a good idea to use a symbolic name instead of hard-coding these numbers even if you know them, since for example, you may want to change the finite element at some time. Changing the element would have to be done in a different function and it is easy to forget to make a corresponding change in another part of the program. It is better to not rely on your own calculations, but instead ask the right object for the information: Here, we ask the finite element to tell us about the number of degrees of freedom per cell and we will get the correct number regardless of the space dimension or polynomial degree we may have chosen elsewhere in the program.

The shortcut here, defined primarily to discuss the basic concept and not because it saves a lot of typing, will then make the following loops a bit more readable. You will see such shortcuts in many places in larger programs, and dofs_per_cell is one that is more or less the conventional name for this kind of object.

  const unsigned int dofs_per_cell = fe.n_dofs_per_cell();
 

Now, we said that we wanted to assemble the global matrix and vector cell-by-cell. We could write the results directly into the global matrix, but this is not very efficient since access to the elements of a sparse matrix is slow. Rather, we first compute the contribution of each cell in a small matrix with the degrees of freedom on the present cell, and only transfer them to the global matrix when the computations are finished for this cell. We do the same for the right hand side vector. So let's first allocate these objects (since these are local objects, all degrees of freedom couple with all others, and we should use a full matrix object rather than a sparse one for the local operations; everything will be transferred to a global sparse matrix later on):

  FullMatrix<double> cell_matrix(dofs_per_cell, dofs_per_cell);
  Vector<double> cell_rhs(dofs_per_cell);
 

When assembling the contributions of each cell, we do this with the local numbering of the degrees of freedom (i.e. the number running from zero through dofs_per_cell-1). However, when we transfer the result into the global matrix, we have to know the global numbers of the degrees of freedom. When we query them, we need a scratch (temporary) array for these numbers (see the discussion at the end of the introduction for the type, types::global_dof_index, used here):

  std::vector<types::global_dof_index> local_dof_indices(dofs_per_cell);
 

Now for the loop over all cells. We have seen before how this works for a triangulation. A DoFHandler has cell iterators that are exactly analogous to those of a Triangulation, but with extra information about the degrees of freedom for the finite element you're using. Looping over the active cells of a degree-of-freedom handler works the same as for a triangulation.

Note that we declare the type of the cell as const auto & instead of auto this time around. In step-1, we were modifying the cells of the triangulation by flagging them with refinement indicators. Here we're only examining the cells without modifying them, so it's good practice to declare cell as const in order to enforce this invariant.

  for (const auto &cell : dof_handler.active_cell_iterators())
  {

We are now sitting on one cell, and we would like the values and gradients of the shape functions to be computed, as well as the determinants of the Jacobian matrices of the mapping between reference cell and true cell, at the quadrature points. Since all these values depend on the geometry of the cell, we have to have the FEValues object re-compute them on each cell:

  fe_values.reinit(cell);
 

Next, reset the local cell's contributions to global matrix and global right hand side to zero, before we fill them:

  cell_matrix = 0;
  cell_rhs = 0;
 

Now it is time to start integration over the cell, which we do by looping over all quadrature points, which we will number by q_index.

  for (const unsigned int q_index : fe_values.quadrature_point_indices())
  {

First assemble the matrix: For the Laplace problem, the matrix on each cell is the integral over the gradients of shape function i and j. Since we do not integrate, but rather use quadrature, this is the sum over all quadrature points of the integrands times the determinant of the Jacobian matrix at the quadrature point times the weight of this quadrature point. You can get the gradient of shape function \(i\) at quadrature point with number q_index by using fe_values.shape_grad(i,q_index); this gradient is a 2-dimensional vector (in fact it is of type Tensor<1,dim>, with here dim=2) and the product of two such vectors is the scalar product, i.e. the product of the two shape_grad function calls is the dot product. This is in turn multiplied by the Jacobian determinant and the quadrature point weight (that one gets together by the call to FEValues::JxW() ). Finally, this is repeated for all shape functions \(i\) and \(j\):

  for (const unsigned int i : fe_values.dof_indices())
  for (const unsigned int j : fe_values.dof_indices())
  cell_matrix(i, j) +=
  (fe_values.shape_grad(i, q_index) * // grad phi_i(x_q)
  fe_values.shape_grad(j, q_index) * // grad phi_j(x_q)
  fe_values.JxW(q_index)); // dx
 

We then do the same thing for the right hand side. Here, the integral is over the shape function i times the right hand side function, which we choose to be the function with constant value one (more interesting examples will be considered in the following programs).

  for (const unsigned int i : fe_values.dof_indices())
  cell_rhs(i) += (fe_values.shape_value(i, q_index) * // phi_i(x_q)
  1. * // f(x_q)
  fe_values.JxW(q_index)); // dx
  }

Now that we have the contribution of this cell, we have to transfer it to the global matrix and right hand side. To this end, we first have to find out which global numbers the degrees of freedom on this cell have. Let's simply ask the cell for that information:

  cell->get_dof_indices(local_dof_indices);
 

Then again loop over all shape functions i and j and transfer the local elements to the global matrix. The global numbers can be obtained using local_dof_indices[i]:

  for (const unsigned int i : fe_values.dof_indices())
  for (const unsigned int j : fe_values.dof_indices())
  system_matrix.add(local_dof_indices[i],
  local_dof_indices[j],
  cell_matrix(i, j));
 

And again, we do the same thing for the right hand side vector.

  for (const unsigned int i : fe_values.dof_indices())
  system_rhs(local_dof_indices[i]) += cell_rhs(i);
  }
 
 

Now almost everything is set up for the solution of the discrete system. However, we have not yet taken care of boundary values (in fact, Laplace's equation without Dirichlet boundary values is not even uniquely solvable, since you can add an arbitrary constant to the discrete solution). We therefore have to do something about the situation.

For this, we first obtain a list of the degrees of freedom on the boundary and the value the shape function shall have there. For simplicity, we only interpolate the boundary value function, rather than projecting it onto the boundary. There is a function in the library which does exactly this: VectorTools::interpolate_boundary_values(). Its parameters are (omitting parameters for which default values exist and that we don't care about): the DoFHandler object to get the global numbers of the degrees of freedom on the boundary; the component of the boundary where the boundary values shall be interpolated; the boundary value function itself; and the output object.

The component of the boundary is meant as follows: in many cases, you may want to impose certain boundary values only on parts of the boundary. For example, you may have inflow and outflow boundaries in fluid dynamics, or clamped and free parts of bodies in deformation computations of bodies. Then you will want to denote these different parts of the boundary by indicators, and tell the interpolate_boundary_values function to only compute the boundary values on a certain part of the boundary (e.g. the clamped part, or the inflow boundary). By default, all boundaries have a 0 boundary indicator, unless otherwise specified. (For example, many functions in namespace GridGenerator specify otherwise.) If sections of the boundary have different boundary conditions, you have to number those parts with different boundary indicators. The function call below will then only determine boundary values for those parts of the boundary for which the boundary indicator is in fact the zero specified as the second argument.

The function describing the boundary values is an object of type Function or of a derived class. One of the derived classes is Functions::ZeroFunction, which describes (not unexpectedly) a function which is zero everywhere. We create such an object in-place and pass it to the VectorTools::interpolate_boundary_values() function.

Finally, the output object is a list of pairs of global degree of freedom numbers (i.e. the number of the degrees of freedom on the boundary) and their boundary values (which are zero here for all entries). This mapping of DoF numbers to boundary values is done by the std::map class.

  std::map<types::global_dof_index, double> boundary_values;
  VectorTools::interpolate_boundary_values(dof_handler,
  types::boundary_id(0),
  Functions::ZeroFunction<2>(),
  boundary_values);

Now that we got the list of boundary DoFs and their respective boundary values, let's use them to modify the system of equations accordingly. This is done by the following function call:

  MatrixTools::apply_boundary_values(boundary_values,
  system_matrix,
  solution,
  system_rhs);
  }
 
 

Step3::solve

The following function solves the discretized equation. As discussed in the introduction, we want to use an iterative solver to do this, specifically the Conjugate Gradient (CG) method.

The way to do this in deal.II is a three-step process:

  • First, we need to have an object that knows how to tell the CG algorithm when to stop. This is done by using a SolverControl object, and as stopping criterion we say: stop after a maximum of 1000 iterations (which is far more than is needed for 1089 variables; see the results section to find out how many were really used), and stop if the norm of the residual is below \(\tau=10^{-6}\|\mathbf b\|\) where \(\mathbf b\) is the right hand side vector. In practice, this latter criterion will be the one which stops the iteration.
  • Then we need the solver itself. The template parameter to the SolverCG class is the type of the vectors we are using.
  • The last step is to actually solve the system of equations. The CG solver takes as arguments the components of the linear system \(Ax=b\) (in the order in which they appear in this equation), and a preconditioner as the fourth argument. We don't feel ready to delve into preconditioners yet, so we tell it to use the identity operation as preconditioner. Later tutorial programs will spend significant amount of time and space on constructing better preconditioners.

At the end of this process, the solution variable contains the nodal values of the solution function. As a last step, the function outputs how many Conjugate Gradient iterations it took to solve the linear system.

  void Step3::solve()
  {
  SolverControl solver_control(1000, 1e-6 * system_rhs.l2_norm());
  SolverCG<Vector<double>> solver(solver_control);
  solver.solve(system_matrix, solution, system_rhs, PreconditionIdentity());
 
  std::cout << solver_control.last_step()
  << " CG iterations needed to obtain convergence." << std::endl;
  }
 
 

Step3::output_results

The last part of a typical finite element program is to output the results and maybe do some postprocessing (for example compute the maximal stress values at the boundary, or the average flux across the outflow, etc). We have no such postprocessing here, but we would like to write the solution to a file.

  void Step3::output_results() const
  {

To write the output to a file, we need an object which knows about output formats and the like. This is the DataOut class, and we need an object of that type:

  DataOut<2> data_out;

Now we have to tell it where to take the values from which it shall write. We tell it which DoFHandler object to use, and the solution vector (and the name by which the solution variable shall appear in the output file). If we had more than one vector which we would like to look at in the output (for example right hand sides, errors per cell, etc) we would add them as well:

  data_out.attach_dof_handler(dof_handler);
  data_out.add_data_vector(solution, "solution");

After the DataOut object knows which data it is to work on, we have to tell it to process them into something the back ends can handle. The reason is that we have separated the frontend (which knows about how to treat DoFHandler objects and data vectors) from the back end (which knows many different output formats) and use an intermediate data format to transfer data from the front- to the backend. The data is transformed into this intermediate format by the following function:

  data_out.build_patches();
 

Now we have everything in place for the actual output. Just open a file and write the data into it, using VTK format (there are many other functions in the DataOut class we are using here that can write the data in postscript, AVS, GMV, Gnuplot, or some other file formats):

  const std::string filename = "solution.vtk";
  std::ofstream output(filename);
  data_out.write_vtk(output);
  std::cout << "Output written to " << filename << std::endl;
  }
 
 

Step3::run

Finally, the last function of this class is the main function which calls all the other functions of the Step3 class. The order in which this is done resembles the order in which most finite element programs work. Since the names are mostly self-explanatory, there is not much to comment about:

  void Step3::run()
  {
  make_grid();
  setup_system();
  assemble_system();
  solve();
  output_results();
  }
 
 

The main function

This is the main function of the program. Since the concept of a main function is mostly a remnant of the time before object-oriented programming in C++, it often does not do much more than create an object of the top-level class and call its principal function.

  int main()
  {
  Step3 laplace_problem;
  laplace_problem.run();
 
  return 0;
  }

Results

The output of the program looks as follows:

Number of active cells: 1024
Number of degrees of freedom: 1089
36 CG iterations needed to obtain convergence.
Output written to solution.vtk

The last line is the output we generated at the bottom of the output_results() function: The program generated the file solution.vtk, which is in the VTK format that is widely used by many visualization programs today – including the two heavy-weights VisIt and Paraview, the most commonly used programs for this purpose.

Using VisIt, it is not very difficult to generate a picture of the solution like this:

Visualization of the solution of step-3

It shows both the solution and the mesh, elevated above the \(x\)- \(y\) plane based on the value of the solution at each point. Of course the solution here is not particularly exciting, but that is a result of both what the Laplace equation represents and the right hand side \(f(\mathbf x)=1\) we have chosen for this program: The Laplace equation describes (among many other uses) the vertical deformation of a membrane subject to an external (also vertical) force. In the current example, the membrane's borders are clamped to a square frame with no vertical variation; a constant force density will therefore intuitively lead to a membrane that simply bulges upward – like the one shown above.

VisIt and Paraview both allow playing with various kinds of visualizations of the solution. Several video lectures show how to use these programs. See also video lecture 11, video lecture 32.

Possibilities for extensions

If you want to play around a little bit with this program, here are a few suggestions:

  • Change the geometry and mesh: In the program, we have generated a square domain and mesh by using the GridGenerator::hyper_cube() function. However, the GridGenerator has a good number of other functions as well. Try an L-shaped domain, a ring, or other domains you find there.
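
    For example, a minimal sketch of switching to an L-shaped domain (this is not part of the original program; it simply replaces the GridGenerator::hyper_cube() call in Step3::make_grid() and leaves the rest of the program unchanged):

    GridGenerator::hyper_L(triangulation, -1., 1.);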

  • Change the boundary condition: The code uses the Functions::ZeroFunction function to generate zero boundary conditions. However, you may want to try non-zero constant boundary values using Functions::ConstantFunction<2>(1) instead of Functions::ZeroFunction<2>() to have unit Dirichlet boundary values. More exotic functions are described in the documentation of the Functions namespace, and you may pick one to describe your particular boundary values.

  • Modify the type of boundary condition: Presently, what happens is that we use Dirichlet boundary values all around, since the default is that all boundary parts have boundary indicator zero, and then we tell the VectorTools::interpolate_boundary_values() function to interpolate boundary values to zero on all boundary components with indicator zero.

    We can change this behavior if we assign parts of the boundary different indicators. For example, try this immediately after calling GridGenerator::hyper_cube():

    triangulation.begin_active()->face(0)->set_boundary_id(1);

    What this does is the following: it first asks the triangulation to return an iterator that points to the first active cell. Of course, this being the coarse mesh for the triangulation of a square, the triangulation has only a single cell at this moment, and it is active. Next, we ask the cell to return an iterator to its first face, and then we ask the face to reset the boundary indicator of that face to 1. What then follows is this: When the mesh is refined, faces of child cells inherit the boundary indicator of their parents, i.e., even on the finest mesh, the faces on one side of the square have boundary indicator 1. Later, when we get to interpolating boundary conditions, the VectorTools::interpolate_boundary_values() call will only produce boundary values for those faces that have zero boundary indicator, and leave alone those faces that have a different boundary indicator. What this then does is to impose Dirichlet boundary conditions on the former, and homogeneous Neumann conditions on the latter (i.e., zero normal derivative of the solution, unless one adds additional terms to the right hand side of the variational equality that deal with potentially non-zero Neumann conditions). You will see this if you run the program.

    An alternative way to change the boundary indicator is to label the boundaries based on the Cartesian coordinates of the face centers. For example, we can label all of the faces along the top and bottom boundaries with boundary indicator 1 by checking whether the face centers' y-coordinates are within a tolerance (here 1e-12) of -1 or 1. Try this immediately after calling GridGenerator::hyper_cube(), as before:

    for (auto &face : triangulation.active_face_iterators())
      if (face->at_boundary())
        if (std::fabs(face->center()(1) - (-1.0)) < 1e-12 ||
            std::fabs(face->center()(1) - (1.0)) < 1e-12)
          face->set_boundary_id(1);

    Although this code is a bit longer than before, it is useful for complex geometries, as it does not require knowledge of face labels.

  • A slight variation of the last point would be to set different boundary values as above, but then use a different boundary value function for boundary indicator one. In practice, what you have to do is to add a second call to interpolate_boundary_values for boundary indicator one:
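    Such a second call might look like the following (a sketch: it reuses the boundary_values map that Step3::assemble_system() already sets up, together with the Functions::ConstantFunction class mentioned above):

    VectorTools::interpolate_boundary_values(dof_handler,
                                             types::boundary_id(1),
                                             Functions::ConstantFunction<2>(1.),
                                             boundary_values);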

    If you have this call immediately after the first one to this function, then it will interpolate boundary values on faces with boundary indicator 1 to the unit value, and merge these interpolated values with those previously computed for boundary indicator 0. The result will be that we will get discontinuous boundary values, zero on three sides of the square, and one on the fourth.

  • Use triangles: As mentioned in the results section of step-1, for historical reasons, almost all tutorial programs for deal.II are written using quadrilateral or hexahedral meshes. But deal.II also supports triangular and tetrahedral meshes. So a good experiment would be to replace the mesh used here by a triangular mesh.

    This is almost trivial. First, as discussed in step-1, we may want to start with the quadrilateral mesh we are already creating, and then convert it into a triangular one. You can do that by replacing the first line of Step3::make_grid() by the following code:

    Triangulation<2> triangulation_quad;
    GridGenerator::hyper_cube(triangulation_quad, -1, 1);
    GridGenerator::convert_hypercube_to_simplex_mesh(triangulation_quad,
                                                     triangulation);

    The GridGenerator::convert_hypercube_to_simplex_mesh() function replaces each quadrilateral by eight triangles with half the diameter of the original quadrilateral; as a consequence, the resulting mesh is substantially finer and one might expect that the solution is consequently more accurate (but also has many more degrees of freedom). That is a question you can explore with the techniques discussed in the "Results" section of step-4, but that goes beyond what we want to demonstrate here.

    If you run this program, you will run into an error message that will look something like this:

    --------------------------------------------------------
    An error occurred in line <2633> of file </home/bangerth/p/deal.II/1/dealii/include/deal.II/dofs/dof_accessor.templates.h> in function
    const ::FiniteElement<dimension_, space_dimension_>& ::DoFCellAccessor<dim, spacedim, lda>::get_fe() const [with int dimension_ = 2; int space_dimension_ = 2; bool level_dof_access = false]
    The violated condition was:
    this->reference_cell() == fe.reference_cell()
    Additional information:
    The reference-cell type used on this cell (Tri) does not match the
    reference-cell type of the finite element associated with this cell
    (Quad). Did you accidentally use simplex elements on hypercube meshes
    (or the other way around), or are you using a mixed mesh and assigned
    a simplex element to a hypercube cell (or the other way around) via
    the active_fe_index?

    It is worth carefully reading the error message. It doesn't just state that there is an error, but also how it may have arisen. Specifically, it asks whether we are using a finite element for simplex meshes (in 2d, simplices are triangles) with a hypercube mesh (in 2d, hypercubes are quadrilaterals), or the other way around.

    Of course, this is exactly what we are doing, though this may perhaps not be clear to you. But if you look up the documentation, you will find that the FE_Q element we use in the main class can only be used on hypercube meshes; what we want to use instead now that we are using a simplex mesh is the FE_SimplexP class that is the equivalent to FE_Q for simplex cells. (To do this, you will also have to add #include <deal.II/fe/fe_simplex_p.h> at the top of the file.)

    The last thing you need to change (which at the time of writing is unfortunately not prompted by getting an error message) is that when we integrate, we need to use a quadrature formula that is appropriate for triangles. This is done by replacing QGauss with QGaussSimplex in the code.
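
    Taken together, the changes might look like the following sketch (it assumes the member variable and quadrature declarations used in the plain program at the end of this page):

    #include <deal.II/fe/fe_simplex_p.h>

    // member of class Step3, replacing the FE_Q<2> element:
    const FE_SimplexP<2> fe;

    // in Step3::assemble_system(), replacing the QGauss<2> quadrature:
    const QGaussSimplex<2> quadrature_formula(fe.degree + 1);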

    With all of these steps, you then get the following solution:

    Visualization of the solution of step-3 using triangles

  • Observe convergence: We will only discuss computing errors in norms in step-7, but it is easy to check that computations converge already here. For example, we could evaluate the value of the solution in a single point and compare the value for different numbers of global refinement (the number of global refinement steps is set in Step3::make_grid above). To evaluate the solution at a point, say at \((\frac 13, \frac 13)\), we could add the following code to the Step3::output_results function:

    std::cout << "Solution at (1/3,1/3): "
    << VectorTools::point_value(dof_handler, solution,
    Point<2>(1./3, 1./3))
    << std::endl;

    For 1 through 9 global refinement steps, we then get the following sequence of point values:

    # of refinements \(u_h(\frac 13,\frac13)\)
    1 0.166667
    2 0.227381
    3 0.237375
    4 0.240435
    5 0.241140
    6 0.241324
    7 0.241369
    8 0.241380
    9 0.241383

    By noticing that the difference between each two consecutive values reduces by about a factor of 4, we can conjecture that the "correct" value may be \(u(\frac 13, \frac 13)\approx 0.241384\). In fact, if we assumed this to be the correct value, we could show that the sequence above indeed shows \({\cal O}(h^2)\) convergence — theoretically, the convergence order should be \({\cal O}(h^2 |\log h|)\) but the symmetry of the domain and the mesh may lead to the better convergence order observed.
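
    To see why the factor of four suggests this, assume (ignoring the logarithmic factor just mentioned) that the error at the evaluation point behaves like

    \begin{align*} u_h\left(\frac 13,\frac 13\right) \approx u\left(\frac 13,\frac 13\right) + C h^2. \end{align*}

    Then the difference between the values computed on two consecutive meshes is

    \begin{align*} u_{h/2}\left(\frac 13,\frac 13\right) - u_{h}\left(\frac 13,\frac 13\right) \approx C\left(\frac{h^2}{4} - h^2\right) = -\frac 34 C h^2, \end{align*}

    which shrinks by a factor of four each time the mesh size is halved, exactly as observed in the table. Eliminating \(C h^2\) from these two relations yields the extrapolated value \(u\left(\frac 13,\frac 13\right) \approx u_{h/2} + \frac 13 \left(u_{h/2}-u_{h}\right)\), which applied to the last two table entries gives approximately the conjectured value 0.241384.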

    A slight variant of this would be to repeat the test with quadratic elements. All you need to do is to set the polynomial degree of the finite element to two in the constructor Step3::Step3.
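
    In terms of the plain program shown at the end of this page, this change would (as a sketch) amount to

    Step3::Step3()
      : fe(/* polynomial degree = */ 2)
      , dof_handler(triangulation)
    {}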

  • Convergence of the mean: A different way to see that the solution actually converges (to something — we can't tell whether it's really the correct value!) is to compute the mean of the solution. To this end, add the following code to Step3::output_results:

    std::cout << "Mean value: "
              << VectorTools::compute_mean_value(dof_handler,
                                                 QGauss<2>(fe.degree + 1),
                                                 solution,
                                                 0)
              << std::endl;

    The documentation of the function explains what the second and fourth parameters mean, while the first and third should be obvious. Doing the same study again where we change the number of global refinement steps, we get the following result:

    # of refinements \(\int_\Omega u_h(x)\; dx\)
    0 0.09375000
    1 0.12790179
    2 0.13733440
    3 0.13976069
    4 0.14037251
    5 0.14052586
    6 0.14056422
    7 0.14057382
    8 0.14057622

    Again, the difference between two adjacent values goes down by about a factor of four, indicating convergence as \({\cal O}(h^2)\).

Using HDF5 to output the solution and additional data

HDF5 is a commonly used format that can be read by many scripting languages (e.g. R or Python). It is not difficult to get deal.II to produce some HDF5 files that can then be used in external scripts to postprocess some of the data generated by this program. Here are some ideas on what is possible.

Changing the output to .h5

To fully make use of the automation, we first need to introduce a private member variable unsigned int n_refinement_steps for the number of global refinement steps, which will be used for the output filename. In make_grid() we then replace triangulation.refine_global(5); with

n_refinement_steps = 5;
triangulation.refine_global(n_refinement_steps);
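
The new member variable itself could be declared, for example, in the private section of the Step3 class (a sketch; the exact placement is up to you):

unsigned int n_refinement_steps;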

The deal.II library has two different HDF5 bindings, one in the HDF5 namespace (for interfacing to general-purpose data files) and another one in DataOut (specifically for writing files for the visualization of solutions). Although the deal.II HDF5 binding supports both serial and MPI-parallel use, the HDF5 DataOut binding only supports parallel output. For this reason we need to initialize an MPI communicator with only one processor. This is done by adding the following code:

int main(int argc, char* argv[])
{
Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);
...
}

Next we change the Step3::output_results() output routine as described in the DataOutBase namespace documentation:

const std::string filename_h5 = "solution_" + std::to_string(n_refinement_steps) + ".h5";
DataOutBase::DataOutFilterFlags flags(true, true); // filter duplicate vertices, write HDF5/XDMF-suitable output
DataOutBase::DataOutFilter data_filter(flags);
data_out.write_filtered_data(data_filter);
data_out.write_hdf5_parallel(data_filter, filename_h5, MPI_COMM_WORLD);

The resulting file can then be visualized just like the VTK file that the original version of the tutorial produces; but, since HDF5 is a more general file format, it can also easily be processed in scripting languages for other purposes.

Adding the point value and the mean (see extension above) into the .h5 file

After outputting the solution, the file can be opened again to include more datasets. This allows us to keep all the necessary information of our experiment in a single result file, which can then be read and processed by some postprocessing script. (Have a look at HDF5::Group::write_dataset() for further information on the possible output options.)

To make this happen, we first include the necessary header into our file:
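
// The HDF5 namespace used below is declared in this header (see the
// HDF5 namespace documentation for details):
#include <deal.II/base/hdf5.h>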

Adding the following lines to the end of our output routine adds the information about the value of the solution at a particular point, as well as the mean value of the solution, to our HDF5 file:

HDF5::File data_file(filename_h5, HDF5::File::FileAccessMode::open, MPI_COMM_WORLD);
Vector<double> point_value(1);
point_value[0] = VectorTools::point_value(dof_handler, solution,
Point<2>(1./3, 1./3));
data_file.write_dataset("point_value", point_value);
Vector<double> mean_value(1);
mean_value[0] = VectorTools::compute_mean_value(dof_handler,
QGauss<2>(fe.degree + 1),
solution, 0);
data_file.write_dataset("mean_value",mean_value);

Using R and ggplot2 to generate plots

Note
Alternatively, one could use the Python code in the next subsection.

The data put into HDF5 files above can then be used from scripting languages for further postprocessing. In the following, let us show how this can, in particular, be done with the R programming language, a widely used language in statistical data analysis. (Similar things can also be done in Python, for example.) If you are unfamiliar with R and ggplot2 you could check out the data carpentry course on R here. Furthermore, since most search engines struggle with searches of the form "R + topic", we recommend using the specialized service RSeek instead.

The most prominent difference between R and other languages is that the assignment operator (a = 5) is typically written as a <- 5. As the latter is considered standard we will use it in our examples as well. To open the .h5 file in R you have to install the rhdf5 package, which is part of the Bioconductor project.

First we will include all necessary packages and have a look at how the data is structured in our file.

library(rhdf5) # library for handling HDF5 files
library(ggplot2) # main plotting library
library(grDevices) # needed for output to PDF
library(viridis) # contains good colormaps for sequential data
refinement <- 5
h5f <- H5Fopen(paste("solution_",refinement,".h5",sep=""))
print(h5f)

This gives the following output

HDF5 FILE
name /
filename
name otype dclass dim
0 cells H5I_DATASET INTEGER 4 x 1024
1 mean_value H5I_DATASET FLOAT 1
2 nodes H5I_DATASET FLOAT 2 x 1089
3 point_value H5I_DATASET FLOAT 1
4 solution H5I_DATASET FLOAT 1 x 1089

The datasets can be accessed by h5f$name. The function dim(h5f$cells) gives us the dimensions of the matrix that is used to store our cells. We can see the following three matrices, as well as the two additional data points we added.

  • cells: a 4x1024 matrix that stores the (C++) vertex indices for each cell
  • nodes: a 2x1089 matrix storing the position values (x,y) for our cell vertices
  • solution: a 1x1089 matrix storing the values of our solution at each vertex

Now we can use this data to generate various plots. Plotting with ggplot2 usually splits into two steps. At first the data needs to be manipulated and added to a data.frame. After that, a ggplot object is constructed and manipulated by adding plot elements to it.

nodes and cells contain all the information we need to plot our grid. The following code wraps all the data into one dataframe for plotting our grid:

# Counting in R starts at 1 instead of 0, so we need to increment all
# vertex indices by one:
cell_ids <- h5f$cells+1
# Store the x and y positions of each vertex in one big vector in a
# cell by cell fashion (every 4 entries belong to one cell):
cells_x <- h5f$nodes[1,][cell_ids]
cells_y <- h5f$nodes[2,][cell_ids]
# Construct a vector that stores the matching cell by cell grouping
# (1,1,1,1,2,2,2,2,...):
groups <- rep(1:ncol(cell_ids),each=4)
# Finally put everything into one dataframe:
meshdata <- data.frame(x = cells_x, y = cells_y, id = groups)

With the finished dataframe we have everything we need to plot our grid:

pdf (paste("grid_",refinement,".pdf",sep=""),width = 5,height = 5) # Open new PDF file
plt <- ggplot(meshdata,aes(x=x,y=y,group=id)) # Construction of our plot
# object, at first only data
plt <- plt + geom_polygon(fill="white",colour="black") # Actual plotting of the grid as polygons
plt <- plt + ggtitle(paste("grid at refinement level #",refinement))
print(plt) # Show the current state of the plot/add it to the pdf
dev.off() # Close PDF file

The contents of this file then look as follows (not very exciting, but you get the idea):

Grid after 5 refinement steps of step-3

We can also visualize the solution itself, and this is going to look more interesting. To make a 2D pseudocolor plot of our solution we will use geom_raster. This function needs a structured grid, i.e. uniform in x and y directions. Luckily our data at this point is structured in the right way. The following code plots a pseudocolor representation of our surface into a new PDF:

pdf (paste("pseudocolor_",refinement,".pdf",sep=""),width = 5,height = 4.2) # Open new PDF file
colordata <- data.frame(x = h5f$nodes[1,], y = h5f$nodes[2,], solution = h5f$solution[1,])
plt <- ggplot(colordata,aes(x=x,y=y,fill=solution))
plt <- plt + geom_raster(interpolate=TRUE)
plt <- plt + scale_fill_viridis()
plt <- plt + ggtitle(paste("solution at refinement level #",refinement))
print(plt)
dev.off()
H5Fclose(h5f) # Close the HDF5 file

This is now going to look as follows:

Solution after 5 refinement steps of step-3

For plotting the convergence curves we need to re-run the C++ code multiple times with different values for n_refinement_steps starting from 1. Since every file only contains a single data point we need to loop over them and concatenate the results into a single vector.

n_ref <- 8 # Maximum refinement level for which results are existing
# First we initiate all vectors with the results of the first level
h5f <- H5Fopen("solution_1.h5")
dofs <- dim(h5f$solution)[2]
mean <- h5f$mean_value
point <- h5f$point_value
H5Fclose(h5f)
for (reflevel in 2:n_ref)
{
h5f <- H5Fopen(paste("solution_",reflevel,".h5",sep=""))
dofs <- c(dofs,dim(h5f$solution)[2])
mean <- c(mean,h5f$mean_value)
point <- c(point,h5f$point_value)
H5Fclose(h5f)
}

As we are not interested in the values themselves but rather in the error compared to an "exact" solution, we will assume our highest refinement level to be that solution and omit it from the data.

# Calculate the error w.r.t. our maximum refinement step
mean_error <- abs(mean[1:(n_ref-1)]-mean[n_ref])
point_error <- abs(point[1:(n_ref-1)]-point[n_ref])
# Remove the highest value from our DoF data
dofs <- dofs[1:(n_ref-1)]
convdata <- data.frame(dofs = dofs, mean_value= mean_error, point_value = point_error)

Now we have all the data available to generate our plots. It is often useful to plot errors on a log-log scale, which is accomplished in the following code:

pdf (paste("convergence.pdf",sep=""),width = 5,height = 4.2)
plt <- ggplot(convdata,mapping=aes(x = dofs, y = mean_value))
plt <- plt+geom_line()
plt <- plt+labs(x="#DoFs",y = "mean value error")
plt <- plt+scale_x_log10()+scale_y_log10()
print(plt)
plt <- ggplot(convdata,mapping=aes(x = dofs, y = point_value))
plt <- plt+geom_line()
plt <- plt+labs(x="#DoFs",y = "point value error")
plt <- plt+scale_x_log10()+scale_y_log10()
print(plt)
dev.off()

This results in the following plot that shows how the errors in the mean value and the solution value at the chosen point nicely converge to zero:

Using Python to generate plots

In this section we discuss the postprocessing of the data stored in HDF5 files using the Python programming language. The necessary packages to import are

import numpy as np # to work with multidimensional arrays
import h5py # to work with HDF5 files
import pandas as pd # for data frames
import matplotlib.pyplot as plt # plotting
from matplotlib.patches import Polygon
from scipy.interpolate import griddata # interpolation function
from matplotlib import cm # for colormaps

The HDF5 solution file corresponding to refinement = 5 can be opened as

refinement = 5
filename = "solution_%d.h5" % refinement
file = h5py.File(filename, "r")

The following prints out the tables in the HDF5 file

for key, value in file.items():
    print(key, " : ", value)

which prints out

cells : <HDF5 dataset "cells": shape (1024, 4), type "<u4">
mean_value : <HDF5 dataset "mean_value": shape (1,), type "<f8">
nodes : <HDF5 dataset "nodes": shape (1089, 2), type "<f8">
point_value : <HDF5 dataset "point_value": shape (1,), type "<f8">
solution : <HDF5 dataset "solution": shape (1089, 1), type "<f8">

There are \((32+1)\times(32+1) = 1089\) nodes. The coordinates of these nodes \((x,y)\) are stored in the table nodes in the HDF5 file. There are a total of \(32\times 32 = 1024\) cells. The nodes which make up each cell are marked in the table cells in the HDF5 file.

Next, we extract the data into multidimensional arrays

nodes = np.array(file["/nodes"])
cells = np.array(file["/cells"])
solution = np.array(file["/solution"])
x, y = nodes.T

The following stores the \(x\) and \(y\) coordinates of each node of each cell in one flat array.

cell_x = x[cells.flatten()]
cell_y = y[cells.flatten()]

The following tags the cell ids; every four entries correspond to one cell. Then we collect the coordinates and ids into a data frame

n_cells = cells.shape[0]
cell_ids = np.repeat(np.arange(n_cells), 4)
meshdata = pd.DataFrame({"x": cell_x, "y": cell_y, "ids": cell_ids})

The data frame looks like this:

print(meshdata)
x y ids
0 0.00000 0.00000 0
1 0.03125 0.00000 0
2 0.03125 0.03125 0
3 0.00000 0.03125 0
4 0.03125 0.00000 1
... ... ... ...
4091 0.93750 1.00000 1022
4092 0.96875 0.96875 1023
4093 1.00000 0.96875 1023
4094 1.00000 1.00000 1023
4095 0.96875 1.00000 1023
4096 rows × 3 columns

To plot the mesh, we loop over all cells and connect the first and last node to complete the polygon

fig, ax = plt.subplots()
ax.set_aspect("equal", "box")
ax.set_title("grid at refinement level #%d" % refinement)
for cell_id, cell in meshdata.groupby(["ids"]):
    cell = pd.concat([cell, cell.head(1)])
    plt.plot(cell["x"], cell["y"], c="k")

Alternatively one could use the matplotlib built-in Polygon class

fig, ax = plt.subplots()
ax.set_aspect("equal", "box")
ax.set_title("grid at refinement level #%d" % refinement)
for cell_id, cell in meshdata.groupby(["ids"]):
    patch = Polygon(cell[["x", "y"]], facecolor="w", edgecolor="k")
    ax.add_patch(patch)

To plot the solution, we first create a finer grid, then interpolate the solution values onto it, and finally plot the result.

nx = int(np.sqrt(n_cells)) + 1
nx *= 10
xg = np.linspace(x.min(), x.max(), nx)
yg = np.linspace(y.min(), y.max(), nx)
xgrid, ygrid = np.meshgrid(xg, yg)
solution_grid = griddata((x, y), solution.flatten(), (xgrid, ygrid), method="linear")
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.set_title("solution at refinement level #%d" % refinement)
c = ax.pcolor(xgrid, ygrid, solution_grid, cmap=cm.viridis)
fig.colorbar(c, ax=ax)
plt.show()

To check the convergence of mean_value and point_value, we loop over the data of all refinements and store it into the vectors mean_values and point_values

mean_values = np.zeros((8,))
point_values = np.zeros((8,))
dofs = np.zeros((8,))
for refinement in range(1, 9):
filename = "solution_%d.h5" % refinement
file = h5py.File(filename, "r")
mean_values[refinement - 1] = np.array(file["/mean_value"])[0]
point_values[refinement - 1] = np.array(file["/point_value"])[0]
dofs[refinement - 1] = np.array(file["/solution"]).shape[0]

Following is the plot of the mean value error on a log-log scale

mean_error = np.abs(mean_values[1:] - mean_values[:-1])
plt.loglog(dofs[:-1], mean_error)
plt.grid()
plt.xlabel("#DoFs")
plt.ylabel("mean value error")
plt.show()

Following is the plot of the point value error on a log-log scale

point_error = np.abs(point_values[1:] - point_values[:-1])
plt.loglog(dofs[:-1], point_error)
plt.grid()
plt.xlabel("#DoFs")
plt.ylabel("point value error")
plt.show()

A Python package which mimics the R ggplot2 (which is based on specifying the grammar of graphics) is plotnine.

We need to import the following from the plotnine package:

from plotnine import (
ggplot,
aes,
geom_raster,
geom_polygon,
geom_line,
labs,
scale_x_log10,
scale_y_log10,
ggtitle,
)

Then we plot the mesh using the meshdata dataframe

plot = (
ggplot(meshdata, aes(x="x", y="y", group="ids"))
+ geom_polygon(fill="white", colour="black")
+ ggtitle("grid at refinement level #%d" % refinement)
)
print(plot)

Collect the solution into a dataframe

colordata = pd.DataFrame({"x": x, "y": y, "solution": solution.flatten()})

Plot of the solution

plot = (
ggplot(colordata, aes(x="x", y="y", fill="solution"))
+ geom_raster(interpolate=True)
+ ggtitle("solution at refinement level #%d" % refinement)
)
print(plot)

Collect the convergence data into a dataframe

convdata = pd.DataFrame(
{"dofs": dofs[:-1], "mean_value": mean_error, "point_value": point_error}
)

Following is the plot of the mean value error on a log-log scale

plot = (
ggplot(convdata, mapping=aes(x="dofs", y="mean_value"))
+ geom_line()
+ labs(x="#DoFs", y="mean value error")
+ scale_x_log10()
+ scale_y_log10()
)
plot.save("mean_error.pdf", dpi=60)
print(plot)

Following is the plot of the point value error on a log-log scale

plot = (
ggplot(convdata, mapping=aes(x="dofs", y="point_value"))
+ geom_line()
+ labs(x="#DoFs", y="point value error")
+ scale_x_log10()
+ scale_y_log10()
)
plot.save("point_error.pdf", dpi=60)
print(plot)

The plain program

/* ------------------------------------------------------------------------
*
* SPDX-License-Identifier: LGPL-2.1-or-later
* Copyright (C) 1999 - 2024 by the deal.II authors
*
* This file is part of the deal.II library.
*
* Part of the source code is dual licensed under Apache-2.0 WITH
* LLVM-exception OR LGPL-2.1-or-later. Detailed license information
* governing the source code and code contributions can be found in
* LICENSE.md and CONTRIBUTING.md at the top level directory of deal.II.
*
* ------------------------------------------------------------------------
*/
#include <deal.II/grid/tria.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/fe/fe_values.h>
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/base/function.h>
#include <deal.II/numerics/vector_tools.h>
#include <deal.II/numerics/matrix_tools.h>
#include <deal.II/lac/vector.h>
#include <deal.II/lac/full_matrix.h>
#include <deal.II/lac/sparse_matrix.h>
#include <deal.II/lac/dynamic_sparsity_pattern.h>
#include <deal.II/lac/solver_cg.h>
#include <deal.II/lac/precondition.h>
#include <deal.II/numerics/data_out.h>
#include <fstream>
#include <iostream>
using namespace dealii;
class Step3
{
public:
Step3();
void run();
private:
void make_grid();
void setup_system();
void assemble_system();
void solve();
void output_results() const;
Triangulation<2> triangulation;
const FE_Q<2> fe;
DoFHandler<2> dof_handler;
SparsityPattern sparsity_pattern;
SparseMatrix<double> system_matrix;
Vector<double> solution;
Vector<double> system_rhs;
};
Step3::Step3()
: fe(/* polynomial degree = */ 1)
, dof_handler(triangulation)
{}
void Step3::make_grid()
{
GridGenerator::hyper_cube(triangulation, -1, 1);
triangulation.refine_global(5);
std::cout << "Number of active cells: " << triangulation.n_active_cells()
<< std::endl;
}
void Step3::setup_system()
{
dof_handler.distribute_dofs(fe);
std::cout << "Number of degrees of freedom: " << dof_handler.n_dofs()
<< std::endl;
DynamicSparsityPattern dsp(dof_handler.n_dofs());
DoFTools::make_sparsity_pattern(dof_handler, dsp);
sparsity_pattern.copy_from(dsp);
system_matrix.reinit(sparsity_pattern);
solution.reinit(dof_handler.n_dofs());
system_rhs.reinit(dof_handler.n_dofs());
}
void Step3::assemble_system()
{
const QGauss<2> quadrature_formula(fe.degree + 1);
FEValues<2> fe_values(fe,
quadrature_formula,
update_values | update_gradients | update_JxW_values);
const unsigned int dofs_per_cell = fe.n_dofs_per_cell();
FullMatrix<double> cell_matrix(dofs_per_cell, dofs_per_cell);
Vector<double> cell_rhs(dofs_per_cell);
std::vector<types::global_dof_index> local_dof_indices(dofs_per_cell);
for (const auto &cell : dof_handler.active_cell_iterators())
{
fe_values.reinit(cell);
cell_matrix = 0;
cell_rhs = 0;
for (const unsigned int q_index : fe_values.quadrature_point_indices())
{
for (const unsigned int i : fe_values.dof_indices())
for (const unsigned int j : fe_values.dof_indices())
cell_matrix(i, j) +=
(fe_values.shape_grad(i, q_index) * // grad phi_i(x_q)
fe_values.shape_grad(j, q_index) * // grad phi_j(x_q)
fe_values.JxW(q_index)); // dx
for (const unsigned int i : fe_values.dof_indices())
cell_rhs(i) += (fe_values.shape_value(i, q_index) * // phi_i(x_q)
1. * // f(x_q)
fe_values.JxW(q_index)); // dx
}
cell->get_dof_indices(local_dof_indices);
for (const unsigned int i : fe_values.dof_indices())
for (const unsigned int j : fe_values.dof_indices())
system_matrix.add(local_dof_indices[i],
local_dof_indices[j],
cell_matrix(i, j));
for (const unsigned int i : fe_values.dof_indices())
system_rhs(local_dof_indices[i]) += cell_rhs(i);
}
std::map<types::global_dof_index, double> boundary_values;
VectorTools::interpolate_boundary_values(dof_handler,
types::boundary_id(0),
Functions::ZeroFunction<2>(),
boundary_values);
MatrixTools::apply_boundary_values(boundary_values,
system_matrix,
solution,
system_rhs);
}
void Step3::solve()
{
SolverControl solver_control(1000, 1e-6 * system_rhs.l2_norm());
SolverCG<Vector<double>> solver(solver_control);
solver.solve(system_matrix, solution, system_rhs, PreconditionIdentity());
std::cout << solver_control.last_step()
<< " CG iterations needed to obtain convergence." << std::endl;
}
void Step3::output_results() const
{
DataOut<2> data_out;
data_out.attach_dof_handler(dof_handler);
data_out.add_data_vector(solution, "solution");
data_out.build_patches();
const std::string filename = "solution.vtk";
std::ofstream output(filename);
data_out.write_vtk(output);
std::cout << "Output written to " << filename << std::endl;
}
void Step3::run()
{
make_grid();
setup_system();
assemble_system();
solve();
output_results();
}
int main()
{
Step3 laplace_problem;
laplace_problem.run();
return 0;
}