The step-17 tutorial program
Table of contents
  1. Introduction
  2. The commented program
  3. Results
  4. The plain program

Introduction

This program does not introduce any new mathematical ideas; in fact, it performs exactly the same computations that step-8 already does, but it does so in a different manner: instead of using deal.II's own linear algebra classes, we build everything on top of classes deal.II provides that wrap around the linear algebra implementation of the PETSc library. And since PETSc allows matrices and vectors to be distributed across several computers within an MPI network, the resulting code will even be able to solve the problem in parallel. If you don't know what PETSc is, then this would be a good time to take a quick glimpse at their homepage.

As a prerequisite of this program, you need to have PETSc installed, and if you want to run in parallel on a cluster, you also need METIS to partition meshes. The installation of deal.II together with these two additional libraries is described in the README file.

Now, for the details: as mentioned, the program does not compute anything new, so the use of finite element classes etc. is exactly the same as before. The difference from previous programs is that we have replaced almost all uses of the classes Vector and SparseMatrix by their near-equivalents PETScWrappers::Vector and PETScWrappers::SparseMatrix (for sequential vectors and matrices, i.e. objects for which all elements are stored locally on one machine), and PETScWrappers::MPI::Vector and PETScWrappers::MPI::SparseMatrix for versions of these classes where only a part of the matrix or vector is stored on each machine within an MPI network. These classes provide an interface that is very similar to that of the deal.II linear algebra classes, but instead of implementing this functionality themselves, they simply forward calls to the corresponding PETSc functions. The wrappers are therefore only used to give PETSc a more modern, object oriented interface, and to make the use of PETSc and deal.II objects as interchangeable as possible.

While the sequential PETSc wrapper classes do not have any advantage over their deal.II counterparts, the main point of using PETSc is that it can run in parallel. We will make use of this by partitioning the domain into as many blocks ("subdomains") as there are processes in the MPI network. At the same time, PETSc provides dummy MPI stubs that allow running the same program on a single machine if so desired, without any changes.

Note, however, that the only data structures we parallelize are matrices and vectors. In particular, we do not split up the Triangulation and DoFHandler classes: each process still has a complete copy of these objects, and all processes have exact copies of what the other processes have. Parallelizing these objects as well is only slightly more complicated to achieve from a user perspective (though much more complicated under the hood), and we will show how to do this in step-40.

The techniques this program demonstrates are: how to use the PETSc wrapper classes; how to parallelize operations for jobs running on an MPI network; and how to partition the domain into subdomains to parallelize the work. Since all this can only be demonstrated using actual code, let us go straight to the code without much further ado.

The commented program

First the usual assortment of header files we have already used in previous example programs:

#include <deal.II/base/quadrature_lib.h>
#include <deal.II/base/function.h>
#include <deal.II/base/logstream.h>
#include <deal.II/base/multithread_info.h>
#include <deal.II/lac/vector.h>
#include <deal.II/lac/full_matrix.h>
#include <deal.II/lac/constraint_matrix.h>
#include <deal.II/grid/tria.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/grid_refinement.h>
#include <deal.II/grid/tria_accessor.h>
#include <deal.II/grid/tria_iterator.h>
#include <deal.II/grid/tria_boundary_lib.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_accessor.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/fe/fe_values.h>
#include <deal.II/fe/fe_system.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/numerics/vector_tools.h>
#include <deal.II/numerics/matrix_tools.h>
#include <deal.II/numerics/data_out.h>
#include <deal.II/numerics/error_estimator.h>

And here come the things that we need particularly for this example program and that weren't in step-8. First, we replace the standard output std::cout by a new stream pcout which is used in parallel computations for generating output only on one of the MPI processes.

#include <deal.II/base/conditional_ostream.h>

We are going to query the number of processes and the number of the present process by calling the respective functions in the Utilities::MPI namespace.

#include <deal.II/base/utilities.h>

Then, we are going to replace all linear algebra components that involve the (global) linear system by classes that wrap interfaces similar to our own linear algebra classes around what PETSc offers (PETSc is a library written in C, and deal.II comes with wrapper classes that provide the PETSc functionality with an interface that is similar to the interface we already had for our own linear algebra classes). In particular, we need vectors and matrices that are distributed across several processes in MPI programs (and simply map to sequential, local vectors and matrices if there is only a single process, i.e. if you are running on only one machine, and without MPI support):

#include <deal.II/lac/petsc_vector.h>
#include <deal.II/lac/petsc_parallel_vector.h>
#include <deal.II/lac/petsc_parallel_sparse_matrix.h>

Then we also need interfaces for solvers and preconditioners that PETSc provides:

#include <deal.II/lac/petsc_solver.h>
#include <deal.II/lac/petsc_precondition.h>

And in addition, we need some algorithms for partitioning our meshes so that they can be efficiently distributed across an MPI network. The partitioning algorithm is implemented in the GridTools namespace, and we need an additional include file for a function in DoFRenumbering that allows sorting the indices associated with degrees of freedom so that they are numbered according to the subdomain they are associated with:

#include <deal.II/grid/grid_tools.h>
#include <deal.II/dofs/dof_renumbering.h>

And this is simply C++ again:

#include <fstream>
#include <iostream>
#include <sstream>

The last step is as in all previous programs:

namespace Step17
{
using namespace dealii;

Now, here comes the declaration of the main class and of various other things below it. As mentioned in the introduction, almost all of this has been copied verbatim from step-8, so we only comment on the few things that are different. There is one (cosmetic) change in that we let solve return a value, namely the number of iterations it took to converge, so that we can output this to the screen at the appropriate place. In addition, we introduce a stream-like variable pcout, explained below:

template <int dim>
class ElasticProblem
{
public:
ElasticProblem ();
~ElasticProblem ();
void run ();
private:
void setup_system ();
void assemble_system ();
unsigned int solve ();
void refine_grid ();
void output_results (const unsigned int cycle) const;

The first variable is basically only for convenience: in a parallel program, if each process outputs status information, then there quickly is a lot of clutter. Rather, we would want only one process to output everything, for example the one with process number zero. ConditionalOStream does exactly this: it acts as if it were a stream, but only forwards to a real, underlying stream if a flag is set. By setting this condition to this_mpi_process==0, we make sure that output is only generated from the first process and that we don't get the same lines of output over and over again, once per process.

With this simple trick, we make sure that we don't have to guard each and every write to std::cout by a prefixed if(this_mpi_process==0). (A small stand-alone sketch of this idea follows right after the class declaration below.)

ConditionalOStream pcout;

The next few variables are taken verbatim from step-8:

Triangulation<dim> triangulation;
DoFHandler<dim> dof_handler;
FESystem<dim> fe;
ConstraintMatrix hanging_node_constraints;

In step-8, this would have been the place where we would have declared the member variables for the sparsity pattern, the system matrix, right hand side, and solution vector. We change these declarations to use parallel PETSc objects instead (note that the fact that we use the parallel versions is denoted by the fact that we use the classes from the PETScWrappers::MPI namespace; sequential versions of these classes are in the PETScWrappers namespace, i.e. without the MPI part). Note also that we do not use a separate sparsity pattern, since PETSc manages that as part of its matrix data structures.

PETScWrappers::MPI::SparseMatrix system_matrix;
PETScWrappers::MPI::Vector solution;
PETScWrappers::MPI::Vector system_rhs;

The next change is that we have to declare a variable that indicates the MPI communicator over which we are supposed to distribute our computations. Note that if this is a sequential job without support by MPI, then PETSc provides some dummy type for MPI_Comm, so we do not have to care here whether the job is really a parallel one:

MPI_Comm mpi_communicator;

Then we have two variables that tell us where in the parallel world we are. The first of the following variables, n_mpi_processes tells us how many MPI processes there exist in total, while the second one, this_mpi_process, indicates which is the number of the present process within this space of processes. The latter variable will have a unique value for each process between zero and (less than) n_mpi_processes. If this program is run on a single machine without MPI support, then their values are 1 and 0, respectively.

const unsigned int n_mpi_processes;
const unsigned int this_mpi_process;
};
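As an aside, here is a minimal stand-alone sketch (not part of this program, and purely illustrative under the assumption of the same includes used above) of what ConditionalOStream saves us from writing:

#include <deal.II/base/conditional_ostream.h>
#include <deal.II/base/utilities.h>
#include <iostream>

void print_cell_count (const unsigned int n_active_cells)
{
  using namespace dealii;
  // Create a stream that only forwards to std::cout on MPI process zero.
  ConditionalOStream pcout (std::cout,
                            Utilities::MPI::this_mpi_process(MPI_COMM_WORLD) == 0);

  // Without ConditionalOStream, every statement would have to be wrapped in
  //   if (this_mpi_process == 0)  std::cout << ...;
  // With it, the following prints exactly once, no matter how many processes run:
  pcout << "Number of active cells: " << n_active_cells << std::endl;
}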

The following is again taken from step-8 without change:

template <int dim>
class RightHandSide : public Function<dim>
{
public:
RightHandSide ();
virtual void vector_value (const Point<dim> &p,
Vector<double> &values) const;
virtual void vector_value_list (const std::vector<Point<dim> > &points,
std::vector<Vector<double> > &value_list) const;
};
template <int dim>
RightHandSide<dim>::RightHandSide () :
Function<dim> (dim)
{}
template <int dim>
inline
void RightHandSide<dim>::vector_value (const Point<dim> &p,
Vector<double> &values) const
{
Assert (values.size() == dim,
ExcDimensionMismatch (values.size(), dim));
Assert (dim >= 2, ExcInternalError());
Point<dim> point_1, point_2;
point_1(0) = 0.5;
point_2(0) = -0.5;
if (((p-point_1).square() < 0.2*0.2) ||
((p-point_2).square() < 0.2*0.2))
values(0) = 1;
else
values(0) = 0;
if (p.square() < 0.2*0.2)
values(1) = 1;
else
values(1) = 0;
}
template <int dim>
void RightHandSide<dim>::vector_value_list (const std::vector<Point<dim> > &points,
std::vector<Vector<double> > &value_list) const
{
const unsigned int n_points = points.size();
Assert (value_list.size() == n_points,
ExcDimensionMismatch (value_list.size(), n_points));
for (unsigned int p=0; p<n_points; ++p)
RightHandSide<dim>::vector_value (points[p],
value_list[p]);
}

The first step in the actual implementation of things is the constructor of the main class. Apart from initializing the same member variables that we already had in step-8, we here initialize the MPI communicator variable we shall use with the global MPI communicator linking all processes together (in more complex applications, one could here use a communicator object that only links a subset of all processes), and call the Utilities helper functions to determine the number of processes and where the present one fits into this picture. In addition, we make sure that output is only generated by the (globally) first process. As this_mpi_process is determined after creation of pcout, we cannot set the condition through the constructor, i.e. by pcout(std::cout, this_mpi_process==0), but set the condition separately.

template <int dim>
ElasticProblem<dim>::ElasticProblem ()
:
pcout (std::cout),
dof_handler (triangulation),
fe (FE_Q<dim>(1), dim),
mpi_communicator (MPI_COMM_WORLD),
n_mpi_processes (Utilities::MPI::n_mpi_processes(mpi_communicator)),
this_mpi_process (Utilities::MPI::this_mpi_process(mpi_communicator))
{
pcout.set_condition(this_mpi_process == 0);
}
template <int dim>
ElasticProblem<dim>::~ElasticProblem ()
{
dof_handler.clear ();
}

The second step is the function in which we set up the various variables for the global linear system to be solved.

template <int dim>
void ElasticProblem<dim>::setup_system ()
{

Before we even start out setting up the system, there is one thing to do for a parallel program: we need to assign cells to each of the processes. We do this by splitting (partitioning) the mesh cells into as many chunks (subdomains) as there are processes in this MPI job (if this is a sequential job, then there is only one process and all cells will get a zero as subdomain indicator). This is done using an interface to the METIS library that does this in a very efficient way, trying to minimize the number of nodes on the interfaces between subdomains. All this is hidden behind the following call to a deal.II library function:

GridTools::partition_triangulation (n_mpi_processes, triangulation);

As for the linear system: First, we need to generate an enumeration for the degrees of freedom in our problem. Since each cell has already been assigned to one of the MPI processes above, what we need to do is to enumerate the degrees of freedom in a way so that all degrees of freedom associated with cells in subdomain zero (which resides on process zero) come before all DoFs associated with cells on subdomain one, before those on cells on process two, and so on. We need this since we have to split the global vectors for right hand side and solution, as well as the matrix, into contiguous chunks of rows that live on each of the processors, and we will want to do this in a way that requires minimal communication. This is done using the following two functions, of which the first generates an initial ordering of all degrees of freedom, and the second re-sorts them according to the above criterion:

dof_handler.distribute_dofs (fe);
DoFRenumbering::subdomain_wise (dof_handler);

While we're at it, let us also count how many degrees of freedom there exist on the present process:

const types::global_dof_index n_local_dofs
= DoFTools::count_dofs_with_subdomain_association (dof_handler,
this_mpi_process);

Then we initialize the system matrix, solution, and right hand side vectors. Since they all need to work in parallel, we have to pass them an MPI communication object, as well as their global sizes (both dimensions are equal to the number of degrees of freedom), and also how many rows out of this global size are to be stored locally (n_local_dofs). In addition, PETSc needs to know how to partition the columns in the chunk of the matrix that is stored locally; for square matrices, the columns should be partitioned in the same way as the rows (indicated by the second n_local_dofs in the call), but in the case of rectangular matrices one has to partition the columns in the same way as the vectors with which the matrix is multiplied, while the rows have to be partitioned in the same way as the destination vectors of matrix-vector multiplications:

system_matrix.reinit (mpi_communicator,
dof_handler.n_dofs(),
dof_handler.n_dofs(),
n_local_dofs,
n_local_dofs,
dof_handler.max_couplings_between_dofs());
solution.reinit (mpi_communicator, dof_handler.n_dofs(), n_local_dofs);
system_rhs.reinit (mpi_communicator, dof_handler.n_dofs(), n_local_dofs);

Finally, we need to initialize the objects denoting hanging node constraints for the present grid. Note that since PETSc handles the sparsity pattern internally to the matrix, there is no need to set up an independent sparsity pattern here, and to condense it for constraints, as we have done in all other example programs.

hanging_node_constraints.clear ();
DoFTools::make_hanging_node_constraints (dof_handler,
hanging_node_constraints);
hanging_node_constraints.close ();
}

The third step is to actually assemble the matrix and right hand side of the problem. There are some things worth mentioning before we go into detail. First, we will be assembling the system in parallel, i.e. each process will be responsible for assembling on the cells that belong to it. Note that the degrees of freedom are split in a way such that all DoFs in the interior of cells and between cells belonging to the same subdomain belong to the process that owns the cell. However, even then we sometimes need to assemble on a cell with a neighbor that belongs to a different process, and in these cases when we write the local contributions into the global matrix or right hand side vector, we actually have to transfer these entries to the other process. Fortunately, we don't have to do this by hand: PETSc does all this for us by caching these elements locally, and sending them to the other processes as necessary when we call the compress() functions on the matrix and vector at the end of this function.

The second point is that once we have handed over matrix and vector contributions to PETSc, it is a) hard, and b) very inefficient to get them back for modifications. This is not only the fault of PETSc, it is also a consequence of the distributed nature of this program: if an entry resides on another processor, then it is necessarily expensive to get it. The consequence of this is that where we previously first assembled the matrix and right hand side as if there were no hanging node constraints and boundary values, and then eliminated these in a second step, we should now try to do that while still assembling the local systems, and before handing these entries over to PETSc. At least as far as eliminating hanging nodes is concerned, this is actually possible, though removing boundary nodes isn't that simple. deal.II provides functions to do this first part: instead of copying elements by hand into the global matrix, we use the distribute_local_to_global functions below to take care of hanging nodes at the same time. The second step, elimination of boundary nodes, is then done in exactly the same way as in all previous example programs.

So, here is the actual implementation:

template <int dim>
void ElasticProblem<dim>::assemble_system ()
{

The infrastructure to assemble linear systems is the same as in all the other programs, and in particular unchanged from step-8. Note that we still use the deal.II full matrix and vector types for the local systems.

QGauss<dim> quadrature_formula(2);
FEValues<dim> fe_values (fe, quadrature_formula,
update_values | update_gradients |
update_quadrature_points | update_JxW_values);
const unsigned int dofs_per_cell = fe.dofs_per_cell;
const unsigned int n_q_points = quadrature_formula.size();
FullMatrix<double> cell_matrix (dofs_per_cell, dofs_per_cell);
Vector<double> cell_rhs (dofs_per_cell);
std::vector<types::global_dof_index> local_dof_indices (dofs_per_cell);
std::vector<double> lambda_values (n_q_points);
std::vector<double> mu_values (n_q_points);
ConstantFunction<dim> lambda(1.), mu(1.);
RightHandSide<dim> right_hand_side;
std::vector<Vector<double> > rhs_values (n_q_points,
Vector<double>(dim));

The next thing is the loop over all cells. Note that we do not have to do all the work: our job here is only to assemble the system on cells that actually belong to this MPI process; all other cells will be taken care of by other processes. This is what the if-clause immediately after the for-loop takes care of: it queries the subdomain identifier of each cell, which is a number associated with each cell that tells which process handles it. More generally, the subdomain id is used to split a domain into several parts (we do this above, at the beginning of setup_system), and it allows us to identify which subdomain a cell is living on. In this application, we have each process handle exactly one subdomain, so we identify the terms subdomain and MPI process with each other.

Apart from this, assembling the local system is relatively uneventful if you have understood how this is done in step-8, and only becomes interesting again once we start distributing it into the global matrix and right hand sides.

typename DoFHandler<dim>::active_cell_iterator
cell = dof_handler.begin_active(),
endc = dof_handler.end();
for (; cell!=endc; ++cell)
if (cell->subdomain_id() == this_mpi_process)
{
cell_matrix = 0;
cell_rhs = 0;
fe_values.reinit (cell);
lambda.value_list (fe_values.get_quadrature_points(), lambda_values);
mu.value_list (fe_values.get_quadrature_points(), mu_values);
for (unsigned int i=0; i<dofs_per_cell; ++i)
{
const unsigned int
component_i = fe.system_to_component_index(i).first;
for (unsigned int j=0; j<dofs_per_cell; ++j)
{
const unsigned int
component_j = fe.system_to_component_index(j).first;
for (unsigned int q_point=0; q_point<n_q_points;
++q_point)
{

TODO investigate really small values here

cell_matrix(i,j)
+=
(
(fe_values.shape_grad(i,q_point)[component_i] *
fe_values.shape_grad(j,q_point)[component_j] *
lambda_values[q_point])
+
(fe_values.shape_grad(i,q_point)[component_j] *
fe_values.shape_grad(j,q_point)[component_i] *
mu_values[q_point])
+
((component_i == component_j) ?
(fe_values.shape_grad(i,q_point) *
fe_values.shape_grad(j,q_point) *
mu_values[q_point]) :
0)
)
*
fe_values.JxW(q_point);
}
}
}
right_hand_side.vector_value_list (fe_values.get_quadrature_points(),
rhs_values);
for (unsigned int i=0; i<dofs_per_cell; ++i)
{
const unsigned int
component_i = fe.system_to_component_index(i).first;
for (unsigned int q_point=0; q_point<n_q_points; ++q_point)
cell_rhs(i) += fe_values.shape_value(i,q_point) *
rhs_values[q_point](component_i) *
fe_values.JxW(q_point);
}

Now we have the local system, and need to transfer it into the global objects. However, as described in the introduction to this function, we want to avoid any operations to matrix and vector entries after handing them off to PETSc (i.e. after distributing to the global objects). Therefore, we will take care of hanging node constraints already here. This is not quite trivial since the rows and columns of constrained nodes have to be distributed to the rows and columns of those nodes to which they are constrained. This can't be done on a purely local basis (because the degrees of freedom to which hanging nodes are constrained may not be associated with the cell we are presently treating, and are therefore not represented in the local matrix and vector), but it can be done while distributing the local system to the global one. This is what the following call does, i.e. we distribute to the global objects and at the same time make sure that hanging node constraints are taken care of:

cell->get_dof_indices (local_dof_indices);
hanging_node_constraints
.distribute_local_to_global(cell_matrix, cell_rhs,
local_dof_indices,
system_matrix, system_rhs);
}

Now compress the vector and the system matrix:

system_matrix.compress(VectorOperation::add);
system_rhs.compress(VectorOperation::add);

The global matrix and right hand side vectors have now been formed. Note that since we took care of this already above, we do not have to condense away hanging node constraints any more.

However, we still have to apply boundary values, in the same way as we always do:

std::map<types::global_dof_index,double> boundary_values;
VectorTools::interpolate_boundary_values (dof_handler,
0,
ZeroFunction<dim>(dim),
boundary_values);
MatrixTools::apply_boundary_values (boundary_values,
system_matrix, solution,
system_rhs, false);

The last argument to the call just performed allows for some optimizations. It controls whether we should also delete the column corresponding to a boundary node, or keep it (and passing true means: yes, do eliminate the column). If we do, then the resulting matrix will be symmetric again if it was before; if we don't, then it won't. The solution of the resulting system should be the same, though. The only reason why we may want to make the system symmetric again is that we would like to use the CG method, which only works with symmetric matrices. Experience tells us that CG also works (and works almost as well) if we don't remove the columns associated with boundary nodes, which can be easily explained by the special structure of the non-symmetry. Since eliminating columns from dense matrices is not expensive, though, we let the function do it; not doing so is more important if the linear system is either non-symmetric anyway, or we are using the non-local version of this function (as in all the other example programs before) and want to save a few cycles during this operation.

}

The fourth step is to solve the linear system, with its distributed matrix and vector objects. Fortunately, PETSc offers a variety of sequential and parallel solvers, for which we have written wrappers that have almost the same interface as is used for the deal.II solvers used in all previous example programs.

template <int dim>
unsigned int ElasticProblem<dim>::solve ()
{

First, we have to set up a convergence monitor, and assign it the accuracy to which we would like to solve the linear system. Next, an actual solver object using PETSc's CG solver which also works with parallel (distributed) vectors and matrices. And finally a preconditioner; we choose to use a block Jacobi preconditioner which works by computing an incomplete LU decomposition on each block (i.e. the chunk of matrix that is stored on each MPI process). That means that if you run the program with only one process, then you will use an ILU(0) as a preconditioner, while if it is run on many processes, then we will have a number of blocks on the diagonal and the preconditioner is the ILU(0) of each of these blocks.

SolverControl solver_control (solution.size(),
1e-8*system_rhs.l2_norm());
PETScWrappers::SolverCG cg (solver_control,
mpi_communicator);
PETScWrappers::PreconditionBlockJacobi preconditioner(system_matrix);

Then solve the system:

cg.solve (system_matrix, solution, system_rhs,
preconditioner);

The next step is to distribute hanging node constraints. This is a little tricky, since to fill in the value of a constrained node you need access to the values of the nodes to which it is constrained (for example, for a Q1 element in 2d, we need access to the two nodes on the big side of a hanging node face, to compute the value of the constrained node in the middle). Since PETSc (and, for that matter, the MPI model on which it is built) does not allow querying the value of another node in a simple way if we should need it, what we do here is to get a copy of the distributed vector where we keep all elements locally. This is simple, since the deal.II wrappers have a conversion constructor for the non-MPI vector class:

PETScWrappers::Vector localized_solution (solution);

Then we distribute hanging node constraints on this local copy, i.e. we compute the values of all constrained nodes:

hanging_node_constraints.distribute (localized_solution);

Then transfer everything back into the global vector. The following operation copies those elements of the localized solution that we store locally in the distributed solution, and does not touch the other ones. Since we do the same operation on all processors, we end up with a distributed vector that has all the constrained nodes fixed.

solution = localized_solution;

Finally return the number of iterations it took to converge, to allow for some output:

return solver_control.last_step();
}

Step five is to output the results we computed in this iteration. This is actually the same as done in step-8 before, with two small differences. First, all processes call this function, but not all of them need to do the work associated with generating output. In fact, they shouldn't, since we would try to write to the same file multiple times at once. So we let only the first process do this, and all the other ones idle around during this time (or start their work for the next iteration, or simply yield their CPUs to other jobs that happen to run at the same time). The second thing is that we not only output the solution vector, but also a vector that indicates which subdomain each cell belongs to. This will make for some nice pictures of partitioned domains.

In practice, the present implementation of the output function is a major bottleneck of this program, since generating graphical output is expensive and doing so only on one process does, of course, not scale if we significantly increase the number of processes. In effect, this function will consume most of the run-time if you go to very large numbers of unknowns and processes, and real applications should limit the number of times they generate output through this function.

The solution to this is to have each process generate output data only for its own local cells, and write them to separate files, one file per process. This would distribute the work of generating the output to all processes equally. In a second step, separate from running this program, we would then take all the output files for a given cycle and merge these parts into one single output file. This has to be done sequentially, but can be done on a different machine, and should be relatively cheap. However, the necessary functionality for this is not yet implemented in the library, and since we are too close to the next release, we do not want to do such major destabilizing changes any more. This has been fixed in the meantime, though, and a better way to do things is explained in the step-18 example program.
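To make the idea concrete, here is a minimal sketch of what such a per-process output step might look like. It is purely illustrative and not what this program does: the file naming scheme is made up, and one would additionally have to restrict DataOut to the cells owned by the present process (step-18 shows how that was eventually done); as written, each process would still write patches for the whole mesh, only into its own file.

// Hypothetical per-process variant of the output code further down;
// 'cycle', 'this_mpi_process', 'localized_solution' and 'solution_names'
// denote the same objects used there.
std::ostringstream per_process_filename;
per_process_filename << "solution-" << cycle
                     << "." << this_mpi_process
                     << ".gmv";
std::ofstream per_process_output (per_process_filename.str().c_str());
DataOut<dim> data_out;
data_out.attach_dof_handler (dof_handler);
data_out.add_data_vector (localized_solution, solution_names);
data_out.build_patches ();
data_out.write_gmv (per_process_output);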

template <int dim>
void ElasticProblem<dim>::output_results (const unsigned int cycle) const
{

One point to realize is that when we want to generate output on process zero only, we need to have access to all elements of the solution vector. So we need to get a local copy of the distributed vector, which is in fact simple:

const PETScWrappers::Vector localized_solution (solution);

The thing to notice, however, is that we do this localization operation on all processes, not only the one that actually needs the data. This can't be avoided, however, with the communication model of MPI: MPI does not have a way to query data on another process; both sides have to initiate a communication at the same time. So even though most of the processes do not need the localized solution, we have to place the call here so that all processes execute it.

(In reality, part of this work can in fact be avoided. What we do is send the local parts of all processes to all other processes. What we would really need to do is to initiate an operation on all processes where each process simply sends its local chunk of data to process zero, since this is the only one that actually needs it, i.e. we need something like a gather operation. PETSc can do this, but for simplicity's sake we don't attempt to make use of this here. We don't, since what we do is not very expensive in the grand scheme of things: it is one vector communication among all processes, which has to be compared to the number of communications we have to do when solving the linear system, setting up the block-ILU for the preconditioner, and other operations.)
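For the curious, such a gather could be written against PETSc's C API roughly as follows. This is only a hypothetical sketch and not part of this program; the helper function and its petsc_solution argument (standing for the underlying distributed PETSc Vec of our solution vector) are made up for illustration.

#include <petscvec.h>

// Collect a distributed Vec into a sequential Vec that lives on process zero only.
void gather_to_zero (Vec petsc_solution)
{
  Vec sequential_solution;
  VecScatter scatter_to_zero;
  VecScatterCreateToZero (petsc_solution, &scatter_to_zero, &sequential_solution);
  VecScatterBegin (scatter_to_zero, petsc_solution, sequential_solution,
                   INSERT_VALUES, SCATTER_FORWARD);
  VecScatterEnd (scatter_to_zero, petsc_solution, sequential_solution,
                 INSERT_VALUES, SCATTER_FORWARD);
  // Process zero can now read all elements from 'sequential_solution'.
  VecScatterDestroy (&scatter_to_zero);
  VecDestroy (&sequential_solution);
}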

This being done, process zero goes ahead with setting up the output file as in step-8, and attaching the (localized) solution vector to the output object. (The code to generate the output file name is stolen and slightly modified from step-5, since we expect that we can do a number of cycles greater than 10, which is the maximum of what the code in step-8 could handle.)

if (this_mpi_process == 0)
{
std::ostringstream filename;
filename << "solution-" << cycle << ".gmv";
std::ofstream output (filename.str().c_str());
DataOut<dim> data_out;
data_out.attach_dof_handler (dof_handler);
std::vector<std::string> solution_names;
switch (dim)
{
case 1:
solution_names.push_back ("displacement");
break;
case 2:
solution_names.push_back ("x_displacement");
solution_names.push_back ("y_displacement");
break;
case 3:
solution_names.push_back ("x_displacement");
solution_names.push_back ("y_displacement");
solution_names.push_back ("z_displacement");
break;
default:
Assert (false, ExcInternalError());
}
data_out.add_data_vector (localized_solution, solution_names);

The only thing we do here additionally is that we also output one value per cell indicating which subdomain (i.e. MPI process) it belongs to. This requires some conversion work, since the data the library provides us with is not the one the output class expects, but this is not difficult. First, set up a vector of integers, one per cell, that is then filled with the number of the subdomain each cell belongs to:

std::vector<unsigned int> partition_int (triangulation.n_active_cells());
GridTools::get_subdomain_association (triangulation, partition_int);

Then convert this integer vector into a floating point vector just as the output functions want to see:

const Vector<double> partitioning(partition_int.begin(),
partition_int.end());

And finally add this vector as well:

data_out.add_data_vector (partitioning, "partitioning");

This all being done, generate the intermediate format and write it out in GMV output format:

data_out.build_patches ();
data_out.write_gmv (output);
}
}

The sixth step is to take the solution just computed, and evaluate some kind of refinement indicator to refine the mesh. The problem is basically the same as with distributing hanging node constraints: in order to compute the error indicator, we need access to all elements of the solution vector. We then compute the indicators for the cells that belong to the present process, but we need to distribute the refinement indicators into a distributed vector so that all processes have the values of the refinement indicator for all cells. But then, in order for each process to refine its copy of the mesh, they need to have access to all refinement indicators locally, so they have to copy the global vector back into a local one. That is a little convoluted, but quite straightforward once you think about it. So here's how we do it:

template <int dim>
void ElasticProblem<dim>::refine_grid ()
{

So, first part: get a local copy of the distributed solution vector. This is necessary since the error estimator needs to get at the value of neighboring cells even if they do not belong to the subdomain associated with the present MPI process:

const PETScWrappers::Vector localized_solution (solution);

Second part: set up a vector of error indicators for all cells and let the Kelly class compute refinement indicators for all cells belonging to the present subdomain/process. Note that the last argument of the call indicates which subdomain we are interested in. The three arguments before it are various other default arguments that one usually doesn't need (and doesn't state values for, but rather uses the defaults), but which we have to state here explicitly since we want to modify the value of a following argument (i.e. the one indicating the subdomain):

Vector<float> local_error_per_cell (triangulation.n_active_cells());
KellyErrorEstimator<dim>::estimate (dof_handler, QGauss<dim-1>(2),
typename FunctionMap<dim>::type(),
localized_solution,
local_error_per_cell,
ComponentMask(), 0,
MultithreadInfo::n_threads(),
this_mpi_process);

Now all processes have computed error indicators for their own cells and stored them in the respective elements of the local_error_per_cell vector. The elements of this vector for cells not on the present process are zero. However, since all processes have a copy of the entire triangulation and need to keep these copies in sync, they need the values of refinement indicators for all cells of the triangulation. Thus, we need to distribute our results. We do this by creating a distributed vector where each process has its share, and sets the elements it has computed. We will then later generate a local sequential copy of this distributed vector to allow each process to access all elements of this vector.

So in the first step, we need to set up a parallel vector. For simplicity, every process will own a chunk with as many elements as this process owns cells, so that the first chunk of elements is stored with process zero, the next chunk with process one, and so on. It is important to remark, however, that these elements are not necessarily the ones we will write to. This is so, since the order in which cells are arranged, i.e. the order in which the elements of the vector correspond to cells, is not ordered according to the subdomain these cells belong to. In other words, if on this process we compute indicators for cells of a certain subdomain, we may write the results to more or less random elements of the distributed vector that do not necessarily lie within the chunk of vector we own on the present process. They will subsequently have to be copied into another process's memory space, an operation that PETSc does for us when we call the compress function. This inefficiency could be avoided with some more code, but we refrain from it since it is not a major factor in the program's total runtime.

So here's how we do it: count how many cells belong to this process, set up a distributed vector with that many elements to be stored locally, copy over the elements we computed locally, and then compress the result. In fact, we really only copy the elements that are nonzero, so we may miss a few that we computed to zero, but this won't hurt since the original values of the vector are zero anyway.

const unsigned int n_local_cells
= GridTools::count_cells_with_subdomain_association (triangulation,
this_mpi_process);
PETScWrappers::MPI::Vector
distributed_all_errors (mpi_communicator,
triangulation.n_active_cells(),
n_local_cells);
for (unsigned int i=0; i<local_error_per_cell.size(); ++i)
if (local_error_per_cell(i) != 0)
distributed_all_errors(i) = local_error_per_cell(i);
distributed_all_errors.compress (VectorOperation::insert);

So now we have this distributed vector out there that contains the refinement indicators for all cells. To use it, we need to obtain a local copy...

const Vector<float> localized_all_errors (distributed_all_errors);

...which we can then use to finally refine the grid:

GridRefinement::refine_and_coarsen_fixed_number (triangulation,
localized_all_errors,
0.3, 0.03);
triangulation.execute_coarsening_and_refinement ();
}

Lastly, here is the driver function. It is almost unchanged from step-8, with the exception that we replace std::cout by the pcout stream. Apart from this, the only other cosmetic change is that we output how many degrees of freedom there are per process, and how many iterations it took for the linear solver to converge:

template <int dim>
void ElasticProblem<dim>::run ()
{
for (unsigned int cycle=0; cycle<10; ++cycle)
{
pcout << "Cycle " << cycle << ':' << std::endl;
if (cycle == 0)
{
GridGenerator::hyper_cube (triangulation, -1, 1);
triangulation.refine_global (3);
}
else
refine_grid ();
pcout << " Number of active cells: "
<< triangulation.n_active_cells()
<< std::endl;
setup_system ();
pcout << " Number of degrees of freedom: "
<< dof_handler.n_dofs()
<< " (by partition:";
for (unsigned int p=0; p<n_mpi_processes; ++p)
pcout << (p==0 ? ' ' : '+')
<< (DoFTools::count_dofs_with_subdomain_association (dof_handler,
p));
pcout << ")" << std::endl;
assemble_system ();
const unsigned int n_iterations = solve ();
pcout << " Solver converged in " << n_iterations
<< " iterations." << std::endl;
output_results (cycle);
}
}
}

So that's it, almost. main() works the same way as most of the main functions in the other example programs, i.e. it delegates work to the run function of a master object, and only wraps everything into some code to catch exceptions:

int main (int argc, char **argv)
{
try
{
using namespace dealii;
using namespace Step17;

Here is the only real difference: PETSc requires that we initialize it at the beginning of the program, and un-initialize it at the end. The class MPI_InitFinalize takes care of that. The original code sits in between, enclosed in braces to make sure that the elastic_problem variable goes out of scope (and is destroyed) before PETSc is closed with PetscFinalize. (If we didn't use braces, the destructor of elastic_problem would run after PetscFinalize; since the destructor involves calls to PETSc functions, we would get strange error messages from PETSc.)

Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);
{
deallog.depth_console (0);
ElasticProblem<2> elastic_problem;
elastic_problem.run ();
}
}
catch (std::exception &exc)
{
std::cerr << std::endl << std::endl
<< "----------------------------------------------------"
<< std::endl;
std::cerr << "Exception on processing: " << std::endl
<< exc.what() << std::endl
<< "Aborting!" << std::endl
<< "----------------------------------------------------"
<< std::endl;
return 1;
}
catch (...)
{
std::cerr << std::endl << std::endl
<< "----------------------------------------------------"
<< std::endl;
std::cerr << "Unknown exception!" << std::endl
<< "Aborting!" << std::endl
<< "----------------------------------------------------"
<< std::endl;
return 1;
}
return 0;
}

Results

If the program above is compiled and run on a single processor machine, it should generate results that are very similar to those that we already got with step-8. However, it becomes more interesting if we run it on a multicore machine or a cluster of computers. The most basic way to run MPI programs is using a command line like

mpirun -np 32 ./step-17

to run the step-17 executable with 32 processors.

The command line above is the appropriate way of starting the program on a multicore machine when using MPI for parallelization. On the other hand, most clusters are shared resources and have some kind of scheduling system that distributes jobs onto available processors. All of these scheduling systems have their own calling syntax - on my system, I have to use the command qsub with a whole host of options to run a job in parallel - so that the exact command line syntax varies.

Whether directly or through a scheduler, if you run this program on 8 processors, you should get output like the following:

Cycle 0:
Number of active cells: 64
Number of degrees of freedom: 162 (by partition: 22+22+20+20+18+16+20+24)
Solver converged in 23 iterations.
Cycle 1:
Number of active cells: 124
Number of degrees of freedom: 302 (by partition: 38+42+36+34+44+44+36+28)
Solver converged in 35 iterations.
Cycle 2:
Number of active cells: 238
Number of degrees of freedom: 570 (by partition: 68+80+66+74+58+68+78+78)
Solver converged in 46 iterations.
Cycle 3:
Number of active cells: 454
Number of degrees of freedom: 1046 (by partition: 120+134+124+130+154+138+122+124)
Solver converged in 55 iterations.
Cycle 4:
Number of active cells: 868
Number of degrees of freedom: 1926 (by partition: 232+276+214+248+230+224+234+268)
Solver converged in 77 iterations.
Cycle 5:
Number of active cells: 1654
Number of degrees of freedom: 3550 (by partition: 418+466+432+470+442+474+424+424)
Solver converged in 93 iterations.
Cycle 6:
Number of active cells: 3136
Number of degrees of freedom: 6702 (by partition: 838+796+828+892+866+798+878+806)
Solver converged in 127 iterations.
Cycle 7:
Number of active cells: 5962
Number of degrees of freedom: 12446 (by partition: 1586+1484+1652+1552+1556+1576+1560+1480)
Solver converged in 158 iterations.
Cycle 8:
Number of active cells: 11320
Number of degrees of freedom: 23586 (by partition: 2988+2924+2890+2868+2864+3042+2932+3078)
Solver converged in 225 iterations.
Cycle 9:
Number of active cells: 21424
Number of degrees of freedom: 43986 (by partition: 5470+5376+5642+5450+5630+5470+5416+5532)
Solver converged in 282 iterations.
Cycle 10:
Number of active cells: 40696
Number of degrees of freedom: 83754 (by partition: 10660+10606+10364+10258+10354+10322+10586+10604)
Solver converged in 392 iterations.
Cycle 11:
Number of active cells: 76978
Number of degrees of freedom: 156490 (by partition: 19516+20148+19390+19390+19336+19450+19730+19530)
Solver converged in 509 iterations.
Cycle 12:
Number of active cells: 146206
Number of degrees of freedom: 297994 (by partition: 37462+37780+37000+37060+37232+37328+36860+37272)
Solver converged in 705 iterations.
Cycle 13:
Number of active cells: 276184
Number of degrees of freedom: 558766 (by partition: 69206+69404+69882+71266+70348+69616+69796+69248)
Solver converged in 945 iterations.
Cycle 14:
Number of active cells: 523000
Number of degrees of freedom: 1060258 (by partition: 132928+132296+131626+132172+132170+133588+132252+133226)
Solver converged in 1282 iterations.
Cycle 15:
Number of active cells: 987394
Number of degrees of freedom: 1994226 (by partition: 253276+249068+247430+248402+248496+251380+248272+247902)
Solver converged in 1760 iterations.
Cycle 16:
Number of active cells: 1867477
Number of degrees of freedom: 3771884 (by partition: 468452+474204+470818+470884+469960+
471186+470686+475694)
Solver converged in 2251 iterations.

(This run uses a few more refinement cycles than the code available in the examples/ directory. The run also used a version of METIS from 2004 that generated different partitionings; consequently, the numbers you get today are slightly different.)

As can be seen, we can easily get to almost four million unknowns. In fact, the code's runtime with 8 processes was less than 7 minutes up to (and including) cycle 14, and 14 minutes including the second to last step. (These are numbers relevant to when the code was initially written, in 2004.) I lost the timing information for the last step, but you get the idea. All this is after release mode has been enabled by running make release, and with the generation of graphical output switched off for the reasons stated in the program comments above. (See also video lecture 18.) The biggest 2d computations we did had roughly 7.1 million unknowns, and were done on 32 processes. It took about 40 minutes. Not surprisingly, the limiting factor for how far one can go is how much memory one has, since every process has to hold the entire mesh and DoFHandler objects, although matrices and vectors are split up. For the 7.1M computation, the memory consumption was about 600 bytes per unknown, which is not bad, but one has to consider that this is for every unknown, whether we store the matrix and vector entries locally or not.

Here is some output generated in the 12th cycle of the program, i.e. with roughly 300,000 unknowns:

As one would hope for, the x- (left) and y-displacements (right) shown here closely match what we already saw in step-8. As shown there and in step-22, we could as well have produced a vector plot of the displacement field, rather than plotting it as two separate scalar fields. What may be more interesting, though, is to look at the mesh and partition at this step:

Again, the mesh (left) shows the same refinement pattern as seen previously. The right panel shows the partitioning of the domain across the 8 processes, each indicated by a different color. The picture shows that the subdomains are smaller where mesh cells are small, a fact that is to be expected given that the partitioning algorithm tries to equilibrate the number of cells in each subdomain; this equilibration is also easily identified in the output shown above, where the number of degrees of freedom per subdomain is roughly the same.

It is worth noting that if we ran the same program with a different number of processes, we would likely get slightly different output: a different mesh, different numbers of unknowns and iterations to convergence. The reason for this is that while the matrix and right hand side are the same independent of the number of processes used, the preconditioner is not: it performs an ILU(0) on the chunk of the matrix of each processor separately. Thus, its effectiveness as a preconditioner diminishes as the number of processes increases, which makes the number of iterations increase. Since a different preconditioner leads to slight changes in the computed solution, this will then lead to slightly different mesh cells tagged for refinement, and larger differences in subsequent steps. The solution will always look very similar, though.

Finally, here are some results for a 3d simulation. You can repeat these by first changing

ElasticProblem<2> elastic_problem;

to

ElasticProblem<3> elastic_problem;

in the main function, and then in the Makefile, change the reference to the 2d libraries to their 3d counterparts. If you then run the program in parallel, you get something similar to this (this is for a job with 16 processes):

Cycle 0:
Number of active cells: 512
Number of degrees of freedom: 2187 (by partition: 114+156+150+114+114+210+105+102+120+120+96+123+141+183+156+183)
Solver converged in 27 iterations.
Cycle 1:
Number of active cells: 1604
Number of degrees of freedom: 6549 (by partition: 393+291+342+354+414+417+570+366+444+288+543+525+345+387+489+381)
Solver converged in 42 iterations.
Cycle 2:
Number of active cells: 4992
Number of degrees of freedom: 19167 (by partition: 1428+1266+1095+1005+1455+1257+1410+1041+1320+1380+1080+1050+963+1005+1188+1224)
Solver converged in 65 iterations.
Cycle 3:
Number of active cells: 15485
Number of degrees of freedom: 56760 (by partition: 3099+3714+3384+3147+4332+3858+3615+3117+3027+3888+3942+3276+4149+3519+3030+3663)
Solver converged in 96 iterations.
Cycle 4:
Number of active cells: 48014
Number of degrees of freedom: 168762 (by partition: 11043+10752+9846+10752+9918+10584+10545+11433+12393+11289+10488+9885+10056+9771+11031+8976)
Solver converged in 132 iterations.
Cycle 5:
Number of active cells: 148828
Number of degrees of freedom: 492303 (by partition: 31359+30588+34638+32244+30984+28902+33297+31569+29778+29694+28482+28032+32283+30702+31491+28260)
Solver converged in 179 iterations.
Cycle 6:
Number of active cells: 461392
Number of degrees of freedom: 1497951 (by partition: 103587+100827+97611+93726+93429+88074+95892+88296+96882+93000+87864+90915+92232+86931+98091+90594)
Solver converged in 261 iterations.

The last step, going up to 1.5 million unknowns, takes about 55 minutes with 16 processes on 8 dual-processor machines (of the kind available in 2003). The graphical output generated by this job is rather large (cycle 5 already prints around 82 MB of GMV data), so we content ourselves with showing output from cycle 4:

The left picture shows the partitioning of the cube into 16 processes, whereas the right one shows the x-displacement along two cutplanes through the cube.

Possibilities for extensions

The program keeps a complete copy of the Triangulation and DoFHandler objects on every processor. That is obviously the bottleneck as far as parallelization goes. Internally, within deal.II, parallelizing the data structures used in hierarchic and unstructured triangulations is a very hard problem, and it took us a few more years to make this happen. The step-40 tutorial program and the Parallel computing with multiple processors using distributed memory documentation module talk about how to do these steps and what it takes from an application perspective. An obvious extension of the current program would be to use this functionality to completely distribute computations to many more processors than used here.

The plain program

/* ---------------------------------------------------------------------
*
* Copyright (C) 2000 - 2013 by the deal.II authors
*
* This file is part of the deal.II library.
*
* The deal.II library is free software; you can use it, redistribute
* it, and/or modify it under the terms of the GNU Lesser General
* Public License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
* The full text of the license can be found in the file LICENSE at
* the top level of the deal.II distribution.
*
* ---------------------------------------------------------------------
*
* Author: Wolfgang Bangerth, University of Texas at Austin, 2000, 2004
*/
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/base/function.h>
#include <deal.II/base/logstream.h>
#include <deal.II/base/multithread_info.h>
#include <deal.II/lac/vector.h>
#include <deal.II/lac/full_matrix.h>
#include <deal.II/lac/constraint_matrix.h>
#include <deal.II/grid/tria.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/grid_refinement.h>
#include <deal.II/grid/tria_accessor.h>
#include <deal.II/grid/tria_iterator.h>
#include <deal.II/grid/tria_boundary_lib.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_accessor.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/fe/fe_values.h>
#include <deal.II/fe/fe_system.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/numerics/vector_tools.h>
#include <deal.II/numerics/matrix_tools.h>
#include <deal.II/numerics/data_out.h>
#include <deal.II/numerics/error_estimator.h>
#include <deal.II/base/conditional_ostream.h>
#include <deal.II/base/utilities.h>
#include <deal.II/lac/petsc_vector.h>
#include <deal.II/lac/petsc_parallel_vector.h>
#include <deal.II/lac/petsc_parallel_sparse_matrix.h>
#include <deal.II/lac/petsc_solver.h>
#include <deal.II/lac/petsc_precondition.h>
#include <deal.II/grid/grid_tools.h>
#include <deal.II/dofs/dof_renumbering.h>
#include <fstream>
#include <iostream>
#include <sstream>
namespace Step17
{
using namespace dealii;
template <int dim>
class ElasticProblem
{
public:
ElasticProblem ();
~ElasticProblem ();
void run ();
private:
void setup_system ();
void assemble_system ();
unsigned int solve ();
void refine_grid ();
void output_results (const unsigned int cycle) const;
ConditionalOStream pcout;
Triangulation<dim> triangulation;
DoFHandler<dim> dof_handler;
FESystem<dim> fe;
ConstraintMatrix hanging_node_constraints;
PETScWrappers::MPI::SparseMatrix system_matrix;
PETScWrappers::MPI::Vector solution;
PETScWrappers::MPI::Vector system_rhs;
MPI_Comm mpi_communicator;
const unsigned int n_mpi_processes;
const unsigned int this_mpi_process;
};
template <int dim>
class RightHandSide : public Function<dim>
{
public:
RightHandSide ();
virtual void vector_value (const Point<dim> &p,
Vector<double> &values) const;
virtual void vector_value_list (const std::vector<Point<dim> > &points,
std::vector<Vector<double> > &value_list) const;
};
template <int dim>
RightHandSide<dim>::RightHandSide () :
Function<dim> (dim)
{}
template <int dim>
inline
void RightHandSide<dim>::vector_value (const Point<dim> &p,
Vector<double> &values) const
{
Assert (values.size() == dim,
ExcDimensionMismatch (values.size(), dim));
Assert (dim >= 2, ExcInternalError());
Point<dim> point_1, point_2;
point_1(0) = 0.5;
point_2(0) = -0.5;
if (((p-point_1).square() < 0.2*0.2) ||
((p-point_2).square() < 0.2*0.2))
values(0) = 1;
else
values(0) = 0;
if (p.square() < 0.2*0.2)
values(1) = 1;
else
values(1) = 0;
}
template <int dim>
void RightHandSide<dim>::vector_value_list (const std::vector<Point<dim> > &points,
std::vector<Vector<double> > &value_list) const
{
const unsigned int n_points = points.size();
Assert (value_list.size() == n_points,
ExcDimensionMismatch (value_list.size(), n_points));
for (unsigned int p=0; p<n_points; ++p)
RightHandSide<dim>::vector_value (points[p],
value_list[p]);
}
template <int dim>
ElasticProblem<dim>::ElasticProblem ()
:
pcout (std::cout),
dof_handler (triangulation),
fe (FE_Q<dim>(1), dim),
mpi_communicator (MPI_COMM_WORLD),
n_mpi_processes (Utilities::MPI::n_mpi_processes(mpi_communicator)),
this_mpi_process (Utilities::MPI::this_mpi_process(mpi_communicator))
{
pcout.set_condition(this_mpi_process == 0);
}
template <int dim>
ElasticProblem<dim>::~ElasticProblem ()
{
dof_handler.clear ();
}
template <int dim>
void ElasticProblem<dim>::setup_system ()
{
GridTools::partition_triangulation (n_mpi_processes, triangulation);
dof_handler.distribute_dofs (fe);
DoFRenumbering::subdomain_wise (dof_handler);
const types::global_dof_index n_local_dofs
= DoFTools::count_dofs_with_subdomain_association (dof_handler,
this_mpi_process);
system_matrix.reinit (mpi_communicator,
dof_handler.n_dofs(),
dof_handler.n_dofs(),
n_local_dofs,
n_local_dofs,
dof_handler.max_couplings_between_dofs());
solution.reinit (mpi_communicator, dof_handler.n_dofs(), n_local_dofs);
system_rhs.reinit (mpi_communicator, dof_handler.n_dofs(), n_local_dofs);
hanging_node_constraints.clear ();
DoFTools::make_hanging_node_constraints (dof_handler,
hanging_node_constraints);
hanging_node_constraints.close ();
}
template <int dim>
void ElasticProblem<dim>::assemble_system ()
{
QGauss<dim> quadrature_formula(2);
FEValues<dim> fe_values (fe, quadrature_formula,
update_values | update_gradients |
update_quadrature_points | update_JxW_values);
const unsigned int dofs_per_cell = fe.dofs_per_cell;
const unsigned int n_q_points = quadrature_formula.size();
FullMatrix<double> cell_matrix (dofs_per_cell, dofs_per_cell);
Vector<double> cell_rhs (dofs_per_cell);
std::vector<types::global_dof_index> local_dof_indices (dofs_per_cell);
std::vector<double> lambda_values (n_q_points);
std::vector<double> mu_values (n_q_points);
ConstantFunction<dim> lambda(1.), mu(1.);
RightHandSide<dim> right_hand_side;
std::vector<Vector<double> > rhs_values (n_q_points,
Vector<double>(dim));
typename DoFHandler<dim>::active_cell_iterator
cell = dof_handler.begin_active(),
endc = dof_handler.end();
for (; cell!=endc; ++cell)
if (cell->subdomain_id() == this_mpi_process)
{
cell_matrix = 0;
cell_rhs = 0;
fe_values.reinit (cell);
lambda.value_list (fe_values.get_quadrature_points(), lambda_values);
mu.value_list (fe_values.get_quadrature_points(), mu_values);
for (unsigned int i=0; i<dofs_per_cell; ++i)
{
const unsigned int
component_i = fe.system_to_component_index(i).first;
for (unsigned int j=0; j<dofs_per_cell; ++j)
{
const unsigned int
component_j = fe.system_to_component_index(j).first;
for (unsigned int q_point=0; q_point<n_q_points;
++q_point)
{
cell_matrix(i,j)
+=
(
(fe_values.shape_grad(i,q_point)[component_i] *
fe_values.shape_grad(j,q_point)[component_j] *
lambda_values[q_point])
+
(fe_values.shape_grad(i,q_point)[component_j] *
fe_values.shape_grad(j,q_point)[component_i] *
mu_values[q_point])
+
((component_i == component_j) ?
(fe_values.shape_grad(i,q_point) *
fe_values.shape_grad(j,q_point) *
mu_values[q_point]) :
0)
)
*
fe_values.JxW(q_point);
}
}
}
right_hand_side.vector_value_list (fe_values.get_quadrature_points(),
rhs_values);
for (unsigned int i=0; i<dofs_per_cell; ++i)
{
const unsigned int
component_i = fe.system_to_component_index(i).first;
for (unsigned int q_point=0; q_point<n_q_points; ++q_point)
cell_rhs(i) += fe_values.shape_value(i,q_point) *
rhs_values[q_point](component_i) *
fe_values.JxW(q_point);
}
cell->get_dof_indices (local_dof_indices);
hanging_node_constraints
.distribute_local_to_global(cell_matrix, cell_rhs,
local_dof_indices,
system_matrix, system_rhs);
}
system_matrix.compress(VectorOperation::add);
system_rhs.compress(VectorOperation::add);
std::map<types::global_dof_index,double> boundary_values;
VectorTools::interpolate_boundary_values (dof_handler,
0,
ZeroFunction<dim>(dim),
boundary_values);
MatrixTools::apply_boundary_values (boundary_values,
system_matrix, solution,
system_rhs, false);
}
template <int dim>
unsigned int ElasticProblem<dim>::solve ()
{
SolverControl solver_control (solution.size(),
1e-8*system_rhs.l2_norm());
PETScWrappers::SolverCG cg (solver_control,
mpi_communicator);
PETScWrappers::PreconditionBlockJacobi preconditioner(system_matrix);
cg.solve (system_matrix, solution, system_rhs,
preconditioner);
PETScWrappers::Vector localized_solution (solution);
hanging_node_constraints.distribute (localized_solution);
solution = localized_solution;
return solver_control.last_step();
}
template <int dim>
void ElasticProblem<dim>::output_results (const unsigned int cycle) const
{
const PETScWrappers::Vector localized_solution (solution);
if (this_mpi_process == 0)
{
std::ostringstream filename;
filename << "solution-" << cycle << ".gmv";
std::ofstream output (filename.str().c_str());
DataOut<dim> data_out;
data_out.attach_dof_handler (dof_handler);
std::vector<std::string> solution_names;
switch (dim)
{
case 1:
solution_names.push_back ("displacement");
break;
case 2:
solution_names.push_back ("x_displacement");
solution_names.push_back ("y_displacement");
break;
case 3:
solution_names.push_back ("x_displacement");
solution_names.push_back ("y_displacement");
solution_names.push_back ("z_displacement");
break;
default:
Assert (false, ExcInternalError());
}
data_out.add_data_vector (localized_solution, solution_names);
std::vector<unsigned int> partition_int (triangulation.n_active_cells());
GridTools::get_subdomain_association (triangulation, partition_int);
const Vector<double> partitioning(partition_int.begin(),
partition_int.end());
data_out.add_data_vector (partitioning, "partitioning");
data_out.build_patches ();
data_out.write_gmv (output);
}
}
template <int dim>
void ElasticProblem<dim>::refine_grid ()
{
const PETScWrappers::Vector localized_solution (solution);
Vector<float> local_error_per_cell (triangulation.n_active_cells());
KellyErrorEstimator<dim>::estimate (dof_handler, QGauss<dim-1>(2),
typename FunctionMap<dim>::type(),
localized_solution,
local_error_per_cell,
ComponentMask(), 0,
MultithreadInfo::n_threads(),
this_mpi_process);
const unsigned int n_local_cells
= GridTools::count_cells_with_subdomain_association (triangulation,
this_mpi_process);
PETScWrappers::MPI::Vector
distributed_all_errors (mpi_communicator,
triangulation.n_active_cells(),
n_local_cells);
for (unsigned int i=0; i<local_error_per_cell.size(); ++i)
if (local_error_per_cell(i) != 0)
distributed_all_errors(i) = local_error_per_cell(i);
distributed_all_errors.compress (VectorOperation::insert);
const Vector<float> localized_all_errors (distributed_all_errors);
GridRefinement::refine_and_coarsen_fixed_number (triangulation,
localized_all_errors,
0.3, 0.03);
triangulation.execute_coarsening_and_refinement ();
}
template <int dim>
void ElasticProblem<dim>::run ()
{
for (unsigned int cycle=0; cycle<10; ++cycle)
{
pcout << "Cycle " << cycle << ':' << std::endl;
if (cycle == 0)
{
GridGenerator::hyper_cube (triangulation, -1, 1);
triangulation.refine_global (3);
}
else
refine_grid ();
pcout << " Number of active cells: "
<< triangulation.n_active_cells()
<< std::endl;
setup_system ();
pcout << " Number of degrees of freedom: "
<< dof_handler.n_dofs()
<< " (by partition:";
for (unsigned int p=0; p<n_mpi_processes; ++p)
pcout << (p==0 ? ' ' : '+')
<< (DoFTools::count_dofs_with_subdomain_association (dof_handler,
p));
pcout << ")" << std::endl;
assemble_system ();
const unsigned int n_iterations = solve ();
pcout << " Solver converged in " << n_iterations
<< " iterations." << std::endl;
output_results (cycle);
}
}
}
int main (int argc, char **argv)
{
try
{
using namespace dealii;
using namespace Step17;
Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);
{
deallog.depth_console (0);
ElasticProblem<2> elastic_problem;
elastic_problem.run ();
}
}
catch (std::exception &exc)
{
std::cerr << std::endl << std::endl
<< "----------------------------------------------------"
<< std::endl;
std::cerr << "Exception on processing: " << std::endl
<< exc.what() << std::endl
<< "Aborting!" << std::endl
<< "----------------------------------------------------"
<< std::endl;
return 1;
}
catch (...)
{
std::cerr << std::endl << std::endl
<< "----------------------------------------------------"
<< std::endl;
std::cerr << "Unknown exception!" << std::endl
<< "Aborting!" << std::endl
<< "----------------------------------------------------"
<< std::endl;
return 1;
}
return 0;
}