Reference documentation for deal.II version GIT relicensing-422-gb369f187d8 2024-04-17 18:10:02+00:00
Utilities::Trilinos Namespace Reference

Namespaces

namespace internal

Functions

const Epetra_Comm &comm_world()
const Epetra_Comm &comm_self()
const Teuchos::RCP<const Teuchos::Comm<int>> &tpetra_comm_self()
Epetra_Comm *duplicate_communicator(const Epetra_Comm &communicator)
void destroy_communicator(Epetra_Comm &communicator)
unsigned int get_n_mpi_processes(const Epetra_Comm &mpi_communicator)
unsigned int get_this_mpi_process(const Epetra_Comm &mpi_communicator)
Epetra_Map duplicate_map(const Epetra_BlockMap &map, const Epetra_Comm &comm)
MPI_Comm teuchos_comm_to_mpi_comm(const Teuchos::RCP<const Teuchos::Comm<int>> &teuchos_comm)

Detailed Description

This namespace provides some of the basic structures used in the initialization of the Trilinos objects (e.g., matrices, vectors, and preconditioners).

Function Documentation

◆ comm_world()

const Epetra_Comm &Utilities::Trilinos::comm_world()

Return a Trilinos Epetra_Comm object needed for creation of Epetra_Maps.

If deal.II has been configured to use a compiler that does not support MPI then the resulting communicator will be a serial one. Otherwise, the communicator will correspond to MPI_COMM_WORLD, i.e. a communicator that encompasses all processes within this MPI universe.
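
A minimal usage sketch (hypothetical, not part of this interface; it assumes the usual Epetra headers and that the function is available through <deal.II/base/utilities.h>):

#include <deal.II/base/utilities.h>
#include <Epetra_Map.h>

// Build a map with 100 global elements (index base 0), distributed
// across all processes of the MPI universe.
const Epetra_Comm &comm = dealii::Utilities::Trilinos::comm_world();
const Epetra_Map   map(100, 0, comm);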

Definition at line 35 of file trilinos_utilities.cc.

◆ comm_self()

const Epetra_Comm &Utilities::Trilinos::comm_self()

Return a Trilinos Epetra_Comm object needed for creation of Epetra_Maps.

If deal.II has been configured to use a compiler that does not support MPI then the resulting communicator will be a serial one. Otherwise, the communicator will correspond to MPI_COMM_SELF, i.e. a communicator that comprises only this one processor.

Definition at line 67 of file trilinos_utilities.cc.

◆ tpetra_comm_self()

const Teuchos::RCP<const Teuchos::Comm<int>> &Utilities::Trilinos::tpetra_comm_self()

Return a Teuchos::Comm object needed for creation of Tpetra::Maps.

If deal.II has been configured to use a compiler that does not support MPI then the resulting communicator will be a serial one. Otherwise, the communicator will correspond to MPI_COMM_SELF, i.e. a communicator that comprises only this one processor.
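
A minimal sketch of how this communicator might be used (hypothetical, assuming the standard Tpetra headers):

#include <deal.II/base/utilities.h>
#include <Tpetra_Map.hpp>

// A purely local Tpetra map: ten elements (index base 0), all owned by
// the current process, since the communicator comprises only this process.
const auto         &comm = dealii::Utilities::Trilinos::tpetra_comm_self();
const Tpetra::Map<> map(10, 0, comm);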

Definition at line 51 of file trilinos_utilities.cc.

◆ duplicate_communicator()

Epetra_Comm *Utilities::Trilinos::duplicate_communicator(const Epetra_Comm &communicator)

Given a communicator, duplicate it. If the given communicator is serial, this simply means returning a copy of it. If, on the other hand, it is parallel, we duplicate the underlying MPI_Comm object: we create a separate MPI communicator that contains the same processors in the same order but has an identifier distinct from that of the given communicator. The function returns a pointer to a new object of a class derived from Epetra_Comm. The caller of this function needs to assume ownership of this object. The returned object should be destroyed using the destroy_communicator() function.

This facility is used to separate streams of communication. For example, a program could simply use MPI_COMM_WORLD for everything. But it is easy to come up with scenarios where not all processors participate in a communication that is intended to be global – for example, if we assemble a matrix on a coarse mesh with fewer cells than there are processors, some processors own no cells, never write into the matrix, and consequently do not synchronize it with the rest. That is clearly a bug. However, if these processors simply continue their work, and the next parallel operation happens to be a synchronization of a different matrix, then that synchronization could succeed – by accident, since different processors are talking about different matrices.

This kind of situation can be avoided by using different communicators for different matrices, which reduces the likelihood that communications meant to be separate are mistaken for one another just because they happen on the same communicator. In addition, it is conceivable that some MPI operations could be parallelized using multiple threads, because their communicator identifies the communication in question, rather than their relative timing as would be the case in a sequential program that uses only a single communicator.
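
The intended life cycle is to pair every call to this function with a later call to destroy_communicator(); a hypothetical sketch:

#include <deal.II/base/utilities.h>

// Duplicate the global communicator to obtain a private communication
// channel, e.g. for one particular matrix.
Epetra_Comm *private_comm =
  dealii::Utilities::Trilinos::duplicate_communicator(
    dealii::Utilities::Trilinos::comm_world());

// ... build maps, matrices, etc. on *private_comm ...

// Free the underlying MPI communicator, then release the Epetra_Comm
// object itself, which the caller owns.
dealii::Utilities::Trilinos::destroy_communicator(*private_comm);
delete private_comm;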

Definition at line 83 of file trilinos_utilities.cc.

◆ destroy_communicator()

void Utilities::Trilinos::destroy_communicator(Epetra_Comm &communicator)

Given an Epetra communicator that was created by the duplicate_communicator() function, destroy the underlying MPI communicator object and reset the Epetra_Comm object to the result of comm_self().

It is necessary to call this function at the time when the result of duplicate_communicator() is no longer needed. The reason is that in that function we first create a new MPI_Comm object and then wrap an Epetra_Comm around it. While we can take care of destroying the latter, destroying it does not free the underlying MPI communicator, since the Epetra_Comm object has to assume that the communicator may still be used by other objects in the program. Consequently, we have to destroy the MPI communicator ourselves, explicitly.

This function does exactly that. Because this has to happen while the Epetra_Comm object is still around, it first resets the latter and then destroys the communicator object.

Note
If you call this function on an Epetra_Comm object that is not created by duplicate_communicator(), you are likely doing something quite wrong. Don't do this.

Definition at line 110 of file trilinos_utilities.cc.

◆ get_n_mpi_processes()

unsigned int Utilities::Trilinos::get_n_mpi_processes(const Epetra_Comm &mpi_communicator)

Return the number of MPI processes that exist in the given communicator object. If this is a sequential job (i.e., the program is not using MPI at all, or is using MPI but has been started with only one MPI process), then the communicator necessarily involves only one process and the function returns 1.

Definition at line 129 of file trilinos_utilities.cc.

◆ get_this_mpi_process()

unsigned int Utilities::Trilinos::get_this_mpi_process(const Epetra_Comm &mpi_communicator)

Return the rank of the present MPI process within the space of processes described by the given communicator. This is a unique value for each process, between zero and one less than the total number of processes (as returned by get_n_mpi_processes()).
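
A hypothetical snippet illustrating both query functions together:

#include <deal.II/base/utilities.h>

const Epetra_Comm &comm = dealii::Utilities::Trilinos::comm_world();
const unsigned int n_processes =
  dealii::Utilities::Trilinos::get_n_mpi_processes(comm);
const unsigned int this_process =
  dealii::Utilities::Trilinos::get_this_mpi_process(comm);
// 'this_process' lies in the half-open range [0, n_processes).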

Definition at line 136 of file trilinos_utilities.cc.

◆ duplicate_map()

Epetra_Map Utilities::Trilinos::duplicate_map(const Epetra_BlockMap &map, const Epetra_Comm &comm)

Given a Trilinos Epetra map, create a new map that has the same subdivision of elements to processors but uses the given communicator object instead of the one stored in the first argument. In essence, this means that we create a map that communicates among the same processors in the same way, but using a separate channel.

This function is typically used with a communicator that has been obtained by the duplicate_communicator() function.
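
A hypothetical sketch that combines this function with duplicate_communicator(), where 'original_map' stands for some pre-existing Epetra_Map:

#include <deal.II/base/utilities.h>
#include <Epetra_Map.h>

// Create a map with the same element-to-process distribution as
// 'original_map', but communicating on a private channel.
Epetra_Comm *private_comm =
  dealii::Utilities::Trilinos::duplicate_communicator(original_map.Comm());
const Epetra_Map duplicated_map =
  dealii::Utilities::Trilinos::duplicate_map(original_map, *private_comm);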

Definition at line 144 of file trilinos_utilities.cc.

◆ teuchos_comm_to_mpi_comm()

MPI_Comm Utilities::Trilinos::teuchos_comm_to_mpi_comm(const Teuchos::RCP<const Teuchos::Comm<int>> &teuchos_comm)

Return the underlying MPI_Comm communicator from the Teuchos::Comm communicator.
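
A minimal hypothetical example that extracts the raw communicator in order to call plain MPI functions on it:

#include <deal.II/base/utilities.h>
#include <mpi.h>

const MPI_Comm mpi_comm =
  dealii::Utilities::Trilinos::teuchos_comm_to_mpi_comm(
    dealii::Utilities::Trilinos::tpetra_comm_self());

int rank = 0;
MPI_Comm_rank(mpi_comm, &rank); // rank == 0, since this communicator
                                // comprises only the current process.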

Definition at line 177 of file trilinos_utilities.cc.