Project ideas

deal.II is a community project that lives by the participation of its members. It grows by members of our community implementing things that they find interesting, important, gratifying, or useful for their own work. Here are a few ideas for larger projects that would make a significant difference:

  • Parallel hp::DoFHandler: Currently, only the ::DoFHandler class can work on parallel or shared triangulations; hp::DoFHandler cannot. This is a serious obstacle not only for people who want to do hp adaptivity, but also for those who want to use FE_Nothing on some cells in the style of step-46 and want to do so in parallel computations (see the first sketch after this list).
    We know pretty well what steps one needs to take to get there, but lack the time to do it. It would, however, make for a nice and self-contained project with a big pay-off.
  • Write a finite element: deal.II implements a fair number of different elements, but there are many more out there. Examples include the various elasticity elements, serendipity elements, and a number of nonconforming elements. These make for relatively self-contained projects since one really only has to understand one particular aspect of deal.II -- the interfaces of the FiniteElement class and how it interacts with FEValues and a rather limited set of other functions (a sketch of this interface follows the list).
    An often requested feature is a tutorial program that walks users through the process of writing a new finite element, demonstrating this interface step by step.
  • Complex valued algebra: Many scientific applications require complex-valued algebra. Being a heavily templated C++ library, deal.II is almost ready to support this use case; much of the preliminary work was done in pull requests 1027 and 631. However, some work remains to finish the job (see github issues 1894 and 2033). Completing these remaining steps would make a nice and self-contained project (see the sketch after this list for the kind of code this would enable).
  • Restructure finite element interfaces: The interfaces of the finite element and mapping classes date back to the earliest times of deal.II, when we were not as experienced software designers as we are now. Much of this was rewritten in 2015 as part of github issue 1198, but a few odd corners remain. In particular, there are a couple of classes that take a template argument designating a class that describes a polynomial space, when it would be entirely sufficient to receive a reference to an object (rather than a class type) that represents this space; see github issue 1973 and the sketch after this list. Making this change would eliminate a template argument, improve compile times, and make the interfaces easier to understand.
  • Use C++11 lambda functions: In many places where we spawn tasks for parallel processing, we implement these tasks in separate functions because in C++98 and C++03, starting a task required providing a pointer to a (named) function. C++11 now has lambda functions, so a simpler way to achieve the same goal is to implement the code to be executed on a task as a lambda function, declared right at the place where one wants to use it (see the before/after sketch following this list). Examples of this pattern are the implementation of parallelized vector operations for multithreading, and many of the places where we call Threads::new_task().
  • Improve error messages: In the past, we have often provided only the equivalent of an "An error has occurred" statement if a user called a function with wrong arguments, but this is not as helpful as it could be in finding out what exactly went wrong. A better approach is to be much more verbose in error messages (see the example after this list). We track our progress towards this goal in github issue 610.
  • Linux based installer: We have been working on providing a source-based installer for Linux systems, but this requires additional testing, support for more systems/clusters, and writing documentation. See github issue 1523 for more information.
  • Saving memory for very large parallel computations: When doing large scale parallel computations, especially with a large coarse mesh, large amounts of memory are wasted. This is especially true when using an MPI-only approach, because every rank on a node then holds its own (redundant) copy of the data. There are several ways to address this, from reorganizing the data structures to using MPI's special support for shared allocations (see the MPI sketch after this list). See github issue 2209 for more information.
  • Easy enhancement projects: Some smaller projects are also described on the list of open issues at our github site.
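
To illustrate the first project, here is a minimal sketch of what parallel hp code would look like once the project is complete. The class and function names are the usual deal.II ones; it is the combination of hp::DoFHandler with parallel::distributed::Triangulation that does not work yet.

    #include <deal.II/distributed/tria.h>
    #include <deal.II/fe/fe_nothing.h>
    #include <deal.II/fe/fe_q.h>
    #include <deal.II/grid/grid_generator.h>
    #include <deal.II/hp/dof_handler.h>
    #include <deal.II/hp/fe_collection.h>

    using namespace dealii;

    template <int dim>
    void sketch (MPI_Comm communicator)
    {
      // A fully distributed mesh, as in step-40:
      parallel::distributed::Triangulation<dim> triangulation (communicator);
      GridGenerator::hyper_cube (triangulation);
      triangulation.refine_global (3);

      // A collection mixing a real element with FE_Nothing,
      // in the style of step-46:
      hp::FECollection<dim> fe_collection;
      fe_collection.push_back (FE_Q<dim> (1));
      fe_collection.push_back (FE_Nothing<dim> ());

      // This is the part that does not currently work on a
      // parallel::distributed::Triangulation and that the
      // project would implement:
      hp::DoFHandler<dim> dof_handler (triangulation);
      dof_handler.distribute_dofs (fe_collection);
    }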
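
For the finite element project, a new element has to implement the interface through which FEValues evaluates shape functions on each cell. The following sketch shows that contract from the user's side, with the existing FE_Q standing in for a newly written element; writing an element means making code like this work for your new class.

    #include <deal.II/base/quadrature_lib.h>
    #include <deal.II/dofs/dof_handler.h>
    #include <deal.II/fe/fe_q.h>
    #include <deal.II/fe/fe_values.h>
    #include <deal.II/grid/grid_generator.h>
    #include <deal.II/grid/tria.h>

    using namespace dealii;

    int main ()
    {
      Triangulation<2> triangulation;
      GridGenerator::hyper_cube (triangulation);
      triangulation.refine_global (2);

      FE_Q<2>       fe (2);     // a newly written element would go here
      DoFHandler<2> dof_handler (triangulation);
      dof_handler.distribute_dofs (fe);

      QGauss<2>   quadrature (3);
      FEValues<2> fe_values (fe, quadrature,
                             update_values | update_gradients);

      for (auto cell = dof_handler.begin_active();
           cell != dof_handler.end(); ++cell)
        {
          // FEValues asks the FiniteElement object for shape function
          // values and derivatives on this cell; implementing that
          // query is the core of writing a new element.
          fe_values.reinit (cell);

          const double value = fe_values.shape_value (0, 0);
          (void) value;
        }
    }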
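
For the complex valued algebra project, this is roughly the kind of code the remaining work is meant to enable (whether each individual piece already compiles depends on which template instantiations currently exist):

    #include <deal.II/lac/full_matrix.h>
    #include <deal.II/lac/vector.h>

    #include <complex>

    using namespace dealii;

    int main ()
    {
      // Vector<Number> and FullMatrix<Number> are class templates, so
      // much of this works already; the remaining work is to make sure
      // *all* of the library's linear algebra accepts complex scalars.
      Vector<std::complex<double> >     v (10), w (10);
      FullMatrix<std::complex<double> > A (10, 10);

      v (0)    = std::complex<double> (1., 2.);
      A (0, 0) = std::complex<double> (0., 1.);

      A.vmult (w, v);   // matrix-vector product with complex scalars
    }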
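
The interface-restructuring item boils down to the following design change, sketched here with simplified, partly hypothetical declarations rather than the actual deal.II ones:

    // Current pattern: the polynomial space is a *type* baked into the
    // element via a template argument, so every space forces a separate
    // instantiation of the whole element class.
    template <class PolynomialSpace, int dim>
    class FE_PolyOld
    {
      PolynomialSpace polynomial_space;
      // ...
    };

    // Proposed pattern: the element receives an *object* describing the
    // space through its constructor. One template argument less, fewer
    // instantiations, faster compiles. (The base class name below is
    // hypothetical, for illustration only.)
    template <int dim>
    class PolynomialSpaceBase
    {
      // common evaluation interface for polynomial spaces ...
    };

    template <int dim>
    class FE_PolyNew
    {
    public:
      FE_PolyNew (const PolynomialSpaceBase<dim> &polynomial_space)
        : polynomial_space (polynomial_space)
      {}

    private:
      const PolynomialSpaceBase<dim> &polynomial_space;
    };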
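
For the lambda item, the change looks roughly like this (Threads::new_task is the actual deal.II function; the computation itself is a placeholder):

    #include <deal.II/base/thread_management.h>

    using namespace dealii;

    double expensive_computation (const double x)
    {
      return x * x;
    }

    void old_style ()
    {
      // C++98/03 style: the task body must live in a separate,
      // named function.
      Threads::Task<double> task
        = Threads::new_task (&expensive_computation, 42.);
      const double result = task.return_value ();
      (void) result;
    }

    void new_style ()
    {
      // C++11 style: the task body is a lambda, declared right
      // where it is used.
      Threads::Task<double> task
        = Threads::new_task ([] () { return 42. * 42.; });
      const double result = task.return_value ();
      (void) result;
    }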
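
For the error messages item, "more verbose" means something like the following, using deal.II's Assert and ExcMessage facilities; the specific function and message text are made up for illustration:

    #include <deal.II/base/exceptions.h>

    #include <string>

    using namespace dealii;

    void set_boundary_values (const unsigned int boundary_id)
    {
      // Terse style, which tells users nothing:
      //   Assert (boundary_id < 256, ExcInternalError ());

      // Verbose style: say what was wrong and what a valid
      // input would have looked like.
      Assert (boundary_id < 256,
              ExcMessage ("You are trying to use boundary indicator " +
                          std::to_string (boundary_id) +
                          ", but boundary indicators must lie in the "
                          "range [0,255]."));
    }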
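
The "special support for shared allocations" mentioned in the memory-saving item refers to MPI-3 shared-memory windows, which allow all ranks on one node to share a single copy of read-only data such as the coarse mesh. A minimal, deal.II-independent sketch:

    #include <mpi.h>

    int main (int argc, char **argv)
    {
      MPI_Init (&argc, &argv);

      // Group the ranks that live on the same shared-memory node:
      MPI_Comm node_comm;
      MPI_Comm_split_type (MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED,
                           0, MPI_INFO_NULL, &node_comm);

      int node_rank;
      MPI_Comm_rank (node_comm, &node_rank);

      // Only rank 0 on each node allocates the (large) array; all
      // other ranks attach to the same physical memory instead of
      // holding their own redundant copy.
      const MPI_Aint n_bytes = (node_rank == 0) ? 1024 * sizeof (double) : 0;
      double *data;
      MPI_Win window;
      MPI_Win_allocate_shared (n_bytes, sizeof (double), MPI_INFO_NULL,
                               node_comm, &data, &window);

      // Ranks other than 0 query where rank 0's allocation lives:
      MPI_Aint size;
      int disp_unit;
      MPI_Win_shared_query (window, 0, &size, &disp_unit, &data);

      // ... all ranks on the node can now read the same 'data' ...

      MPI_Win_free (&window);
      MPI_Comm_free (&node_comm);
      MPI_Finalize ();
    }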

If you want to work on any of these projects: awesome! A good starting point would be to contact the principal developers so we can coordinate finding someone to mentor you.