Open Projects

The projects below are suggestions for student projects at MSc level. Some of them can be developed into proper PhD proposals. If you are interested in pursuing a PhD, feel free to contact me.

Mini SPH

SPH (Smoothed Particle Hydrodynamics) is a popular technique in many application fields to simulate fluids (the matter distribution in galaxies, e.g.). It is convenient, as the fluid is not simulated over a grid but through distributions which move around in space. However, developers typically add a grid on top of these “particles” (the centres of the distributions) to speed up certain steps such as finding nearby distributions. In the Peano code, we have successfully written particle management routines where the grid hosts the particles – the grid is not mere metadata but actually owns all particles. While we have only used this for Particle-in-Cell (PIC) methods and the Discrete Element Method (DEM), there is no reason why these data structures should not work for SPH as well. In this project, we will demonstrate this idea by implementing a real SPH solver within Peano4.
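The data layout the project builds on can be illustrated with a minimal cell-list sketch in plain Python/NumPy (this is not Peano's actual API): the grid owns the particles, and a neighbour search for one particle only has to scan the surrounding cells instead of all particles.

```python
import numpy as np

def build_cells(positions, h):
    """Sort particle indices into grid cells whose edge length equals the smoothing length h."""
    cells = {}
    for i, x in enumerate(positions):
        cells.setdefault(tuple((x // h).astype(int)), []).append(i)
    return cells

def neighbours(i, positions, cells, h):
    """All particles within distance h of particle i, found by scanning the 3x3(x3) adjacent cells."""
    key = tuple((positions[i] // h).astype(int))
    result = []
    for offset in np.ndindex(*(3,) * positions.shape[1]):
        for j in cells.get(tuple(k + o - 1 for k, o in zip(key, offset)), []):
            if j != i and np.linalg.norm(positions[i] - positions[j]) < h:
                result.append(j)
    return result

def density(i, positions, masses, cells, h):
    """SPH density estimate for particle i; the kernel is a crude placeholder, not a proper SPH kernel."""
    rho = 0.0
    for j in neighbours(i, positions, cells, h) + [i]:
        q = np.linalg.norm(positions[i] - positions[j]) / h
        rho += masses[j] * max(0.0, 1.0 - q) ** 3
    return rho

# Example: 1000 random particles in the unit cube with smoothing length 0.1.
positions = np.random.rand(1000, 3)
masses = np.full(1000, 1.0 / 1000)
cells = build_cells(positions, h=0.1)
print(density(0, positions, masses, cells, h=0.1))
```

In the project, the dictionary of cells is replaced by Peano's tree-based grid, but the access pattern – cells own their particles, neighbour queries stay local – remains the same.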

Space-filling brute force

Space-filling curves are a popular tool to realise domain decomposition on regular and adaptive Cartesian grids. They define a ‘curve’ (a polyline) that runs through all cells of a grid or voxel field recursively. Popular curves are the Hilbert curve, the Lebesgue curve and the Peano curve. All of them serialise a 2d or 3d space. When we cut this curve into equally sized fragments, we also divide the space into equally sized fragments and thus obtain a grid/domain partition. This process is known to yield quasi-optimal partitions, i.e. segments whose surface-to-volume ratio is, up to a constant, as good as that of a sphere – provided the underlying grid is regular. This ratio is important as it determines the ratio of local work to communication on a parallel computer. Unfortunately, the constant is not known quantitatively (all existing bounds are rather inaccurate), and we do not even know whether such a constant exists for adaptive grids. In this project, we want to create a simulator that starts from very simple curves and then advances to more and more complicated curve patterns. For each pattern, it determines the ratios or, more accurately, all constants determining the ratio. We hope to find, with such a brute-force approach, that the constant is bounded (and converges towards a magic number once the partitions become big enough), which would give the first quantitative formula for the partitions’ surface-to-volume ratio.
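A first brute-force experiment along these lines could look as follows (a sketch in Python, 2D only; it uses the Lebesgue/Morton curve because its index is trivial to compute, and the grid size and number of partitions are arbitrary choices for illustration):

```python
def morton_index(x, y, bits):
    """Interleave the bits of x and y: the cell's position along the Lebesgue (Z-order) curve."""
    z = 0
    for b in range(bits):
        z |= ((x >> b) & 1) << (2 * b) | ((y >> b) & 1) << (2 * b + 1)
    return z

def partition_constants(k=5, parts=7):
    """Cut a 2^k x 2^k grid, linearised along the curve, into `parts` chunks and print each
    chunk's constant C in surface <= C * volume^((d-1)/d) with d=2."""
    n = 2 ** k
    cells = [c for _, c in sorted((morton_index(x, y, k), (x, y))
                                  for x in range(n) for y in range(n))]
    chunk = len(cells) // parts
    owner = {c: min(i // chunk, parts - 1) for i, c in enumerate(cells)}
    for p in range(parts):
        part = [c for c in cells if owner[c] == p]
        # count cell faces whose neighbour lies outside the partition (domain boundary included)
        surface = sum(owner.get((x + dx, y + dy)) != p
                      for x, y in part
                      for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)))
        print(p, surface / len(part) ** 0.5)

partition_constants()
```

The project replaces this hard-coded curve and regular grid with recursively refined curve patterns and adaptive grids, and records how the printed constants behave as the partitions grow.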

ExaHyPE tracers

The ExaHyPE project is an EU-funded international project developing an engine to solve hyperbolic differential equations on supercomputers. The engine is used to run long-range seismic risk assessment (the impact of earthquakes on critical infrastructure such as dams) and to simulate the dynamics of binary star systems that are possible sources of gravitational waves. ExaHyPE works with a so-called Discontinuous Galerkin approach and dynamically adaptive meshes, i.e. the problem of interest (the earthquakes, e.g.) is represented on a mesh. For many applications, it would be more convenient (also because of the size of these meshes) if only a few tracer particles were inserted into the simulation to illustrate how the underlying waves evolve over time. The goal of this project is to take ExaHyPE plus an existing particle administration approach and to fuse the two. Once this is done, some larger simulations are to be run, and one can start to discuss how the particles in turn can affect the simulation (through some added random noise, e.g.).
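As a conceptual illustration (not ExaHyPE's actual interface): a tracer is a passive probe that is advected with the locally interpolated field and records what it sees. The two callback functions below stand in for the cell lookup plus evaluation of the Discontinuous Galerkin polynomial of the hosting cell.

```python
import numpy as np

def advance_tracers(tracers, velocity_at, solution_at, dt, steps):
    """Move each tracer with an explicit Euler step and sample the solution along its path."""
    history = []
    for _ in range(steps):
        for x in tracers:
            x += dt * velocity_at(x)                    # follow the locally interpolated field
            history.append((x.copy(), solution_at(x)))  # record the solution seen by the tracer
    return history

# Example with analytic stand-in fields: rigid-body rotation and a simple sine wave.
tracers = [np.array([0.5, 0.0]), np.array([0.25, 0.0])]
rotate = lambda x: np.array([-x[1], x[0]])
wave = lambda x: np.sin(4.0 * x[0])
print(advance_tracers(tracers, rotate, wave, dt=0.01, steps=3)[-1])
```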

Incompressible Navier-Stokes

Michael Griebel and others have written an excellent book on Navier-Stokes solvers. The book is tutorial-like and was actually written for postgraduate and undergraduate lab courses. The goal of this project is to follow the steps from the book and to realise them within Peano4, a massively parallel software for adaptive Cartesian meshes, and thus to deliver a solver for incompressible fluids.
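To give a flavour of the work: each time step of the scheme in the book computes tentative velocities, solves a pressure Poisson equation and then corrects the velocities with the pressure gradient. A simplified sketch of such a Poisson solve via successive over-relaxation on a regular grid (plain Python, not the project's eventual Peano4 code) is shown below.

```python
import numpy as np

def solve_pressure(rhs, dx, dy, iters=200, omega=1.7):
    """SOR iteration for  d2p/dx2 + d2p/dy2 = rhs  with homogeneous Neumann boundaries."""
    p = np.zeros_like(rhs)
    nx, ny = rhs.shape
    for _ in range(iters):
        for i in range(nx):
            for j in range(ny):
                # mirror the missing neighbours at the domain boundary (dp/dn = 0)
                pe = p[i + 1, j] if i + 1 < nx else p[i, j]
                pw = p[i - 1, j] if i > 0 else p[i, j]
                pn = p[i, j + 1] if j + 1 < ny else p[i, j]
                ps = p[i, j - 1] if j > 0 else p[i, j]
                gauss_seidel = ((pe + pw) / dx**2 + (pn + ps) / dy**2 - rhs[i, j]) \
                               / (2.0 / dx**2 + 2.0 / dy**2)
                p[i, j] = (1.0 - omega) * p[i, j] + omega * gauss_seidel
    return p

# Example: 32x32 grid with a zero-mean right-hand side (compatibility condition).
rhs = np.random.rand(32, 32)
rhs -= rhs.mean()
print(solve_pressure(rhs, dx=1.0 / 32, dy=1.0 / 32)[0, 0])
```

In the project, this solve – and the surrounding velocity updates – have to be expressed through Peano4's mesh traversals so that they run on adaptive meshes and in parallel.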

DaStGen 2.0

DaStGen is a simple, nice tool which I use quite intensively in all of my projects. You can find a description on my homepage. In its current form, it allows me to model C++ classes which are very memory-modest and come with MPI support. The biggest “problem” is that DaStGen is written in Java. In this project, you are supposed to redesign DaStGen in Python. DaStGen 2.0 will still be usable as a command line tool, i.e. parse a domain-specific C++ extension, but there will also be a variant where users can assemble a DaStGen data model within their Python code and then ask the tool to generate plain C++ code. Finally, the new DaStGen version will support novel compressed floating-point formats (bfloat16, e.g.).
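A hypothetical sketch of the Python-embedded variant is shown below; none of these class or method names exist yet, they merely illustrate the intended workflow of assembling a record in Python and asking for generated C++.

```python
class DataModel:
    """Toy stand-in for a DaStGen 2.0 data model assembled from Python."""

    def __init__(self, name):
        self.name = name
        self.attributes = []   # (attribute name, C++ type) pairs

    def add_attribute(self, name, cpp_type):
        self.attributes.append((name, cpp_type))

    def generate_cpp(self):
        """Emit a plain C++ struct; a real generator would also add bit packing, compressed
        floating-point formats such as bfloat16, and MPI datatype registration."""
        fields = "\n".join(f"  {cpp_type} {name};" for name, cpp_type in self.attributes)
        return f"struct {self.name} {{\n{fields}\n}};\n"

# Usage: assemble the record in Python, then ask for the C++ code.
particle = DataModel("Particle")
particle.add_attribute("x", "double")
particle.add_attribute("moved", "bool")
print(particle.generate_cpp())
```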

Invasive Scheduling

Today’s schedulers typically ask the user to specify how many ranks (processes) run on a node concurrently. The scheduler then splits up the available cores equally among the ranks. This does not take into account that the load in simulations changes quickly, and we might thus run into situations where some ranks could effectively use quite a lot of cores, while others have little to do. The goal in this project is to write a new TBB-based library where the individual ranks bid against each other for how many cores each rank may use, and then quickly migrate core ownership if it suits the code: one rank invades the cores of another rank if it is very compute-heavy and can make use of more cores, while low-workload processes retreat from their cores in return.
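The bookkeeping behind such invade/retreat decisions could, conceptually, look like the sketch below (plain Python purely for illustration; the actual project targets a C++/TBB library and all names here are made up).

```python
def rebalance(cores_total, load_per_rank):
    """Return a new core count per rank on one node: heavy ranks invade, light ranks retreat."""
    total_load = sum(load_per_rank.values())
    allocation = {}
    for rank, load in load_per_rank.items():
        # bid for cores in proportion to the current load, but keep at least one core
        allocation[rank] = max(1, round(cores_total * load / total_load))
    # never hand out more cores than the node owns
    while sum(allocation.values()) > cores_total:
        busiest = max(allocation, key=allocation.get)
        allocation[busiest] -= 1
    return allocation

# Example: rank 0 is compute-heavy and invades cores that ranks 1 and 2 retreat from.
print(rebalance(16, {0: 8.0, 1: 1.0, 2: 1.0}))
```

The interesting part of the project is not this arithmetic but doing the handover safely and quickly while TBB task arenas on all ranks keep running.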

AMR Vis

For my flagship code Peano, I’ve written a tailored file format for its tree-based data structures. As it is made for Peano, it has a low memory footprint and some advantageous features. To make this format useful, I ship a small C++ command line tool which converts a file (or a file sequence for a video) into VTK, a mainstream visualisation format. In this project, you will replace the C++ prototype with a Python library which allows you to do these conversions on-the-fly (using VTK’s Python bindings), to run data analysis tasks right within a Python script, and, finally, to connect to a ParaView remote server to display the final image on a large visualisation wall.
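The on-the-fly conversion could, for instance, be built on VTK's Python bindings along the following lines. This is only a sketch: it assumes the Peano file has already been parsed into a list of cell centres with values, and parse_peano_file is a hypothetical placeholder for the reader the project would provide.

```python
import vtk

def to_vtk(cells, filename):
    """Write (centre, value) pairs as vertex cells into a .vtu file that ParaView can open."""
    points = vtk.vtkPoints()
    grid = vtk.vtkUnstructuredGrid()
    grid.SetPoints(points)
    values = vtk.vtkDoubleArray()
    values.SetName("value")
    for (x, y, z), value in cells:
        pid = points.InsertNextPoint(x, y, z)
        grid.InsertNextCell(vtk.VTK_VERTEX, 1, [pid])
        values.InsertNextValue(value)
    grid.GetPointData().AddArray(values)
    writer = vtk.vtkXMLUnstructuredGridWriter()
    writer.SetFileName(filename)
    writer.SetInputData(grid)
    writer.Write()

# cells = parse_peano_file("snapshot.peano")   # hypothetical reader to be written in the project
cells = [((0.0, 0.0, 0.0), 1.0), ((0.5, 0.0, 0.0), 2.0)]   # dummy data for illustration
to_vtk(cells, "snapshot.vtu")
```

Rendering the cells as proper patches, running analysis on the data before writing, and streaming the result to a ParaView server instead of a file are then the actual work packages.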