In ExaHyPE, we have written an engine to solve hyperbolic equation systems with the ADER-DG method. It works on dynamically adaptive Cartesian meshes and provides compute kernels which are tailored towards Intel architectures. The code itself is a generic engine, i.e. not tied to a particular application area – though we had some prime users (demonstrators) in the EU project. It tries to hide all technical details and numerics from the users as much as possible, such that users can focus on the modelling and application challenges.
Naive ADER-DG is a high-order scheme in space and time. Like any high-order scheme, it is therefore vulnerable to oscillations when shocks arise. ExaHyPE's ADER-DG variant mitigates this through an a posteriori limiter: per cell and per time step, we check whether the solution is physically admissible (e.g. free of spurious oscillations or negative pressures). If this check fails, we locally roll back the cell and replace its solution by a patch to which we apply a standard Finite Volume scheme. ExaHyPE thus effectively provides two solver types that are coupled with each other: the DG scheme and a patch-based/block-structured Finite Volume scheme. By default, we offer only very simplistic Riemann solvers for both schemes, though users can implement better solvers manually.
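The check-then-rollback loop described above can be sketched as follows. This is a minimal illustration, not ExaHyPE's actual API: the names (`Cell`, `isPhysicallyAdmissible`, `finiteVolumeUpdate`, `limitedTimeStep`) are hypothetical, the admissibility check is reduced to a simple positivity test, and the FV kernel is a placeholder.

```cpp
#include <cmath>
#include <vector>

// Illustrative sketch of an a posteriori limiting sweep over one time step.
// All names are hypothetical; real admissibility checks also look at
// discrete maximum principles, not just positivity.
struct Cell {
  std::vector<double> dgSolution;  // high-order DG candidate solution
  std::vector<double> backup;      // solution at the start of the time step
};

// Admissibility check: here simply "no negative or NaN entries"
// (standing in for checks such as positive density and pressure).
bool isPhysicallyAdmissible(const std::vector<double>& u) {
  for (double q : u)
    if (q < 0.0 || std::isnan(q)) return false;
  return true;
}

// Placeholder for the robust patch-based Finite Volume update; a real
// kernel would advance the rolled-back patch by one time step.
std::vector<double> finiteVolumeUpdate(const std::vector<double>& uOld) {
  return uOld;
}

void limitedTimeStep(std::vector<Cell>& cells) {
  for (Cell& c : cells) {
    if (!isPhysicallyAdmissible(c.dgSolution)) {
      // Troubled cell: roll back and recompute with the robust FV scheme.
      c.dgSolution = finiteVolumeUpdate(c.backup);
    }
  }
}
```

The key design point is that the limiter acts a posteriori: the cheap high-order update runs everywhere first, and only cells that fail the admissibility test pay for the more expensive, more robust Finite Volume recomputation.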
In our developer blog, search the GPU category for posts on:
- the long and winding road to GPUs (lessons learned);
- P. Samfass, T. Weinzierl, B. Hazelwood, M. Bader: teaMPI—replication-based resiliency without the (performance) pain. ISC 2020 conference (to appear)
- P. Samfass, T. Weinzierl, D.-E. Charrier, M. Bader: Lightweight Task Offloading Exploiting MPI Wait Times for Parallel Adaptive Mesh Refinement. CCPE
- Code improvement and extension:
- Integrate the Riemann solvers (and other stuff) from the ClawPack suite into ExaHyPE so users have a bigger choice of options.
- Port ExaHyPE to heterogeneous machines, i.e. to GPGPUs.
- Make ExaHyPE flexible such that it can run on a small partition of a supercomputer, but grab a bigger part of the machine if massive compute demands arise.
- Community-building, continuous integration, performance regression testing, training material preparation and dissemination.
Here’s the more elaborate version of the actual coding work:
To amplify ExaHyPE’s impact, applicability and usability from a user’s point of view, it is important that the software offers the right Riemann solvers for both its ADER-DG and FV schemes. ExaHyPE is currently equipped with a small set of handwritten Riemann solvers. For many applications, these are not sufficient, as they fail to accommodate particular solver properties such as grid distortion, matched boundary layers or particular non-linearities due to material law changes (dynamic rupture or wetting/drying). Interfaces to integrate sophisticated, bespoke Riemann solvers into ExaHyPE do exist, but users have to use them manually. The ClawPack software provides a huge, open-source repository of sophisticated Riemann solvers for various applications, and the applied maths community is used to integrating new solvers into ClawPack. We propose to couple ClawPack with ExaHyPE. ExaClaw users can hence help themselves to the ClawPack knowledge base and directly benefit from new ideas published therein.
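To illustrate the kind of solver ExaClaw would tap into: ClawPack-style pointwise Riemann solvers return left- and right-going fluctuations (conventionally called `amdq` and `apdq`) rather than a single numerical flux. The sketch below shows this convention for the simplest possible case, linear advection; the adapter names are illustrative and are not ExaClaw's actual coupling interface.

```cpp
#include <algorithm>

// ClawPack-style pointwise solver for linear advection q_t + a q_x = 0.
// It splits the flux difference f(qr) - f(ql) into a left-going part
// (amdq) and a right-going part (apdq), following ClawPack's fluctuation
// convention. This adapter is an illustrative sketch only.
struct Fluctuations {
  double amdq;  // left-going fluctuation, updates the left cell
  double apdq;  // right-going fluctuation, updates the right cell
};

Fluctuations advectionRiemannSolver(double ql, double qr, double a) {
  const double waveJump = qr - ql;  // single wave moving at speed a
  return { std::min(a, 0.0) * waveJump,
           std::max(a, 0.0) * waveJump };
}
```

Because `amdq + apdq` always equals the flux difference across the interface, such a wave-based solver can be wrapped behind a flux-based interface like ExaHyPE's, which is the essence of the proposed coupling.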
ExaHyPE has been built for Intel/x86 architectures in the Intel manycore era. Its excellent vectorisation properties suggest that it is well-suited for GPUs, too. We propose to port ExaClaw to GPUs. This will yield a step-change in performance on heterogeneous machines. Unlike mechanical extensions of CPU-only codes which offload heavy computations, we propose a flexible design where GPUs can (dynamically) step in by stealing work from the host, supported by non-local global memory. ExaClaw will be able to automatically exploit heterogeneous systems with different per-node hardware configurations, and even systems where the number of active GPUs changes dynamically.
ExaClaw will, by definition, yield inhomogeneous, strongly varying workload. ADER-DG’s FV limiter follows shock fronts; as it is relatively expensive, the workload increases as a shock front spreads. Dynamic AMR can change the mesh resolution in every single time step, while the most expensive ADER-DG step has unpredictable cost, as it solves a non-linear equation internally. Finally, it suffers from exactly the same “constraints” as any other solver: users need I/O. We will extend ExaClaw such that it can dynamically ask for additional resources (compute nodes) on a supercomputer if computations become heavy. In return, it can also retreat from nodes. The resulting runtime will allow multiple codes to run simultaneously on a supercomputer, with the code with the biggest resource demands grabbing the biggest part of the machine. The machine is then not overprovisioned for inefficient algorithm or computation phases.
External academic partners
Alice-Agnes Gabriel is our contact point to assess how the ExaClaw research programme affects progress in seismic risk assessment as well as the numerical modelling of the underlying phenomena. Some of her work uses the ExaHyPE core and includes the development of new Riemann solvers and discretisation schemes.
Arnau Folch embodies our collaboration with the Center of Excellence in the domain of Solid Earth (ChEESE). This collaboration has two different types of impact on the ExaClaw research agenda: On the one hand, ExaHyPE (the baseline code) is a flagship code within ChEESE, so ChEESE researchers use it for their research. Through this route, we get supercomputer access all over Europe and performance measurements from many different machines “for free” (and bug reports as well, obviously). On the other hand, ChEESE brings together many researchers interested in hyperbolic equation systems and thus hosts part of our user community.
The RSC Group is a long-time partner of ExaHyPE and now also a partner on ExaClaw. With RSC, we have in the past worked successfully on energy measurements and deep memory architectures. We will continue work along these lines in ExaClaw, and will in particular emancipate ourselves from Intel-only solutions.
OCF integrates the new supercomputer NICE in Durham, and we plan to work particularly closely with them on flexible computing, i.e. simulations where the (logical) hardware topology changes at runtime.
ExaClaw is heavily supported by Durham’s Advanced Research Computing (ARC), which helps us on the hardware side, with research software engineers and with day-to-day support.
Durham’s Computer Science department supports ExaClaw by granting extended Research Leave to the PI. The department also hosts the Durham research group behind ExaHyPE.
This project is sponsored by EPSRC under the ExCALIBUR Phase I call. The grant number is EP/V00154X/1.