speaker: Prof. Mathieu Salanne (Sorbonne Université, Maison de la Simulation CEA/CNRS, Institut Universitaire de France)
date: 07/07/2021 (Wednesday), 10.00-11.15 am
abstract: The electric double layer is generally viewed as simply the boundary that interpolates between an electrolyte solution and a metal surface. Contrary to that view, recent studies have shown that the interface between ionic liquids and metallic electrodes can exhibit structures and fluctuations that are not simple reflections of the surrounding bulk materials. The charge of the electrode is screened by the interfacial fluid and induces subtle changes in its structure, which cannot be captured by the conventional Gouy-Chapman theory. In recent years, this topic has been studied more intensively in order to develop more efficient supercapacitors, electrochemical devices that store charge at the electrode/electrolyte interface through reversible ion adsorption. In order to understand the molecular mechanisms at play, we have performed molecular dynamics simulations on a variety of systems made of ionic liquids and electrodes of different geometries, ranging from planar to nanoporous. A key aspect of our simulations is the use of a realistic model for the electrodes, in which the local charges on the atoms vary dynamically in response to the electrical potential caused by the ions and molecules in the electrolyte. These simulations have allowed us to gain deep insight into the structure and dynamics of ionic liquids at electrified interfaces. From the comparison between graphite and nanoporous carbide-derived carbon (CDC) electrodes, we have elucidated the microscopic mechanism at the origin of the capacitance enhancement in nanoporous carbons. We have also extended the simulations to blue energy production devices, which use the capacitive effect to extract electricity from salinity gradients between sea water and river water.
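The constant-potential electrode model mentioned in the abstract can be reduced to a small linear-algebra sketch: the electrode-atom charges are chosen so that every electrode atom sits at the imposed potential, given the field created by the electrolyte. The toy Python example below illustrates only that idea; the names, the random interaction matrix, and the system size are all invented for illustration and are not taken from any actual simulation code.

```python
import numpy as np

# Schematic sketch of the constant-potential electrode model: electrode-atom
# charges q are chosen so that every electrode atom sits at the applied
# potential psi0, given the external potential v_ext created by the
# electrolyte. All names and the toy interaction matrix are illustrative.

rng = np.random.default_rng(0)
n = 6                                    # electrode atoms (toy size)
# Symmetric positive definite charge-charge interaction matrix, a stand-in
# for the Coulomb interactions between smeared charges plus self terms.
M = rng.normal(size=(n, n))
A = M @ M.T + n * np.eye(n)
v_ext = rng.normal(size=n)               # electrolyte potential at each atom
psi0 = 1.0                               # applied electrode potential

# Constant-potential condition: (A q)_i + v_ext_i = psi0 for every atom i,
# so the charges follow from one linear solve per configuration.
q = np.linalg.solve(A, psi0 * np.ones(n) - v_ext)
assert np.allclose(A @ q + v_ext, psi0)  # every atom is at the set potential
```

Because the charges must be recomputed as the ions move, this solve (or an equivalent minimization) is performed at every time step, which is what makes constant-potential simulations more expensive than fixed-charge ones.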
speaker: Dr. Ulrich Ruede (FAU Erlangen-Nürnberg and CERFACS Toulouse)
date: 28/01/2021, 2 pm
abstract: This webinar will focus on parallel matrix-free multigrid for extreme scale computing.
Multigrid is one of the most efficient algorithms to solve linear systems for elliptic partial differential equations, and matrix-free variants are essential to reach the best possible performance. This will be demonstrated for positive definite systems as they arise in the discretization of the gyrokinetic Poisson equation, as well as indefinite systems that originate in viscous flow problems.
During this webinar, special attention will be given to the coarse grids of the multigrid hierarchy, to prevent them from becoming a sequential bottleneck. Modern sparse direct methods and their approximate variants based on block low-rank approximations will be used.
The talk will include a scalability study aiming to solve a linear system with more than ten trillion unknowns, corresponding to a solution vector occupying 80 TByte of main memory.
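As a rough illustration of the matrix-free idea, the following toy Python V-cycle solves a 1D Poisson problem by applying the stencil on the fly, never assembling a matrix. It is a minimal sketch under simple assumptions (Dirichlet boundaries, weighted Jacobi smoothing, full-weighting restriction), far removed from the extreme-scale solvers discussed in the talk; all names are illustrative.

```python
import numpy as np

# Minimal matrix-free V-cycle for the 1D Poisson problem -u'' = f with
# homogeneous Dirichlet boundary conditions. The (-1, 2, -1)/h^2 stencil is
# applied on the fly; no matrix is ever assembled or stored.

def apply_A(u, h):
    """Matrix-free application of the 1D Poisson operator."""
    r = np.zeros_like(u)
    r[1:-1] = (-u[:-2] + 2.0 * u[1:-1] - u[2:]) / h**2
    return r

def smooth(u, f, h, sweeps=3, omega=2.0 / 3.0):
    """Weighted Jacobi smoothing, also matrix-free."""
    for _ in range(sweeps):
        u = u + omega * (h**2 / 2.0) * (f - apply_A(u, h))
        u[0] = u[-1] = 0.0
    return u

def vcycle(u, f, h):
    if len(u) <= 3:                       # coarsest grid: one unknown, exact solve
        u[1] = f[1] * h**2 / 2.0
        return u
    u = smooth(u, f, h)                   # pre-smoothing
    r = f - apply_A(u, h)
    rc = np.zeros((len(u) + 1) // 2)      # restrict residual (full weighting)
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    ec = vcycle(np.zeros_like(rc), rc, 2.0 * h)
    e = np.zeros_like(u)                  # prolongate correction (linear interp.)
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return smooth(u + e, f, h)            # post-smoothing

n = 2**7 + 1
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)          # manufactured solution: sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = vcycle(u, f, h)
assert np.max(np.abs(u - np.sin(np.pi * x))) < 1e-3
```

The point of the sketch is that the operator appears only through `apply_A`, so memory traffic is limited to vectors; this is what allows matrix-free multigrid to approach the hardware bandwidth limit at scale.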
Dr. Ulrich Ruede (FAU Erlangen-Nürnberg and CERFACS, Toulouse) will host this webinar, organized by the European Energy-Oriented Center of Excellence (EoCoE). 
The webinar is free and open to everyone, and will be recorded to be later available on the EoCoE YouTube channel.
speaker: Christie L. Alappat, PhD student in the group of Prof. G. Wellein, Erlangen Regional Computing Center (RRZE), and Dr. Georg Hager, senior researcher in the HPC division, Erlangen Regional Computing Center (RRZE)
date: 18/11/2020, 10 am
abstract: The A64FX CPU powers the current #1 supercomputer on the Top500 list. Although it is a traditional cache-based multicore processor, its peak performance and memory bandwidth rival those of accelerator devices. Generating efficient code for such a new architecture requires a good understanding of its performance features. Using these features, the Erlangen Regional Computing Center (RRZE) team will detail how they construct the Execution-Cache-Memory (ECM) performance model for the A64FX processor in the FX700 supercomputer and validate it using streaming loops. They will describe microarchitectural peculiarities that the machine model exposes and that should be kept in mind when optimizing applications. Applying the ECM model to sparse matrix-vector multiplication (SpMV), they will show why the CRS matrix storage format is ill suited to this architecture and how the SELL-C-sigma format can achieve bandwidth saturation for SpMV. In this context, they will also look into some code optimization strategies that are relevant for the A64FX, and compare SpMV performance with AMD Rome, Intel Cascade Lake and NVIDIA V100. This webinar, organized by the European Energy-Oriented Center of Excellence (EoCoE), will be hosted by Christie L. Alappat, PhD student at RRZE, and Dr. Georg Hager, senior researcher in the HPC division at RRZE. The webinar is free and open to everyone, and will be recorded to be later available on the EoCoE YouTube channel.
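The CRS versus SELL-C-sigma contrast can be sketched in a few lines of Python. The data layout below follows the published SELL-C-sigma scheme (sort rows by length within windows of sigma rows, pack chunks of C rows, pad to the chunk width), but production kernels are vectorized C or assembly; all names here are illustrative.

```python
import numpy as np

# Toy comparison of CRS and SELL-C-sigma sparse layouts in pure Python/NumPy.

def crs_spmv(val, col, rowptr, x):
    """Classic CRS SpMV: one short, irregular inner loop per row."""
    y = np.zeros(len(rowptr) - 1)
    for i in range(len(y)):
        for k in range(rowptr[i], rowptr[i + 1]):
            y[i] += val[k] * x[col[k]]
    return y

def to_sell(val, col, rowptr, C, sigma):
    """Convert CRS to SELL-C-sigma: sort rows by length within windows of
    sigma rows, then pack chunks of C rows, zero-padded to the chunk width."""
    n = len(rowptr) - 1
    lengths = np.diff(rowptr)
    perm = np.arange(n)
    for s in range(0, n, sigma):
        w = perm[s:s + sigma]
        perm[s:s + sigma] = w[np.argsort(-lengths[w], kind="stable")]
    chunks = []
    for c0 in range(0, n, C):
        rows = perm[c0:c0 + C]
        width = int(lengths[rows].max())
        cval = np.zeros((width, len(rows)))          # padded, chunk-major
        ccol = np.zeros((width, len(rows)), dtype=int)
        for j, i in enumerate(rows):
            k0, k1 = rowptr[i], rowptr[i + 1]
            cval[:k1 - k0, j] = val[k0:k1]
            ccol[:k1 - k0, j] = col[k0:k1]
        chunks.append((rows, cval, ccol))
    return chunks

def sell_spmv(chunks, x, n):
    """SELL-C-sigma SpMV: each step updates a whole chunk of C rows at once,
    which is what makes the format SIMD-friendly."""
    y = np.zeros(n)
    for rows, cval, ccol in chunks:
        acc = np.zeros(len(rows))
        for r in range(cval.shape[0]):
            acc += cval[r] * x[ccol[r]]              # padding contributes zero
        y[rows] = acc
    return y

# Small random CRS matrix to check that both kernels agree.
rng = np.random.default_rng(1)
n = 16
val, col, rowptr = [], [], [0]
for i in range(n):
    nnz = int(rng.integers(1, 6))
    cols = np.sort(rng.choice(n, size=nnz, replace=False))
    col.extend(cols.tolist())
    val.extend(rng.normal(size=nnz).tolist())
    rowptr.append(len(val))
val, col, rowptr = np.array(val), np.array(col), np.array(rowptr)
x = rng.normal(size=n)
chunks = to_sell(val, col, rowptr, C=4, sigma=8)
assert np.allclose(sell_spmv(chunks, x, n), crs_spmv(val, col, rowptr, x))
```

The sigma-window sorting keeps rows of similar length in the same chunk, so the zero padding (and thus the wasted bandwidth) stays small while the inner loop remains uniform across the C rows of a chunk.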
speaker: Jose Alberto Fonseca Castillo, postdoctoral researcher at CEA / Maison de la Simulation
date: 01/07/2020, 11 am
abstract: The software library ParFlow is a complex parallel code that is used extensively in high performance computing, specifically for the simulation of surface and subsurface flow. The code discretizes the corresponding partial differential equations using cell-centered finite differences on a uniform hexahedral mesh. Even with current supercomputing resources, uniform meshes may translate into prohibitively expensive computations for certain simulations. A solution to this problem is to employ adaptive mesh refinement (AMR), which enforces a higher mesh resolution only where it is required. To this end, we have delegated ParFlow's mesh management to the parallel AMR library p4est. During this seminar, Jose Fonseca, postdoctoral researcher at CEA / Maison de la Simulation, will present the algorithmic approach used to perform this coupling and our latest efforts to generalize ParFlow's native discretization to the locally refined meshes obtained with p4est.
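The core AMR idea, refining only where a criterion demands resolution, can be sketched in a few lines. The 1D toy below is purely illustrative (invented field, threshold, and cell representation); p4est itself manages distributed forests of octrees with 2:1 balance, partitioning, and much more.

```python
import numpy as np

# Minimal sketch of adaptive mesh refinement: split cells recursively
# wherever a toy field varies strongly, and leave them coarse elsewhere.

def refine(cells, field, tol, max_level=6):
    """Recursively split 1D cells (a, b, level) where the field jump is large."""
    out = []
    for a, b, lvl in cells:
        mid = 0.5 * (a + b)
        if lvl < max_level and abs(field(b) - field(a)) > tol:
            out += refine([(a, mid, lvl + 1), (mid, b, lvl + 1)],
                          field, tol, max_level)
        else:
            out.append((a, b, lvl))
    return out

front = lambda x: np.tanh(50.0 * (x - 0.5))   # sharp front at x = 0.5
cells = refine([(0.0, 1.0, 0)], front, tol=0.05)
widths = [b - a for a, b, _ in cells]
# Fine cells cluster near the front; away from it the mesh stays coarse.
assert min(widths) < 0.02 < max(widths)
```

A uniform mesh at the finest resolution would need 64 cells here; the adaptive one resolves the front just as well with far fewer, which is exactly the saving AMR offers ParFlow at scale.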
speaker: Jaro Hokkanen, Computer Scientist at Forschungszentrum Jülich
date: 10/06/2020, 10.30 am
abstract: Hosted by Jaro Hokkanen, computer scientist at Forschungszentrum Jülich, this webinar will address the GPU implementation of the ParFlow code. ParFlow is a numerical model that simulates the hydrologic cycle from the bedrock to the top of the plant canopy. The original codebase provides an embedded Domain-Specific Language (eDSL) for generic numerical implementations with support for supercomputer environments (distributed-memory parallelism), on top of which the hydrologic numerical core has been built. In ParFlow, the newly developed optional GPU acceleration is built directly into the eDSL headers such that, ideally, parallelizing all loops in a single source file requires only a new header file. This is possible because the eDSL API is used for looping, allocating memory, and accessing data structures. The decision to embed GPU acceleration directly into the eDSL layer resulted in a highly productive and minimally invasive implementation. The eDSL implementation is based on the C host language, and the support for GPU acceleration is based on CUDA C++. CUDA C++ has been under intense development in recent years, and features such as Unified Memory and host-device lambdas were extensively leveraged in the ParFlow implementation in order to maximize productivity. Efficient intra- and inter-node data transfer between GPUs rests on a CUDA-aware MPI library and application-side GPU-based data packing routines. The current, moderately optimized ParFlow GPU version runs a representative model up to 20 times faster on a node with 2 Intel Skylake processors and 4 NVIDIA V100 GPUs compared to the original CPU-only version of ParFlow. The eDSL approach and the ParFlow GPU implementation may serve as a blueprint for tackling the challenges of heterogeneous HPC hardware architectures on the path to exascale.
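The key design idea, kernels written once against a small looping API whose backend can be swapped, can be illustrated schematically. The sketch below is in Python rather than ParFlow's actual C eDSL and CUDA C++ backends, and every name in it is invented for illustration.

```python
import numpy as np

# Schematic illustration of the eDSL approach: numerical kernels are written
# once against a small looping API, and the backend behind that API can be
# swapped (here "serial" vs. a bulk stand-in for a GPU launch) without
# touching the numerical code.

BACKEND = "serial"            # imagine this being selected at build time

def boxloop(n, body):
    """The only loop construct the numerical core is allowed to use."""
    if BACKEND == "serial":
        for i in range(n):    # plain scalar loop on the host
            body(i)
    elif BACKEND == "bulk":   # stand-in for a GPU backend: one bulk launch
        body(np.arange(n))

def saxpy(a, x, y):
    """A kernel written once against the API; it never sees the backend."""
    def body(i):
        y[i] = a * x[i] + y[i]
    boxloop(len(x), body)

x = np.ones(8)
y = np.full(8, 2.0)
saxpy(3.0, x, y)
assert np.allclose(y, 5.0)    # same result whichever backend is selected
```

In ParFlow the analogous role is played by eDSL loop macros in header files, which is why redirecting them to CUDA host-device lambdas could parallelize whole source files with minimal changes to the numerical core.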
speaker: Prof. Dr. Eric Sonnendrücker, Head of Numerical Methods in Plasma Physics Division at the Max Planck Institute for Plasma Physics
date: 15/05/2020, 10.30 am
abstract: The principle behind magnetic fusion research is to confine a plasma, a gas of charged particles at a very high temperature, on the order of 100 million degrees, so that the fusion reaction can generate energy with a positive balance. At such a high temperature, the plasma needs to be completely isolated from the wall of the reactor. This isolation can be achieved in toroidal devices thanks to a very large magnetic field. Due to the multiple and complex physical processes involved, theoretical research in this field relies heavily on numerical simulations, and some problems require huge computational resources. After introducing the context of magnetic confinement fusion, we shall address specific challenges for numerical simulations in this field, related in particular to the multiple space and time scales that need to be spanned and to the geometry of the experimental devices. These can only be overcome through close collaboration between physicists, mathematicians and HPC specialists. A few current research problems, ranging from the computation of a 3D equilibrium to fluid and kinetic simulations, will be presented as illustrations.
speaker: Leonardo Bautista Gomez, Senior researcher at BSC, and Kai Keller, Software engineer at BSC
date: 01/04/2020, 11 am
abstract: Large scale infrastructures for distributed and parallel computing offer thousands of computing nodes to their users to satisfy their computing needs. As the need for massively parallel computing increases in industry and research, cloud infrastructures and computing centers are being forced to grow in size and to transition to new computing technologies. While the advantage for users is clear, this evolution imposes significant challenges, such as energy consumption and fault tolerance. Fault tolerance is even more critical in infrastructures built on commodity hardware, and recent works have shown that large scale machines built with commodity hardware experience more failures than previously thought. In this webinar, Leonardo Bautista Gomez and Kai Keller, respectively senior researcher and software engineer at the Barcelona Supercomputing Center, will focus on how to guarantee high reliability for high-performance applications running in large infrastructures. In particular, they will cover the technical content necessary to implement scalable multilevel checkpointing for tightly coupled applications. They will give an overview of the internals of the FTI library and explain how multilevel checkpointing is implemented today, with examples that the audience can test and analyze on their own laptops, so that they can learn how to use FTI in practice and ultimately transfer that knowledge to their production systems.
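The multilevel checkpointing idea can be sketched independently of any particular library: cheap node-local checkpoints are taken often, expensive resilient ones rarely, and recovery uses the newest checkpoint on a storage level that survived the failure. The Python toy below illustrates only that scheduling logic; the class, level names, and failure scenario are invented and are not the FTI API.

```python
import copy

# Toy sketch of multilevel checkpointing: several storage levels ordered from
# fast-but-fragile (node-local) to slow-but-durable (parallel file system).

class MultilevelCheckpointer:
    LEVELS = ("local", "partner", "pfs")   # fast and fragile -> slow and durable

    def __init__(self):
        self.stored = {}                   # level -> (iteration, state snapshot)

    def checkpoint(self, level, it, state):
        self.stored[level] = (it, copy.deepcopy(state))

    def recover(self, surviving_levels):
        """Newest checkpoint among the levels that survived the failure."""
        candidates = [self.stored[l] for l in surviving_levels if l in self.stored]
        return max(candidates, key=lambda t: t[0], default=None)

ckpt = MultilevelCheckpointer()
state = {"step": 0}
for it in range(1, 13):
    state["step"] = it                     # one iteration of the "application"
    if it % 2 == 0:
        ckpt.checkpoint("local", it, state)   # cheap, taken frequently
    if it % 6 == 0:
        ckpt.checkpoint("pfs", it, state)     # expensive, taken rarely

# A node failure wipes node-local storage; only the parallel file system
# survives, so the restart uses the latest "pfs" checkpoint.
it, state = ckpt.recover(["pfs"])
assert it == 12 and state["step"] == 12
```

The payoff is that most failures are soft enough that the frequent cheap level suffices, so the expensive durable level can be written at a much lower frequency without lengthening recovery for the common case.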