
Success stories

EoCoE offers an ever-expanding network of experts in High Performance Computing and in sustainable energies from academia, industry and the public sector. To see how EoCoE experts can support you in exploiting HPC resources in all phases of your project, please visit our services section.

Simulations of tokamak startup and control operations in realistic geometry enabled by EoCoE

The plasma edge is the outer part of a tokamak plasma, extending from the outer core region to the plasma-facing components. Through its impact on the confinement of the plasma and on the exhaust of particle and heat fluxes, this part of the plasma is decisive both for the performance of a fusion reactor and for its durability. The physics at play is particularly complex, involving turbulence and plasma instabilities as well as plasma-neutral and plasma-wall interactions. Modelling the dynamics of the plasma edge is therefore crucial to enhance the performance of the tokamak, in terms of confinement and heat transfer to the walls, and to design optimized operation scenarios. At IRFM, the 3D turbulence code TOKAM3X is developed to analyse turbulent heat and mass transfer in the plasma edge. TOKAM3X is designed to run in a massively parallel environment. One of the most important bottlenecks of the code is the inversion of the so-called 3D vorticity problem, which allows the electric potential in the machine to be computed. This problem takes the form of an implicit 3D linear system corresponding to an extremely anisotropic elliptic operator. The EoCoE collaboration network has made it possible to tackle this issue with new tools, namely the two iterative solvers AGMG and Maphys, which are now being tested in TOKAM3X. Preliminary results are very promising and now have to be confirmed in full-scale simulations.
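To illustrate why such anisotropic elliptic systems are challenging for standard iterative methods, the sketch below builds a strongly anisotropic finite-difference operator and solves it with a preconditioned conjugate gradient from SciPy. The operator, grid size and Jacobi preconditioner are illustrative assumptions only, not the TOKAM3X vorticity operator or the AGMG/Maphys solvers.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def anisotropic_laplacian(n, eps):
    # 2D finite-difference operator -u_xx - eps * u_yy on an n x n grid.
    # A small eps mimics the strong parallel/perpendicular anisotropy of
    # the vorticity problem (illustrative only, not the TOKAM3X operator).
    d = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
    i = sp.identity(n)
    return (sp.kron(d, i) + eps * sp.kron(i, d)).tocsr()

n, eps = 64, 1e-3
A = anisotropic_laplacian(n, eps)
b = np.ones(n * n)

# Jacobi (diagonal) preconditioner as a minimal stand-in for an
# algebraic multigrid preconditioner such as AGMG.
dinv = 1.0 / A.diagonal()
M = spla.LinearOperator(A.shape, matvec=lambda v: dinv * v)

x, info = spla.cg(A, b, M=M, maxiter=5000)  # info == 0 on convergence
```

In practice an aggregation-based algebraic multigrid method such as AGMG replaces the diagonal preconditioner, which is far more robust against this kind of anisotropy.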

In parallel, other improvements to the code have been undertaken, including the development of a new numerical scheme based on a high-order discontinuous Galerkin method. This new scheme works on non-aligned computational grids and will introduce new capabilities in the landscape of fluid solvers for the plasma edge, such as the possibility of computing transport during a magnetic equilibrium evolution. This will allow simulations of tokamak startup and control operations in realistic geometry for both the plasma and the reactor's wall (a world-wide unique capability), and will also enhance the consistency and flexibility of equilibrium-transport simulations.

The EoCoE code ALYA helps design a new class of wind turbine for the SME Vortex Bladeless

The vast majority of manufacturing companies in Europe are small to medium-sized enterprises (SMEs). HPC and the technological advantages it provides are key for them to remain competitive. The Alya team at BSC has always maintained a close relationship with both large enterprises and SMEs. In the field of wind energy, the BSC team has recently been working with the company Vortex Bladeless. Inspired by an iconic engineering catastrophe in the USA in November 1940, Vortex Bladeless aims to harness the power of vorticity for a new generation of wind turbine.

Before the Tacoma Narrows bridge was violently ripped apart, it was extracting enormous amounts of energy from the wind. What destroyed the bridge were the vortex-induced vibrations (VIV) arising from the interaction between the wind and the structure. These are natural phenomena that engineers and architects generally try to avoid, but for Vortex Bladeless they formed the basis of an entirely new way of thinking about wind energy. The company developed a single column with no bearings and no gears, avoiding any moving parts in contact. It simply oscillates with the wind.

The simplicity of the design has multiple advantages: A bladeless turbine poses no threat to birds, makes virtually no noise, costs less, lasts longer and requires less maintenance.

Experiments with scaled-down prototypes have been encouraging, but the physics behind these devices is highly complex. In order to optimize Vortex Bladeless and explore scalability, the company has been working with a team of experts at the Barcelona Supercomputing Center (BSC) on the MareNostrum supercomputer. The supercomputing resources were provided through two SHAPE PRACE projects. The fluid-structure interaction (FSI) between the Vortex Bladeless device and a turbulent flow is simulated with Alya. Obtaining the correct vortex shedding is critical for the device to work properly.

The results from initial simulations of a scaled-down device were very close to the actual wind tunnel tests performed by the Vortex Bladeless team, allowing them to develop the idea of a range of devices at the micro scale and the utility scale. The behaviour of the device at a more realistic scale was then studied by means of numerical simulations, helping in the design of real-scale experiments and reducing costs. Due to the complexity of the flow and the need for time-accurate results, Large Eddy Simulation (LES) is the right choice. LES is very computationally demanding, so it is critical to use a highly optimized code to avoid wasting costly computational resources. Computational Fluid Dynamics simulations involve two key operations: assembly of the matrix and right-hand side, and solution of a linear system. With the help of HPC experts from the Energy Oriented Centre of Excellence (EoCoE), the CPU time for assembly has been reduced by up to 38%. The iterative solvers Maphys and AGMG, provided by experts within the project, have been incorporated into Alya. For some LES problems, the latter solver has provided speed-ups of up to five times with respect to Alya's own solvers. For SMEs, which are typically more limited in their resources, such savings are key to being able to use highly advanced techniques such as LES.
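The overall benefit of the two optimizations depends on how the runtime splits between assembly, solver and everything else. An Amdahl-style estimate can be sketched as below; the runtime fractions chosen are hypothetical, not measured Alya profiles.

```python
def combined_speedup(f_asm, asm_gain, f_solve, solve_speedup):
    """Overall speed-up when only parts of the runtime are accelerated.
    f_asm, f_solve: fractions of total runtime spent in assembly / solver.
    asm_gain: fractional reduction of assembly time (0.38 -> 38% faster).
    solve_speedup: multiplicative solver speed-up (5.0 -> five times faster)."""
    new_time = (f_asm * (1.0 - asm_gain)
                + f_solve / solve_speedup
                + (1.0 - f_asm - f_solve))  # untouched remainder
    return 1.0 / new_time

# Hypothetical split: 30% assembly, 50% solver, 20% other, with the
# 38% assembly reduction and 5x solver speed-up quoted above.
overall = combined_speedup(0.30, 0.38, 0.50, 5.0)  # roughly 2x overall
```

The remainder term is why even a five-fold solver speed-up yields only about a two-fold overall gain under these assumed fractions.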


Improving HPC-CFD software for wind resource assessment with the help of EoCoE


BSC collaborates with Iberdrola on wind resource assessment. As part of this collaboration, the CFD version of the code Alya, developed at BSC, has been adapted so that Iberdrola can use it as an alternative to commercial software for wind farm assessment. This has several advantages. Alya is designed to run efficiently on supercomputers comprising thousands of processors, which permits simulations with significantly finer meshes than a commercial code can handle. Furthermore, since BSC is the developer of Alya, new models can rapidly be implemented and tested. RANS turbulence models are the standard approach for wind farm assessment, but LES models are also being tested, since they will become feasible with the advent of exascale computers. With the help of HPC experts from the Energy Oriented Centre of Excellence (EoCoE), the code has been optimized to increase its node-level performance: reductions of up to 38% have been obtained in the assembly of the matrix for the Navier-Stokes equations. Moreover, we are working with linear algebra experts from EoCoE to test two iterative solvers (Maphys and AGMG). For some LES cases, we have seen reductions of more than 50% in the solver CPU time with AGMG.

In order to make the tools easy to handle for wind assessment engineers, a series of pre-processing and post-processing tools tailored to their needs has been developed. The process starts with the creation of a mesh in a highly automated way. The key inputs the user needs to give are the extent of the domain to be simulated, together with topography and roughness files in any of the standard formats (grd, map, etc.). Several coordinate systems can be used (WGS_84, NAD_83, ED_50, …), and the user can easily modify the size of the meshes. The meshing process has recently been enhanced with adaptive refinement to take into account the wind turbines and their wakes [GP15]. Together with an actuator disc model, this allows the wind deficit generated by the turbines to be taken into account directly. The second step is the generation of boundary conditions and input files for Alya, which is done with minimal input from the user. This ends the pre-processing phase; Alya can then be run on thousands of processors for several different wind directions. A post-processing tool has been developed that blends the simulation results with experimental data at a mast. In this way, the information at the mast is extrapolated to the whole domain of interest to help select the optimal position for the wind turbines. The output of the post-processing step is a resource grid file (WRG) with Weibull A and k values over the whole domain that can easily be used in the standard workflow for wind farm assessment.
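To illustrate the final step, the sketch below fits the Weibull scale A and shape k to a wind-speed sample by maximum likelihood. It is a generic illustration on synthetic data, not the actual BSC post-processing tool.

```python
import numpy as np

def fit_weibull(speeds, k0=2.0, iters=100):
    """Fit Weibull shape k and scale A to wind speeds via the standard
    maximum-likelihood fixed-point iteration for k (damped for
    robustness); illustrative only, not the BSC tool."""
    v = np.asarray(speeds, dtype=float)
    v = v[v > 0]
    logv = np.log(v)
    k = k0
    for _ in range(iters):
        vk = v ** k
        # MLE equation: 1/k = sum(v^k log v)/sum(v^k) - mean(log v)
        k_new = 1.0 / (np.sum(vk * logv) / np.sum(vk) - logv.mean())
        k = 0.5 * (k + k_new)  # damped update
    A = np.mean(v ** k) ** (1.0 / k)
    return A, k

# Synthetic sample drawn from a Weibull with A = 8 m/s, k = 2.
rng = np.random.default_rng(0)
sample = 8.0 * rng.weibull(2.0, size=20000)
A, k = fit_weibull(sample)
```

The fitted (A, k) pair per grid point is exactly what a WRG resource file stores for each location.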

[GP15] A. Gargallo-Peiró, M. Ávila, H. Owen, L. Prieto, A. Folch. Mesh generation for Atmospheric Boundary Layer simulation in wind farm design and management, 24th International Meshing Roundtable (IMR24).



EoCoE: Indicating extreme forecast error events in energy meteorology via ultra large NWP ensembles

With the integration of wind and solar into the electric grid, power forecasts informed by Numerical Weather Predictions (NWPs) have become critical to grid operation and energy markets. The day's notice provided by NWPs gives Transmission System Operators time and flexibility to economically anticipate changes in wind and solar plant output. Extreme forecast errors of wind and solar power are rare, yet may cost as much as several thousand Euro/MWh. Additionally, maintaining balancing power to correct for such forecast errors exacts a constant cost on everyday operation.

The limited predictability of weather models calls for probabilistic forecasts realized by ensembles, in which each ensemble member represents a unique realization of the model, the different outcomes giving a measure of uncertainty (the figure shows the geopotential isolines of each ensemble member, where ensemble dispersion indicates forecast uncertainty). Operational weather centres generate tens of ensemble members, which gives a good indicator of the ensemble spread. However, to resolve the tails of the probability distributions that represent such extreme error events, we are now in a position to operate thousands of ensemble members in parallel and to set up a demonstrator warning system for high-impact events in the energy sector during the ongoing phase of the EoCoE project.

For this purpose, an efficient ultra-large ensemble control system for the Weather Research and Forecasting model (WRF) on an IBM Blue Gene HPC architecture has been developed collaboratively within EoCoE by domain scientists and HPC experts. The system is designed to efficiently realize particle filter assimilation with an unlimited ensemble size. Multiple assimilation cycles may be performed within a single application, enabling communication among numerous ensemble members. Strong parallel scaling has been demonstrated with up to 4000 ensemble members. The code has also been adapted to make existing model error schemes efficiently applicable, reducing computation time by a factor of up to ten.
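The assimilation cycle described above (forecast, weight against an observation, resample) can be sketched with a minimal bootstrap particle filter on a scalar toy state. The state model, observation error and ensemble size are illustrative assumptions, not the WRF/EoCoE implementation.

```python
import numpy as np

def bootstrap_particle_filter(obs, n_particles, step, likelihood, rng):
    """Minimal bootstrap particle filter: propagate each particle,
    weight it by the observation likelihood, then resample.
    A toy sketch of the assimilation cycle, not the WRF system."""
    particles = rng.normal(0.0, 1.0, n_particles)  # initial ensemble
    for y in obs:
        particles = step(particles, rng)           # forecast step
        w = likelihood(y, particles)               # weight by observation
        w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)  # resample
        particles = particles[idx]
    return particles

rng = np.random.default_rng(1)
# Toy AR(1) dynamics with model noise, Gaussian observation error (sd 0.5).
step = lambda x, rng: 0.9 * x + rng.normal(0.0, 0.3, x.size)
lik = lambda y, x: np.exp(-0.5 * ((y - x) / 0.5) ** 2)
obs = [1.0, 1.2, 0.8, 1.1]
ens = bootstrap_particle_filter(obs, 2000, step, lik, rng)
```

The spread of `ens` after the last cycle is the ensemble's measure of forecast uncertainty; the tails of its distribution are what very large ensembles resolve.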

EoCoE: Improving efficiency of photovoltaic cells by designing materials at the atomic-scale

Major technological advancements are often driven by the discovery of new materials. There is an increasing demand for multi-functional and sustainable materials designed to provide a specific function in the final product. However, decades are usually needed to identify new materials, and even longer to optimize them for commercialization by experiment. In the field of renewable energy production there is an urgent need to design materials with improved properties, to increase overall efficiency and to lower the cost of energy conversion processes.

The silicon heterojunction (SHJ) technology for inorganic photovoltaic solar cells has achieved efficiencies as high as 26.3% and shows great potential to become a future industrial standard for high-efficiency crystalline silicon (c-Si) cells. One of the key features of the technology is the passivation of contacts by thin films of hydrogenated amorphous silicon (a-Si:H). The a-Si:H/c-Si interface, while central to the technology, is still not fully understood in terms of charge carrier transport and recombination across this nanoscale region and their impact on the overall efficiency of the cell. The difficulty of modelling an interface arises from the fact that the model must be large enough to capture all the peculiarities of the amorphous surface, and that on both sides of the interface several planes of atoms are needed to mimic the behaviour of the bulk materials. A reliable interface model therefore implies simulating a very large number of atoms with the accuracy of quantum approaches, in order to account properly for the electronic properties.

An ENEA-Jülich collaboration, supported by the computational expertise available in the Centre of Excellence EoCoE, has designed a new procedure to model the SHJ solar cell from the atomic-scale material properties to the macroscopic device characteristics.

The first step of this procedure is the development of an atomic-scale numerical model of the materials, designing both the crystalline surface and the amorphous phase. First, a small numerical sample was modelled and characterized with the ab-initio electronic structure package Quantum ESPRESSO to establish the reliability of the model. A larger system was then generated by replicating the small one in space, to attain an interface large enough to compute both structural and electronic quantities. This result was achieved by exploiting the linear scaling of the quantum package CP2K. Dedicated evaluation sessions on the CP2K code have been performed to optimize its performance for the simulation of the interface. Both the optimization of the code and the careful design of the material allow the performance to be scaled up for the simulation of large interfaces. This approach opens the way to the simulation of very large interfaces that fully exploit the power of HPC infrastructures. Moreover, it provides input for mesoscale numerical approaches devoted to assessing the charge carrier dynamics that affect the overall efficiency of the photovoltaic device.

Supercapacitors charge faster in simulations with EoCoE

The urgent need for efficient energy storage has resulted in a widespread and concerted research effort over the past ten years. Current battery technologies are very efficient in terms of energy storage density but reach their limits when large amounts of energy have to be stored or retrieved on short time scales. Supercapacitors can be seen as complementary devices, with smaller energy storage densities but the ability to operate on short time scales. These devices are already replacing batteries in high-power applications, and this trend is expected to accelerate, for example in the recovery of kinetic energy in electric vehicles. A major challenge remains: determining the relevant quantities for the target objectives, namely the electrical capacitance, the amount of adsorbed ions and the diffusion coefficient as functions of the electrolyte composition and of the potential difference between the electrodes. This information cannot easily be obtained from experiments, and traditional models do not work here, where interactions at the molecular level play an essential role.

Metalwalls is a classical molecular dynamics code able to simulate supercapacitors with an accuracy that puts this numerical tool in a world-leading position. Thanks to EoCoE, the computationally intensive parts of the code can now be efficiently vectorized by the compiler. The memory footprint of the application has also been reduced by recomputing some quantities when they are needed instead of storing them in memory. Finally, cache-blocking techniques keep the data structures in the lower cache levels. As a result, the performance of the code has been improved by a factor of 2.5, i.e. it is now possible to simulate 2.5 times more systems with the same amount of computing resources. In 2016 alone, the code used 20 million CPU hours on the MareNostrum supercomputer at BSC (Spain) and would have needed 70 million to achieve the same results without this optimization work.
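Cache blocking means restructuring loops so they work on tiles small enough to stay in cache. The sketch below shows the pattern on a matrix product; it is only an illustration of the technique, not the Metalwalls kernels, and the tile size is an arbitrary choice.

```python
import numpy as np

def blocked_matmul(A, B, bs=64):
    """Cache-blocked matrix multiply: operate on bs x bs tiles so that
    the working set of the inner loops fits in the lower cache levels.
    Pattern sketch only; NumPy's own matmul is of course faster here."""
    n = A.shape[0]
    C = np.zeros_like(A)
    for i in range(0, n, bs):
        for j in range(0, n, bs):
            for k in range(0, n, bs):
                # One tile-sized update: all operands fit in cache.
                C[i:i+bs, j:j+bs] += A[i:i+bs, k:k+bs] @ B[k:k+bs, j:j+bs]
    return C

A = np.random.default_rng(2).random((128, 128))
B = np.random.default_rng(3).random((128, 128))
```

In a compiled code the same restructuring of the hot loops is what lets the data stay resident in cache between successive updates.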

EoCoE: High Resolution River Discharge Modeling for Hydropower Energy Applications

Within the scope of EoCoE, a continental-scale, high-resolution hydrologic modelling system for the investigation of river discharge in the European region has been developed at 3 km resolution using the hydrological models ParFlow and the Common Land Model (CLM). ParFlow is a massively parallel, three-dimensional watershed model that simulates fully coupled surface and subsurface flow and is suitable for large-scale problems, while CLM simulates discharge predictions in combination with a river routing algorithm. For calibration and validation of the model outputs, comparisons of observed and modelled discharge data for a given geographic region and time frame are made to evaluate the accuracy and suitability of each model. Through this modelling system, it is now possible to assess modelled time series data using visualization tools and post-processing analysis chains developed in collaboration with EoCoE HPC experts. The development of these additional features and tools has contributed greatly to the EoCoE impact modelling efforts to assess river discharge information for hydropower energy applications.

EoCoE: Making real-time weather nowcasting possible in the post-Moore era

SolarNowcast aims to forecast solar irradiation from fisheye-lens webcam images. The software includes two components: "MotionEstimation", which estimates the dynamics from the image data, and "Forecast", which forecasts the irradiation over a short temporal horizon. A full performance evaluation of the Forecast code revealed a large optimization potential, at both the serial (algorithmic, vectorization, memory usage) and parallel (OpenMP efficiency) levels. Follow-up optimization efforts improved the execution time on the targeted production benchmark by more than 2 times for the serial run, 4 times on 8 threads and more than 5 times on 24 threads, and raised the scalability efficiency to 70% on 16 threads and 66% on 24 threads. A knowledge transfer then allowed the code holders to distribute and improve MotionEstimation, resulting in a factor-of-three speed-up on 8 threads. This was obtained by improving the calculation part tenfold (x10), while an external minimizer part remains single-threaded; further efforts will concentrate on this last part. The initial objective of this work was to perform calculations in "real time" with respect to a given model and image acquisition rate. This milestone has already been accomplished on all tested machines when using four or more threads.
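Scalability efficiency as quoted above is simply the measured speed-up divided by the thread count. A minimal sketch, with the implied speed-up value (0.70 × 16 ≈ 11.2) used as an illustrative input rather than a reported measurement:

```python
def parallel_efficiency(speedup, threads):
    # Scalability efficiency = measured speed-up / thread count.
    return speedup / threads

# The reported 70% efficiency on 16 threads implies a speed-up of
# about 11.2 over a single thread.
eff = parallel_efficiency(11.2, 16)
```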


Tokamak Physics. IRFM (Research Institute on Magnetic Fusion) develops a set of ab-initio codes dedicated to tokamak modelling. These codes describe a set of major phenomena that take place inside tokamaks: instabilities, turbulent transport, plasma-wall interaction and heating. They are crucial for developing the expertise and skills in hot plasma physics, applied mathematics and computer science that are mandatory to prepare the operation of the large ITER tokamak currently being built. Realistic models, simulations and highly parallel algorithms are key to dealing with such challenges because of the huge range of temporal and spatial scales involved. Regarding transport, a gyrokinetic code named GYSELA is being developed. This high-performance code is adapted for supercomputers, and its runtime scales quite well up to tens of thousands of cores. Accelerated progress on this critical issue is especially important for ITER, because the size and cost of a fusion reactor are determined by the balance between 1) loss processes and 2) self-heating rates of the actual fusion reactions. The EoCoE project and the systematic code auditing procedure implemented during the EoCoE workshops have contributed greatly to the improvement of GYSELA's performance. New horizons have been opened for GYSELA through the testing of very innovative techniques such as BOAST (Bringing Optimization Through Automatic Source-to-Source Transformations, developed by the INRIA project-team CORSE). These new approaches are very promising for easily porting the code to new exascale architectures. Without the help of the EoCoE network, engaging in these new paths would be much more difficult.

EoCoE is a European Horizon 2020-funded Centre of Excellence in computing applications. It is designed to enhance the efficiency of numerical simulation in the international context of High Performance Computing (HPC) challenges, and it focuses its application scope on low-carbon energy domains.
