HPC Application Support Group


The applications group at the MPCDF provides high-level support for the development, optimization, analysis, and visualization of high-performance computing (HPC) applications to Max Planck Institutes with high-end computing needs, e.g. in astrophysics, fusion research, materials and bio sciences, polymer research, and theoretical chemistry. The mission of the group is to directly support such HPC applications in close cooperation with the scientists at the Max Planck Institutes. Specifically, this comprises the development and optimization of codes, visualization and graphical analysis of data, and technical consulting, e.g. on new (parallel) programming techniques and models, such as those for utilizing hardware accelerators (e.g. GPUs).


Group Members

  • Cesar Allande Alvarez
  • Michele Compostella
  • Tilman Dannert (team leader academic collaborations)
  • Renate Dohmen
  • Meisam Farzalipour-Tabriz
  • Ihor Holod
  • Lorenz Hüdepohl
  • Sebastian Kehl
  • Rafael Lago
  • Cristian Lalescu
  • Carlos Lopez
  • Andreas Marek (team leader data analytics)
  • Sebastian Ohlmann
  • Markus Rampp (group leader)
  • Klaus Reuter (deputy group leader)
  • Luka Stanisic
  • Fabio Baruffa (Intel, associated)


Former Group Members

  • Pavel Kus (University of Prague)
  • Florian Merz (IBM/Lenovo, associated)
  • Bérenger Bramas (INRIA Strasbourg)
  • Werner Nagel (Retired)
  • Elena Erastova (Industry)
  • Fabio Baruffa (Intel)
  • Reinhard Tisma (Retired)



Supervised Theses

  • C. Penke (PhD thesis in collaboration with Prof. P. Benner, MPI for Dynamics of Complex Technical Systems; co-supervisors: T. Dannert, A. Marek)
    • Numerical solvers for the Bethe-Salpeter Eigenvalue problem
  • M. Albert (Bachelor thesis in collaboration with Prof. D. Kranzlmüller, LMU; co-supervisor: M. Rampp)
    • Visualizing Big Data in Virtual Reality – Interactive Analysis of Large Scale Turbulence Simulations (completed in 2018)
  • V. Artigues (Master thesis in collaboration with Prof. O. Lafitte, Université Paris 13 & Prof. E. Sonnendrücker, IPP; co-supervisors: K. Reuter, M. Rampp)
    • Evaluation of novel, high-level programming frameworks for exascale HPC architectures and their application in numerical plasma physics. (completed in 2018)


Open Positions

We are always looking for highly skilled computational scientists and HPC or HPDA application experts with profound knowledge in parallel programming (ideally with modern Fortran, C, or C++, together with MPI, OpenMP, and GPU programming models) and/or data analytics. We offer the opportunity to work on state-of-the-art numerical algorithms, leading HPC codes and HPDA applications, and the latest supercomputer technology, and to maintain close collaborations with the code owners and domain scientists from various Max Planck Institutes, e.g. in astrophysics, fusion research, materials and bio sciences, polymer research, and theoretical chemistry. Applications are welcome and should be directed to M. Rampp.

See also MPCDF Career Opportunities


HPC code development & optimization projects

The group has been involved in development and optimization activities for the following HPC applications:

  • GENE (Max Planck Institute for Plasma Physics, Garching)
  • GVEC (Max Planck Institute for Plasma Physics, Garching)
  • GRILLIX (Max Planck Institute for Plasma Physics, Garching)
  • FHI-aims (Fritz Haber Institute of the MPG, Berlin)
  • NECI (Max Planck Institute for Solid State Research, Stuttgart)
  • ELPA (MPCDF, Garching)
  • OCTOPUS (Max Planck Institute for the Structure and Dynamics of Matter, Hamburg)
  • VERTEX (Max Planck Institute for Astrophysics, Garching)
  • ESPResSo++ (Max Planck Institute for Polymer Research, Mainz)
  • S/PHI/nX (Max Planck Institute for Iron Research, Düsseldorf)
  • IDE/GPEC (Max Planck Institute for Plasma Physics, Garching)
  • SeLaLib (Max Planck Institute for Plasma Physics, Garching)
  • H5xx (Max Planck Institute for Intelligent Systems, Stuttgart)
  • GOEMHD3 (Max Planck Institute for Solar System Research, Göttingen)
  • MagIC (Max Planck Institute for Solar System Research, Göttingen)
  • NSCOUETTE (Max Planck Institute for Dynamics and Self-Organization, Göttingen)
  • TurTLE/BFPS (Max Planck Institute for Dynamics and Self-Organization, Göttingen)
  • BioEM (Max Planck Institute for Biophysics, Frankfurt)
  • Complexes++ (Max Planck Institute for Biophysics, Frankfurt)
  • GROMACS (together with Max Planck Institute for Biophysical Chemistry, Göttingen)


Scientific visualization

Talks and training material (HPC)

Selected publications


2020
  • Octopus, a computational framework for exploring light-driven phenomena and quantum dynamics in extended and finite systems. N. Tancogne-Dejean, M. J. T. Oliveira, X. Andrade, H. Appel, C. H. Borca, G. Le Breton, F. Buchholz, A. Castro, S. Corni, A. A. Correa, U. De Giovannini, A. Delgado, F. G. Eich, J. Flick, G. Gil, A. Gomez, N. Helbig, H. Hübener, R. Jestädt, J. Jornet-Somoza, A. H. Larsen, I. V. Lebedeva, M. Lüders, M. A. L. Marques, S. T. Ohlmann, S. Pipolo, M. Rampp, C. A. Rozzi, D. A. Strubbe, S. A. Sato, C. Schäfer, I. Theophilou, A. Welden, A. Rubio. Journal of Chemical Physics 152,12 (2020) arXiv:1912.07921
  • nsCouette -- A high-performance code for direct numerical simulations of turbulent Taylor-Couette flow. J.-M. Lopez, D. Feldmann, M. Rampp, A. Vela-Martin, L. Shi, M. Avila. SoftwareX, 11, 100395 (2020) arXiv:1908.00587


2019
  • Evaluation of performance portability frameworks for the implementation of a particle-in-cell code. V. Artigues, K. Kormann, M. Rampp, K. Reuter. Concurrency and Computation: Practice and Experience (2019) e5640. arXiv:1911.08394
  • Discovery of an Exceptionally Strong β-Decay Transition of 20F and Implications for the Fate of Intermediate-Mass Stars, Kirsebom, O. S., Jones, S., Strömberg, D. F., Martínez-Pinedo, G., Langanke, K., Röpke, F. K., Brown, B. A., Eronen, T., Fynbo, H. O. U., Hukkanen, M., Idini, A., Jokinen, A., Kankainen, A., Kostensalo, J., Moore, I., Möller, H., Ohlmann, S. T., Penttilä, H., Riisager, K., Rinta-Antila, S., Srivastava, P. C., Suhonen, J., Trzaska, W. H., & Äystö, J. Phys. Rev. Lett. 123, 262701 (2019)
  • Defect calculations with hybrid functionals in layered compounds and in slab models, P. Deák, E. Khorasani, M. Lorke, M. Farzalipour-Tabriz, B. Aradi & Th. Frauenheim, Phys. Rev. B 100, 235304 (2019)
  • Current profile tailoring with the upgraded ECRH system at ASDEX Upgrade, R. Fischer, A. Bock, A. Burckhart, L. Giannone, A. Gude, R.M. McDermott, M. Rampp, M. Reisner, J. Stober, M. Weiland, M. Willensdorfer, and the ASDEX Upgrade Team 46th EPS Conference on Plasma Physics (2019)
  • Efficient Ensemble Refinement by Reweighting, J. Köfinger, L. Stelzl, K. Reuter, C. Allande, K. Reichel, G. Hummer, J. Chem. Theory Comput. (2019), 15, 5, 3390-3401 doi: 10.1021/acs.jctc.8b01231 (preprint)
  • A massively parallel semi-Lagrangian solver for the six-dimensional Vlasov-Poisson equation. Kormann, K., Reuter, K., Rampp, M. International Journal of High Performance Computing Applications (2019) (arXiv:1903.00308)
  • Remnants and ejecta of thermonuclear electron-capture supernovae - Constraining oxygen-neon deflagrations in high-density white dwarfs. S. Jones, F. K. Röpke, C. Fryer, A. J. Ruiter, I. R. Seitenzahl, L. R.  Nittler, S. T. Ohlmann, R. Reifarth, M. Pignatari and K. Belczynski. Astronomy & Astrophysics 622, A74 (2019)
  • Increasing the degree of parallelism using speculative execution in task-based runtime systems. B. Bramas. PeerJ Computer Science 5, e183 (2019), doi: 10.7717/peerj-cs.183


2018
  • CADISHI: Fast parallel calculation of particle-pair distance histograms on CPUs and GPUs. Reuter, K., Koefinger, J. Computer Physics Communications (2018)
  • Current distribution reconstruction for plasma scenario development at ASDEX Upgrade. Fischer, R., Bock, A., Burckhart, A., Denk, S., Dunne, M., Ford, O., Giannone, L., Gude, A., Maraschek, M., McDermott, R., Mlynek, A., Poli, E., Rampp, M., Rittich, D., Weiland, M., Willensdorfer, M. 45th EPS Conference on Plasma Physics (2018)
  • Rotation-supported Neutrino-driven Supernova Explosions in Three Dimensions and the Critical Luminosity Condition. Summa A., Janka H.-Th., Melson T. and Marek A. Astrophysical Journal, 852 (2018).



2016
  • GPEC, a real-time capable tokamak equilibrium code. M. Rampp, R. Preuss, R. Fischer & ASDEX Upgrade Team. Fusion Science & Technology, 70(1) (2016) (arXiv:1511.04203)
  • Coupling of the flux diffusion equation with the equilibrium reconstruction at ASDEX Upgrade. R. Fischer, A. Bock, M. Dunne, J. C. Fuchs, L. Giannone, K. Lackner, P. J. McCarthy, E. Poli, R. Preuss, M. Rampp, M. Schubert, J. Stober, W. Suttrop, G. Tardini, M. Weiland & ASDEX Upgrade Team. Fusion Science & Technology, 69(2) 526-536 (2016).
  • Summa A., Hanke F., Janka H.-Th., Melson T., Marek A., and Müller B.: Progenitor-dependent Explosion Dynamics in Self-consistent, Axisymmetric Simulations of Neutrino-driven Core-collapse Supernovae, Astrophysical Journal, 825, 2016
  • Ertl T., Ugliano M., Janka H.-Th., Marek A., and Arcones A.: Erratum: Progenitor-explosion Connection and Remnant Birth Masses for Neutrino-driven Supernovae of Iron-core Progenitors, Astrophysical Journal, 821, 2016


2015
  • The 3D MHD code GOEMHD3 for astrophysical plasmas with large Reynolds numbers. Code description, verification, and computational performance. Skála, J.; Baruffa, F.; Büchner, J.; Rampp, M. Astronomy & Astrophysics, 580, A48 (2015) (arXiv:1411.1289).
  • A hybrid MPI-OpenMP parallel implementation for pseudospectral simulations with application to Taylor-Couette flow. Shi, L.; Rampp, M.; Hof, B.; Avila, M. Computers & Fluids, 106, 1-11 (2015) (arXiv:1311.2481).
  • Melson T., Janka H.-Th., Bollig R., Hanke F., Marek A., and Müller B.: Neutrino-driven Explosion of a 20 Solar-mass Star in Three Dimensions Enabled by Strange-quark Contributions to Neutrino-Nucleon Scattering, Astrophysical Journal, 808, 2015
  • Melson T., Janka H.-Th., and Marek A.: Neutrino-driven Supernova of a Low-mass Iron-core Progenitor Boosted by Three-dimensional Turbulent Convection, Astrophysical Journal, 801, 2015


2014
  • Porting Large HPC Applications to GPU Clusters: The Codes GENE and VERTEX. Dannert, T.; Marek, A., Rampp, M., in Advances in Parallel Computing, 25, 305-314 (2014) (arXiv:1310.1485).
  • Towards Petaflops Capability of the VERTEX supernova Code. Marek, A., Rampp, M., Hanke, F., Janka, H.-Th., in Advances in Parallel Computing, 25, 712-721 (2014). (arXiv:1404.1719).
  • Tamborra I., Hanke F., Janka H.Th., Müller B., Raffelt G., and Marek A.: Self-sustained Asymmetry of Lepton-number Emission: A New Phenomenon during the Supernova Shock-accretion Phase in Three Dimensions, Astrophysical Journal, 792, 2014
  • Marek A., Blum V., Johanni R., Havu V., Lang B., Auckenthaler T., Heinecke A., Bungartz H.-J., and Lederer H.: The ELPA library - Scalable parallel Eigenvalue Solutions for Electronic Structure Theory and Computational Science, Journal of Physics Condensed Matter, 26, 2014


2013
  • A data acquisition system for real-time magnetic equilibrium reconstruction on ASDEX Upgrade and its application to NTM stabilization experiments. L. Giannone et al. (including M. Rampp). Fusion Engineering and Design, 88 3299-3311 (2013)
  • Optimization strategy for the VMEC stellarator equilibrium code. Merz, F.; Geiger, J.; Rampp, M. IPP-Report R/48 (2013)
  • Dnmt2-dependent methylomes lack defined DNA methylation patterns
    G. Raddatz, P. M. Guzzardo, N. Olova, M. R. Fantappié, M. Rampp, M. Schaefer, W. Reik, G. J. Hannon, F. Lyko PNAS (2013). doi:10.1073/pnas.1306723110
  • Genome of the Haloarchaeon Natronomonas moolapensis, a Neutrophilic Member of a Previously Haloalkaliphilic Genus.
    Dyall-Smith M.L., Pfeiffer F., Oberwinkler T., Klee K., Rampp M., Palm P., Gross K., Schuster S.C., Oesterhelt D. Genome Announc. (2013). doi: 10.1128/genomeA.00095-13
  • Chloride & organic osmolytes: a hybrid strategy to cope with elevated salinities by the moderately halophilic, chloride-dependent bacterium Halobacillus halophilus. Saum S., Pfeiffer F., Palm P., Rampp M., Schuster S., Mueller V., Oesterhelt D., Environmental Microbiology (2013). doi: 10.1111/j.1462-2920.2012.02770.x
  • Marek A., Blum V., Johanni R., Havu V., Lang B., Auckenthaler T., Heinecke A., Bungartz H.-J., and Lederer H.: The ELPA library - Scalable parallel Eigenvalue Solutions for Electronic Structure Theory and Computational Science, psi-k.org, Scientific Highlight, 2013
  • Hanke F., Müller B., Wongwathanarat A., Marek A., and Janka H.-Th..: SASI Activity in Three-dimensional Neutrino-hydrodynamics Simulations of Supernova Cores, Astrophysical Journal, 770, 2013
  • Müller B., Janka H.-Th., Marek A.: A New Multi-dimensional General Relativistic Neutrino Hydrodynamics Code for Core-collapse Supernovae. III. Gravitational Wave Signals from Supernova Explosion Models, Astrophysical Journal, 766, 2013


2012
  • A parallel Grad-Shafranov solver for real-time control of tokamak plasmas. Rampp, M.; Preuss, R.; Fischer, R.; Hallatschek, K.; Giannone, L. Fusion Science & Technology, 62(3) 409-418 (2012).
  • Parallel equilibrium algorithm for real-time control of tokamak plasmas. Preuss, R.; Fischer, R.; Rampp, M.; Hallatschek, K.; von Toussaint, U.; Giannone, L.; McCarthy, P., IPP-Report R/47 (2012)
  • Ugliano M., Janka H.-Th., Marek A., Arcones, A: Progenitor-explosion Connection and Remnant Birth Masses for Neutrino-driven Supernovae of Iron-core Progenitors, APJ, vol 757, 2012
  • Müller B., Janka H.-Th., Marek A.: A New Multi-dimensional General Relativistic Neutrino Hydrodynamics Code for Core-collapse Supernovae. II. Relativistic Explosion Models of Core-collapse Supernovae, APJ, vol 756, 2012
  • Hanke F., Marek A., Müller B., Janka H.-Th.: Is Strong SASI Activity the Key to Successful Neutrino-driven Supernova Explosions?, APJ, vol 755, 2012

2011 and earlier
