How to run GROMACS efficiently on the LUMI supercomputer
This course gives practical tips on how to run GROMACS simulations efficiently on LUMI-G, i.e. on AMD GPUs. Attendees will learn how to assess and tune GROMACS performance. In addition, the course provides an overview of the LUMI architecture and of GROMACS heterogeneous parallelization, with special attention to AMD GPUs.
The event is organized in collaboration with BioExcel, CSC and PDC.
Where and when:
Online
Wednesday 24th – Thursday 25th January 2024
Topics:
- LUMI architecture
- GROMACS heterogeneous parallelization
- AMD GPU support in GROMACS
- Assessing and tuning performance of GROMACS simulations
Learning outcome:
After the course the participants should have the skills and knowledge needed to
- efficiently use GPU resources in GROMACS simulations
- tune and assess GROMACS performance.
Target audience:
These sessions are aimed at GROMACS users on HPC clusters interested in:
- running GROMACS simulations efficiently on LUMI-G
- understanding how to assess and tune GROMACS performance
Selection: participants will be selected to maximise benefit by evaluating the match between the course content, their motivation and their current skills. Moreover, geographical coverage and gender balance will be considered.
Those with access to LUMI will be prioritized. Applications will be evaluated regularly, and the workshop may fill up before the deadline.
Prerequisites:
The participants are required:
- to be familiar with molecular dynamics simulation and to have a working knowledge of GROMACS. Note: the fundamentals of molecular dynamics simulation and basic GROMACS usage are not covered in this course
- to have basic Linux/Unix skills, to understand basic batch scripts and to have hands-on experience with HPC systems. This includes the ability to use the command line: navigate files and directories with shell commands, create/move/delete files and directories, construct command pipelines, edit and run shell scripts, use basic Slurm commands, submit jobs to a queue, and find further help about the scheduler (a short sketch of the expected level follows this list)
- to be aware of the differences between hardware components (e.g. GPU vs. CPU, memory and storage)
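As a rough guide, the following is a minimal sketch of the command-line and Slurm usage assumed as a prerequisite; the file and job-script names are hypothetical placeholders, not workshop material:

    # Minimal sketch of the expected command-line skills; all file and
    # script names below are hypothetical placeholders.
    mkdir -p run1 && cd run1                 # create and enter a working directory
    cp ../topol.tpr .                        # copy an input file
    sbatch job.sh                            # submit a batch job to the queue
    squeue -u $USER                          # check the status of your own jobs
    grep "Performance" md.log | tail -n 1    # a simple command pipeline
    man sbatch                               # find further help about the scheduler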
Please consult the following materials if you’re uncertain of your Linux or GROMACS skills:
- Lectures on the basics of molecular dynamics simulation (video links: part I, part II)
- See MD with GROMACS tutorials here
- Linux commands, bash shell, a quiz and a link to an intro course
- Introduction to HPC systems (link)
Technical requirements:
To participate in the online interactive sessions:
- We recommend using the Zoom desktop app.
- A microphone and (ideally) a webcam are needed.
- Two screens are handy.
Agenda:
The training consists of two online sessions, and participants are expected to attend both. Each session consists of lectures and hands-on exercises. The hands-on exercises use GROMACS on the LUMI supercomputer. You will need to apply for a user account, but we will provide the project and compute resources. A sketch of the kind of GPU job script used in the hands-on is shown after the agenda below.
Wednesday 24th January 9:30-14:30 CET
- Workshop Introduction
- LUMI architecture
- Brief intro to GROMACS
- GROMACS parallelization and algorithms background
- AMD GPU support in GROMACS
- How to run on LUMI
- Hands-on: experiment to find optimal run parameters using up to three model systems
Thursday 25th January 9:30-14:30 CET
- Assessing and tuning performance of GROMACS simulations
- Hands-on continues
- Overview and discussion of attendees’ results
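To give a flavour of the hands-on, here is a minimal sketch of a LUMI-G batch script with GPU offload flags of the kind explored in the exercises. The project account, partition, module name, input file and thread count are placeholders/assumptions; the actual values and recommended settings will be provided during the workshop.

    #!/bin/bash
    # Illustrative sketch only: account, partition, module and input names
    # are placeholders, not the workshop's actual settings.
    #SBATCH --account=<project>        # LUMI project to charge
    #SBATCH --partition=small-g        # a LUMI-G (GPU) partition
    #SBATCH --nodes=1
    #SBATCH --gpus-per-node=1
    #SBATCH --time=00:30:00

    module load GROMACS                # exact module name depends on the software stack

    # Offload short-range nonbonded, PME and the coordinate update to the GPU;
    # trying out flag combinations like these is part of the hands-on.
    srun gmx_mpi mdrun -s topol.tpr \
         -nb gpu -pme gpu -update gpu \
         -ntomp 7                      # CPU threads per rank (one value to tune)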
Materials
Time
24.1.2024 - 25.1.2024
10:30 - 15:30
More information
LECTURERS:
Szilárd Páll (KTH, Royal Institute of Technology)
Andrey Alekseenko (KTH, Royal Institute of Technology)
Rasmus Kronberg (CSC)
TRAINERS THAT WILL HELP IN HANDS-ON:
Alessandra Villa (KTH, Royal Institute of Technology)
Cathrine Bergh (KTH, Royal Institute of Technology)
Xavier Prasanna Antohony Raj (CSC)
NUMBER OF PARTICIPANTS/REGISTRANT CAPACITY:
40, plus an additional seat for a member of each LUMI consortium country’s NCC or Local Organization, in order to strengthen support for local users in each country.
DEADLINE FOR REGISTRATION:
20th December 2023