Sample submission scripts

Sample submission scripts for some codes are available in the /share/doc/submission/ folder:

jan.gmys@zeus-2:~$ ls /share/doc/submission/
abaqus cp2k dlpoly gaussian gromacs lammps matlab molpro namd polyrate R tensorflow theano vasp

If you have a test case for a module or would like to see an additional script in this list, don't hesitate to let us know.

In the following examples, note the difference between tasks (typically MPI processes) and CPUs or threads (typically OpenMP threads); a short sketch after the list shows how the two options combine:

  • --ntasks-per-node: number of MPI processes
  • --cpus-per-task: number of OpenMP threads
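
The total number of CPUs a job allocates is the product of these options, and Slurm exports matching environment variables inside the job. A minimal sketch (the option values are illustrative):

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4   # 4 MPI processes
#SBATCH --cpus-per-task=2     # 2 OpenMP threads per process

# 1 node x 4 tasks x 2 CPUs per task = 8 CPUs in total
echo "tasks per node: $SLURM_NTASKS_PER_NODE"
echo "CPUs per task:  $SLURM_CPUS_PER_TASK"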

Single-core

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --time=24:00:00
#SBATCH --job-name=my_serial_job
#SBATCH --mem=1536M

./my_program

This script submits a job called my_serial_job requesting 1 core on 1 node with a maximum of 1536 MB of memory for at most 24 hours.
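
Assuming the script above is saved as serial.sh (a filename chosen here for illustration), it is submitted with sbatch and can be monitored with squeue; the job ID in the output is illustrative:

sbatch serial.sh
# -> Submitted batch job 123456
squeue -u $USER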

Multi-node (MPI)

#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=16
#SBATCH --time=48:00:00
#SBATCH --job-name=my_mpi_job
#SBATCH --mem=4096M

mpiexec ./my_program

This script submits a job called my_mpi_job requesting 16 cores on 2 nodes (32 cores in total) with a maximum of 4096 MB of memory per node (8 GB in total) for at most 48 hours.
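
Depending on the MPI library, srun can replace mpiexec as the launcher: a Slurm-aware MPI build reads the task layout (here, 2 × 16 tasks) directly from the allocation. A minimal sketch, worth verifying against the MPI module in use:

srun ./my_program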

Multi-core (OpenMP)

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=8
#SBATCH --time=48:00:00
#SBATCH --job-name=my_openmp_job
#SBATCH --mem=4096M

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
./my_program

This script submits a job called my_openmp_job requesting 8 cores on a single node with a maximum of 4096 MB of memory for at most 48 hours.

SLURM_CPUS_PER_TASK is an environment variable that is set if the --cpus-per-task option is specified. A complete list of the environment variables set by Slurm can be found in the Slurm documentation.
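
If the same script may also be run without --cpus-per-task (in which case SLURM_CPUS_PER_TASK is unset), a shell default keeps the thread count valid. This defensive variant is optional:

export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}
./my_program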

MPI+OpenMP

#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=4
#SBATCH --time=48:00:00
#SBATCH --job-name=my_mpi_openmp_job
#SBATCH --mem=4096M

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
mpiexec ./my_program

This script submits a hybrid job called my_mpi_openmp_job requesting 2 nodes with 16 cores per node (4 MPI processes per node with 4 threads each) and a maximum of 4096 MB of memory per node (8 GB in total) for at most 48 hours.
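
For hybrid jobs, thread placement can affect performance. The standard OpenMP binding variables (OpenMP 4.0 and later) keep each process's threads on neighbouring cores; whether this pays off depends on the node architecture, so treat it as an optional variant of the script body above:

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
export OMP_PLACES=cores       # one place per physical core
export OMP_PROC_BIND=close    # bind a task's threads to adjacent places
mpiexec ./my_program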