Shared Memory Jobs

Shared memory jobs run on a single physical node, taking advantage of multiple processor cores through a shared-memory programming model (e.g. OpenMP, multi-threading, multi-processing), with all threads sharing access to the memory on the node.

Launching these codes is very similar to launching a simple serial job; at its most basic, you just specify the number of cores to be used.

The following example jobfile launches an OpenMP code on 64 cores:

#!/bin/bash 

### Example OpenMP job ###

#SBATCH --job-name=openmp_test
#SBATCH -o openmp_test_o.%j
#SBATCH -e openmp_test_e.%j
#SBATCH -p slurm
#SBATCH -A account_name

# run for ten minutes, turn on all mail notifications
#SBATCH --time=00:10:00
#SBATCH --mail-type=ALL

# 1 task with 64 cores
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=64

# setup modules
module purge
module load gcc/10.3.0
module load openblas/0.3.15

# set OMP_NUM_THREADS explicitly to the number of cpus assigned to this task
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

# execute the job
time ./my_openmp_code

exit 0
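Note that SLURM_CPUS_PER_TASK is only set when the script runs under Slurm; if you test the script interactively, OMP_NUM_THREADS would end up empty. A minimal defensive sketch using bash's default-value expansion (the fallback of 1 is an assumption for illustration, not a site policy):

```shell
#!/bin/bash
# Fall back to a single thread when SLURM_CPUS_PER_TASK is unset,
# e.g. when running the script outside a Slurm allocation.
# The default of 1 here is an assumption, not a site policy.
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}
echo "OMP_NUM_THREADS=${OMP_NUM_THREADS}"
```

Inside a job, `${SLURM_CPUS_PER_TASK:-1}` expands to the value requested with --cpus-per-task (64 in the example above); outside Slurm it expands to 1.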