Job scripts
There are various types of jobs that you can run on DIaL3. These are covered in more detail in the relevant sections.
Basic Job Script
A basic job script on DIaL3 would look like the following.
#!/bin/bash -l
##############################
# BASIC JOB SCRIPT #
##############################
# Define your job name here so that you can recognise it in the queue.
#SBATCH --job-name=example
# Define the output file name (%j will be replaced by the job id). This will store the standard output (stdout) for the job.
#SBATCH -o your_output_file_name%j
# Define file name for storing any errors (stderr).
#SBATCH -e your_error_file_name%j
# Define the partition on which you want to run.
#SBATCH -p partition_name
# Define the Account/Project name from which the computation would be charged.
#SBATCH -A your_Account_name
# Define how many nodes you need. Here, we ask for 1 node. Each node has 128 CPU cores.
#SBATCH --nodes=1
# You can further define the number of tasks with --ntasks-per-*
# See "man sbatch" for details. e.g. --ntasks=4 will ask for 4 tasks (by default, one CPU core per task).
# Define how long the job will run in real (wall-clock) time. This is a hard
# cap: if the job runs longer than the time given here, it will be
# force-stopped by the scheduler. If you make the expected time too long, it
# may take longer for the job to start. Here, we say the job will take 1 hour.
# hh:mm:ss
#SBATCH --time=01:00:00
# How much memory you need.
# In most cases, you can skip this as jobs run on entire nodes by default
# --mem will define memory per node and
# --mem-per-cpu will define memory per CPU/core. Choose one of those.
##SBATCH --mem-per-cpu=1500MB
##SBATCH --mem=5GB
# Turn on mail notification. There are several self-explanatory values:
# NONE, BEGIN, END, FAIL, ALL (ALL includes all of the aforementioned)
# For more values, check "man sbatch"
#SBATCH --mail-type=END,FAIL
# You may not place any commands before the last SBATCH directive
# Define and create a unique scratch directory for this job (if required)
SCRATCH_DIRECTORY=/scratch/Your_project/${USER}/${SLURM_JOBID}
mkdir -p ${SCRATCH_DIRECTORY}
cd ${SCRATCH_DIRECTORY}
# You can copy everything you need to the scratch directory
# ${SLURM_SUBMIT_DIR} points to the path where this script was submitted from
cp ${SLURM_SUBMIT_DIR}/myfiles*.txt ${SCRATCH_DIRECTORY}
# This is where the actual work is done. In this case, the script only waits.
# The time command is optional, but it may give you a hint on how long the
# command took
time sleep 10
# After the job is done, we copy our output back to $SLURM_SUBMIT_DIR
cp ${SCRATCH_DIRECTORY}/my_output ${SLURM_SUBMIT_DIR}
# After everything is copied back to the submission directory, delete the
# scratch directory to save space on /scratch
cd ${SLURM_SUBMIT_DIR}
rm -rf ${SCRATCH_DIRECTORY}
# Finish the script
exit 0
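Assuming the script above is saved in a file (the name basic_job.sh below is purely illustrative), it would typically be submitted and monitored with the standard Slurm commands:

```shell
# Submit the job script to the scheduler; sbatch prints the assigned job id.
sbatch basic_job.sh

# Check the state of your queued and running jobs.
squeue -u $USER

# Cancel a job if needed, using the job id printed by sbatch.
scancel <jobid>
```

The output and error files defined with -o and -e will appear once the job starts, with the job id appended to their names.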
Please note that the above script does not contain any module load command, as it is a very simple and basic example. In many cases you will need to load certain modules or compilers such as ifort, icpc, gcc, g++, etc. Please see the examples given in the following sections to see how to add module load statements to the job submission scripts.
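As a minimal sketch, module load lines sit after the last #SBATCH directive and before the commands that need them. The module name below (gcc) is illustrative; the exact module names and versions available on DIaL3 can be listed with "module avail".

```shell
#!/bin/bash -l
#SBATCH --job-name=module_example
#SBATCH -p partition_name
#SBATCH -A your_Account_name
#SBATCH --nodes=1
#SBATCH --time=00:10:00

# Load the compiler the job needs (the module name is illustrative;
# check "module avail" on DIaL3 for the real names and versions).
module load gcc

# gcc and g++ are now on PATH for the rest of the script.
gcc --version
```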