The following is a list of basic #SBATCH specifications. For the complete set of options, please refer to the SLURM sbatch command page.
|-a, --array=<indexes>||Submit a job array: multiple jobs to be executed with identical parameters. The indexes specification identifies which array index values should be used. Every task in the array shares the same array job ID ($SLURM_ARRAY_JOB_ID) but has its own array task ID ($SLURM_ARRAY_TASK_ID). A step size can be applied with the ":" separator, and a maximum number of simultaneously running tasks can be specified with the "%" separator.|
#SBATCH -a 0-15
#SBATCH -a 0-15:4 (same as #SBATCH -a 0,4,8,12)
#SBATCH --array=0-15%4 (4 jobs running simultaneously)
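Inside an array job script, each task can use $SLURM_ARRAY_TASK_ID to select its own work. A minimal sketch (the job name, time limit, and input-file naming scheme are illustrative, not cluster requirements):

```shell
#!/bin/bash
# Minimal array job sketch: each task processes one input file.
#SBATCH --job-name=array_demo
#SBATCH --array=0-15%4      # task IDs 0-15, at most 4 running at once
#SBATCH --time=00:10:00

# Slurm sets SLURM_ARRAY_TASK_ID to this task's index (0-15 here).
# Default to 0 so the script also runs outside Slurm for testing.
TASK_ID=${SLURM_ARRAY_TASK_ID:-0}
INPUT_FILE="input_${TASK_ID}.dat"
echo "Array task ${TASK_ID} processing ${INPUT_FILE}"
```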
|-A, --account=<account>||This option tells SLURM to use the specified buy-in Account. Unless you are an authorized user of the account, your job will not run.||#SBATCH -A <account>|
|--begin=<time>||Submit the batch script to the Slurm controller immediately, like normal, but tell the controller to defer the allocation of the job until the specified time. Time may be of the form HH:MM:SS to run a job at a specific time of day (seconds are optional).||#SBATCH --begin=16:00|
|-C, --constraint=<list>||Request a node feature. Multiple features may be combined with "&" (and), "|" (or), etc. Constraints using "|" must be prepended with "NOAUTO:". See the documentation on node constraints for more information.||#SBATCH -C NOAUTO:intel16|intel14|
|-c, --cpus-per-task=<ncpus>||Request ncpus processors per task.||#SBATCH -c 3 (3 cores per task)|
|-d, --dependency=<dependency_list>||Defer the start of this job until the specified dependencies have been satisfied. <dependency_list> is of the form:|
after:job_id[:jobid...] - This job can begin execution after the specified jobs have begun execution.
afterany:job_id[:jobid...] - This job can begin execution after the specified jobs have terminated.
afterburstbuffer:job_id[:jobid...] - This job can begin execution after the specified jobs have terminated and any associated burst buffer stage out operations have completed.
aftercorr:job_id[:jobid...] - A task of this job array can begin execution after the corresponding task ID in the specified job has completed successfully (ran to completion with an exit code of zero).
afternotok:job_id[:jobid...] - This job can begin execution after the specified jobs have terminated in some failed state (non-zero exit code, node failure, timed out, etc.).
afterok:job_id[:jobid...] - This job can begin execution after the specified jobs have successfully executed (ran to completion with an exit code of zero).
expand:job_id - Resources allocated to this job should be used to expand the specified job. The job to expand must share the same QOS (Quality of Service) and partition. Gang scheduling of resources in the partition is also not supported.
singleton - This job can begin execution after any previously launched jobs sharing the same job name and user have terminated.
|#SBATCH -d after:<JobID1>:<JobID2>,afterok:<JobID3>|
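Dependency chains are often built at submission time by capturing each job's ID. A sketch, assuming hypothetical script names (sbatch's --parsable option prints only the job ID):

```shell
# Submit a pipeline where each stage waits on the previous one.
# preprocess.sb, analyze.sb, and cleanup.sb are hypothetical scripts.
jid1=$(sbatch --parsable preprocess.sb)
jid2=$(sbatch --parsable --dependency=afterok:${jid1} analyze.sb)
# cleanup runs whether analyze succeeds or fails.
sbatch --dependency=afterany:${jid2} cleanup.sb
```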
|-D, --chdir=<directory>||Set the working directory of the batch script to directory before it is executed. The path can be specified as an absolute path or relative to the directory where the sbatch command is executed.||#SBATCH -D /mnt/scratch/username|
|-e, --error=<filename pattern>||Instruct Slurm to connect the batch script's standard error directly to the file name specified. By default both standard output and standard error are directed to the same file; see -o, --output for the default file name.||#SBATCH -e /home/username/myerrorfile|
|--export=<environment variables | ALL | NONE>||Identify which environment variables are propagated to the launched application; by default all are propagated. Multiple environment variable names should be comma separated.||#SBATCH --export=EDITOR=/bin/emacs,ALL|
|--gres=<list>||Specifies a comma-delimited list of generic consumable resources. The format of each entry in the list is "name[[:type]:count]", where name is that of the consumable resource. For example, --gres=gpu:k20:1 requests one k20 GPU. Valid GPU types are k20, k80 and v100. The type is optional, but the number of GPUs is required.|
#SBATCH --gres=gpu:2 (request 2 GPUs per node)
#SBATCH --gres=gpu:k80:2 (request 2 K80 GPUs per node)
|--gres-flags=enforce-binding||This option ensures that the CPUs available to the job are those bound to the allocated GPUs, which may increase the performance of some GPU jobs. NOTE: the number of CPUs bound to GPUs on a node is smaller than the total number of CPUs on the node. Configuration details can be seen in /etc/slurm/gres.conf.||#SBATCH --gres-flags=enforce-binding|
|-G, --gpus=[<type>:]<number>||Specify the total number of GPUs required for the job. An optional GPU type specification can be supplied. Valid GPU types are k20, k80 and v100. The type is optional, but the number of GPUs is required.|
#SBATCH --gpus=k80:2 (request 2 k80 GPUs for entire job)
#SBATCH --gpus=2 (request 2 GPUs for entire job)
|--gpus-per-node=[<type>:]<number>||Specify the number of GPUs required on each node included in the job's resource allocation. An optional GPU type specification can be supplied. Valid GPU types are k20, k80 and v100. The type is optional, but the number of GPUs is required.|
#SBATCH --gpus-per-node=v100:8 (request 8 v100 GPUs for each node requested by job)
#SBATCH --gpus-per-node=8 (request 8 GPUs for each node requested by job)
|--gpus-per-task=[<type>:]<number>||Specify the number of GPUs required for each task spawned in the job's resource allocation. An optional GPU type specification can be supplied. Valid GPU types are k20, k80 and v100. The type is optional, but the number of GPUs is required.|
#SBATCH --gpus-per-task=k80:2 (request 2 k80 GPUs for each task requested by job)
#SBATCH --gpus-per-task=2 (request 2 GPUs for each task requested by job)
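As one illustration of how these GPU options compose, the header below requests one GPU per task. This is a sketch; the v100 type and the resource numbers are assumptions, not requirements:

```shell
#!/bin/bash
# Sketch: 4 tasks, each with one v100 GPU (4 GPUs total).
#SBATCH --ntasks=4
#SBATCH --gpus-per-task=v100:1
#SBATCH --time=01:00:00

# Slurm exports CUDA_VISIBLE_DEVICES listing the GPUs allocated
# to this job; most CUDA applications honor it automatically.
echo "Allocated GPUs: ${CUDA_VISIBLE_DEVICES:-none}"
```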
|-H, --hold||Specify the job is to be submitted in a held state (priority of zero). A held job can later be released using scontrol to reset its priority (e.g. "scontrol release <job_id>").||#SBATCH -H|
|-I, --immediate||The batch script will only be submitted to the controller if the resources necessary to grant its job allocation are immediately available. If the job allocation would have to wait in a queue of pending jobs, the batch script will not be submitted.|
|-i, --input=<filename pattern>||Instruct Slurm to connect the batch script's standard input directly to the file name specified in the "filename pattern".|
|-J, --job-name=<jobname>||Specify a name for the job allocation.||#SBATCH -J MySuperComputing|
|--jobid=<jobid>||Allocate resources as the specified job id.|
|-L, --licenses=<license>||Specification of licenses (or other resources available on all nodes of the cluster) which must be allocated to this job.||#SBATCH -L comsol@<license_server>|
|--mail-type=<type>||Notify user by email when certain event types occur. Valid type values are NONE, BEGIN, END, FAIL, REQUEUE, ALL (equivalent to BEGIN, END, FAIL, REQUEUE, and STAGE_OUT), STAGE_OUT (burst buffer stage out and teardown completed), TIME_LIMIT, TIME_LIMIT_90 (reached 90 percent of time limit), TIME_LIMIT_80 (reached 80 percent of time limit), TIME_LIMIT_50 (reached 50 percent of time limit) and ARRAY_TASKS (send emails for each array task).||#SBATCH --mail-type=BEGIN,END,FAIL|
|--mail-user=<user>||User to receive email notification of state changes as defined by --mail-type. The default value is the submitting user.||#SBATCH --mail-user=<email address>|
|--mem=<size[units]>||Specify the real memory required per node.||#SBATCH --mem=2G (use suffix M for megabytes or G for gigabytes)|
|--mem-per-cpu=<size[units]>||Minimum memory required per allocated CPU.||#SBATCH --mem-per-cpu=2G (use suffix M for megabytes or G for gigabytes)|
|-N, --nodes=<minnodes[-maxnodes]>||Request that a minimum of minnodes nodes be allocated to this job. A maximum node count may also be specified with maxnodes. If only one number is specified, this is used as both the minimum and maximum node count.|
#SBATCH -N 2-4 (request 2 to 4 different nodes)
|--no-requeue||Request that a job not be requeued under any circumstances. Jobs are requeued by default if a node they are running on fails. This option may be useful for jobs that will not run properly after having run partially and failed.||#SBATCH --no-requeue|
|-n, --ntasks=<number>||Request total number of tasks. The default is one task per node, but note that the --cpus-per-task option will change this default.|
#SBATCH -n 4
(the 4 tasks may be distributed across 1 to 4 different nodes)
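Combining the node, task, CPU, and memory options above, a typical MPI-style request might look like the following sketch (my_mpi_program is a hypothetical executable, and the resource numbers are illustrative):

```shell
#!/bin/bash
# Sketch: 8 tasks spread over 2 to 4 nodes, with 2 CPUs and
# 2 GB of memory per CPU for each task.
#SBATCH -N 2-4
#SBATCH -n 8
#SBATCH -c 2
#SBATCH --mem-per-cpu=2G
#SBATCH -t 00:30:00

srun ./my_mpi_program   # srun launches one instance per task
```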
|--ntasks-per-node=<ntasks>||Request that ntasks be invoked on each node. This is related to --cpus-per-task=ncpus, but does not require knowledge of the actual number of CPUs on each node.||#SBATCH --ntasks-per-node=2|
|-o, --output=<filename pattern>||Instruct Slurm to connect the batch script's standard output directly to the file name specified in the "filename pattern". The default file name is "slurm-%j.out", where "%j" is replaced by the job ID. For job arrays, the default file name is "slurm-%A_%a.out", where "%A" is replaced by the job ID and "%a" by the array index. A file name or filename pattern is required, not just a directory.||#SBATCH -o /home/username/output-file|
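The %-patterns used in the default file names can also appear in your own -o/-e paths, which is handy for keeping logs organized. A sketch (the log directory and file names are illustrative):

```shell
# Sketch: per-job log files built from filename patterns.
#SBATCH -o /home/username/logs/job-%j.out    # %j = job ID
#SBATCH -e /home/username/logs/job-%j.err
# For job arrays, %A (array job ID) and %a (array index) keep
# each task's output in a separate file:
# #SBATCH -o /home/username/logs/array-%A_%a.out
```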
|-t, --time=<time>||Set a limit on the total run time of the job allocation. Time may be given as HH:MM:SS or DD-HH:MM:SS.||#SBATCH -t 00:20:00|
|--tmp=<size[units]>||Specify a minimum amount of temporary disk space per node.||#SBATCH --tmp=2G|
|-v, --verbose||Increase the verbosity of sbatch's informational messages. Multiple -v's will further increase sbatch's verbosity. By default only errors will be displayed.||#SBATCH -v|
|-w, --nodelist=<node name list>||Request a specific list of your buy-in nodes. The job will contain all of these hosts and possibly additional hosts as needed to satisfy resource requirements. The list may be specified as a comma-separated list of hosts, a range of hosts, or a filename; the list is assumed to be a filename if it contains a "/" character.|
#SBATCH -w host[1-5,7,...]
#SBATCH -w /mnt/home/userid/nodelist
|-x, --exclude=<node name list>||Explicitly exclude certain nodes from the resources granted to the job.|