You are visiting the archived wiki pages, which describe a previous system and are NO LONGER SUPPORTED.
Please go to the latest HPCC wiki site.


Your job submission script has a number of environment variables that can be used to help you write some more advanced scripts. These variables can make your code more portable and save you time.

Variables listed by functionality

The following are some of the more useful environment variables available for use in your scripts:

General Useful Environment Variables

$USER - User Name (NetID). Useful if you would like to dynamically generate a directory on some scratch space.

$HOSTNAME - Name of the computer currently running the script. This should be one of the nodes listed in the file $PBS_NODEFILE.

$HOST - Same as $HOSTNAME.

$HOME - User home directory (typically /mnt/home/$USER). You can also use the shortcut '~' to refer to your home directory.

$PWD - Current directory that the script is running in. The value of this environment variable changes as you change directories. When your job starts, the current directory is your $HOME directory.

$PBS_O_WORKDIR - Directory where the qsub command was executed. Useful with the cd (change directory) command to change your current directory to your working directory.

$TMPDIR - Local temporary disk storage (typically /mnt/local/$PBS_JOBID), unique to each node and each job. This directory is automatically created at the beginning of the job and deleted at the end of the job.

$PBS_O_QUEUE - Queue the job was submitted to.

$PBS_QUEUE - Queue the job is running in (typically the same as $PBS_O_QUEUE).

Job Information

$PBS_JOBID - Job ID number given to this job. This number is used by many of the job monitoring programs such as qstat, showstart, and dque.

$PBS_JOBNAME - Name of the job. This can be set using the -N option in the PBS script (or from the command line). The default job name is the name of the PBS script.

$PBS_NODEFILE - Name of the file that contains a list of the hosts provided for the job.

$PBS_ARRAYID - Array ID number for jobs submitted with the -t flag. For example, a job submitted with #PBS -t 1-8 will run eight identical copies of the shell script; the value of $PBS_ARRAYID will be an integer between 1 and 8.

$PBS_VNODENUM - Used with pbsdsh to determine the task number of each processor.

$PBS_O_PATH - Original PBS path. Used with pbsdsh.

$PBS_NUM_PPN - Number of cores requested (per node).


Probably the most useful aspect of these variables is that you can make your jobs more portable and easier to run. This is very useful if you want to run different jobs from different directories, reuse code or give your code to other users to try out.
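As a quick illustration, several of these variables can be combined in one script. A minimal sketch (the fallback values after :- are only so the snippet also runs outside a PBS job):

```shell
#!/bin/bash -login
# Minimal sketch combining a few of the variables above.
echo "Job owner: ${USER:-unknown}"
echo "Running on: ${HOSTNAME:-unknown}"

# $PBS_NODEFILE lists one host line per core assigned to the job,
# so counting its lines gives the total core count.
if [ -n "${PBS_NODEFILE:-}" ] && [ -f "$PBS_NODEFILE" ]; then
    NPROCS=$(wc -l < "$PBS_NODEFILE")
else
    NPROCS=1   # fallback when run outside a PBS job
fi
echo "Cores assigned: $NPROCS"
```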

Example 1: Running from the PBS working directory

Most users run their jobs from a subdirectory that contains their code, input data and submission scripts. However, when your job runs, the starting directory is your HOME directory. So, typically one of the first lines in your submission script is to change the directory to the one with your code. For example,

#!/bin/bash -login
cd /mnt/home/username/workingdirectory/

This should work fine. However, if you make a copy of your code and put it in workingdirectory2, you then need to change your submission script to match the new directory. This works, but it is annoying to constantly keep track of which directory you are working from. Instead, you can replace the hard-coded working directory with the $PBS_O_WORKDIR environment variable. As long as your submission script is in your working directory, the job will run correctly, and you can make many copies of your submission script without having to change the directory.
#!/bin/bash -login
cd $PBS_O_WORKDIR

Example 2: Running multiple copies of the same job at the same time

For some experiments, it is important to run the same job more than once (this is especially true if a random number generator is used and you need to average the results). Consider the following example:

#!/bin/bash -login
myprogram 1> myprogram.out 2> myprogram.err

The problem with this code is that if you qsub the job more than once, results from one job overwrite the results from a previous job. The way PBS solves this problem is by using the unique job ID to label the standard output and standard error. You can use the PBS_JOBID variable to do the same thing in your script. This variable contains the Job ID that is displayed when you run the qsub command. Each job ID is unique so it is a very easy way to generate a unique file name or directory name. For example:
#!/bin/bash -login
myprogram 1> $PBS_JOBID.out 2> $PBS_JOBID.err

Here is another example where we create a directory to place all of our programs output:
#!/bin/bash -login
mkdir $PBS_JOBID
myprogram 1> $PBS_JOBID/myprogram.out 2> $PBS_JOBID/myprogram.err

With either of these modifications, you should be able to run your job as many times as you want without the output overwriting itself.
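If you would also like the names to be readable, nothing stops you from combining variables. The sketch below joins $PBS_JOBNAME and $PBS_JOBID; the fallback values after :- are only so it also runs outside a PBS job, and myprogram is the same hypothetical program as above:

```shell
#!/bin/bash -login
# Build a tag such as "myjob.12345" that is both descriptive and unique.
TAG="${PBS_JOBNAME:-myjob}.${PBS_JOBID:-0}"
mkdir -p "$TAG"
echo "results will go under $TAG/"
# myprogram 1> "$TAG/myprogram.out" 2> "$TAG/myprogram.err"
```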

Example 3: Utilizing scratch space in code shared with other users

It can be difficult to share your code with other users and have it work properly. This is especially true if you are using the scratch file space and have hard coded the directory. Consider the following example, which uses a hypothetical program that takes an input argument called workingdir:

#!/bin/bash -login

mkdir -p /mnt/scratch/myprogram_scratch_space
myprogram -workingdir /mnt/scratch/myprogram_scratch_space 1> $PBS_JOBID.out 2> $PBS_JOBID.err

This has a problem similar to hard coding your output file names, even though the outputs are directed to different files by the PBS_JOBID variable: if two users run the above code at the same time, they will be using the same scratch space. You could solve the problem using PBS_JOBID again. For example:
#!/bin/bash -login

mkdir -p /mnt/scratch/myprogram_scratch_space
mkdir -p /mnt/scratch/myprogram_scratch_space/$PBS_JOBID
myprogram -workingdir /mnt/scratch/myprogram_scratch_space/$PBS_JOBID 1> $PBS_JOBID.out 2> $PBS_JOBID.err

This works by putting each job in its own subdirectory under the scratch space. However, now you have the problem of remembering which user goes with which job ID. A second solution uses the $USER variable to make user-specific scratch spaces. For example:
#!/bin/bash -login

mkdir -p /mnt/scratch/myprogram_scratch_space
mkdir -p /mnt/scratch/myprogram_scratch_space/$USER
myprogram -workingdir /mnt/scratch/myprogram_scratch_space/$USER 1> $PBS_JOBID.out 2> $PBS_JOBID.err

You could clean this code up even further by defining your own working variable. For example:
#!/bin/bash -login

SCRATCH=/mnt/scratch/myprogram_scratch_space
mkdir -p $SCRATCH/$USER

myprogram -workingdir $SCRATCH/$USER 1> $PBS_JOBID.out 2> $PBS_JOBID.err

This same technique can also be used if your code is taking advantage of /mnt/local disk space on the compute nodes.
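As a sketch of that local-disk pattern: stage files into the per-job local directory, run there, and copy results back before the job ends. Here a simple tr command stands in for a real program, and the /tmp fallback is only so the snippet also runs outside a PBS job:

```shell
#!/bin/bash -login
# Per-job local scratch: /mnt/local/$PBS_JOBID exists only while the job runs.
WORK=${PBS_JOBID:+/mnt/local/$PBS_JOBID}
WORK=${WORK:-/tmp/demo.$$}                           # fallback outside PBS
mkdir -p "$WORK"

echo "input data" > "$WORK/data.in"                  # stand-in for staging real input
tr 'a-z' 'A-Z' < "$WORK/data.in" > "$WORK/data.out"  # stand-in for myprogram

# Copy results back before the job ends and the local directory is deleted.
cp "$WORK/data.out" .
```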

Example 4: Running the same code using different inputs

It is common for users to want to run copies of their jobs over different input files. Consider an example where you have eight input files, data1.in through data8.in.

And you have a program that takes each of these inputs and generates an output file using the following command:

#!/bin/bash -login
myprogram < data1.in 1> data1.out 2> data1.err

Making a new copy of the script and then submitting one for every input data file is time consuming. An alternative is to make a job array using the -t option in your submission script. The -t option allows many copies of the same script to be queued all at once. You can use the PBS_ARRAYID variable to differentiate between the different jobs in the array.
#!/bin/bash -login
#PBS -t 1-8
myprogram < data${PBS_ARRAYID}.in 1> data${PBS_ARRAYID}.out 2> data${PBS_ARRAYID}.err
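If your input files are not numbered consecutively, a common variation is to list them in a text file and let each array task pick out its own line with sed. A sketch, with hypothetical file names (the :-1 fallback only matters when run outside a PBS array job):

```shell
#!/bin/bash -login
#PBS -t 1-3
# inputs.txt holds one input file name per line (hypothetical names).
printf 'alpha.in\nbeta.in\ngamma.in\n' > inputs.txt

# sed -n "Np" prints only line N, so array task N receives the Nth file name.
ID=${PBS_ARRAYID:-1}
INPUT=$(sed -n "${ID}p" inputs.txt)
echo "task $ID would process $INPUT"
# myprogram < "$INPUT" 1> "$INPUT.out" 2> "$INPUT.err"
```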

Other variables

The env command lists all of the environment variables currently set on your system. The following job script will show all of the variables available within a job (by selecting only those with PBS in the name):
#!/bin/bash -login
#PBS -l nodes=1:ppn=1,walltime=00:00:10
env | grep PBS

Submitting the above job script using qsub prints the PBS-related variables set within the job. Some of these variables may or may not be useful to you.
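It can also be handy to keep that listing with the rest of a job's output, so you can check later exactly which environment the job ran with. A minimal sketch (the "interactive" fallback is only for runs outside PBS):

```shell
#!/bin/bash -login
# Record every PBS variable into a job-specific file for later reference.
JOBID=${PBS_JOBID:-interactive}
env | grep PBS > "env.$JOBID"
echo "environment saved to env.$JOBID"
```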


More information about these variables can be found in the PBS documentation (for example, in the qsub manual page).
