Why use a container?

Singularity allows users to run software inside containers. A popular container system is Docker, which is interoperable with Singularity.

A Linux container provides an environment that differs from the Linux host server it runs on. For example, you can run a different Linux distribution (e.g., Ubuntu on our CentOS system). One advantage of containers is that if your software requires a newer version of system libraries (e.g., glibc) than our operating system provides, you can still run that software in a container. The main reason for using containers, however, is that all of a program's dependencies come pre-installed in the image.

Changes from the old CentOS 6 system

First, 'module load singularity' is no longer required on the CentOS 7 system. Singularity is installed system-wide, so commands such as 'singularity shell' and 'singularity exec' can be invoked directly from the command line.

Second, much of the software that used to require singularity images can be installed without singularity now that we have upgraded to CentOS 7. For example, you may use RStudio without a singularity container. Use "module spider" to see if the software you need is installed (see more on module loading commands).
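For example, to check whether a package is available as a module (the exact module name on our system may differ from what is shown here):

module spider RStudio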

General information of singularity on the HPCC

The version of singularity currently installed is 3.4.1. The official documentation for this version is at https://www.sylabs.io/guides/3.4/user-guide/index.html
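You can confirm the installed version from any command line:

singularity --version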

In general, a singularity image file contains all the software needed to run a program, and you use the command "singularity exec <imgfile> <command>" to run your command inside that image. For example, if you have a special version of R in an image, you can type the following command:

singularity exec r-special.simg Rscript myprogram.R

Running a docker container

Many programs are available as pre-built docker containers, and many of those are hosted on the docker hub (https://hub.docker.com).
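You can also pull a docker image once and convert it to a local singularity image file, which avoids re-downloading the image layers every time you run. The image name below is just an example; in singularity 3.x this produces a file named ubuntu_latest.sif in the current directory:

singularity pull docker://ubuntu:latest
singularity exec ubuntu_latest.sif uname -a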

For details about running a docker container with singularity, see https://www.sylabs.io/guides/3.4/user-guide/singularity_and_docker.html. Here is a quick example of running a command in Ubuntu linux even though we use CentOS Linux:

singularity shell docker://ubuntu:latest # log in to Ubuntu; type 'exit' to log out
singularity exec docker://ubuntu:latest uname -a # show details of the Linux version

Building containers

Building your own containers requires administrative access (i.e., root privileges), so you cannot do this on the HPCC. However, you may build images on your own machine (e.g., your laptop) following the singularity documentation, then transfer the image file to the HPCC and use it here.
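As a minimal sketch of what a build looks like on a machine where you have root access (the file names and the package installed below are only illustrative), first write a definition file, e.g. example.def:

Bootstrap: docker
From: ubuntu:latest

%post
    apt-get update && apt-get install -y python3

Then build the image on your own machine (not on the HPCC) and copy the resulting example.sif to the HPCC:

sudo singularity build example.sif example.def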

Submitting a singularity job to our cluster

In general, running singularity commands in a SLURM script is the same as running any other kind of program. Two typical situations are:

(1) The program you run inside the container executes on a single node. In this case, put your singularity command (singularity exec <imgfile> <command>) right after all the sbatch directive lines. If the program uses multiple threads/cores on a node, say 8, request 8 cores with `#SBATCH --cpus-per-task=8`, as you would for any regular program (see the sketch after this list).

(2) You are running an MPI program within your container. In this case, you must use `srun -n $SLURM_NTASKS` before the `singularity` command to launch the processes on the cluster nodes. See the template script at the end of this section.
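For case (1), a minimal single-node sketch (the image file name xxx.sif and the command are placeholders) might look like this:

#!/bin/bash
#SBATCH --job-name=singularity-single-node
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH --mem-per-cpu=1G
#SBATCH --time=30

cd <directory containing the singularity image file (.sif)>
singularity exec xxx.sif <command>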

SLURM script for the MPI case (2)
#!/bin/bash

# Job name:
#SBATCH --job-name=singularity-test
#
# Number of MPI tasks needed for use case:
#SBATCH --ntasks=18
#
# Processors per task:
#SBATCH --cpus-per-task=1
#
# Memory per CPU
#SBATCH --mem-per-cpu=1G
#
# Wall clock limit (in minutes):
#SBATCH --time=30
#
# Standard out and error:
#SBATCH --output=%x-%j.SLURMout

cd <directory containing the singularity image file (.sif)>
srun -n $SLURM_NTASKS singularity exec xxx.sif <commands>
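
Assuming the script above is saved as singularity-test.sb (the file name is arbitrary), submit it to the cluster with:

sbatch singularity-test.sb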
