Singularity on Odyssey

Introduction

Containerization of workloads has become popular, particularly using Docker. However, Docker is not suitable for HPC applications for security reasons. There are a few alternatives for HPC containers, with Singularity being the one that covers the widest range of use cases. Singularity has been deployed on the Odyssey cluster and can also import Docker containers.

This page provides information on how to use Singularity on the Odyssey cluster. Singularity enables users to have full control of their operating system environment. This allows a non-privileged user to "swap out" the Linux operating system and environment on the host machine for a Linux OS and computing environment that they can control. For instance, if the host system runs CentOS Linux but your application requires Ubuntu Linux with a specific software stack, you can create an Ubuntu image, install your software into that image, copy the created image to Odyssey, and run your application on that host in its native Ubuntu environment.
Singularity leverages the resources of the host system, such as high-speed interconnect (e.g., InfiniBand), high-performance parallel file systems (e.g., Lustre /n/regal and /n/holylfs filesystems), GPUs, and other resources (e.g., licensed Intel compilers).

Note for Windows and MacOS: Singularity only supports Linux containers. You cannot create images that use Windows or MacOS (this is a restriction of the containerization model rather than Singularity).

Why Singularity?

There are some important differences between Docker and Singularity:

  • Docker and Singularity have their own container formats.
  • Docker containers may be imported to run via Singularity.
  • Docker containers require root privileges for full functionality, which is not suitable for a shared HPC environment.
  • Singularity allows working with containers as a regular user.

Singularity on Odyssey

Singularity is available only on the compute nodes of the Odyssey cluster. Therefore, to use it you need to either start an interactive job or submit a batch job to one of the available SLURM queues.

The examples below illustrate using Singularity interactively from a bash shell on a compute node.

[user@rclogin15 ~]$ srun -p test -n 1 -t 00-01:00 --pty --mem=4000 bash
[user@holyseas02 ~]$

Check Singularity version:

[user@holyseas02 ~]$ which singularity
/bin/singularity
[user@holyseas02 ~]$ singularity --version
2.5.1-dist

The most up-to-date help on Singularity comes from the command itself.

[user@holyseas02 ~]$ singularity --help
USAGE: singularity [global options...] <command> [command options...] ...

GLOBAL OPTIONS:
    -d|--debug    Print debugging information
    -h|--help     Display usage summary
    -s|--silent   Only print errors
    -q|--quiet    Suppress all normal output
       --version  Show application version
    -v|--verbose  Increase verbosity +1
    -x|--sh-debug Print shell wrapper debugging information

GENERAL COMMANDS:
    help       Show additional help for a command or container                  
    selftest   Run some self tests for singularity install                      

CONTAINER USAGE COMMANDS:
    exec       Execute a command within container                               
    run        Launch a runscript within container                              
    shell      Run a Bourne shell within container                              
    test       Launch a testscript within container                             

CONTAINER MANAGEMENT COMMANDS:
    apps       List available apps within a container                           
    bootstrap  *Deprecated* use build instead                                   
    build      Build a new Singularity container                                
    check      Perform container lint checks                                    
    inspect    Display container's metadata                                     
    mount      Mount a Singularity container image                              
    pull       Pull a Singularity/Docker container to $PWD                      

COMMAND GROUPS:
    image      Container image command group                                    
    instance   Persistent instance command group                                


CONTAINER USAGE OPTIONS:
    see singularity help <command>

For any additional help or support visit the Singularity
website: http://singularity.lbl.gov/

Getting existing images onto Odyssey

Singularity uses container images, which you can transfer to Odyssey with scp or rsync as you would any other file. See Copying Data to & from Odyssey using SCP or SFTP for more information.

Note: For larger Singularity images, please use the available scratch filesystems, such as /n/regal/my_lab/username and /n/holylfs/LABS/my_lab/username.
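
For example, you could copy an image from your local machine to a scratch directory on Odyssey with rsync; the hostname and paths below are placeholders:

[user@laptop ~]$ rsync -avz hello-world.simg user@odyssey.example.edu:/n/holylfs/LABS/my_lab/username/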

You can also use the pull or build commands to download pre-built images from external resources, such as Singularity Hub or Docker Hub. For instance, you can download a native Singularity image with its default name from Singularity Hub with:

[user@holyseas02 ~]$ singularity pull shub://vsoch/hello-world
Progress |===================================| 100.0% 
Done. Container is at: /n/holylfs/LABS/my_lab/user/vsoch-hello-world-master-latest.simg

You can also pull the image with a customized name:

[user@holyseas02 ~]$ singularity pull --name hello.simg shub://vsoch/hello-world
Progress |===================================| 100.0% 
Done. Container is at: /n/holylfs/LABS/my_lab/user/hello.simg

Similarly, you can pull images from Docker Hub:

[user@holyseas02 ~]$ singularity pull docker://godlovedc/lolcow
WARNING: pull for Docker Hub is not guaranteed to produce the
WARNING: same image on repeated pull. Use Singularity Registry
WARNING: (shub://) to pull exactly equivalent images.
Docker image path: index.docker.io/godlovedc/lolcow:latest
Cache folder set to /n/homeXX/user/.singularity/docker
[6/6] |===================================| 100.0% 
Importing: base Singularity environment
Exploding layer: sha256:9fb6c798fa41e509b58bccc5c29654c3ff4648b608f5daa67c1aab6a7d02c118.tar.gz
...
Building Singularity image...
Singularity container built: ./lolcow.simg
Cleaning up...
Done. Container is at: ./lolcow.simg

See official Singularity documentation for more information.

Working with images

When working with images you can either start an interactive session or submit a Singularity job to the available queues. For these examples, we will use the hello-world.simg image in an interactive bash shell.

[user@rclogin15 ~]$ srun -p test -n 1 -t 00-01:00 --pty --mem=4000 bash
[user@holyseas02 ~]$ singularity pull --name hello-world.simg shub://vsoch/hello-world
Progress |===================================| 100.0%
Done. Container is at: /n/holylfs/LABS/my_lab/user/hello-world.simg

Shell

With the shell command, you can start a new shell within the container image and interact with it as if it were a small virtual machine.

[user@holyseas02 ~]$ singularity shell hello-world.simg 
Singularity: Invoking an interactive shell within container...

Singularity hello-world.simg:~/holylfs/pgk/SINGULARITY/vol2> pwd
/n/home06/pkrastev/holylfs/pgk/SINGULARITY/vol2
Singularity hello-world.simg:~/holylfs/pgk/SINGULARITY/vol2> ls
funny.simg  gcc-7.2.0.simg  hello-world.simg  hello.simg  lolcow.simg  ubuntu.simg  vsoch-hello-world-master-latest.simg
Singularity hello-world.simg:~/holylfs/pgk/SINGULARITY/vol2> id
uid=56139(pkrastev) gid=40273(rc_admin) groups=40273(rc_admin),10006(econh11),34539(fas_it),34540(cluster_users),402119(solexa_writers),402160(VPN_HELPMAN),402161(RT_Users),402854(wpdocs_users),403083(owncloud),403266(file-isi_microsoft-full-dlg),403284(gitlabint_users),403331(rc_class)
Singularity hello-world.simg:~/holylfs/pgk/SINGULARITY/vol2> 

Commands within a container

You can use the exec command to execute specific commands within the container. For instance, the command below displays information about the native Linux OS of the image:

[user@holyseas02 ~]$ singularity exec hello-world.simg cat /etc/os-release
NAME="Ubuntu"
VERSION="14.04.5 LTS, Trusty Tahr"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 14.04.5 LTS"
VERSION_ID="14.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"

Running containers

Singularity images can contain run-scripts that perform specific actions when the container is run. A run-script is triggered either by using the run command, or by calling the container as if it were an executable, i.e.,

[user@holyseas02 ~]$ singularity run hello-world.simg
RaawwWWWWWRRRR!!

or

[user@holyseas02 ~]$ ./hello-world.simg 
RaawwWWWWWRRRR!!

Sometimes you may have a container with several apps, each with its own set of run-scripts. You can use the apps command to list the available apps within the container. For instance, if you have an image named my_image.simg which has N apps (app_1, app_2,..., app_N) you can do:

[user@holyseas02 ~]$ singularity apps my_image.simg
app_1
app_2
...
app_N

You can run a particular app with:

[user@holyseas02 ~]$ singularity run --app app_2 my_image.simg

GPU example

To access the NVIDIA GPU driver from inside a Singularity container, you need to use the --nv option when executing the container.

[user@rclogin15 ~]$ srun -p gpu --gres=gpu:1 --mem 1000 -n 4 --pty -t 600 /bin/bash 
[user@supermicgpu01 ~]$ singularity pull --name hello-world.simg shub://vsoch/hello-world 
[user@supermicgpu01 ~]$ singularity exec --nv hello-world.simg /bin/bash
[user@supermicgpu01 ~]$ nvidia-smi
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 396.26                 Driver Version: 396.26                     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla K20Xm         Off  | 00000000:88:00.0 Off |                    0 |
| N/A   37C    P0    61W / 235W |      0MiB /  5700MiB |     65%      Default |
+-------------------------------+----------------------+----------------------+

To verify that you have access to the requested GPUs, run nvidia-smi inside the container, as shown above.

Accessing files from a container

Files and directories on Odyssey are accessible from within the container. By default, directories under /n, $HOME, $PWD, and /tmp are available at runtime.
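
For instance, you can confirm that your home directory is visible from inside the container without any extra options (output not shown):

[user@holyseas02 ~]$ singularity exec hello-world.simg ls $HOME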

You can specify additional directories to bind mount into your container with the --bind option. For instance, in the below example the /scratch directory on the host system is bind mounted to the /mnt directory inside the container:

[user@holyseas02 ~]$ echo 'Hello from inside the container!' > /scratch/hello.dat
[user@holyseas02 ~]$ singularity exec --bind /scratch:/mnt hello-world.simg cat /mnt/hello.dat
Hello from inside the container!

Singularity containers as SLURM jobs

You can also use Singularity containers within a non-interactive batch script as you would any other command. If your image contains a run-script, you can use singularity run to execute it in the job. You can also use singularity exec to execute arbitrary commands (or scripts) within the image. Below is an example batch-job submission script that uses hello-world.simg to print information about the native OS of the image.

#!/bin/bash
#SBATCH -J singularity_test
#SBATCH -o singularity_test.out
#SBATCH -e singularity_test.err
#SBATCH -p shared
#SBATCH -t 0-00:30
#SBATCH -N 1
#SBATCH -c 1
#SBATCH --mem=4000

# Singularity command line options
singularity exec hello-world.simg cat /etc/os-release

If the above batch-job script is named singularity.sbatch, for instance, the job is submitted as usual with sbatch:

[user@rclogin15 ~]$ sbatch singularity.sbatch

Upon job completion, the standard output is located in the file singularity_test.out.

[user@rclogin15 ~]$ cat singularity_test.out 
NAME="Ubuntu"
VERSION="14.04.5 LTS, Trusty Tahr"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 14.04.5 LTS"
VERSION_ID="14.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
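
The same pattern works for GPU jobs. Below is a sketch of a batch script that combines the GPU example above with SLURM; the job and output file names are illustrative:

#!/bin/bash
#SBATCH -J singularity_gpu_test
#SBATCH -o singularity_gpu_test.out
#SBATCH -e singularity_gpu_test.err
#SBATCH -p gpu
#SBATCH --gres=gpu:1
#SBATCH -t 0-00:30
#SBATCH -N 1
#SBATCH -c 1
#SBATCH --mem=4000

# Run nvidia-smi inside the container (--nv exposes the host GPU driver)
singularity exec --nv hello-world.simg nvidia-smi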

Building Singularity images

To build Singularity containers, you need root access on the build system. Therefore, you cannot build a Singularity container on Odyssey; instead, you will need access to a Linux machine (physical or virtual) on which you have root privileges.

In addition to your own Linux environment, you will also need a definition file to build a Singularity container from scratch. You can find some simple definition files for a variety of Linux distributions in the /example directory of the source code. Detailed documentation about building Singularity container images is available at the Singularity website.
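
As an illustration (not an official example from this page), a minimal definition file for Singularity 2.x might look like the following; the base image, packages, and file names are placeholders:

Bootstrap: docker
From: ubuntu:16.04

%post
    # Commands run inside the container at build time
    apt-get -y update
    apt-get -y install python

%runscript
    # Executed when the container is run
    echo "Hello from inside the container!"

On a Linux machine where you have root privileges, you would then build the image with sudo singularity build my_image.simg my_image.def and copy the resulting my_image.simg to Odyssey as described above.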

CC BY-NC-SA 4.0 This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Permissions beyond the scope of this license may be available at Attribution.