Why use a Cluster?
Overview
Teaching: 15 min
Exercises: 5 min
Questions
Why would I be interested in High Performance Computing (HPC)?
What can I expect to learn from this course?
Objectives
Describe what an HPC system is
Identify how an HPC system could benefit you.
Frequently, research problems that use computing can outgrow the capabilities of the desktop or laptop computer where they started:
- A statistics student wants to cross-validate a model. This involves running the model 1000 times – but each run takes an hour. Running the model on a laptop will take over a month! In this research problem, final results are calculated after all 1000 models have run, but typically only one model is run at a time (in serial) on the laptop. Since each of the 1000 runs is independent of all others, and given enough computers, it’s theoretically possible to run them all at once (in parallel).
- A genomics researcher has been using small datasets of sequence data, but soon will be receiving a new type of sequencing data that is 10 times as large. It’s already challenging to open the datasets on a computer – analyzing these larger datasets will probably crash it. In this research problem, the calculations required might be impossible to parallelize, but a computer with more memory would be required to analyze the much larger future data set.
- An engineer is using a fluid dynamics package that has an option to run in parallel. So far, this option was not used on a desktop. In going from 2D to 3D simulations, the simulation time has more than tripled. It might be useful to take advantage of that option or feature. In this research problem, the calculations in each region of the simulation are largely independent of calculations in other regions of the simulation. It’s possible to run each region’s calculations simultaneously (in parallel), communicate selected results to adjacent regions as needed, and repeat the calculations to converge on a final set of results. In moving from a 2D to a 3D model, both the amount of data and the amount of calculations increases greatly, and it’s theoretically possible to distribute the calculations across multiple computers communicating over a shared network.
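The "run them all at once" idea from the first example can be sketched with ordinary shell tools. This is only a local illustration of independent tasks running in parallel, not an HPC workflow: `sleep 1` stands in for a hypothetical hour-long model run, and -P 8 launches eight runs at a time.

```shell
# Serial: 8 "model runs", one after another, take about 8 seconds:
#   for i in $(seq 8); do sleep 1; done
# Parallel: the same 8 independent runs, 8 at a time, take about 1 second.
# 'sleep 1' is a stand-in for a real model run; -P controls concurrency.
seq 8 | xargs -P8 -I{} sh -c 'sleep 1; echo "run {} done"'
```

An HPC scheduler generalises this idea across many machines instead of the cores of one laptop.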
In all these cases, access to more (and larger) computers is needed. Those computers should be usable at the same time, solving many researchers’ problems in parallel.
Jargon Busting Presentation
Open the HPC Jargon Buster in a new tab. To present the content, press C to open a clone in a separate window, then press P to toggle presentation mode.
I’ve Never Used a Server, Have I?
Take a minute and think about which of your daily interactions with a computer may require a remote server or even cluster to provide you with results.
Some Ideas
- Checking email: your computer (possibly in your pocket) contacts a remote machine, authenticates, and downloads a list of new messages; it also uploads changes to message status, such as whether you read, marked as junk, or deleted the message. Since yours is not the only account, the mail server is probably one of many in a data center.
- Searching for a phrase online involves comparing your search term against a massive database of all known sites, looking for matches. This “query” operation can be straightforward, but building that database is a monumental task! Servers are involved at every step.
- Searching for directions on a mapping website involves connecting your (A) starting and (B) end points by traversing a graph in search of the “shortest” path by distance, time, expense, or another metric. Converting a map into the right form is relatively simple, but calculating all the possible routes between A and B is expensive.
Checking email could be serial: your machine connects to one server and exchanges data. Searching by querying the database for your search term (or endpoints) could also be serial, in that one machine receives your query and returns the result. However, assembling and storing the full database is far beyond the capability of any one machine. Therefore, these functions are served in parallel by a large, “hyperscale” collection of servers working together.
Key Points
High Performance Computing (HPC) typically involves connecting to very large computing systems elsewhere in the world.
These other systems can be used to do work that would either be impossible or much slower on smaller systems.
HPC resources are shared by multiple users.
The standard method of interacting with such systems is via a command line interface.
Connecting to a remote HPC system
Overview
Teaching: 25 min
Exercises: 10 min
Questions
How do I log in to a remote HPC system?
Objectives
Configure secure access to a remote HPC system.
Connect to a remote HPC system.
Secure Connections
The first step in using a cluster is to establish a connection from our laptop to the cluster. When we are sitting at a computer (or standing, or holding it in our hands or on our wrists), we have come to expect a visual display with icons, widgets, and perhaps some windows or applications: a graphical user interface, or GUI. Since computer clusters are remote resources that we connect to over slow or intermittent interfaces (WiFi and VPNs especially), it is more practical to use a command-line interface, or CLI, to send commands as plain-text. If a command returns output, it is printed as plain text as well. The commands we run today will not open a window to show graphical results.
If you have ever opened the Windows Command Prompt or macOS Terminal, you have seen a CLI. If you have already taken The Carpentries’ courses on the UNIX Shell or Version Control, you have used the CLI on your local machine extensively. The only leap to be made here is to open a CLI on a remote machine, while taking some precautions so that other folks on the network can’t see (or change) the commands you’re running or the results the remote machine sends back. We will use the Secure SHell protocol (or SSH) to open an encrypted network connection between two machines, allowing you to send & receive text and data without having to worry about prying eyes.
SSH clients are usually command-line tools, where you provide the remote machine address as the only required argument. If your username on the remote system differs from what you use locally, you must provide that as well. If your SSH client has a graphical front-end, such as PuTTY or MobaXterm, you will set these arguments before clicking "connect." From the terminal, you'll write something like ssh userName@hostname, where the argument is just like an email address: the "@" symbol is used to separate the personal ID from the address of the remote machine.
When logging in to a laptop, tablet, or other personal device, a username, password, or pattern are normally required to prevent unauthorized access. In these situations, the likelihood of somebody else intercepting your password is low, since logging your keystrokes requires a malicious exploit or physical access. For systems like cirrus-login1 running an SSH server, anybody on the network can log in, or try to. Since usernames are often public or easy to guess, your password is often the weakest link in the security chain. Many clusters therefore forbid password-based login, requiring instead that you generate and configure a public-private key pair with a much stronger password. Even if your cluster does not require it, the next section will guide you through the use of SSH keys and an SSH agent to both strengthen your security and make it more convenient to log in to remote systems.
Log In to the Cluster
Go ahead and open your terminal or graphical SSH client, then log in to the
cluster. Replace yourUsername
with your username or the one
supplied by the instructors.
[user@laptop ~]$ ssh yourUsername@login.cirrus.ac.uk
You will be prompted first for your SSH key passphrase and then for your Cirrus login password. Watch out: the characters you type after the password prompt are not displayed on the screen. Normal output will resume once you press Enter.
You may have noticed that the prompt changed when you logged into the remote system using the terminal. This change is important because it can help you distinguish on which system the commands you type will be run when you pass them into the terminal. This change is also a small complication that we will need to navigate throughout the workshop. Exactly what is displayed as the prompt (which conventionally ends in $) in the terminal when it is connected to the local system and the remote system will typically be different for every user. We still need to indicate which system we are entering commands on, so we will adopt the following convention:
- [user@laptop ~]$ when the command is to be entered on a terminal connected to your local computer
- [yourUsername@cirrus-login1 ~]$ when the command is to be entered on a terminal connected to the remote system
- $ when it really doesn't matter which system the terminal is connected to.
Creating an alias for quicker login
We can create an alias on our local machine to use as a shortcut to login to Cirrus.
Instead of typing ssh yourUsername@login.cirrus.ac.uk every time we want to log in, we can reduce it to a much shorter command, for example ssh cirrus.
Create the file ~/.ssh/config if it does not exist on your local machine. Add the following lines:
Host cirrus
Hostname login.cirrus.ac.uk
User yourUsername
IdentityFile ~/.ssh/mykey
You should now be able to connect to Cirrus from your local machine with the following shell command,
[user@laptop ~]$ ssh cirrus
Looking Around Your Remote Home
Many users are tempted to think of a high-performance computing installation as one giant, magical machine. Sometimes, people will assume that the computer they've logged onto is the entire computing cluster. So what's really happening? What computer have we logged on to? The name of the current computer we are logged onto can be checked with the hostname command. (You may also notice that the current hostname is also part of our prompt!)
[yourUsername@cirrus-login1 ~]$ hostname
cirrus-login1
So, we're definitely on the remote machine. Note that since there are two login nodes on Cirrus, the hostname command may also return cirrus-login2.
Next, let’s find out where we are by running pwd
to print the working directory.
[yourUsername@cirrus-login1 ~]$ pwd
/home/tc036/tc036/yourUsername
Great, we know where we are!
Let’s see what’s in our current directory. The system administrators may have configured your home directory with some helpful files, folders, and links (shortcuts) to space reserved for you on other filesystems. If they did not, your home directory may appear empty. To double-check, include hidden files in your directory listing:
[yourUsername@cirrus-login1 ~]$ ls -a
. .. .bash_history .cache .config .local .python_history .ssh
In the first column, . is a reference to the current directory and .. a reference to its parent (/home/tc036/tc036). You may or may not see the other files, or files like them: .bashrc is a shell configuration file, which you can edit with your preferences; and .ssh is a directory storing SSH keys and a record of authorized connections.
SSH Keys
SSH keys are an alternative method for authentication to obtain access to remote computing systems. They can also be used for authentication when transferring files or for accessing remote version control systems (such as GitHub).
During setup, you will have created a pair of SSH keys:
- a private key which you keep on your own computer, and
- a public key which can be placed on any remote system you will access.
Private keys are your secure digital passport
A private key that is visible to anyone but you should be considered compromised, and must be destroyed. This includes improper permissions on the directory where it (or a copy) is stored, transmission over any network that is not secure (encrypted), attachment to an unencrypted email, and even display of the key in your terminal window.
Protect this key as if it unlocks your front door. In many ways, it does.
Regardless of the software or operating system you use, please choose a strong password or passphrase to act as another layer of protection for your private SSH key.
Considerations for SSH Key Passwords
When prompted, enter a strong password that you will remember. There are two common approaches to this:
- Create a memorable passphrase with some punctuation and number-for-letter substitutions, 32 characters or longer. Street addresses work well; just be careful of social engineering or public records attacks.
- Use a password manager and its built-in password generator with all character classes, 25 characters or longer. KeePass and BitWarden are two good options.
- Nothing is less secure than a private key with no password. If you skipped password entry by accident, go back and generate a new key pair with a strong password.
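Generating such a key pair from the command line might look like the following sketch; the key type (ed25519) and the filename ~/.ssh/id_ed25519_cirrus are illustrative choices, and your site's documentation may recommend something specific.

```shell
# Generate an ed25519 key pair. ssh-keygen prompts for a passphrase
# (twice); pick a strong one as described above. -f sets where the key
# is written; -C attaches a comment that labels the key.
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_cirrus -C "yourUsername@cirrus"

# The fingerprint identifies the key without revealing the private part:
ssh-keygen -l -f ~/.ssh/id_ed25519_cirrus.pub
```

This produces the private key (no extension) and the shareable public key (.pub) discussed below.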
On your local machine, take a look in ~/.ssh (use ls ~/.ssh). You should see two new files:
- your private key (~/.ssh/id_rsa): do not share with anyone!
- the shareable public key (~/.ssh/id_rsa.pub): if a system administrator asks for a key, this is the one to send. It is also safe to upload to websites such as GitHub: it is meant to be seen.
The public key you uploaded to SAFE can be found in the .ssh folder:
[yourUsername@cirrus-login1 ~]$ ls .ssh/
authorized_keys id_rsa id_rsa.pub
[yourUsername@cirrus-login1 ~]$ cat .ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCk44JLYQ4DCAcalNNJqtLsZAVSUvkbSt0OdPYycqo/2hvgvrs+8HsSyys+V6gKBA2zVL7rnLpMprJx8aN8bJwFfIBxzBsGZ7HFyL5Gs1cz1olbbouzBkS10TJu/9SAN6XyG7BVxAQC75Kz91Vb3sYQmFZC6pUZw4fShUAUVbXCXKbcIS+RjR9iaUBiTmpRoYoc6bdMiGHFLuHz4scCfHCGpjNI6OSpIbF6L99GhftmwZxlb9TaId8SBnOkBzjsYSFui0x06rFFdy7rrqwsYx0XKMmLwDY7U21z1DVx1/SCWll704b5BO111N/89SyEr3O4QtqDP4FKkSCFFayelNlvmQB4+QDGdvJHs0YBYMQ372fskItIUNOp5q2ioCt88mD15JPsxtEAUqbXcfSoZZE5y1FLVngAT5sUDqK+kX9sxhIf3E16gQOcMG3AxMMmVHuSFcqfoCLgU1jcT2x9hacc8QlPX7LQPPm8SzYCeVr3MavnNP+JiA1vhxKMlKbRThc= yourUsername@cirrus-login1
There May Be a Better Way
Policies and practices for handling SSH keys vary between HPC clusters: follow any guidance provided by the cluster administrators or documentation. Other systems may not have an online portal for managing SSH keys, and you may need to upload your public key to the HPC system explicitly.
Key Points
An HPC system is a set of networked machines.
HPC systems typically provide login nodes and a set of worker nodes.
The resources found on independent (worker) nodes can vary in volume and type (amount of RAM, processor architecture, availability of network mounted filesystems, etc.).
Files saved on one node are available on all nodes.
Exploring Remote Resources
Overview
Teaching: 25 min
Exercises: 10 min
Questions
How does my local computer compare to the remote systems?
How does the login node compare to the compute nodes?
Are all compute nodes alike?
Objectives
Survey system resources using nproc, free, and the queuing system
Compare & contrast resources on the local machine, login node, and worker nodes
Learn about the various filesystems on the cluster using df
Find out who else is logged in
Assess the number of idle and occupied nodes
Look Around the Remote System
If you have not already connected to Cirrus, please do so now:
[user@laptop ~]$ ssh yourUsername@login.cirrus.ac.uk
Take a look at your home directory on the remote system:
[yourUsername@cirrus-login1 ~]$ ls
What’s different between your machine and the remote?
Open a second terminal window on your local computer and run the ls command (without logging in to Cirrus). What differences do you see?
Solution
You would likely see something more like this:
[user@laptop ~]$ ls
Applications Documents Library Music Public Desktop Downloads Movies Pictures
The remote computer’s home directory shares almost nothing in common with the local computer: they are completely separate systems!
Most high-performance computing systems run the Linux operating system, which is built around the UNIX Filesystem Hierarchy Standard. Instead of having a separate root for each hard drive or storage medium, all files and devices are anchored to the "root" directory, which is /:
[yourUsername@cirrus-login1 ~]$ ls /
beegfs bin boot data dev etc home home-archive lib lib64 media mnt opt proc root run sbin scratch srv sys tmp usr var work
The “/home/tc036/tc036” directory is the one where we generally want to keep all of our files. Other folders on a UNIX OS contain system files and change as you install new software or upgrade your OS.
Using HPC filesystems
On HPC systems, you have a number of places where you can store your files. These differ in both the amount of space allocated and whether or not they are backed up.
- Home – often a network filesystem; data stored here is available throughout the HPC system and is often backed up periodically. Files stored here are typically slower to access: the data is actually stored on another computer and is being transmitted and made available over the network!
- Scratch – typically faster than the networked Home directory, but not usually backed up, and should not be used for long term storage.
- Work – sometimes provided as an alternative to Scratch space, Work is a fast file system accessed over the network. Typically, this will have higher performance than your home directory, but lower performance than Scratch; it may not be backed up. It differs from Scratch space in that files in a work file system are not automatically deleted for you: you must manage the space yourself.
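A quick way to check which filesystem a directory lives on, and how much space is free there, is df with a path argument. The cluster paths in the comments below follow the Cirrus layout used in this lesson and are assumptions; substitute your site's equivalents.

```shell
# Report the filesystem backing a path, with human-readable sizes (-h)
# and the filesystem type (-T). On the cluster you might compare:
#   df -Th /home/tc036/tc036/yourUsername
#   df -Th /work/tc036/tc036/yourUsername
# Locally, your home directory works just as well:
df -Th "$HOME"
```

The "Type" column distinguishes local filesystems from networked ones, which is explored further below.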
Nodes
Individual computers that compose a cluster are typically called nodes (although you will also hear people call them servers, computers and machines). On a cluster, there are different types of nodes for different types of tasks. The node where you are right now is called the login node, head node, landing pad, or submit node. A login node serves as an access point to the cluster.
As a gateway, the login node should not be used for time-consuming or resource-intensive tasks. You should be alert to this, and check with your site’s operators or documentation for details of what is and isn’t allowed. It is well suited for uploading and downloading files, setting up software, and running tests. Generally speaking, in these lessons, we will avoid running jobs on the login node.
Who else is logged in to the login node?
[yourUsername@cirrus-login1 ~]$ who
This may show only your user ID, but there are likely several other people (including fellow learners) connected right now.
Dedicated Transfer Nodes
If you want to transfer larger amounts of data to or from the cluster, some systems offer dedicated nodes for data transfers only. The motivation for this lies in the fact that larger data transfers should not obstruct operation of the login node for anybody else. Check with your cluster's documentation or its support team if such a transfer node is available. As a rule of thumb, consider all transfers of a volume larger than 500 MB to 1 GB as large. But these numbers change depending on, e.g., your own network connection, that of your cluster, and other factors.
Cirrus does not have dedicated transfer nodes but other systems may differ.
The real work on a cluster gets done by the compute (or worker) nodes. Compute nodes come in many shapes and sizes, but generally are dedicated to long or hard tasks that require a lot of computational resources.
All interaction with the compute nodes is handled by a specialized piece of software called a scheduler (the scheduler used in this lesson is called Slurm). We’ll learn more about how to use the scheduler to submit jobs next, but for now, it can also tell us more information about the compute nodes.
For example, we can view all of the compute nodes by running the command sinfo.
[yourUsername@cirrus-login1 ~]$ sinfo
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
standard up infinite 4 drain r2i1n[5,14,23,32]
standard up infinite 1 mix r1i0n3
standard up infinite 19 resv r1i7n22,r2i0n[1-3,5-8,10-12,14-18],r2i2n[1,3,9]
standard up infinite 309 alloc r1i0n[0-2,4-35],r1i1n[0-35],r1i2n[0-35],r1i3n[0-16,18-35],r1i4n[0-18,23,26-30,32-35],r1i5n[0-35],r1i6n[1,3-5,8,12-35],r1i7n[1,3-6,9-12,18-21,23-24,27-28,30-33],r2i0n[0,4,9,13,20,22,24-29,32-35],r2i1n[1-4,6-12,17,19-22,24,27,29-31,33-35],r2i2n[0,2,10-11,18-21,27-30]
standard up infinite 35 idle r1i3n17,r1i4n[19-22,24-25,31],r1i6n[0,2,6-7,9-11],r1i7n[0,2,13-15,29],r2i0n[19,21,23,30-31],r2i1n[0,13,15-16,18,25-26,28],r2i2n12
highmem up infinite 1 drain highmem01
gpu up infinite 2 resv r2i3n[0-1]
gpu up infinite 5 mix r2i5n[3,6],r2i6n[1,7-8]
gpu up infinite 21 alloc r2i4n[0-8],r2i6n[0,2-3],r2i7n[0-8]
gpu up infinite 10 idle r2i5n[0-2,4-5,7-8],r2i6n[4-6]
A lot of the nodes are busy running work for other users: we are not alone here!
There are also specialized machines used for managing disk storage, user authentication, and other infrastructure-related tasks. Although we do not typically log on to or interact with these machines directly, they enable a number of key features like ensuring our user account and files are available throughout the HPC system.
What’s in a Node?
All of the nodes in an HPC system have the same components as your own laptop or desktop: CPUs (sometimes also called processors or cores), memory (or RAM), and disk space. CPUs are a computer’s tool for actually running programs and calculations. Information about a current task is stored in the computer’s memory. Disk refers to all storage that can be accessed like a file system. This is generally storage that can hold data permanently, i.e. data is still there even if the computer has been restarted. While this storage can be local (a hard drive installed inside of it), it is more common for nodes to connect to a shared, remote fileserver or cluster of servers.
Explore Your Computer
Try to find out the number of CPUs and amount of memory available on your personal computer.
Note that, if you're logged in to the remote computer cluster, you need to log out first. To do so, type Ctrl+d or exit:
[yourUsername@cirrus-login1 ~]$ exit
[user@laptop ~]$
Solution
There are several ways to do this. Most operating systems have a graphical system monitor, like the Windows Task Manager. More detailed information can be found on the command line:
- Run system utilities
[user@laptop ~]$ nproc --all
[user@laptop ~]$ free -m
- Equivalent OSX command
[user@laptop ~]$ vm_stat
- Read from /proc
[user@laptop ~]$ cat /proc/cpuinfo
[user@laptop ~]$ cat /proc/meminfo
- Run system monitor
[user@laptop ~]$ htop
Explore the Login Node
Now compare the resources of your computer with those of the login node.
Solution
[user@laptop ~]$ ssh yourUsername@login.cirrus.ac.uk
[yourUsername@cirrus-login1 ~]$ nproc --all
[yourUsername@cirrus-login1 ~]$ free -m
You can get more information about the processors using lscpu, and a lot of detail about the memory by reading the file /proc/meminfo:
[yourUsername@cirrus-login1 ~]$ less /proc/meminfo
You can also explore the available filesystems using df to show disk free space. The -h flag renders the sizes in a human-friendly format, i.e., GB instead of B. The type flag -T shows what kind of filesystem each resource is.
[yourUsername@cirrus-login1 ~]$ df -Th
Different results from df
- The local filesystems (ext, tmp, xfs, zfs) will depend on whether you’re on the same login node (or compute node, later on).
- Networked filesystems (beegfs, cifs, gpfs, nfs, pvfs) will be similar – but may include yourUsername, depending on how it is mounted.
Shared Filesystems
This is an important point to remember: files saved on one node (computer) are often available everywhere on the cluster!
Explore a Worker Node
Finally, let’s look at the resources available on the worker nodes where your jobs will actually run. Try running this command to see the name, CPUs and memory available on the worker nodes:
[yourUsername@cirrus-login1 ~]$ scontrol show node r1i0n0
Compare Your Computer, the Login Node and the Compute Node
Compare your laptop’s number of processors and memory with the numbers you see on the cluster login node and compute node. What implications do you think the differences might have on running your research work on the different systems and nodes?
Solution
Compute nodes are usually built with processors that have higher core-counts than the login node or personal computers in order to support highly parallel tasks. Compute nodes usually also have substantially more memory (RAM) installed than a personal computer. More cores tends to help jobs that depend on some work that is easy to perform in parallel, and more, faster memory is key for large or complex numerical tasks.
Differences Between Nodes
Many HPC clusters have a variety of nodes optimized for particular workloads. Some nodes may have larger amounts of memory, or specialized resources such as Graphics Processing Units (GPUs or "video cards").
With all of this in mind, we will now cover how to talk to the cluster’s scheduler, and use it to start running our scripts and programs!
Key Points
An HPC system is a set of networked machines.
HPC systems typically provide login nodes and a set of compute nodes.
The resources found on independent (worker) nodes can vary in volume and type (amount of RAM, processor architecture, availability of network mounted filesystems, etc.).
Files saved on shared storage are available on all nodes.
The login node is a shared machine: be considerate of other users.
Scheduler Fundamentals
Overview
Teaching: 45 min
Exercises: 30 min
Questions
What is a scheduler and why does a cluster need one?
How do I launch a program to run on a compute node in the cluster?
How do I capture the output of a program that is run on a node in the cluster?
Objectives
Submit a simple script to the cluster.
Monitor the execution of jobs using command line tools.
Inspect the output and error files of your jobs.
Find the right place to put large datasets on the cluster.
Job Scheduler
An HPC system might have thousands of nodes and thousands of users. How do we decide who gets what and when? How do we ensure that a task is run with the resources it needs? This job is handled by a special piece of software called the scheduler. On an HPC system, the scheduler manages which jobs run where and when.
The following illustration compares the tasks of a job scheduler to a waiter in a restaurant. If you can relate to an instance where you had to wait for a while in a queue to get in to a popular restaurant, then you may now understand why your jobs sometimes do not start instantly, as they would on your laptop.
The scheduler used in this lesson is Slurm. Although Slurm is not used everywhere, running jobs is quite similar regardless of what software is being used. The exact syntax might change, but the concepts remain the same.
Running a Batch Job
The most basic use of the scheduler is to run a command non-interactively. Any command (or series of commands) that you want to run on the cluster is called a job, and the process of using a scheduler to run the job is called batch job submission.
In this case, the job we want to run is a shell script – essentially a text file containing a list of UNIX commands to be executed in a sequential manner. Our shell script will have three parts:
- On the very first line, add #!/bin/bash. The #! (pronounced "hash-bang" or "shebang") tells the computer what program is meant to process the contents of this file. In this case, we are telling it that the commands that follow are written for the command-line shell (where we've been doing everything so far).
- Anywhere below the first line, we'll add an echo command with a friendly greeting. When run, the shell script will print whatever comes after echo in the terminal. echo -n will print everything that follows, without ending the line by printing the new-line character.
- On the last line, we'll invoke the hostname command, which will print the name of the machine the script is run on.
[yourUsername@cirrus-login1 ~]$ nano example-job.sh
#!/bin/bash
echo -n "This script is running on "
hostname
Creating Our Test Job
Run the script. Does it execute on the cluster or just our login node?
Solution
[yourUsername@cirrus-login1 ~]$ bash example-job.sh
This script is running on cirrus-login1
This script ran on the login node, but we want to take advantage of the compute nodes: we need the scheduler to queue up example-job.sh to run on a compute node.
To submit this task to the scheduler, we use the sbatch command. This creates a job which will run the script when dispatched to a compute node which the queuing system has identified as being available to perform the work.
[yourUsername@cirrus-login1 ~]$ sbatch --partition=standard --qos=standard example-job.sh
sbatch: Your job has no time specification (--time=) and the default time is short. You can cancel your job with 'scancel <JOB_ID>' if you wish to resubmit.
sbatch: Warning: It appears your working directory may not be on one of the work filesystem. It is /mnt/cephfs/ceph01/site-home/home/tc036/tc036/nkg85-whpc. The home filesystem is not available from the compute nodes - please check that this is what you intended. You can cancel your job with 'scancel <JOBID>' if you wish to resubmit.
Submitted batch job 3934401
Ah! What went wrong here? Slurm is telling us that the file system we are currently on, /home, is not available on the compute nodes, and that we are getting the default, short runtime. We will deal with the runtime properly later, but we need to move to a different file system to submit the job and have it visible to the compute nodes. On Cirrus, this is the /work file system. The path is similar to home but with /work at the start. Let's move there now, copy our job script across, and resubmit:
[yourUsername@cirrus-login1 ~]$ cd /work/tc036/tc036/yourUsername
[yourUsername@cirrus-login1 ~]$ cp ~/example-job.sh .
[yourUsername@cirrus-login1 ~]$ sbatch --partition=standard --qos=standard --time=00:00:10 example-job.sh
Submitted batch job 3934430
That's better! And that's all we need to do to submit a job. Our work is done – now the scheduler takes over and tries to run the job for us. While the job is waiting to run, it goes into a list of jobs called the queue. To check on our job's status, we check the queue using the command squeue -u yourUsername.
[yourUsername@cirrus-login1 ~]$ squeue -u yourUsername
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
3934430 standard example-job.sh yourUsername R 0:00 1 r1i6n26
We can see all the details of our job, most importantly that it is in the R or RUNNING state. Sometimes our jobs might need to wait in a queue (PD or PENDING) or become terminated, for example due to an OUT_OF_MEMORY (OOM) error, a TIMEOUT (TO), or some other FAILED (F) condition.
Where's the Output?
On the login node, this script printed output to the terminal – but now, when squeue shows the job has finished, nothing was printed to the terminal.
Cluster job output is typically redirected to a file in the directory you launched it from. Use ls to find and cat to read the file.
On some HPC systems you may need to redirect the output explicitly in your job submission script. You can achieve this by setting the error (--error=<error_filename>) and output (--output=<output_filename>) filenames with #SBATCH in your job script. On Cirrus this is handled by default, with output and error files named according to the job submission ID.
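As a sketch, a job script that names its own output and error files could look like the following; the job name and filename patterns are illustrative choices, and %j is Slurm's placeholder for the job ID.

```shell
#!/bin/bash
#SBATCH --job-name=hello-output
#SBATCH --output=hello-%j.out   # standard output; %j becomes the job ID
#SBATCH --error=hello-%j.err    # standard error, kept separate

echo -n "This script is running on "
hostname
```

Submitted as job 12345, this would leave hello-12345.out and hello-12345.err in the submission directory.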
Customising a Job
The job we just ran used all of the scheduler’s default options. In a real-world scenario, that’s probably not what we want. The default options represent a reasonable minimum. Chances are, we will need more cores, more memory, more time, among other special considerations. To get access to these resources we must customize our job script.
Comments in UNIX shell scripts (denoted by #) are typically ignored, but there are exceptions. For instance, the special #! comment at the beginning of a script specifies which program should be used to run it (you’ll typically see #!/bin/bash). Schedulers like Slurm also have a special comment used to denote scheduler-specific options. Though these comments differ from scheduler to scheduler, Slurm’s special comment is #SBATCH. Anything following the #SBATCH comment is interpreted as an instruction to the scheduler.
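Because #SBATCH lines are ordinary comments to bash, the same script runs unchanged outside the scheduler. Here is a quick demonstration you can run anywhere, no scheduler needed (the filename demo-job.sh is just an example):

```shell
# Create a script containing a scheduler directive...
cat > demo-job.sh << 'EOF'
#!/bin/bash
#SBATCH --job-name=demo
echo "bash ran this line and skipped the #SBATCH comment"
EOF

# ...and run it with plain bash: the #SBATCH line is treated as a comment
bash demo-job.sh
```

Only a scheduler such as Slurm reads the #SBATCH lines; bash simply skips them.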
Let’s illustrate this by example. By default, a job’s name is the name of the script, but the --job-name option can be used to change it. Add an option to the script:
[yourUsername@cirrus-login1 ~]$ cat example-job.sh
#!/bin/bash
#SBATCH --job-name=hello-world
echo -n "This script is running on "
hostname
Submit the job and monitor its status:
[yourUsername@cirrus-login1 ~]$ sbatch --partition=standard --qos=standard --time=00:00:10 example-job.sh
[yourUsername@cirrus-login1 ~]$ squeue -u yourUsername
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
3934492 standard hello-world yourUsername R 0:00 1 r1i3n17
Fantastic, we’ve successfully changed the name of our job!
Resource Requests
What about more important changes, such as the number of cores and memory for our jobs? One thing that is absolutely critical when working on an HPC system is specifying the resources required to run a job. This allows the scheduler to find the right time and place to schedule our job. If you do not specify requirements (such as the amount of time you need), you will likely be stuck with your site’s default resources, which is probably not what you want.
As a minimum, on the Cirrus platform, all job submissions must specify the budget to charge the job to, the partition to use and the QoS to use, with the options:
- --account=<budgetID> your budget ID is typically your project code, for example tc036. You can see which budget codes you can charge to in SAFE.
- --partition=<partition> the partition specifies the set of nodes you want to run on. More information on available partitions is given in the Cirrus documentation.
- --qos=<qos> the QoS specifies the limits to apply to your job. Again, more information on available QoS options is given in the documentation.
Other common options that are used are:
- --time=<hh:mm:ss> the maximum walltime for your job, e.g. for a 6.5 hour walltime you would use --time=06:30:00.
- --job-name=<jobname> set a name for the job to help identify it in Slurm command output.
In addition, parallel jobs will need to specify how many nodes, parallel processes and threads they require.
- --exclusive to ensure that you have exclusive access to a compute node.
- --nodes=<nodes> the number of nodes to use for the job.
- --tasks-per-node=<processes per node> the number of parallel processes (e.g. MPI ranks) per node.
- --cpus-per-task=<threads per task> the number of threads per parallel process (e.g. the number of OpenMP threads per MPI task for hybrid MPI/OpenMP jobs). Note: you must also set the OMP_NUM_THREADS environment variable if using OpenMP in your job, and usually add the --cpu-bind=cores option to srun.
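As a sketch of how these parallel options fit together in a job script (the resource numbers are arbitrary, and ./my_program is a placeholder for your own executable):

```shell
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --tasks-per-node=4
#SBATCH --cpus-per-task=9

# For hybrid MPI/OpenMP jobs, OMP_NUM_THREADS should match --cpus-per-task.
# Slurm exports SLURM_CPUS_PER_TASK inside the job; fall back to 9 otherwise.
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-9}
echo "Threads per task: ${OMP_NUM_THREADS}"

# srun --cpu-bind=cores ./my_program   # placeholder launch line
```

The srun line is commented out here because ./my_program is hypothetical; in a real job it would launch 8 MPI ranks, each with 9 OpenMP threads.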
Note that just requesting these resources does not make your job run faster, nor does it necessarily mean that you will consume all of these resources. It only means that these are made available to you. Your job may end up using less memory, or less time, or fewer nodes than you have requested, and it will still run.
It’s best if your requests accurately reflect your job’s requirements. We’ll talk more about how to make sure that you’re using resources effectively in a later episode of this lesson.
Command line options or job script options?
All of the options we specify can be supplied on the command line (as we do here for --partition=standard and --qos=standard) or in the job script (as we have done for the job name above). These are interchangeable. It is often more convenient to put the options in the job script, as this avoids lots of typing at the command line.
Submitting Resource Requests
Modify our hostname script so that it runs for a minute, then submit a job for it on the cluster. You should also move all the options we have been specifying on the command line (e.g. --partition and --qos) into the script at this point.

Solution
[yourUsername@cirrus-login1 ~]$ cat example-job.sh
#!/bin/bash
#SBATCH --partition=standard
#SBATCH --qos=standard
#SBATCH --time=00:01 # timeout in HH:MM

echo -n "This script is running on "
sleep 20 # time in seconds
hostname
[yourUsername@cirrus-login1 ~]$ sbatch example-job.sh
Why are the Slurm runtime and sleep time not identical?
Job environment variables
When Slurm runs a job, it sets a number of environment variables for the job. One of these will let us check which directory our job script was submitted from: the SLURM_SUBMIT_DIR variable is set to the directory from which the job was submitted.

Using the SLURM_SUBMIT_DIR variable, modify your job so that it prints out the location from which the job was submitted.

Solution
[yourUsername@cirrus-login1 ~]$ nano example-job.sh
[yourUsername@cirrus-login1 ~]$ cat example-job.sh
#!/bin/bash
#SBATCH --partition=standard
#SBATCH --qos=standard
#SBATCH --time=00:01 # timeout in HH:MM

echo -n "This script is running on "
hostname
echo "This job was launched in the following directory:"
echo ${SLURM_SUBMIT_DIR}
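Outside of a Slurm job, SLURM_SUBMIT_DIR is unset; if you want a script that also works when run directly on the login node, one approach is to fall back to the current directory:

```shell
# Inside a job, SLURM_SUBMIT_DIR holds the submission directory;
# elsewhere it is unset, so fall back to the current working directory
submit_dir="${SLURM_SUBMIT_DIR:-$PWD}"
echo "This job was launched in: ${submit_dir}"
```

The ${VAR:-default} form is plain bash parameter expansion, not anything Slurm-specific.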
Resource requests are typically binding. If you exceed them, your job will be killed. Let’s use wall time as an example. We will request 1 minute of wall time, and attempt to run a job for two minutes.
[yourUsername@cirrus-login1 ~]$ cat example-job.sh
#!/bin/bash
#SBATCH --partition=standard
#SBATCH --qos=standard
#SBATCH --job-name=long_job
#SBATCH --time=00:01 # timeout in HH:MM
echo "This script is running on ... "
sleep 240 # time in seconds
hostname
Submit the job and wait for it to finish. Once it has finished, check the log file.
[yourUsername@cirrus-login1 ~]$ sbatch example-job.sh
[yourUsername@cirrus-login1 ~]$ squeue -u yourUsername
[yourUsername@cirrus-login1 ~]$ cat slurm-3935746.out
This script is running on ... 
slurmstepd: error: *** JOB 3935746 ON r1i3n17 CANCELLED AT 2023-01-12T14:32:16 DUE TO TIME LIMIT ***
Our job was killed for exceeding the amount of resources it requested. Although this appears harsh, this is actually a feature. Strict adherence to resource requests allows the scheduler to find the best possible place for your jobs. Even more importantly, it ensures that another user cannot use more resources than they’ve been given. If another user messes up and accidentally attempts to use all of the cores or memory on a node, Slurm will either restrain their job to the requested resources or kill the job outright. Other jobs on the node will be unaffected. This means that one user cannot mess up the experience of others; the only jobs affected by a mistake in scheduling will be their own.
But how much does it cost?
Although your job will be killed if it exceeds the selected runtime, a job that completes within the time limit is only charged for the time it actually used. However, you should always try to specify a wallclock limit that is close to (but greater than!) the expected runtime, as this will enable your job to be scheduled more quickly. If you say your job will run for an hour, the scheduler has to wait until a full hour becomes free on the machine. If it only ever runs for 5 minutes, you could have set a limit of 10 minutes and it might have been run earlier in the gaps between other users’ jobs.
Cancelling a Job
Sometimes we’ll make a mistake and need to cancel a job. This can be done with the scancel command. Let’s submit a job and then cancel it using its job number (remember to change the walltime so that it runs long enough for you to cancel it before it is killed!).
[yourUsername@cirrus-login1 ~]$ sbatch example-job.sh
[yourUsername@cirrus-login1 ~]$ squeue -u yourUsername
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
3936036 standard long_job yourUsername R 0:03 1 r1i5n7
Now cancel the job with its job number (printed in your terminal). A clean return of your command prompt indicates that the request to cancel the job was successful.
[yourUsername@cirrus-login1 ~]$ scancel 3936036
# It might take a minute for the job to disappear from the queue...
[yourUsername@cirrus-login1 ~]$ squeue -u yourUsername
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
Cancelling multiple jobs
We can also cancel all of our jobs at once using the -u option. This will delete all jobs for a specific user (in this case, us). Note that you can only delete your own jobs.

Try submitting multiple jobs and then cancelling them all with scancel -u yourUsername.
Other Types of Jobs
Up to this point, we’ve focused on running jobs in batch mode. Slurm also provides the ability to start an interactive session.
There are very frequently tasks that need to be done interactively. Creating an entire job script might be overkill, but the amount of resources required is too much for a login node to handle. A good example of this might be building a genome index for alignment with a tool like HISAT2. Fortunately, we can run these types of tasks as a one-off with srun.

srun runs a single command in the queue system and then exits. Let’s demonstrate this by running the hostname command with srun. (We can cancel an srun job with Ctrl-c.)
[yourUsername@cirrus-login1 ~]$ srun --partition=standard --qos=standard --time=00:01:00 hostname
srun: job 3936112 queued and waiting for resources
srun: job 3936112 has been allocated resources
r1i5n7
srun accepts all of the same options as sbatch. However, instead of specifying these in a script, these options are specified on the command line when starting a job. Typically, the resulting shell environment will be the same as that for sbatch.
Interactive jobs
Sometimes, you will need a lot of resources for interactive use. Perhaps it’s our first time running an analysis, or we are attempting to debug something that went wrong with a previous job. Fortunately, Slurm makes it easy to start an interactive job with srun:
[yourUsername@cirrus-login1 ~]$ srun --partition=standard --qos=standard --time=00:01:00 --pty /bin/bash
You should be presented with a bash prompt. Note that the prompt may change to reflect your new location, in this case the compute node we are logged on to. You can also verify this with hostname.

When you are done with the interactive job, type exit to quit your session.
Key Points
The scheduler handles how compute resources are shared between users.
A job is just a shell script.
If in doubt, request more resources than you will need.
Accessing software via Modules
Overview
Teaching: 30 min
Exercises: 15 minQuestions
How do we load and unload software packages?
Objectives
Load and use a software package.
Explain how the shell environment changes when the module mechanism loads or unloads packages.
On a high-performance computing system, it is seldom the case that the software we want to use is available when we log in. It is installed, but we will need to “load” it before it can run.
Before we start using individual software packages, however, we should understand the reasoning behind this approach. The three biggest factors are:
- software incompatibilities
- versioning
- dependencies
Software incompatibility is a major headache for programmers. Sometimes the presence (or absence) of a software package will break others that depend on it. Two of the most famous examples are Python 2 and 3 and C compiler versions. Python 3 famously provides a python command that conflicts with that provided by Python 2. Software compiled against a newer version of the C libraries and then run on a system where they are not present will result in a nasty 'GLIBCXX_3.4.20' not found error, for instance.
Software versioning is another common issue. A team might depend on a certain package version for their research project - if the software version was to change (for instance, if a package was updated), it might affect their results. Having access to multiple software versions allows a set of researchers to prevent software versioning issues from affecting their results.
Dependencies are where a particular software package (or even a particular version) depends on having access to another software package (or even a particular version of another software package). For example, the VASP materials science software may depend on having a particular version of the FFTW (Fastest Fourier Transform in the West) software library available for it to work.
Environment Modules
Environment modules are the solution to these problems. A module is a self-contained description of a software package – it contains the settings required to run a software package and, usually, encodes required dependencies on other software packages.
There are a number of different environment module implementations commonly used on HPC systems: the two most common are TCL modules and Lmod. Both of these use similar syntax and the concepts are the same, so learning to use one will allow you to use whichever is installed on the system you are using. In both implementations the module command is used to interact with environment modules. An additional subcommand is usually added to the command to specify what you want to do. For a list of subcommands you can use module -h or module help. As with all commands, you can access the full help on the man pages with man module.
On login you may start out with a default set of modules loaded or you may start out with an empty environment; this depends on the setup of the system you are using.
Listing Available Modules
To see available software modules, use module avail:
[yourUsername@cirrus-login1 ~]$ module avail
------------------------------------------------------- /mnt/lustre/indy2lfs/sw/modulefiles -------------------------------------------------------
altair-hwsolvers/13.0.213 gdb/10.2 intel-fc-19/19.0.0.117 openfoam/v2106
altair-hwsolvers/14.0.210 git/2.21.0(default) intel-itac-18/2018.5.025 openmpi/4.1.2
anaconda/python3 git/2.37.3 intel-itac-19/19.0.0.117 openmpi/4.1.2-cuda-11.6
ansys/18.0 gmp/6.2.0-intel intel-license openmpi/4.1.4(default)
ansys/19.0 gmp/6.2.1-mpt intel-mpi-18/18.0.5.274 openmpi/4.1.4-cuda-11.6
ant/1.10.8(default) gnu-parallel/20200522-gcc6(default) intel-mpi-19/19.0.0.117 perf/1.0.0
autotools/default gnuplot/5.4.0(default) intel-tbb-18/18.0.5.274 petsc/3.13.2-intel-mpi-18
binutils/2.36(default) gromacs/2020.2 intel-tbb-19/19.0.0.117(default) petsc/3.13.2-mpt
bison/3.6.4 gromacs/2020.2-gpu intel-tools-18/18.0.5.274 pyfr/1.14.0-gpu
boost/1.67.0 gromacs/2022.1(default) intel-tools-19/19.0.0.117 pyfr/1.15.0-gpu(default)
boost/1.73.0(default) gromacs/2022.1-gpu intel-vtune-18/2018.4.0.573462(default) python/3.9.12-gpu
castep/18/(default) gromacs/2022.3-gpu intel-vtune-19/2019.0.2.570779(default) python/3.9.13
castep/18/18.1.0 gsl/2.6-gcc8 java/jdk-14.0.1 python/3.9.13-gpu
castep/19/19.1.1 gsl/2.7-gcc8(default) lammps/3March2020-intel19-mpt pytorch/1.12.1
cmake/3.17.3(default) hdf5parallel/1.10.4-intel18-impi18 lammps/23Jun2022_intel19_mpt pytorch/1.12.1-gpu
cmake/3.22.1 hdf5parallel/1.10.6-gcc6-mpt225 libnsl/1.3.0(default) quantum-espresso/6.5-intel-19
cp2k/7.1 hdf5parallel/1.10.6-gcc8-mpt225 libpng/1.6.30 quantum-espresso/6.5-intel-20.4
CRYSTAL17/1.0.2_intel18 hdf5parallel/1.10.6-intel18-mpt225 libtirpc/1.2.6(default) R/3.6.3
CUnit/2.1.3(default) hdf5parallel/1.10.6-intel19-mpt225 libtool/2.4.6 R/4.0.2(default)
dolfin/2019.1.0-intel-mpi hdf5parallel/1.12.0-nvhpc-openmpi libxkbcommon/1.0.1(default) scalasca/2.6-gcc8-mpt225
dolfin/2019.1.0-mpt hdf5serial/1.10.6-intel18 matlab/R2019a scalasca/2.6-intel19-mpt225
eclipse/2020-09(default) horovod/0.25.0 matlab/R2019b singularity/3.7.2(default)
epcc/deprecated-software horovod/0.25.0-gpu matlab/R2020b(default) specfem3d/3.0(default)
epcc/setup-env htop/3.1.2 matlab/R2021b starccm+/13.06.012(default)
epcc/utils htop/3.2.1(default) metis/5.1.0 starccm+/13.06.012-R8
expat/2.2.9 ImageMagick/7.0.10-22(default) mpc/1.1.0 starccm+/14.04.013-R8
fenics/2019.1.0-intel-mpi intel-19.5/cc mpfr/4.0.2-intel starccm+/14.06.013-R8
fenics/2019.1.0-mpt intel-19.5/cmkl mpfr/4.0.2-mpt starccm+/15.02.009-R8
fftw/3.3.8-gcc8-ompi4 intel-19.5/compilers namd/2.14(default) starccm+/15.04.010-R8
fftw/3.3.8-intel18 intel-19.5/fc namd/2.14-gpu starccm+/15.06.008-R8
fftw/3.3.8-intel19(default) intel-19.5/itac namd/2.14-nosmp starccm+/16.02.009
fftw/3.3.9-impi19-gcc8 intel-19.5/mpi ncl/6.6.2 starccm+/2019.3.1-R8
fftw/3.3.10-intel19-mpt225 intel-19.5/pxse nco/4.9.3 starccm+/2020.1.1-R8
fftw/3.3.10-intel20.4 intel-19.5/tbb ncview/2.1.7 starccm+/2020.2.1-R8
flacs-cfd/20.1 intel-19.5/vtune netcdf-parallel/4.6.2-intel18-impi18 starccm+/2020.3.1-R8
flacs-cfd/20.2 intel-20.4/cc netcdf-parallel/4.6.2-intel19-mpt225 starccm+/2021.1.1
flacs-cfd/21.1 intel-20.4/cmkl ninja/1.10.2(default) strace/5.8(default)
flacs-cfd/21.2 intel-20.4/compilers nvidia/cudnn/8.2.1-cuda-11.6 svn/1.14.0(default)
flacs-cfd/22.1 intel-20.4/fc nvidia/cudnn/8.5.0-cuda-11.6 tensorflow/2.9.1-gpu
flacs/10.9.1 intel-20.4/itac nvidia/cudnn/8.6.0-cuda-11.6(default) tensorflow/2.10.0
flex/2.6.4 intel-20.4/mpi nvidia/nvhpc-byo-compiler/21.2 tmux/3.3a(default)
gaussian/16.A03(default) intel-20.4/psxe nvidia/nvhpc-byo-compiler/21.9 ucx/1.9.0
gcc/6.2.0 intel-20.4/tbb nvidia/nvhpc-byo-compiler/22.2 ucx/1.9.0-cuda-11.6
gcc/6.3.0 intel-20.4/vtune nvidia/nvhpc-nompi/22.2 udunits/2.2.26
gcc/8.2.0(default) intel-cc-18/18.0.5.274 nvidia/nvhpc/22.2 valgrind/3.16.1(default)
gcc/10.2.0 intel-cc-19/19.0.0.117 nvidia/tensorrt/7.2.3.4 vasp/5/5.4.4-intel19-mpt220(default)
gdal/2.1.2-gcc intel-cmkl-18/18.0.5.274 nvidia/tensorrt/8.4.3.1-u2 vasp/6/6.2.1-intel19-mpt220(default)
gdal/2.1.2-intel intel-cmkl-19/19.0.0.117 oneapi/2022.2.0(default) zlib/1.2.11(default)
gdal/2.4.4-gcc intel-compilers-18/18.05.274 openfoam/v8.0
gdal/2.4.4-intel intel-compilers-19/19.0.0.117 openfoam/v9.0
gdb/9.2(default) intel-fc-18/18.0.5.274 openfoam/v2006
--------------------------------------------------------- /usr/share/Modules/modulefiles ----------------------------------------------------------
dot hmpt/2.25 module-git module-info modules mpt/2.25 null perfboost use.own
Listing Currently Loaded Modules
You can use the module list command to see which modules you currently have loaded in your environment. If you have no modules loaded, you will see a message telling you so.
[yourUsername@cirrus-login1 ~]$ module list
Currently Loaded Modulefiles:
1) git/2.37.3 2) epcc/utils 3) /mnt/lustre/indy2lfs/sw/modulefiles/epcc/setup-env
Loading and Unloading Software
To load a software module, use module load. In this example we will use R.

Initially, R is not loaded. We can test this by using the which command. which looks for programs the same way that Bash does, so we can use it to tell us where a particular piece of software is stored.
[yourUsername@cirrus-login1 ~]$ which R
/usr/bin/which: no R in (/mnt/lustre/indy2lfs/sw/git/2.37.3/bin:/opt/clmgr/sbin:/opt/clmgr/bin:/opt/sgi/sbin:/opt/sgi/bin:/usr/share/Modules/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/opt/c3/bin:/sbin:/bin)
We can load the R command with module load:
[yourUsername@cirrus-login1 ~]$ module load R
[yourUsername@cirrus-login1 ~]$ which R
/mnt/lustre/indy2lfs/sw/R/4.0.2/bin/R
So, what just happened?

To understand the output, first we need to understand the nature of the $PATH environment variable. $PATH is a special environment variable that controls where a UNIX system looks for software. Specifically, $PATH is a list of directories (separated by :) that the OS searches through for a command before giving up and telling us it can’t find it. As with all environment variables, we can print it out using echo.
[yourUsername@cirrus-login1 ~]$ echo $PATH
/mnt/lustre/indy2lfs/sw/R/4.0.2/bin/:/mnt/lustre/indy2lfs/sw/gcc/8.2.0/bin:/mnt/lustre/indy2lfs/sw/git/2.37.3/bin:/opt/clmgr/sbin:/opt/clmgr/bin:/opt/sgi/sbin:/opt/sgi/bin:/usr/share/Modules/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/opt/c3/bin:/sbin:/bin
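Since $PATH is a colon-separated list, one way to read it more easily is to print one directory per line with tr:

```shell
# Translate each ':' separator into a newline so the search order
# reads top to bottom; show just the first few entries
echo "$PATH" | tr ':' '\n' | head -3
```

The exact output depends on your environment, but after a module load the newly added directory appears at the top of the list.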
You’ll notice a similarity to the output of the which command. In this case, there’s only one difference: the different directory at the beginning. When we ran the module load command, it added a directory to the beginning of our $PATH. Let’s examine what’s there:
[yourUsername@cirrus-login1 ~]$ ls /mnt/lustre/indy2lfs/sw/R/4.0.2/bin/
R Rscript
In summary, module load will add software to your $PATH. It “loads” software. A special note on this: depending on which version of the module program is installed at your site, module load will also load required software dependencies.

To demonstrate, let’s load the gromacs module and then use the module list command to show which modules we currently have loaded in our environment. (Gromacs is an open source molecular dynamics package.)
[yourUsername@cirrus-login1 ~]$ module load gromacs
[yourUsername@cirrus-login1 ~]$ module list
Currently Loaded Modulefiles:
1) git/2.37.3 2) epcc/utils 3) /mnt/lustre/indy2lfs/sw/modulefiles/epcc/setup-env 4) gcc/8.2.0(default) 5) mpt/2.25 6) gromacs/2022.1(default)
So in this case, loading the gromacs module also loaded a variety of other modules. Let’s try unloading the gromacs package.
[yourUsername@cirrus-login1 ~]$ module unload gromacs
[yourUsername@cirrus-login1 ~]$ module list
Currently Loaded Modulefiles:
1) git/2.37.3 2) epcc/utils 3) /mnt/lustre/indy2lfs/sw/modulefiles/epcc/setup-env
So using module unload “un-loads” a module along with its dependencies.
Note that this module loading process happens principally through the manipulation of environment variables like $PATH. There is usually little or no data transfer involved.
The module loading process manipulates other special environment variables as well, including variables that influence where the system looks for software libraries, and sometimes variables which tell commercial software packages where to find license servers.
The module command also restores these shell environment variables to their previous state when a module is unloaded.
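As an illustration of the underlying mechanism (the directory name here is made up), a “load” is essentially a prepend to $PATH and an “unload” restores the saved value:

```shell
# "Load": prepend a hypothetical software directory to PATH,
# remembering the old value so we can restore it later
OLD_PATH="$PATH"
export PATH="/opt/example-sw/1.0/bin:$PATH"
echo "$PATH" | cut -d: -f1    # the first entry is now the new directory

# "Unload": restore the saved value
export PATH="$OLD_PATH"
```

Real module implementations do this bookkeeping for many variables at once, which is why loads and unloads are cleanly reversible.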
Software Versioning
So far, we’ve learned how to load and unload software packages. This is very useful. However, we have not yet addressed the issue of software versioning. At some point or other, you will run into issues where only one particular version of some software will be suitable. Perhaps a key bugfix only happened in a certain version, or version X broke compatibility with a file format you use. In either of these example cases, it helps to be very specific about what software is loaded.
Let’s examine the output of module avail more closely.
[yourUsername@cirrus-login1 ~]$ module avail
------------------------------------------------------- /mnt/lustre/indy2lfs/sw/modulefiles -------------------------------------------------------
altair-hwsolvers/13.0.213 gdb/10.2 intel-fc-19/19.0.0.117 openfoam/v2106
altair-hwsolvers/14.0.210 git/2.21.0(default) intel-itac-18/2018.5.025 openmpi/4.1.2
anaconda/python3 git/2.37.3 intel-itac-19/19.0.0.117 openmpi/4.1.2-cuda-11.6
ansys/18.0 gmp/6.2.0-intel intel-license openmpi/4.1.4(default)
ansys/19.0 gmp/6.2.1-mpt intel-mpi-18/18.0.5.274 openmpi/4.1.4-cuda-11.6
ant/1.10.8(default) gnu-parallel/20200522-gcc6(default) intel-mpi-19/19.0.0.117 perf/1.0.0
autotools/default gnuplot/5.4.0(default) intel-tbb-18/18.0.5.274 petsc/3.13.2-intel-mpi-18
binutils/2.36(default) gromacs/2020.2 intel-tbb-19/19.0.0.117(default) petsc/3.13.2-mpt
bison/3.6.4 gromacs/2020.2-gpu intel-tools-18/18.0.5.274 pyfr/1.14.0-gpu
boost/1.67.0 gromacs/2022.1(default) intel-tools-19/19.0.0.117 pyfr/1.15.0-gpu(default)
boost/1.73.0(default) gromacs/2022.1-gpu intel-vtune-18/2018.4.0.573462(default) python/3.9.12-gpu
castep/18/(default) gromacs/2022.3-gpu intel-vtune-19/2019.0.2.570779(default) python/3.9.13
castep/18/18.1.0 gsl/2.6-gcc8 java/jdk-14.0.1 python/3.9.13-gpu
castep/19/19.1.1 gsl/2.7-gcc8(default) lammps/3March2020-intel19-mpt pytorch/1.12.1
cmake/3.17.3(default) hdf5parallel/1.10.4-intel18-impi18 lammps/23Jun2022_intel19_mpt pytorch/1.12.1-gpu
cmake/3.22.1 hdf5parallel/1.10.6-gcc6-mpt225 libnsl/1.3.0(default) quantum-espresso/6.5-intel-19
cp2k/7.1 hdf5parallel/1.10.6-gcc8-mpt225 libpng/1.6.30 quantum-espresso/6.5-intel-20.4
CRYSTAL17/1.0.2_intel18 hdf5parallel/1.10.6-intel18-mpt225 libtirpc/1.2.6(default) R/3.6.3
CUnit/2.1.3(default) hdf5parallel/1.10.6-intel19-mpt225 libtool/2.4.6 R/4.0.2(default)
dolfin/2019.1.0-intel-mpi hdf5parallel/1.12.0-nvhpc-openmpi libxkbcommon/1.0.1(default) scalasca/2.6-gcc8-mpt225
dolfin/2019.1.0-mpt hdf5serial/1.10.6-intel18 matlab/R2019a scalasca/2.6-intel19-mpt225
eclipse/2020-09(default) horovod/0.25.0 matlab/R2019b singularity/3.7.2(default)
epcc/deprecated-software horovod/0.25.0-gpu matlab/R2020b(default) specfem3d/3.0(default)
epcc/setup-env htop/3.1.2 matlab/R2021b starccm+/13.06.012(default)
epcc/utils htop/3.2.1(default) metis/5.1.0 starccm+/13.06.012-R8
expat/2.2.9 ImageMagick/7.0.10-22(default) mpc/1.1.0 starccm+/14.04.013-R8
fenics/2019.1.0-intel-mpi intel-19.5/cc mpfr/4.0.2-intel starccm+/14.06.013-R8
fenics/2019.1.0-mpt intel-19.5/cmkl mpfr/4.0.2-mpt starccm+/15.02.009-R8
fftw/3.3.8-gcc8-ompi4 intel-19.5/compilers namd/2.14(default) starccm+/15.04.010-R8
fftw/3.3.8-intel18 intel-19.5/fc namd/2.14-gpu starccm+/15.06.008-R8
fftw/3.3.8-intel19(default) intel-19.5/itac namd/2.14-nosmp starccm+/16.02.009
fftw/3.3.9-impi19-gcc8 intel-19.5/mpi ncl/6.6.2 starccm+/2019.3.1-R8
fftw/3.3.10-intel19-mpt225 intel-19.5/pxse nco/4.9.3 starccm+/2020.1.1-R8
fftw/3.3.10-intel20.4 intel-19.5/tbb ncview/2.1.7 starccm+/2020.2.1-R8
flacs-cfd/20.1 intel-19.5/vtune netcdf-parallel/4.6.2-intel18-impi18 starccm+/2020.3.1-R8
flacs-cfd/20.2 intel-20.4/cc netcdf-parallel/4.6.2-intel19-mpt225 starccm+/2021.1.1
flacs-cfd/21.1 intel-20.4/cmkl ninja/1.10.2(default) strace/5.8(default)
flacs-cfd/21.2 intel-20.4/compilers nvidia/cudnn/8.2.1-cuda-11.6 svn/1.14.0(default)
flacs-cfd/22.1 intel-20.4/fc nvidia/cudnn/8.5.0-cuda-11.6 tensorflow/2.9.1-gpu
flacs/10.9.1 intel-20.4/itac nvidia/cudnn/8.6.0-cuda-11.6(default) tensorflow/2.10.0
flex/2.6.4 intel-20.4/mpi nvidia/nvhpc-byo-compiler/21.2 tmux/3.3a(default)
gaussian/16.A03(default) intel-20.4/psxe nvidia/nvhpc-byo-compiler/21.9 ucx/1.9.0
gcc/6.2.0 intel-20.4/tbb nvidia/nvhpc-byo-compiler/22.2 ucx/1.9.0-cuda-11.6
gcc/6.3.0 intel-20.4/vtune nvidia/nvhpc-nompi/22.2 udunits/2.2.26
gcc/8.2.0(default) intel-cc-18/18.0.5.274 nvidia/nvhpc/22.2 valgrind/3.16.1(default)
gcc/10.2.0 intel-cc-19/19.0.0.117 nvidia/tensorrt/7.2.3.4 vasp/5/5.4.4-intel19-mpt220(default)
gdal/2.1.2-gcc intel-cmkl-18/18.0.5.274 nvidia/tensorrt/8.4.3.1-u2 vasp/6/6.2.1-intel19-mpt220(default)
gdal/2.1.2-intel intel-cmkl-19/19.0.0.117 oneapi/2022.2.0(default) zlib/1.2.11(default)
gdal/2.4.4-gcc intel-compilers-18/18.05.274 openfoam/v8.0
gdal/2.4.4-intel intel-compilers-19/19.0.0.117 openfoam/v9.0
gdb/9.2(default) intel-fc-18/18.0.5.274 openfoam/v2006
--------------------------------------------------------- /usr/share/Modules/modulefiles ----------------------------------------------------------
dot hmpt/2.25 module-git module-info modules mpt/2.25 null perfboost use.own
Let’s take a closer look at the gcc module. GCC is an extremely widely used C/C++/Fortran compiler. Lots of software depends on the GCC version, and might not compile or run if the wrong version is loaded. In this case, there are four different versions: gcc/6.2.0, gcc/6.3.0, gcc/8.2.0 and gcc/10.2.0. How do we load each copy, and which copy is the default?

In this case, gcc/8.2.0 has a (default) next to it. This indicates that it is the default: if we type module load gcc, this is the copy that will be loaded.
[yourUsername@cirrus-login1 ~]$ module load gcc
[yourUsername@cirrus-login1 ~]$ gcc --version
gcc (GCC) 8.2.0
Copyright (C) 2018 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
So how do we load the non-default copy of a software package? In this case, the only change we need to make is to be more specific about the module we are loading. There are four GCC modules: gcc/6.2.0, gcc/6.3.0, gcc/8.2.0 and gcc/10.2.0. To load a non-default module, we need to add the version number after the / in our module load command:
[yourUsername@cirrus-login1 ~]$ module load gcc/6.2.0
Loading gcc/6.2.0
ERROR: gcc/6.2.0 cannot be loaded due to a conflict.
HINT: Might try "module unload gcc" first.
What happened? The module command is telling us that we cannot have two gcc modules loaded at the same time, as this could cause confusion about which version we are using. We need to remove the default version before we load the new version.
[yourUsername@cirrus-login1 ~]$ module unload gcc
[yourUsername@cirrus-login1 ~]$ module load gcc/6.2.0
[yourUsername@cirrus-login1 ~]$ gcc --version
gcc (GCC) 6.2.0
Copyright (C) 2016 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
We now have successfully switched from GCC 8.2.0 to GCC 6.2.0.
As switching between different versions of the same module is a common task, you can use module swap rather than unloading one version before loading another. The equivalent of the steps above would be:
[yourUsername@cirrus-login1 ~]$ module swap gcc gcc/8.2.0
[yourUsername@cirrus-login1 ~]$ gcc --version
gcc (GCC) 8.2.0
Copyright (C) 2018 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
This achieves the same result as unload followed by load but in a single step.
Using Software Modules in Scripts
Create a job that is able to run R --version. Remember, no software is loaded by default! Running a job is just like logging on to the system (you should not assume a module loaded on the login node is loaded on a compute node).

Solution
[yourUsername@cirrus-login1 ~]$ nano R-module.sh
[yourUsername@cirrus-login1 ~]$ cat R-module.sh
#!/bin/bash
#SBATCH --partition=standard
#SBATCH --qos=standard
#SBATCH --time=00:01

module load R
R --version
[yourUsername@cirrus-login1 ~]$ sbatch R-module.sh
Key Points
Load software with module load softwareName.
Unload software with module unload.
The module system handles software versioning and package conflicts for you automatically.
Transferring files with remote computers
Overview
Teaching: 15 min
Exercises: 15 minQuestions
How do I transfer files to (and from) the cluster?
Objectives
Transfer files to and from a computing cluster.
Performing work on a remote computer is not very useful if we cannot get files to or from the cluster. There are several options for transferring data between computing resources using CLI and GUI utilities, a few of which we will cover.
Download Files From the Internet
One of the most straightforward ways to download files is to use either curl or wget. One of these is usually installed in most Linux shells, on macOS terminal and in GitBash. Any file that can be downloaded in your web browser through a direct link can be downloaded using curl -O or wget. This is a quick way to download datasets or source code.
The syntax for these commands is curl -O https://some/link/to/a/file and wget https://some/link/to/a/file. Try it out by downloading some material we’ll use later on, from a terminal on your local machine.
[user@laptop ~]$ curl -O https://nclrse-training.github.io/hpc-intro-cirrus/files/hpc-intro-data.tar.gz
or
[user@laptop ~]$ wget https://nclrse-training.github.io/hpc-intro-cirrus/files/hpc-intro-data.tar.gz
tar.gz?

This is an archive file format, just like .zip, commonly used and supported by default on Linux, which is the operating system the majority of HPC cluster machines run. You may also see the extension .tgz, which is exactly the same. We’ll talk more about “tarballs,” since “tar-dot-g-z” is a mouthful, later on.
Transferring Single Files and Folders With scp
To copy a single file to or from the cluster, we can use scp
(“secure copy”).
The syntax can be a little complex for new users, but we’ll break it down.
The scp
command is a relative of the ssh
command we used to
access the system, and can use the same public-key authentication
mechanism.
To upload to another computer:
[user@laptop ~]$ scp path/to/local/file.txt yourUsername@login.cirrus.ac.uk:/path/on/Cirrus
To download from another computer:
[user@laptop ~]$ scp yourUsername@login.cirrus.ac.uk:/path/on/Cirrus/file.txt path/to/local/
Note that everything after the :
is relative to our home directory on the
remote computer. We can leave it at that if we don’t care where the file goes.
[user@laptop ~]$ scp local-file.txt yourUsername@login.cirrus.ac.uk:
Upload a File
Copy the file you just downloaded from the Internet to your home directory on Cirrus.
Solution
[user@laptop ~]$ scp hpc-intro-data.tar.gz yourUsername@login.cirrus.ac.uk:~/
Most computer clusters are protected from the open internet by a firewall.
This means that the curl
command will fail, as an address outside the
firewall is unreachable from the inside. To get around this, run the curl
or
wget
command from your local machine to download the file, then use the scp
command to upload it to the cluster.
Why Not Download on Cirrus Directly?
Try downloading the file directly. Note that it may well fail, and that’s OK!
Commands
[user@laptop ~]$ ssh yourUsername@login.cirrus.ac.uk [yourUsername@cirrus-login1 ~]$ curl -O https://nclrse-training.github.io/hpc-intro-cirrus/files/hpc-intro-data.tar.gz or [yourUsername@cirrus-login1 ~]$ wget https://nclrse-training.github.io/hpc-intro-cirrus/files/hpc-intro-data.tar.gz
Did it work? If not, what does the terminal output tell you about what happened?
To copy a whole directory, we add the -r
flag, for “recursive”: copy the
item specified, and every item below it, and every item below those… until it
reaches the bottom of the directory tree rooted at the folder name you
provided.
[user@laptop ~]$ scp -r some-local-folder yourUsername@login.cirrus.ac.uk:target-directory/
Caution
For a large directory – either in size or number of files – copying with
-r
can take a long time to complete.
What’s in a /
?
When using scp
, you may have noticed that a :
always follows the remote
computer name; sometimes a /
follows that, and sometimes not, and sometimes
there’s a final /
. On Linux computers, /
is the root directory, the
location where the entire filesystem (and others attached to it) is anchored. A
path starting with a /
is called absolute, since there can be nothing above
the root /
. A path that does not start with /
is called relative, since
it is not anchored to the root.
If you want to upload a file to a location inside your home directory –
which is often the case – then you don’t need a leading /
. After the
:
, start writing the sequence of folders that lead to the final storage
location for the file or, as mentioned above, provide nothing if your home
directory is the destination.
A trailing slash on the target directory is optional, and has no effect for
scp -r
, but is important in other commands, like rsync
.
A Note on
rsync
As you gain experience with transferring files, you may find the
scp
command limiting. The rsync utility provides advanced features for file transfer and is typically faster compared to bothscp
andsftp
(see below). It is especially useful for transferring large and/or many files and creating synced backup folders.The syntax is similar to
scp
. To transfer to another computer with commonly used options:[user@laptop ~]$ rsync -avzP path/to/local/file.txt yourUsername@login.cirrus.ac.uk:directory/path/on/Cirrus/
The options are:
a
(archive) to preserve file timestamps and permissions among other thingsv
(verbose) to get verbose output to help monitor the transferz
(compression) to compress the file during transit to reduce size and transfer timeP
(partial/progress) to preserve partially transferred files in case of an interruption and also displays the progress of the transfer.To recursively copy a directory, we can use the same options:
[user@laptop ~]$ rsync -avzP path/to/local/dir yourUsername@login.cirrus.ac.uk:directory/path/on/Cirrus/
As written, this will place the local directory and its contents under the specified directory on the remote system. If the trailing slash is omitted on the destination, a new directory corresponding to the transferred directory (‘dir’ in the example) will not be created, and the contents of the source directory will be copied directly into the destination directory.
The
a
(archive) option implies recursion.To download a file, we simply change the source and destination:
[user@laptop ~]$ rsync -avzP yourUsername@login.cirrus.ac.uk:path/on/Cirrus/file.txt path/to/local/
All file transfers using the above methods use SSH to encrypt data sent through
the network. So, if you can connect via SSH, you will be able to transfer
files. By default, SSH uses network port 22. If a custom SSH port is in use,
you will have to specify it using the appropriate flag, often -p
, -P
, or
--port
. Check --help
or the man
page if you’re unsure.
Change the Rsync Port
Say we have to connect
rsync
through port 768 instead of 22. How would we modify this command?[user@laptop ~]$ rsync test.txt yourUsername@login.cirrus.ac.uk:
Solution
[user@laptop ~]$ rsync --help | grep port --port=PORT specify double-colon alternate port number See http://rsync.samba.org/ for updates, bug reports, and answers [user@laptop ~]$ rsync --port=768 test.txt yourUsername@login.cirrus.ac.uk:
Transferring Files Interactively with FileZilla
FileZilla is a cross-platform client for downloading and uploading files to and
from a remote computer. It is absolutely fool-proof and always works quite
well. It uses the sftp
protocol. You can read more about using the sftp
protocol in the command line in the
lesson discussion.
Download and install the FileZilla client from https://filezilla-project.org. After installing and opening the program, you should end up with a window with a file browser of your local system on the left hand side of the screen. When you connect to the cluster, your cluster files will appear on the right hand side.
To connect to the cluster, we’ll just need to enter our credentials at the top of the screen:
- Host:
sftp://login.cirrus.ac.uk
- User: Your cluster username
- Password: Your cluster password
- Port: (leave blank to use the default port)
Hit “Quickconnect” to connect. You should see your remote files appear on the right hand side of the screen. You can drag-and-drop files between the left (local) and right (remote) sides of the screen to transfer files.
Finally, if you need to move large files (typically larger than a gigabyte)
from one remote computer to another remote computer, SSH in to the computer
hosting the files and use scp
or rsync
to transfer over to the other. This
will be more efficient than using FileZilla (or related applications) that
would copy from the source to your local machine, then to the destination
machine.
Archiving Files
One of the biggest challenges we often face when transferring data between remote HPC systems is that of large numbers of files. There is an overhead to transferring each individual file and when we are transferring large numbers of files these overheads combine to slow down our transfers to a large degree.
The solution to this problem is to archive multiple files into smaller numbers of larger files before we transfer the data to improve our transfer efficiency. Sometimes we will combine archiving with compression to reduce the amount of data we have to transfer and so speed up the transfer.
The most common archiving command you will use on a (Linux) HPC cluster is
tar
. tar
can be used to combine files into a single archive file and,
optionally, compress it.
Let’s start with the file we downloaded from the lesson site,
hpc-into-data.tar.gz
. The “gz” part stands for gzip, which is a
compression library. Reading this file name, it appears somebody took a folder
named “hpc-intro-data,” wrapped up all its contents in a single file with
tar
, then compressed that archive with gzip
to save space. Let’s check
using tar
with the -t
flag, which prints the “table of contents”
without unpacking the file, specified by -f <filename>
, on the remote
computer. Note that you can concatenate the two flags, instead of writing
-t -f
separately.
[user@laptop ~]$ ssh yourUsername@login.cirrus.ac.uk
[yourUsername@cirrus-login1 ~]$ tar -tf hpc-intro-data.tar.gz
hpc-intro-data/
hpc-intro-data/north-pacific-gyre/
hpc-intro-data/north-pacific-gyre/NENE01971Z.txt
hpc-intro-data/north-pacific-gyre/goostats
hpc-intro-data/north-pacific-gyre/goodiff
hpc-intro-data/north-pacific-gyre/NENE02040B.txt
hpc-intro-data/north-pacific-gyre/NENE01978B.txt
hpc-intro-data/north-pacific-gyre/NENE02043B.txt
hpc-intro-data/north-pacific-gyre/NENE02018B.txt
hpc-intro-data/north-pacific-gyre/NENE01843A.txt
hpc-intro-data/north-pacific-gyre/NENE01978A.txt
hpc-intro-data/north-pacific-gyre/NENE01751B.txt
hpc-intro-data/north-pacific-gyre/NENE01736A.txt
hpc-intro-data/north-pacific-gyre/NENE01812A.txt
hpc-intro-data/north-pacific-gyre/NENE02043A.txt
hpc-intro-data/north-pacific-gyre/NENE01729B.txt
hpc-intro-data/north-pacific-gyre/NENE02040A.txt
hpc-intro-data/north-pacific-gyre/NENE01843B.txt
hpc-intro-data/north-pacific-gyre/NENE01751A.txt
hpc-intro-data/north-pacific-gyre/NENE01729A.txt
hpc-intro-data/north-pacific-gyre/NENE02040Z.txt
This shows a folder containing another folder, which contains a bunch of files.
If you’ve taken The Carpentries’ Shell lesson recently, these might look
familiar. Let’s see about that compression, using du
for “disk
usage”.
[yourUsername@cirrus-login1 ~]$ du -sh hpc-intro-data.tar.gz
36K hpc-intro-data.tar.gz
Files Occupy at Least One “Block”
If the filesystem block size is larger than 36 KB, you’ll see a larger number: files cannot be smaller than one block.
Now let’s unpack the archive. We’ll run tar
with a few common flags:
-x
to extract the archive-v
for verbose output-z
for gzip compression-f
for the file to be unpacked
When it’s done, check the directory size with du
and compare.
Extract the Archive
Using the four flags above, unpack the lesson data using
tar
. Then, check the size of the whole unpacked directory usingdu
.Hint:
tar
lets you concatenate flags.Commands
[yourUsername@cirrus-login1 ~]$ tar -xvzf hpc-intro-data.tar.gz
hpc-intro-data/ hpc-intro-data/north-pacific-gyre/ hpc-intro-data/north-pacific-gyre/NENE01971Z.txt hpc-intro-data/north-pacific-gyre/goostats hpc-intro-data/north-pacific-gyre/goodiff hpc-intro-data/north-pacific-gyre/NENE02040B.txt hpc-intro-data/north-pacific-gyre/NENE01978B.txt hpc-intro-data/north-pacific-gyre/NENE02043B.txt hpc-intro-data/north-pacific-gyre/NENE02018B.txt hpc-intro-data/north-pacific-gyre/NENE01843A.txt hpc-intro-data/north-pacific-gyre/NENE01978A.txt hpc-intro-data/north-pacific-gyre/NENE01751B.txt hpc-intro-data/north-pacific-gyre/NENE01736A.txt hpc-intro-data/north-pacific-gyre/NENE01812A.txt hpc-intro-data/north-pacific-gyre/NENE02043A.txt hpc-intro-data/north-pacific-gyre/NENE01729B.txt hpc-intro-data/north-pacific-gyre/NENE02040A.txt hpc-intro-data/north-pacific-gyre/NENE01843B.txt hpc-intro-data/north-pacific-gyre/NENE01751A.txt hpc-intro-data/north-pacific-gyre/NENE01729A.txt hpc-intro-data/north-pacific-gyre/NENE02040Z.txt
Note that we did not type out
-x -v -z -f
, thanks to the flag concatenation, though the command works identically either way.[yourUsername@cirrus-login1 ~]$ du -sh hpc-intro-data 77K hpc-intro-data
Was the Data Compressed?
Text files compress nicely: the “tarball” is one-quarter the total size of the raw data!
If you want to reverse the process – compressing raw data instead of
extracting it – set a c
flag instead of x
, set the archive filename,
then provide a directory to compress:
[user@laptop ~]$ tar -cvzf compressed_data.tar.gz hpc-intro-data
Working with Windows
When you transfer text files to from a Windows system to a Unix system (Mac, Linux, BSD, Solaris, etc.) this can cause problems. Windows encodes its files slightly different than Unix, and adds an extra character to every line.
On a Unix system, every line in a file ends with a
\n
(newline). On Windows, every line in a file ends with a\r\n
(carriage return + newline). This causes problems sometimes.Though most modern programming languages and software handles this correctly, in some rare instances, you may run into an issue. The solution is to convert a file from Windows to Unix encoding with the
dos2unix
command.You can identify if a file has Windows line endings with
cat -A filename
. A file with Windows line endings will have^M$
at the end of every line. A file with Unix line endings will have$
at the end of a line.To convert the file, just run
dos2unix filename
. (Conversely, to convert back to Windows format, you can rununix2dos filename
.)
Data Limits and File Systems
Note that file systems and storage quotas will differ between HPC platforms.
On Cirrus, every project has an allocation on the work file system and your project’s space can always be accessed via the path
/work/[project-code]
. The work file system is approximately 400 TB in size and is implemented using the Lustre parallel file system technology. There are currently no backups of any data on the work file system. Ideally, the work file system should only contain data that is actively in use, recently generated and in the process of being saved elsewhere or being made ready for up-coming work. This file system is visible from the login and compute nodes.Make sure that important data is always backed up elsewhere and that your work would not be significantly impacted if the data on the work file system was lost.
Every project has an allocation on the home file system and your project’s space can always be accessed via the path
/home/[project-code]
. The home file system is approximately 1.5 PB in size and is implemented using the Ceph technology. This means that this storage is not particularly high performance but are well suited to standard operations like compilation and file editing. This file system is visible from the Cirrus login nodes but not compute nodes.There are currently no backups of any data on the home file system.
More information on using the solid state storage on Cirrus can be found in the Solid state storage section of the user guide.
Key Points
wget
andcurl -O
download a file from the internet.
scp
andrsync
transfer files to and from your computer.You can use an SFTP client like FileZilla to transfer files through a GUI.
Running a parallel job
Overview
Teaching: 30 min
Exercises: 60 minQuestions
How do we execute a task in parallel?
What benefits arise from parallel execution?
What are the limits of gains from execution in parallel?
Objectives
Construct a program that can execute in parallel.
Prepare a job submission script for the parallel executable.
Launch jobs with parallel execution.
Record and summarize the timing and accuracy of jobs.
Describe the relationship between job parallelism and performance.
We now have the tools we need to run a multi-processor job. This is a very important aspect of HPC systems, as parallelism is one of the primary tools we have to improve the performance of computational tasks.
Our example implements a stochastic algorithm for estimating the value of π, the ratio of the circumference to the diameter of a circle. The program generates a large number of random points on a 1×1 square centered on (½,½), and checks how many of these points fall inside the unit circle. On average, π/4 of the randomly-selected points should fall in the circle, so π can be estimated from 4f, where f is the observed fraction of points that fall in the circle. Because each sample is independent, this algorithm is easily implemented in parallel.
Get code for this episode
The Python code you will use in this episode has been pre-written and you can obtain a copy in three ways:
- Under the Extras tab at the top of this page there is a link to the code. Create two new files in your working directory on Cirrus and copy the contents into them.
- Use the commands
curl
orwget
from the previous episode to download the files directly into your working directory on Cirrus and extract the archive. Remember you will need to specify the path to these Python files in your job submission scripts. It may be useful tocd
into this directory ormv
the contents directly to the path/work/tc036/tc036/yourUsername
.
[user@laptop ~]$ curl -O https://nclrse-training.github.io/hpc-intro-cirrus/files/python-pi-code.tar.gz
[user@laptop ~]$ tar -xvzf python-pi-code.tar.gz
or
[user@laptop ~]$ wget https://nclrse-training.github.io/hpc-intro-cirrus/files/python-pi-code.tar.gz
[user@laptop ~]$ tar -xvzf python-pi-code.tar.gz
- You can also create a local copy of the files on your machine and then use
scp
orrsync
to copy the file onto Cirrus.
[user@laptop ~]$ scp pi.py yourUsername@login.cirrus.ac.uk:/work/tc036/tc036/yourUsername
[user@laptop ~]$ scp pi-mpi-cirrus.py yourUsername@login.cirrus.ac.uk:/work/tc036/tc036/yourUsername
A Serial Solution to the Problem
We start from a Python script using concepts taught in Software Carpentry’s Programming with Python workshops. We want to allow the user to specify how many random points should be used to calculate π through a command-line parameter. This script will only use a single CPU for its entire run, so it’s classified as a serial process.
Let’s write a Python program, pi.py
, to estimate π for us.
Start by importing the numpy
module for calculating the results,
and the sys
module to process command-line parameters:
import numpy as np
import sys
We define a Python function inside_circle
that accepts a single parameter
for the number of random points used to calculate π.
See Programming with Python: Creating Functions
for a review of Python functions.
It randomly samples points with both x and y on the half-open interval
[0, 1).
It then computes their distances from the origin (i.e., radii), and returns
how many of those distances were less than or equal to 1.0.
All of this is done using vectors of double-precision (64-bit)
floating-point values.
def inside_circle(total_count):
x = np.random.uniform(size=total_count)
y = np.random.uniform(size=total_count)
radii = np.sqrt(x * x + y * y)
count = len(radii[np.where(radii<=1.0)])
return count
Next, we create a main function to call the inside_circle
function and
calculate π from its returned result.
See Programming with Python: Command-Line Programs
for a review of main
functions and parsing command-line parameters.
def main():
n_samples = int(sys.argv[1])
counts = inside_circle(n_samples)
my_pi = 4.0 * counts / n_samples
print(my_pi)
if __name__ == '__main__':
main()
If we run the Python script locally with a command-line parameter, as in
python pi.py 1024
, we should see the script print its estimate of
π:
[user@laptop ~]$ python pi.py 1024
3.04296875
Random Number Generation
In the preceding code, random numbers are conveniently generated using the built-in capabilities of NumPy. In general, random-number generation is difficult to do well, it’s easy to accidentally introduce correlations into the generated sequence.
- Discuss why generating high quality random numbers might be difficult.
- Is the quality of random numbers generated sufficient for estimating π in this implementation?
Solution
- Computers are deterministic and produce pseudo random numbers using an algorithm. The choice of algorithm and its parameters determines how random the generated numbers are. Pseudo random number generation algorithms usually produce a sequence numbers taking the previous output as an input for generating the next number. At some point the sequence of pseudo random numbers will repeat, so care is required to make sure the repetition period is long and that the generated numbers have statistical properties similar to those of true random numbers.
- Yes.
Measuring Performance of the Serial Solution
The stochastic method used to estimate π should converge on the true
value as the number of random points increases.
But as the number of points increases, creating the variables x
, y
, and
radii
requires more time and more memory.
Eventually, the memory required may exceed what’s available on our local
laptop or desktop, or the time required may be too long to meet a deadline.
So we’d like to take some measurements of how much memory and time the script
requires, and later take the same measurements after creating a parallel
version of the script to see the benefits of parallelizing the calculations
required.
Estimating Memory Requirements
Since the largest variables in the script are x
, y
, and radii
, each
containing n_samples
points, we’ll modify the script to report their
total memory required.
Each point in x
, y
, or radii
is stored as a NumPy float64
, we can
use NumPy’s dtype
function to calculate the size of a float64
.
Replace the print(my_pi)
line with the following:
size_of_float = np.dtype(np.float64).itemsize
memory_required = 3 * n_samples * size_of_float / (1024**3)
print(f"Pi: {my_pi}, memory: {memory_required} GiB")
The first line calculates the bytes of memory required for a single
64-bit floating point number using the dtype
function.
The second line estimates the total amount of memory required to store three
variables containing n_samples
float64
values, converting the value into
units of gibibytes.
The third line prints both the estimate of π and the estimated amount of
memory used by the script.
The updated Python script is:
import numpy as np
import sys
def inside_circle(total_count):
x = np.random.uniform(size=total_count)
y = np.random.uniform(size=total_count)
radii = np.sqrt(x * x + y * y)
count = len(radii[np.where(radii<=1.0)])
return count
def main():
n_samples = int(sys.argv[1])
counts = inside_circle(n_samples)
my_pi = 4.0 * counts / n_samples
size_of_float = np.dtype(np.float64).itemsize
memory_required = 3 * n_samples * size_of_float / (1024**3)
print(f"Pi: {my_pi}, memory: {memory_required} GiB")
if __name__ == '__main__':
main()
Run the script again with a few different values for the number of samples, and see how the memory required changes:
[user@laptop ~]$ python pi.py 1000
Pi: 3.144, memory: 2.2351741790771484e-05 GiB
[user@laptop ~]$ python pi.py 2000
Pi: 3.18, memory: 4.470348358154297e-05 GiB
[user@laptop ~]$ python pi.py 1000000
Pi: 3.140944, memory: 0.022351741790771484 GiB
[user@laptop ~]$ python pi.py 100000000
Pi: 3.14182724, memory: 2.2351741790771484 GiB
Here we can see that the estimated amount of memory required scales linearly
with the number of samples used.
In practice, there is some memory required for other parts of the script,
but the x
, y
, and radii
variables are by far the largest influence
on the total amount of memory required.
Estimating Calculation Time
Most of the calculations required to estimate π are in the
inside_circle
function:
- Generating
n_samples
random values forx
andy
. - Calculating
n_samples
values ofradii
fromx
andy
. - Counting how many values in
radii
are under 1.0.
There’s also one multiplication operation and one division operation required
to convert the counts
value to the final estimate of π in the main
function.
A simple way to measure the calculation time is to use Python’s datetime
module to store the computer’s current date and time before and after the
calculations, and calculate the difference between those times.
To add the time measurement to the script, add the following line below the
import sys
line:
import datetime
Then, add the following line immediately above the line calculating counts
:
start_time = datetime.datetime.now()
Add the following two lines immediately below the line calculating counts
:
end_time = datetime.datetime.now()
elapsed_time = (end_time - start_time).total_seconds()
And finally, modify the print
statement with the following:
print(f"Pi: {my_pi}, memory: {memory_required} GiB, time: {elapsed_time} s")
The final Python script for the serial solution is:
import numpy as np
import sys
import datetime
def inside_circle(total_count):
x = np.random.uniform(size=total_count)
y = np.random.uniform(size=total_count)
radii = np.sqrt(x * x + y * y)
count = len(radii[np.where(radii<=1.0)])
return count
def main():
n_samples = int(sys.argv[1])
start_time = datetime.datetime.now()
counts = inside_circle(n_samples)
my_pi = 4.0 * counts / n_samples
end_time = datetime.datetime.now()
elapsed_time = (end_time - start_time).total_seconds()
size_of_float = np.dtype(np.float64).itemsize
memory_required = 3 * n_samples * size_of_float / (1024**3)
print(f"Pi: {my_pi}, memory: {memory_required} GiB, time: {elapsed_time} s")
if __name__ == '__main__':
main()
Run the script again with a few different values for the number of samples, and see how the solution time changes:
[user@laptop ~]$ python pi.py 1000000
Pi: 3.139612, memory: 0.022351741790771484 GiB, time: 0.034872 s
[user@laptop ~]$ python pi.py 10000000
Pi: 3.1425492, memory: 0.22351741790771484 GiB, time: 0.351212 s
[user@laptop ~]$ python pi.py 100000000
Pi: 3.14146608, memory: 2.2351741790771484 GiB, time: 3.735195 s
Here we can see that the amount of time required scales approximately linearly with the number of samples used. There could be some variation in additional runs of the script with the same number of samples, since the elapsed time is affected by other programs running on the computer at the same time. But if the script is the most computationally-intensive process running at the time, its calculations are the largest influence on the elapsed time.
Now that we’ve developed our initial script to estimate π, we can see that as we increase the number of samples:
- The estimate of π tends to become more accurate.
- The amount of memory required scales approximately linearly.
- The amount of time to calculate scales approximately linearly.
In general, achieving a better estimate of π requires a greater number of
points.
Take a closer look at inside_circle
: should we expect to get high accuracy
on a single machine?
Probably not. The function allocates three arrays of size N equal to the number of points belonging to this process. Using 64-bit floating point numbers, the memory footprint of these arrays can get quite large. Each 100,000,000 points sampled consumes 2.24 GiB of memory. Sampling 400,000,000 points consumes 8.94 GiB of memory, and if your machine has less RAM than that, it will grind to a halt. If you have 16 GiB installed, you won’t quite make it to 750,000,000 points.
Running the Serial Job on a Compute Node
Replicate the pi.py
script in the /work/tc036/tc036/yourUsername
space on Cirrus. Guidance on how to do this can be found at the beginning of this episode.
Create a submission file, requesting one task on a single node. If we do not specify a maximum walltime for the job using --time=<hh:mm:ss>
then (on Cirrus) the job will be submitted with the short
default maximum time of 20 minutes. To avoid a warning message we will allocate a very generous 1 minute.
[yourUsername@cirrus-login1 ~]$ nano serial-pi.sh
[yourUsername@cirrus-login1 ~]$ cat serial-pi.sh
#!/bin/bash
#SBATCH --partition=standard
#SBATCH --qos=standard
#SBATCH --job-name serial-pi
#SBATCH --nodes=1
#SBATCH --tasks-per-node=1
#SBATCH --time=00:01
# Load the correct Python module
module load python/3.9.13
# Execute the task
python3 pi.py 100000000
Memory Requirements
On some HPC systems you may need to specify the memory requirements of the job using the
--mem
,--mem-per-cpu
,--mem-per-gpu
options. However, on Cirrus you cannot specify the memory for a job. The amount of memory you are assigned is calculated from the amount of primary resource you request. The primary resource you request on standard compute nodes are CPU cores. The maximum amount of memory you are allocated is computed as the number of CPU cores you requested multiplied by 1/36th of the total memory available (as there are 36 CPU cores per node). So, if you request the full node (36 cores), then you will be allocated a maximum of all of the memory (256 GB) available on the node; however, if you request 1 core, then you will be assigned a maximum of 256/36 = 7.1 GB of the memory available on the node.
Then submit your job.
[yourUsername@cirrus-login1 ~]$ sbatch serial-pi.sh
As before, use the status commands to check when your job runs.
Use ls
to locate the output file, and examine it. Is it what you expected?
- How good is the value for π?
- How much memory did it need?
- How long did the job take to run?
Modify the job script to increase both the number of samples (perhaps by a factor of 2, then by a factor of 10), and resubmit the job each time.
- How good is the value for π?
- How much memory did it need?
- Did you encounter any errors?
Even with sufficient memory for necessary variables, a script could require enormous amounts of time to calculate on a single CPU. To reduce the amount of time required, we need to modify the script to use multiple CPUs for the calculations. In the largest problem scales, we could use multiple CPUs in multiple compute nodes, distributing the memory requirements across all the nodes used to calculate the solution.
Running the Parallel Job
We will run an example that uses the Message Passing Interface (MPI) for parallelism – this is a common tool on HPC systems.
What is MPI?
The Message Passing Interface is a set of tools which allow multiple parallel jobs to communicate with each other. Typically, a single executable is run multiple times, possibly on different machines, and the MPI tools are used to inform each instance of the executable about how many instances there are, which instance it is. MPI also provides tools to allow communication and coordination between instances. An MPI instance typically has its own copy of all the local variables.
While MPI jobs can generally be run as stand-alone executables, in order for
them to run in parallel they must use an MPI run-time system, which is a
specific implementation of the MPI standard.
To do this, they should be started via a command such as mpiexec
(or
mpirun
, or srun
, etc. depending on the MPI run-time you need to use),
which will ensure that the appropriate run-time support for parallelism is
included.
MPI Runtime Arguments
On their own, commands such as
mpiexec
can take many arguments specifying how many machines will participate in the execution, and you might need these if you would like to run an MPI program on your laptop (for example). In the context of a queuing system, however, it is frequently the case that we do not need to specify this information as the MPI run-time will have been configured to obtain it from the queuing system, by examining the environment variables set when the job is launched.
What Changes Are Needed for an MPI Version of the π Calculator?
On Cirrus we need to first import
mpi4py.rc
and setmpi4py.rc.initialize = False
before we can import the standardmpi4py
Python module. This may not be necessary on other HPC systems and you should consult the documentation if you experience problems setting up MPI.Next, we need to import the
MPI
object from the Python modulempi4py
by adding anfrom mpi4py import MPI
line immediately below theimport datetime
line.Second, we need to modify the “main” function to perform the overhead and accounting work required to:
- subdivide the total number of points to be sampled,
- partition the total workload among the various parallel processors available,
- have each parallel process report the results of its workload back to the “rank 0” process, which does the final calculations and prints out the result.
The modifications to the serial script demonstrate four important concepts:
- COMM_WORLD: the default MPI Communicator, providing a channel for all the processes involved in this mpiexec to exchange information with one another.
- Scatter: a collective operation in which an array of data on one MPI rank is divided up, with separate portions being sent out to the partner ranks. Each partner rank receives data from the matching index of the host array.
- Gather: the inverse of scatter. One rank populates a local array, with the array element at each index assigned the value provided by the corresponding partner rank – including the host’s own value.
- Conditional Output: since every rank is running the same code, the partitioning, the final calculations, and the print statement are wrapped in a conditional block so that only rank 0 runs them.
We add the lines:
comm = MPI.COMM_WORLD
cpus = comm.Get_size()
rank = comm.Get_rank()
immediately before the n_samples
line to set up the MPI environment for
each process.
We replace the start_time
and counts
lines with the lines:
if rank == 0:
    start_time = datetime.datetime.now()
    partitions = [ int(n_samples / cpus) ] * cpus
    counts = [ int(0) ] * cpus
else:
    partitions = None
    counts = None
This ensures that only the rank 0 process measures times and coordinates
the work to be distributed to all the ranks, while the other ranks
get placeholder values for the partitions
and counts
variables.
Immediately below these lines, let’s
- distribute the work among the ranks with MPI scatter,
- call the inside_circle function so each rank can perform its share of the work,
- collect each rank’s results into a counts variable on rank 0 using MPI gather,
by adding the following three lines:
partition_item = comm.scatter(partitions, root=0)
count_item = inside_circle(partition_item)
counts = comm.gather(count_item, root=0)
Illustrations of these steps are shown below.
Setup the MPI environment and initialize local variables – including the vector containing the number of points to generate on each parallel processor:
Distribute the number of points from the originating vector to all the parallel processors:
Perform the computation in parallel:
Retrieve counts from all the parallel processes:
Print out the report:
Finally, we’ll ensure the my_pi
through print
lines only run on rank 0.
Otherwise, every parallel processor will print its local value,
and the report will become hopelessly garbled:
if rank == 0:
    my_pi = 4.0 * sum(counts) / sum(partitions)
    end_time = datetime.datetime.now()
    elapsed_time = (end_time - start_time).total_seconds()
    size_of_float = np.dtype(np.float64).itemsize
    memory_required = 3 * sum(partitions) * size_of_float / (1024**3)
    pi_specific = np.pi
    accuracy = 100*(1-my_pi/pi_specific)
    print(f"Pi: {my_pi:6f}, memory: {memory_required:6f} GiB, time: {elapsed_time:6f} s, error: {accuracy:6f}%")
A fully commented version of the final MPI parallel python code is available: pi-mpi-cirrus.py.
Our purpose here is to exercise the parallel workflow of the cluster, not to optimize the program to minimize its memory footprint. Rather than push our local machines (or, worse, the login node) to the breaking point, let’s hand the job to a compute node with more resources.
Create a submission file, requesting more than one task on a single node:
[yourUsername@cirrus-login1 ~]$ nano parallel-pi.sh
[yourUsername@cirrus-login1 ~]$ cat parallel-pi.sh
#!/bin/bash
#SBATCH --partition=standard
#SBATCH --qos=standard
#SBATCH --job-name parallel-pi
#SBATCH --nodes=1
#SBATCH --tasks-per-node=4
#SBATCH --time=00:01
# Load the correct Python module
module load python/3.9.13
# Execute the task
srun python pi-mpi-cirrus.py 100000000
Then submit your job.
[yourUsername@cirrus-login1 ~]$ sbatch parallel-pi.sh
As before, use the status commands to check when your job runs.
Use ls
to locate the output file, and examine it.
Is it what you expected?
- How good is the value for π?
- How much memory did it need?
- How much faster was this run than the serial run with 100000000 points?
Modify the job script to increase the number of samples (perhaps by a factor of 2, then by a factor of 10), and resubmit the job each time. You can also increase the number of CPUs.
- How good is the value for π?
- How much memory did it need?
- How long did the job take to run?
How Much Does MPI Improve Performance?
In theory, by dividing up the π calculations among n MPI processes, we should see run times reduce by a factor of n. In practice, some time is required to start the additional MPI processes, for the MPI processes to communicate and coordinate, and some types of calculations may only be able to run effectively on a single CPU.
Additionally, if the MPI processes operate on different physical CPUs in the computer, or across multiple compute nodes, additional time is required for communication compared to all processes operating on a single CPU.
Amdahl’s Law is one way of predicting improvements in execution time for a fixed parallel workload. If a workload needs 20 hours to complete on a single core, and one hour of that time is spent on tasks that cannot be parallelized, only the remaining 19 hours could be parallelized. Even if an infinite number of cores were used for the parallel parts of the workload, the total run time cannot be less than one hour.
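The 20-hour example above can be checked against Amdahl’s formula, S(n) = 1 / ((1 − p) + p/n), where p is the fraction of the work that can be parallelized and n is the number of cores:

```python
def amdahl_speedup(p, n):
    """Amdahl's Law: predicted speedup with parallel fraction p on n cores."""
    return 1.0 / ((1.0 - p) + p / n)

# The example above: 20 hours total, 19 of which parallelize, so p = 19/20.
p = 19 / 20
print(amdahl_speedup(p, 4))      # modest speedup on 4 cores
print(amdahl_speedup(p, 1000))   # approaches, but never reaches, 1/(1-p)
# Even with infinitely many cores, the speedup is capped at 1/(1-p) = 20x,
# so the 20-hour run time can never drop below the 1 serial hour.
```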
In practice, it’s common to evaluate the parallelism of an MPI program by
- running the program across a range of CPU counts,
- recording the execution time on each run,
- comparing each execution time to the time when using a single CPU.
The speedup factor S is calculated as the single-CPU execution time divided by the multi-CPU execution time. For a laptop with 8 cores, the graph of speedup factor versus number of cores used shows relatively consistent improvement when using 2, 4, or 8 cores, but using additional cores shows a diminishing return.
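The speedup calculation above can be sketched as follows; the timings are made-up illustrative values, not real benchmarks:

```python
# Compute speedup (and parallel efficiency) from measured run times.
# The timings below are illustrative made-up numbers, not measurements.
timings = {1: 120.0, 2: 63.0, 4: 34.0, 8: 21.0}  # cores -> seconds

t1 = timings[1]                     # single-CPU baseline
for cores, t in sorted(timings.items()):
    speedup = t1 / t                # S = single-CPU time / multi-CPU time
    efficiency = speedup / cores    # 1.0 would be perfect linear scaling
    print(f"{cores:2d} cores: speedup {speedup:5.2f}, efficiency {efficiency:4.2f}")
```

A falling efficiency column is the "diminishing return" described above.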
For a set of HPC nodes containing 28 cores each, the graph of speedup factor versus number of cores shows consistent improvements up through three nodes and 84 cores, but worse performance when adding a fourth node with an additional 28 cores. This is because the communication and coordination required among the MPI processes takes more time than is gained by reducing each process’s share of the work. This communication overhead is not included in Amdahl’s Law.
In practice, MPI speedup factors are influenced by:
- CPU design,
- the communication network between compute nodes,
- the MPI library implementations, and
- the details of the MPI program itself.
In an HPC environment, we try to reduce the execution time for all types of jobs, and MPI is an extremely common way to combine dozens, hundreds, or thousands of CPUs into solving a single problem. To learn more about parallelization, see the parallel novice lesson.
Key Points
Parallel programming allows applications to take advantage of parallel hardware; serial code will not ‘just work.’
Distributed memory parallelism is a common case, using the Message Passing Interface (MPI).
The queuing system facilitates executing parallel tasks.
Performance improvements from parallel execution do not scale linearly.
Using resources effectively
Overview
Teaching: 10 min
Exercises: 20 minQuestions
How can I review past jobs?
How can I use this knowledge to create a more accurate submission script?
Objectives
Look up job statistics.
Make more accurate resource requests in job scripts based on data describing past performance.
We’ve touched on all the skills you need to interact with an HPC cluster: logging in over SSH, loading software modules, submitting parallel jobs, and finding the output. Let’s learn about estimating resource usage and why it might matter.
Estimating Required Resources Using the Scheduler
Although we covered requesting resources from the scheduler earlier with the π code, how do we know what type of resources the software will need in the first place, and its demand for each? In general, unless the software documentation or user testimonials provide some idea, we won’t know how much memory or compute time a program will need.
Read the Documentation
Most HPC facilities maintain documentation as a wiki, a website, or a document sent along when you register for an account. Take a look at these resources, and search for the software you plan to use: somebody might have written up guidance for getting the most out of it.
A convenient way of figuring out the resources required for a job to run
successfully is to submit a test job, and then ask the scheduler about its
impact using sacct -u yourUsername
. You can use this knowledge to set up the
next job with a closer estimate of its load on the system. A good general rule
is to ask the scheduler for 20% to 30% more time and memory than you expect the
job to need. This ensures that minor fluctuations in run time or memory use
will not result in your job being cancelled by the scheduler. Keep in mind that
if you ask for too much, your job may not run even though enough resources are
available, because the scheduler will be waiting for other people’s jobs to
finish and free up the resources needed to match what you asked for.
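As a sketch of the 20% to 30% rule, a small helper (our own invention, not a Slurm tool) can pad a measured run time and format it in the HH:MM:SS form that Slurm's --time option accepts:

```python
import math

def padded_walltime(measured_seconds, pad=0.25):
    """Pad a measured run time by `pad` (default 25%) and format it as
    an HH:MM:SS string suitable for Slurm's --time option.
    This helper is illustrative, not part of Slurm itself."""
    total = math.ceil(measured_seconds * (1 + pad))
    h, rem = divmod(total, 3600)
    m, s = divmod(rem, 60)
    return f"{h:02d}:{m:02d}:{s:02d}"

# A job measured at 8 seconds, padded by the default 25%:
print(padded_walltime(8))        # a 10-second request
# A one-hour job padded by 25%:
print(padded_walltime(3600))     # a 75-minute request
```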
Stats
Since we already submitted pi.py
to run on the cluster, we can query the
scheduler to see how long our job took and what resources were used. We will
use sacct -u yourUsername
to get statistics about parallel-pi.sh
.
[yourUsername@cirrus-login1 ~]$ sacct -u yourUsername
JobID JobName Partition Account AllocCPUS State ExitCode
------------ ---------- ---------- ---------- ---------- ---------- --------
3938249 serial-pi standard tc036 1 COMPLETED 0:0
3938249.bat+ batch tc036 1 COMPLETED 0:0
3938249.ext+ extern tc036 1 COMPLETED 0:0
3938265 serial-pi standard tc036 1 COMPLETED 0:0
3938265.bat+ batch tc036 1 COMPLETED 0:0
3938265.ext+ extern tc036 1 COMPLETED 0:0
3938266 serial-pi standard tc036 1 OUT_OF_ME+ 0:125
3938266.bat+ batch tc036 1 OUT_OF_ME+ 0:125
3938266.ext+ extern tc036 1 OUT_OF_ME+ 0:125
3939324 parallel-+ standard tc036 4 COMPLETED 0:0
3939324.bat+ batch tc036 4 COMPLETED 0:0
3939324.ext+ extern tc036 4 COMPLETED 0:0
3939324.0 python tc036 4 COMPLETED 0:0
This shows all the jobs we ran recently (note that there are multiple entries per job). To get info about a specific job, we change the command slightly.
[yourUsername@cirrus-login1 ~]$ sacct -u yourUsername -l -j 3939324
It will show a lot of info; in fact, every single piece of info collected on
your job by the scheduler will show up here. It may be useful to specify the
information we want using the -o
or --format
option. Use the command
sacct --helpformat
to get a list of output options.
[yourUsername@cirrus-login1 ~]$ sacct -u yourUsername -j 3939324 -o 'JobID, AllocCPUS,State,ExitCode,Elapsed,ReqMem'
JobID AllocCPUS State ExitCode Elapsed ReqMem
------------ ---------- ---------- -------- ---------- ----------
3939324 4 COMPLETED 0:0 00:00:08 28600M
3939324.bat+ 4 COMPLETED 0:0 00:00:08
3939324.ext+ 4 COMPLETED 0:0 00:00:08
3939324.0 4 COMPLETED 0:0 00:00:06
Discussion
This view can help compare the amount of time requested and actually used, duration of residence in the queue before launching, and memory footprint on the compute node(s).
How accurate were our estimates?
Improving Resource Requests
Using the job history, we can give better time estimates for our jobs. When we overestimate the time needed to complete a job, it makes it harder for the queuing system to estimate accurately when resources will become free for other jobs. In practice, this means that the queuing system waits to dispatch our job until the full requested time slot opens, instead of “sneaking it into” a much shorter window where the job could actually finish. Specifying the expected runtime in the submission script more accurately will help alleviate cluster congestion and may get your job dispatched earlier.
Key Points
Accurate job scripts help the queuing system efficiently allocate shared resources.
Using shared resources responsibly
Overview
Teaching: 15 min
Exercises: 5 minQuestions
How can I be a responsible user?
How can I protect my data?
How can I best get large amounts of data off an HPC system?
Objectives
Describe how the actions of a single user can affect the experience of others on a shared system.
Discuss the behaviour of a considerate shared system citizen.
Explain the importance of backing up critical data.
Describe the challenges with transferring large amounts of data off HPC systems.
Convert many files to a single archive file using tar.
One of the major differences between using remote HPC resources and your own system (e.g. your laptop) is that remote resources are shared. How many users the resource is shared between at any one time varies from system to system, but it is unlikely you will ever be the only user logged into or using such a system.
The widespread usage of scheduling systems where users submit jobs on HPC resources is a natural outcome of the shared nature of these resources. There are other things you, as an upstanding member of the community, need to consider.
Be Kind to the Login Nodes
The login node is often busy managing all of the logged-in users, who are creating and editing files and compiling software. If the machine runs out of memory or processing capacity, it will become very slow and unusable for everyone. While the machine is meant to be used, be sure to do so responsibly – in ways that will not adversely impact other users’ experience.
Login nodes are always the right place to launch jobs. Cluster policies vary, but they may also be used for proving out workflows, and in some cases, may host advanced cluster-specific debugging or development tools. The cluster may have modules that need to be loaded, possibly in a certain order, and paths or library versions that differ from your laptop, and doing an interactive test run on the head node is a quick and reliable way to discover and fix these issues.
Login Nodes Are a Shared Resource
Remember, the login node is shared with all other users and your actions could cause issues for other people. Think carefully about the potential implications of issuing commands that may use large amounts of resource.
Unsure? Ask your friendly systems administrator (“sysadmin”) if the thing you’re contemplating is suitable for the login node, or if there’s another mechanism to get it done safely.
You can always use the commands top
and ps ux
to list the processes that
are running on the login node along with the amount of CPU and memory they are
using. If this check reveals that the login node is somewhat idle, you can
safely use it for your non-routine processing task. If something goes wrong
– the process takes too long, or doesn’t respond – you can use the
kill
command along with the PID to terminate the process.
Login Node Etiquette
Which of these commands would be a routine task to run on the login node?
python physics_sim.py
make
create_directories.sh
molecular_dynamics_2
tar -xzf R-3.3.0.tar.gz
Solution
Building software, creating directories, and unpacking software are common and acceptable tasks for the login node: options #2 (make), #3 (create_directories.sh), and #5 (tar) are probably OK. Note that script names do not always reflect their contents: before launching #3, please less create_directories.sh and make sure it’s not a Trojan horse.
Running resource-intensive applications is frowned upon. Unless you are sure it will not affect other users, do not run jobs like #1 (python) or #4 (custom MD code). If you’re unsure, ask your friendly sysadmin for advice.
If you experience performance issues with a login node you should report it to the system staff (usually via the helpdesk) for them to investigate.
Test Before Scaling
Remember that you are generally charged for usage on shared systems. A simple mistake in a job script can end up costing a large amount of resource budget. Imagine a job script with a mistake that makes it sit doing nothing for 24 hours on 1000 cores or one where you have requested 2000 cores by mistake and only use 100 of them! This problem can be compounded when people write scripts that automate job submission (for example, when running the same calculation or analysis over lots of different parameters or files). When this happens it hurts both you (as you waste lots of charged resource) and other users (who are blocked from accessing the idle compute nodes). On very busy resources you may wait many days in a queue for your job to fail within 10 seconds of starting due to a trivial typo in the job script. This is extremely frustrating!
Most systems provide dedicated resources for testing that have short wait times to help you avoid this issue.
Test Job Submission Scripts That Use Large Amounts of Resources
Before submitting a large run of jobs, submit one as a test first to make sure everything works as expected.
Before submitting a very large or very long job submit a short truncated test to ensure that the job starts as expected.
Have a Backup Plan
Although many HPC systems keep backups, it does not always cover all the file systems available and may only be for disaster recovery purposes (i.e. for restoring the whole file system if lost rather than an individual file or directory you have deleted by mistake). Protecting critical data from corruption or deletion is primarily your responsibility: keep your own backup copies.
Version control systems (such as Git) often have free, cloud-based offerings (e.g., GitHub and GitLab) that are generally used for storing source code. Even if you are not writing your own programs, these can be very useful for storing job scripts, analysis scripts and small input files.
If you are building software, you may have a large amount of source code that you compile to build your executable. Since this data can generally be recovered by re-downloading the code, or re-running the checkout operation from the source code repository, this data is also less critical to protect.
For larger amounts of data, especially important results from your runs,
which may be irreplaceable, you should make sure you have a robust system in
place for taking copies of data off the HPC system wherever possible
to backed-up storage. Tools such as rsync
can be very useful for this.
Your access to the shared HPC system will generally be time-limited so you should ensure you have a plan for transferring your data off the system before your access finishes. The time required to transfer large amounts of data should not be underestimated and you should ensure you have planned for this early enough (ideally, before you even start using the system for your research).
In all these cases, the helpdesk of the system you are using should be able to provide useful guidance on your options for data transfer for the volumes of data you will be using.
Your Data Is Your Responsibility
Make sure you understand what the backup policy is on the file systems on the system you are using and what implications this has for your work if you lose your data on the system. Plan your backups of critical data and how you will transfer data off the system throughout the project.
Transferring Data
As mentioned above, many users run into the challenge of transferring large amounts of data off HPC systems at some point (this is more often in transferring data off than onto systems but the advice below applies in either case). Data transfer speed may be limited by many different factors so the best data transfer mechanism to use depends on the type of data being transferred and where the data is going.
The components between your data’s source and destination have varying levels of performance, and in particular, may have different capabilities with respect to bandwidth and latency.
Bandwidth is generally the raw amount of data per unit time a device is capable of transmitting or receiving. It’s a common and generally well-understood metric.
Latency is a bit more subtle. For data transfers, it may be thought of as the amount of time it takes to get data out of storage and into a transmittable form. Latency issues are the reason it’s advisable to execute data transfers by moving a small number of large files, rather than the converse.
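A toy model makes this concrete: if every file pays a fixed latency and metadata overhead on top of the bandwidth-limited transfer time, many small files lose badly to one archive. The overhead and bandwidth figures below are illustrative assumptions, not measurements of any real network:

```python
# Toy model of transfer time: each file pays a fixed per-file overhead
# (latency plus metadata operations), and the data itself moves at a fixed
# bandwidth. Both figures are made-up assumptions for illustration.
def transfer_time(n_files, total_bytes, bandwidth=100e6, per_file_overhead=0.01):
    """Seconds to move n_files totalling total_bytes, at `bandwidth`
    bytes/s with `per_file_overhead` seconds of overhead per file."""
    return n_files * per_file_overhead + total_bytes / bandwidth

one_gib = 1024**3
many_small = transfer_time(10_000, one_gib)  # 10,000 files totalling 1 GiB
one_archive = transfer_time(1, one_gib)      # the same data as one tarball

print(f"10,000 small files: {many_small:.1f} s")
print(f"one archive file:   {one_archive:.1f} s")
```

Under these assumptions the per-file overhead dominates the small-file transfer, which is why packing files into a single archive (see below) pays off.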
Some of the key components and their associated issues are:
- Disk speed: File systems on HPC systems are often highly parallel, consisting of a very large number of high performance disk drives. This allows them to support a very high data bandwidth. Unless the remote system has a similar parallel file system you may find your transfer speed limited by disk performance at that end.
- Meta-data performance: Meta-data operations such as opening and closing files or listing the owner or size of a file are much less parallel than read/write operations. If your data consists of a very large number of small files you may find your transfer speed is limited by meta-data operations. Meta-data operations performed by other users of the system can also interact strongly with those you perform so reducing the number of such operations you use (by combining multiple files into a single file) may reduce variability in your transfer rates and increase transfer speeds.
- Network speed: Data transfer performance can be limited by network speed. More importantly it is limited by the slowest section of the network between source and destination. If you are transferring to your laptop/workstation, this is likely to be its connection (either via LAN or WiFi).
- Firewall speed: Most modern networks are protected by some form of firewall that filters out malicious traffic. This filtering has some overhead and can result in a reduction in data transfer performance. The needs of a general purpose network that hosts email/web-servers and desktop machines are quite different from a research network that needs to support high volume data transfers. If you are trying to transfer data to or from a host on a general purpose network you may find the firewall for that network will limit the transfer rate you can achieve.
As mentioned above, if you have related data that consists of a large number of
small files it is strongly recommended to pack the files into a larger
archive file for long term storage and transfer. A single large file makes
more efficient use of the file system and is easier to move, copy and transfer
because significantly fewer metadata operations are required. Archive files can
be created using tools like tar
and zip
. We have already met tar
when we
talked about data transfer earlier.
Consider the Best Way to Transfer Data
If you are transferring large amounts of data you will need to think about what may affect your transfer performance. It is always useful to run some tests that you can use to extrapolate how long it will take to transfer your data.
Say you have a “data” folder containing 10,000 or so files, a healthy mix of small and large ASCII and binary data. Which of the following would be the best way to transfer them to Cirrus?
[user@laptop ~]$ scp -r data yourUsername@login.cirrus.ac.uk:~/
[user@laptop ~]$ rsync -ra data yourUsername@login.cirrus.ac.uk:~/
[user@laptop ~]$ rsync -raz data yourUsername@login.cirrus.ac.uk:~/
[user@laptop ~]$ tar -cvf data.tar data
[user@laptop ~]$ rsync -raz data.tar yourUsername@login.cirrus.ac.uk:~/
[user@laptop ~]$ tar -cvzf data.tar.gz data
[user@laptop ~]$ rsync -ra data.tar.gz yourUsername@login.cirrus.ac.uk:~/
Solution
- scp will recursively copy the directory. This works, but without compression.
- rsync -ra works like scp -r, but preserves file information like creation times. This is marginally better.
- rsync -raz adds compression, which will save some bandwidth. If you have a strong CPU at both ends of the line, and you’re on a slow network, this is a good choice.
- The fourth command first uses tar to merge everything into a single file, then rsync -z to transfer it with compression. With this large number of files, metadata overhead can hamper your transfer, so this is a good idea.
- The fifth command uses tar -z to compress the archive, then rsync to transfer it. This may perform similarly to the fourth option, but in most cases (for large datasets), it’s the best combination of high throughput and low latency (making the most of your time and network connection).
What to expect on different HPC systems
This course has aimed to give an introduction to HPC and equip you with the general skills to start using these systems. You may find when you return to your institutions that there are differences between Cirrus and the systems that are available to you. For example,
- Hardware & Architecture e.g. CPU vs. GPU
- Queues & Partitions e.g. priority or job type based queuing systems
- File System e.g. differences in where to launch jobs, install software and store data
- Modules e.g. default versions of software, how and what a user can install
- Scheduler e.g. Slurm vs. Torque
- Inline Editor e.g. vim, emacs as default instead of nano
Scheduler
Slurm (Simple Linux Utility for Resource Management) is a very popular scheduler for HPC systems, but there are others that you may encounter, particularly on legacy or smaller HPC systems. Many of the concepts are similar, however. The main differences between schedulers are the commands used to submit and monitor jobs, the syntax used to request resources, and the behaviour of environment variables. Some alternative schedulers include PBS/Torque and MOAB.
For example, submitting a job on Slurm and Torque systems:
[yourUsername@cirrus-login1 ~]$ sbatch <job script>
[yourUsername@other-hpc-login1 ~]$ qsub <job script>
Comparison guides for these systems can easily be found online.
Key Points
Be careful how you use the login node.
Your data on the system is your responsibility.
Plan and test large data transfers.
It is often best to convert many files to a single archive file before transferring.