allocating exclusive and/or non-exclusive access to resources (computer nodes) to users for some duration of time so they can perform work,
providing a framework for starting, executing, and monitoring work, typically a parallel job such as Message Passing Interface (MPI) on a set of allocated nodes, and
arbitrating contention for resources by managing a queue of pending jobs.
Slurm is the workload manager on about 60% of the TOP500 supercomputers.[1]
Slurm began development as a collaborative effort primarily by Lawrence Livermore National Laboratory, SchedMD,[3] Linux NetworX, Hewlett-Packard, and Groupe Bull as a free-software resource manager. It was inspired by the closed-source Quadrics RMS and shares a similar syntax. The name is a reference to the soda in Futurama.[4] Over 100 people around the world have contributed to the project. It has since evolved into a sophisticated batch scheduler capable of satisfying the requirements of many large computer centers.
As of November 2021, the TOP500 list of the most powerful computers in the world indicates that Slurm is the workload manager on more than half of the top ten systems.
Structure
Slurm's design is very modular with about 100 optional plugins. In its simplest configuration, it can be installed and configured in a couple of minutes. More sophisticated configurations provide database integration for accounting, management of resource limits and workload prioritization.
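A minimal configuration can be sketched as a short `slurm.conf` fragment. The cluster, host, and node names below are illustrative placeholders, not a real deployment; consult the `slurm.conf` man page for the full set of parameters.

```shell
# slurm.conf -- minimal illustrative sketch (hypothetical names)
ClusterName=mycluster
SlurmctldHost=headnode          # node running the slurmctld control daemon

# Two hypothetical compute nodes, each exposing 4 CPUs to the scheduler
NodeName=node[1-2] CPUs=4 State=UNKNOWN

# A single default partition (job queue) spanning all nodes
PartitionName=debug Nodes=ALL Default=YES MaxTime=INFINITE State=UP
```

More elaborate configurations layer accounting, resource limits, and prioritization on top of this through additional parameters and the accounting database daemon.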
The following features were announced for version 14.11 of Slurm, released in November 2014:[5]
Improved job array data structure and scalability
Support for heterogeneous generic resources
Support for a burst buffer that accelerates scientific data movement
User options to set the CPU governor
Automatic job requeue policy based on exit value
Reporting of API use by user, type, count and time consumed
Communication gateway nodes to improve scalability
Supported platforms
Slurm is primarily developed to work with Linux distributions, although there is also support for a few other POSIX-based operating systems, including BSDs (FreeBSD, NetBSD and OpenBSD).[6] Slurm also supports several unique computer architectures.
In 2010, the developers of Slurm founded SchedMD, which maintains the canonical source, provides development, level 3 commercial support and training services. Commercial support is also available from Bull, Cray, and Science + Computing (subsidiary of Atos).
Usage
The `slurm` system has three main parts:
a central `slurmctld` (slurm control) daemon running on a single control node (optionally with failover backups);
many computing nodes, each with one or more `slurmd` daemons;
clients that connect to the manager node, often with ssh.
The clients can issue commands to the control daemon, which accepts them and distributes the work across the compute daemons.
For clients, the main commands are `srun` (queue up an interactive job), `sbatch` (queue up a batch job), `squeue` (print the job queue), and `scancel` (remove a job from the queue).
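A typical batch job is a shell script whose `#SBATCH` comment lines carry options for `sbatch`. The script below is a hedged sketch: the job name, resource requests, and the `./my_mpi_program` executable are hypothetical placeholders, and the flags shown are standard `sbatch` options.

```shell
#!/bin/bash
#SBATCH --job-name=example        # name shown by squeue
#SBATCH --nodes=2                 # number of allocated nodes
#SBATCH --ntasks-per-node=4      # MPI ranks per node
#SBATCH --time=00:10:00           # wall-clock time limit
#SBATCH --output=example_%j.out   # stdout/stderr file; %j expands to the job ID

# launch the (hypothetical) MPI program on the allocated nodes
srun ./my_mpi_program
```

The script would be submitted with `sbatch job.sh`, monitored with `squeue -u $USER`, and removed with `scancel` followed by the job ID printed at submission.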
Jobs can be run in batch mode or interactive mode. For interactive mode, a compute node starts a shell, connects the client to it, and runs the job there. The user can then observe and interact with the job while it is running. Interactive jobs are typically used for initial debugging; once debugged, the same job can be submitted with `sbatch`. For a batch-mode job, its `stdout` and `stderr` outputs are typically directed to text files for later inspection.
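An interactive session of the kind described above can be requested with `srun` and its `--pty` option, which attaches a pseudo-terminal to the launched task. The resource amounts below are illustrative; the command only succeeds on a cluster with a running Slurm installation.

```shell
# Request one task on one node for 30 minutes and open a shell on it;
# the prompt that appears runs on the allocated compute node.
srun --nodes=1 --ntasks=1 --time=00:30:00 --pty bash
```

Exiting the shell ends the job and releases the allocation back to the scheduler.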
Pascual, Jose Antonio; Navaridas, Javier; Miguel-Alonso, Jose (2009). "Effects of Topology-Aware Allocation Policies on Scheduling Performance". Job Scheduling Strategies for Parallel Processing. Lecture Notes in Computer Science. Vol. 5798. pp. 138–144. doi:10.1007/978-3-642-04633-9_8. ISBN 978-3-642-04632-2.