Slurm » History » Version 16
Kerstin Paech, 09/19/2013 03:16 PM
How to run jobs on the euclides nodes¶
Use slurm to submit jobs to the euclides nodes (node1-8); ssh login access to those nodes will be restricted in the near future.
Please read through this entire wiki page so everyone can make efficient use of this cluster.
alexandria¶
Please do not use alexandria as a compute node - its hardware is different from that of the compute nodes, and it hosts our file server and other services that are important to us.
You should use alexandria to
- transfer files
- compile your code
- submit jobs to the nodes
If you need to debug, please start an interactive job on one of the nodes using slurm. For instructions see below.
euclides nodes¶
Job submission to the euclides nodes is handled by the slurm jobmanager (see http://slurm.schedmd.com and https://computing.llnl.gov/linux/slurm/).
Important: In order to run jobs, you need to be added to the slurm accounting system - please contact Kerstin.
All slurm commands listed below have very helpful man pages (e.g. man slurm, man squeue, ...).
If you are already familiar with another jobmanager, the following translation table may be helpful to you: http://slurm.schedmd.com/rosetta.pdf.
Scheduling of Jobs¶
At this point there are two queues, called partitions in slurm:
- normal: the default partition your jobs will be sent to if you do not specify otherwise. At this point there is a time limit of two days, and jobs can only run on 1 node.
- debug: meant for debugging; you can only run one job at a time, other jobs submitted will remain in the queue. The time limit is 12 hours.
We have also set up a scheduler that goes beyond first come, first served: some jobs will be favoured over others depending on how much you or your group have used euclides in the past 2 weeks, how long the job has been queued, and how many resources it will consume.
This serves as a starting point; we may have to adjust parameters once the slurm jobmanager is in use. Job scheduling is a complex issue, and we still need to build expertise and gain experience with the user needs in our groups. Please feel free to speak up if there is something that can be improved without creating an unfair disadvantage for other users.
You can run interactive jobs on both partitions.
Running an interactive job with slurm¶
To run an interactive job with slurm in the default partition, use
srun -u --pty bash
If you want to use tcsh use
srun -u --pty tcsh
In case the 'normal' partition is overcrowded, you can use the 'debug' partition:
srun --account cosmo_debug -p debug -u --pty bash # if you are part of the Cosmology group
srun --account euclid_debug -p debug -u --pty bash # if you are part of the EuclidDM group
As soon as a slot is open, slurm will log you in to an interactive session on one of the nodes.
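If you need more than a single core or a specific time limit interactively, srun accepts the same resource options as batch jobs. A minimal sketch (the task count and time limit here are only example values, not site defaults):

```shell
# request 4 tasks and a 2-hour limit for an interactive bash session
srun -n 4 -t 02:00:00 -u --pty bash
```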
Running a simple one-core batch job with slurm using the default partition¶
- To see what queues are available to you (called partitions in slurm), run:
sinfo
- To run slurm, create a myjob.slurm containing the following information:
#!/bin/bash
#SBATCH --output=slurm.out
#SBATCH --error=slurm.err
#SBATCH --mail-user <put your email address here>
#SBATCH --mail-type=BEGIN
#SBATCH -p normal
/bin/hostname
- To submit a batch job use:
sbatch myjob.slurm
- To see the status of your job, use:
squeue
- To kill a job use:
scancel <jobid>
You can get the <jobid> from squeue.
- For more information on your job, use:
scontrol show job <jobid>
You can get the <jobid> from squeue.
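Putting the commands above together, a typical batch workflow looks like this (the job id 1234 is only illustrative; squeue's -u flag filters the list to one user):

```shell
sbatch myjob.slurm      # submit the job script
squeue -u $USER         # list only your own jobs
scontrol show job 1234  # detailed information on one job
scancel 1234            # cancel it if needed
```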
Running a simple one-core batch job with slurm using the debug partition¶
Change the partition to debug and add the appropriate account depending on whether you're part of the euclid or cosmology group.
#!/bin/bash
#SBATCH --output=slurm.out
#SBATCH --error=slurm.err
#SBATCH --mail-user <put your email address here>
#SBATCH --mail-type=BEGIN
#SBATCH -p debug
#SBATCH --account [cosmo_debug/euclid_debug]
/bin/hostname
Batch script for running a multi-core job¶
To run a 4-core job, you can use:
#!/bin/bash
#SBATCH --output=slurm.out
#SBATCH --error=slurm.err
#SBATCH --mail-user <put your email address here>
#SBATCH --mail-type=BEGIN
#SBATCH -n 4
<mpirun call/program>
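The last line of the script is a placeholder. For an MPI program it could look like the sketch below; the binary name my_program is hypothetical, and whether mpirun needs an explicit -np under slurm depends on the local MPI installation:

```shell
# launch 4 MPI ranks, matching the #SBATCH -n 4 request above
mpirun -np 4 ./my_program
```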