Per-node default partition in SLURM

I'm configuring a small cluster, controlled by SLURM.

This cluster has one master node and two partitions. Users submit their jobs from the worker nodes; I've restricted their access to the master node. Each partition in the cluster is dedicated to a team in our company.

I'd like members of different teams to submit their jobs to different partitions without bothering with additional command-line switches.

That is, I'd like the default partition for srun or sbatch to differ depending on the node from which these commands are run.

For example: all jobs submitted from host worker1 should go to partition1, and all jobs submitted from hosts worker[2-4] should go to partition2.

And no invocation of sbatch or srun should need the -p (or --partition) switch.

I've tried setting Default=YES on different partition lines in the slurm.conf files on different computers, but this did not help.
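For reference, this is the kind of per-partition line I experimented with (the Nodes= lists are illustrative). As far as I understand, slurm.conf is expected to be identical on every node, and Default=YES marks one partition as the default for the whole cluster, which would explain why varying it per host has no effect:

```
# slurm.conf (illustrative node lists)
PartitionName=partition1 Nodes=node[1-4] Default=YES
PartitionName=partition2 Nodes=node[5-8]
```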



Solution 1:[1]

This can be solved using the SLURM_PARTITION and SBATCH_PARTITION environment variables, set in the /etc/environment file on each worker node.

Details on these environment variables are in the manual pages for sbatch and srun.
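A minimal sketch, using the partition names from the question (SLURM_PARTITION is read by srun, SBATCH_PARTITION by sbatch):

```
# /etc/environment on worker1
SLURM_PARTITION=partition1
SBATCH_PARTITION=partition1
```

```
# /etc/environment on worker[2-4]
SLURM_PARTITION=partition2
SBATCH_PARTITION=partition2
```

Users will pick these up on their next login. If your users also use salloc, it honors an analogous SALLOC_PARTITION variable.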

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
