Arguments for the SLURM External Scheduler


bsub -n num-procs -ext "SLURM[slurm-arguments]" [bsub-options] [-srun [srun-options]] [jobname] [job-options]

The slurm-arguments parameter can be one or more of the following srun options, separated by semicolons, as described in Table 5-1.

Table 5-1 Arguments for the SLURM External Scheduler

nodes=min[-max]
    Specifies the minimum and maximum number of nodes allocated to the job. The job allocation will contain at least the minimum number of nodes.

mincpus=ncpus
    Specifies the minimum number of cores per node. The default value is 1.

mem=value
    Specifies the minimum amount of real memory, in megabytes, that each node must have.

tmp=value
    Specifies the minimum amount of temporary disk space, in megabytes, that each node must have.

constraint=feature
    Specifies a list of constraints. The list may include multiple features separated by "&" or "|", where "&" represents AND and "|" represents OR.

nodelist=list of nodes
    Requests a specific list of nodes. The job allocation will contain at least these nodes. The list may be specified as a comma-separated list of nodes or as a range of nodes.

exclude=list of nodes
    Requests that a specific list of nodes not be included in the resources allocated to this job. The list may be specified as a comma-separated list of nodes or as a range of nodes.

contiguous=yes
    Requests a mandatory contiguous range of nodes.
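
Multiple arguments from Table 5-1 can be combined in a single -ext option by separating them with semicolons. For example, the following command (an illustrative sketch, not an example from the manual) requests four nodes with at least two cores each for an eight-task job:

$ bsub -n8 -ext "SLURM[nodes=4;mincpus=2]" -I srun hostname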

The Platform Computing LSF-HPC documentation provides more information on general external scheduler
support.

Consider an HP XC system configuration in which lsfhost.localdomain is the LSF execution host
and nodes n[1-10] are compute nodes in the lsf partition. All nodes contain two cores, providing 20
cores for use by LSF-HPC jobs.
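One way to confirm such a configuration, assuming the standard SLURM sinfo command is available on the system, is to list the nodes and state of the lsf partition:

$ sinfo -p lsf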

Example 5-9 shows one way to submit a parallel job to run on a specific node or nodes.

Example 5-9 Using the External Scheduler to Submit a Job to Run on Specific Nodes

$ bsub -n4 -ext "SLURM[nodelist=n6,n8]" -I srun hostname
Job <70> is submitted to default queue <normal>.
<<Waiting for dispatch ...>>
<<Starting on lsfhost.localdomain>>
n6
n6
n8
n8

In the previous example, the job output shows that the job was launched from the LSF execution host lsfhost.localdomain, and that it ran on four cores on the specified nodes, n6 and n8.
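
As noted in Table 5-1, the node list may also be given as a range. For example, the following command (an illustrative sketch using SLURM's usual bracketed hostlist notation, not an example from the manual) requests an allocation drawn from nodes n1 through n4:

$ bsub -n4 -ext "SLURM[nodelist=n[1-4]]" -I srun hostname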

Example 5-10 shows one way to submit a parallel job to run one task per node.

Example 5-10 Using the External Scheduler to Submit a Job to Run One Task per Node

$ bsub -n4 -ext "SLURM[nodes=4]" -I srun hostname
Job <71> is submitted to default queue <normal>.
<<Waiting for dispatch ...>>
<<Starting on lsfhost.localdomain>>
n1
n2
n3
n4

In the previous example, the job output shows that the job was launched from the LSF execution host lsfhost.localdomain, and that it ran on four cores on four different nodes (one task per node).
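
If the nodes in such an allocation must also be adjacent, the contiguous argument from Table 5-1 can be added. The following variation (an illustrative sketch, not an example from the manual) requests four contiguous nodes:

$ bsub -n4 -ext "SLURM[nodes=4;contiguous=yes]" -I srun hostname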

Example 5-11 shows one way to submit a parallel job to avoid running on a particular node.
