HP XC System 2.x Software User Manual


Show the environment:

$ lsid
Platform LSF HPC 6.0 for SLURM, Sep 23 2004
Copyright 1992-2004 Platform Computing Corporation
My cluster name is penguin
My master name is lsfhost.localdomain

$ sinfo
PARTITION AVAIL  TIMELIMIT NODES STATE  NODELIST
lsf       up      infinite     4 alloc  n[13-16]

$ lshosts
HOST_NAME    type    model    cpuf ncpus maxmem maxswp server RESOURCES
lsfhost.loc  SLINUX6 DEFAULT   1.0     8     1M      -    Yes  (slurm)

$ bhosts
HOST_NAME          STATUS JL/U MAX NJOBS RUN SSUSP USUSP RSV
lsfhost.localdomai ok        -   8     0   0     0     0   0

Run the job:

$ bsub -I -n6 -ext "SLURM[nodes=3]" mpirun -srun /usr/share/hello
Job <1009> is submitted to default queue <normal>.
<<Waiting for dispatch ...>>
<<Starting on lsfhost.localdomain>>
I'm process 0! from ( n13 pid 27222)
Greetings from process 1! from ( n13 pid 27223)
Greetings from process 2! from ( n14 pid 14011)
Greetings from process 3! from ( n14 pid 14012)
Greetings from process 4! from ( n15 pid 18227)
Greetings from process 5! from ( n15 pid 18228)
mpirun exits with status: 0
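The transcript above shows the effect of combining the two options: -n6 asks LSF for six processors, while the external scheduler option SLURM[nodes=3] constrains the SLURM allocation to three nodes, so mpirun places two ranks on each of n13, n14, and n15. As a sketch (not from the manual), the same six ranks could instead be packed three per node by changing only the node count, reusing the queue and hello binary from the example above:

$ bsub -I -n6 -ext "SLURM[nodes=2]" mpirun -srun /usr/share/hello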

View the running job:

$ bjobs -l 1009

Job <1009>, User <smith>, Project <default>, Status <DONE>, Queue <normal>,
             Interactive mode, Extsched <SLURM[nodes=3]>, Command
             </opt/hpmpi/bin/mpirun -srun /usr/share/hello>
date and time stamp: Submitted from host <lsfhost.localdomain>, CWD <$HOME>,
             6 Processors Requested;
date and time stamp: Started on 6 Hosts/Processors <6*lsfhost.localdomain>;
date and time stamp: slurm_id=22;ncpus=6;slurm_alloc=n[13-15];
date and time stamp: Done successfully. The CPU time used is 0.0 seconds.

SCHEDULING PARAMETERS:
          r15s  r1m  r15m  ut   pg   io   ls   it   tmp  swp  mem
loadSched   -    -    -    -    -    -    -    -    -    -    -
loadStop    -    -    -    -    -    -    -    -    -    -    -

EXTERNAL MESSAGES:
MSG_ID FROM     POST_TIME     MESSAGE        ATTACHMENT
0      -        -             -              -
1      lsfadmin date and time SLURM[nodes=2] N

View the finished job:

$ bhist -l 1009

Job <1009>, User <smith>, Project <default>, Interactive mode, Extsched
             <SLURM[nodes=3]>, Command </opt/hpmpi/bin/mpirun -srun
             /usr/share/hello>

A-8

Examples
