
request. LSF-HPC always tries to pack multiple serial jobs on the same node, with one CPU per
job. Parallel jobs and serial jobs cannot coexist on the same node.

After the LSF-HPC scheduler allocates the SLURM resources for a job, the SLURM allocation
information is recorded with the job. You can view this information with the bjobs and
bhist commands.
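For example, the long output format of either command displays the recorded job details.
This is an illustrative sketch; 100 is a placeholder for a real LSF-HPC job ID:

    $ bjobs -l 100
    $ bhist -l 100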

When LSF-HPC starts a job, it sets the SLURM_JOBID and SLURM_NPROCS environment
variables in the job environment. SLURM_JOBID associates the LSF-HPC job with SLURM's
allocated resources. The SLURM_NPROCS environment variable is set to the originally
requested number of processors.
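For example, a job script can read these variables at run time. The following is a minimal
sketch; the script contents and the submission command are illustrative:

    #!/bin/sh
    # Print the values that LSF-HPC sets in the job environment
    echo "SLURM_JOBID  = $SLURM_JOBID"
    echo "SLURM_NPROCS = $SLURM_NPROCS"

A script like this could be submitted with, for example, bsub -n 4 ./myjob.sh, where
myjob.sh is a placeholder name.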
LSF-HPC dispatches the job from the LSF-HPC execution host, which is the same node on
which the LSF-HPC daemons run. The LSF-HPC JOB_STARTER script, which is configured
for all queues, uses the srun command to launch a user job on the first node in the
allocation. Your job can contain additional srun or mpirun commands to launch tasks on
all nodes in the allocation.
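For example, the following interactive submission requests four processors and runs a
command across the allocation; the processor count and the program are placeholders:

    $ bsub -n 4 -I srun hostname

Here bsub requests the resources, the JOB_STARTER script starts the job on the first
allocated node, and the job's own srun command runs hostname on every node in the
allocation.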

While a job is running, all LSF-HPC-supported resource limits are enforced, including the
core limit, CPU time limit, data limit, file size limit, memory limit, and stack limit. When
you kill a job, LSF-HPC uses the SLURM scancel command to propagate the signal to the
entire job.
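For example, killing a job with the standard LSF bkill command causes LSF-HPC to invoke
scancel on your behalf; the job ID 100 is a placeholder:

    $ bkill 100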

After a job finishes, LSF-HPC releases all allocated resources.

A detailed description of how LSF-HPC and SLURM cooperate to launch and manage jobs,
along with an example and an illustration, is provided in Section 7.1.4. It is highly
recommended that you review this information.

1.4.4 HP-MPI

HP-MPI is a high-performance implementation of the Message Passing Interface (MPI)
standard and is included with the HP XC system. HP-MPI uses SLURM to launch jobs on an
HP XC system; however, it manages the global MPI exchange itself so that all processes can
communicate with each other.

HP-MPI complies fully with the MPI-1.2 standard. HP-MPI also complies with the MPI-2
standard, with some restrictions. HP-MPI provides an application programming interface and
software libraries to support parallel, message-passing applications that are efficient, portable,
and flexible. HP-MPI version 2.1 is included in this release of HP XC.
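For example, a typical build-and-run sequence is sketched below. It assumes that the HP-MPI
compiler wrapper mpicc and the mpirun command are on your PATH, that hello.c stands for
any small MPI program, and that mpirun's -srun option is used to hand the launch off to
SLURM:

    $ mpicc -o hello hello.c
    $ bsub -n 8 -I mpirun -srun ./hello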

HP-MPI 2.1 for HP XC is supported on XC4000 and XC6000 clusters, and includes support for
the following system interconnects:

XC4000 Clusters — Myrinet, Gigabit Ethernet, TCP/IP, InfiniBand

XC6000 Clusters — Quadrics Elan4, Gigabit Ethernet, TCP/IP

1.5 Components, Tools, Compilers, Libraries, and Debuggers

This section provides a brief overview of some of the common tools, compilers, libraries,
and debuggers supported for use on HP XC.

An HP XC system is integrated with several open source software components. HP XC
incorporates the Linux operating system and its standard commands and tools, and does not
diminish the Linux ABI in any way. In addition, HP XC incorporates LSF and SLURM to
launch and manage jobs, and includes HP-MPI for high-performance, parallel, message-passing
applications and the HP MLIB math library for intensive computations.

Most standard open source compilers and tools can be used on an HP XC system; however,
they must be purchased separately. Several open source and commercially available software
packages have been tested with the HP XC Software. The following list shows some of the
software packages that have been tested for use with HP XC; it provides an example of what
is available on HP XC and is not intended to be complete. Note that some of the packages
listed are actually included as part of the HPC Linux distribution and as such are