HP StorageWorks Scalable File Share User Manual


If the client is using the HP recommended 10 GigE ConnectX cards from Mellanox, the ConnectX EN drivers must be installed. These drivers can be downloaded from www.mellanox.com, or copied from the HP SFS G3.2-0 server software image in the /opt/hp/sfs/ofed/mlnx_en-1.4.1 subdirectory. Copy that software to the client system and install it using the supplied install.sh script. See the included README.txt and release notes as necessary.
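The copy-and-install step above can be sketched as follows. The hostname sfs-server is a hypothetical placeholder for your HP SFS server, and the staging directory is an assumption; the driver path is the one named in the text. The commands are built as strings and echoed so they can be reviewed before being run as root.

```shell
# Sketch of staging and installing the Mellanox ConnectX EN driver on a client.
# "sfs-server" is a hypothetical hostname; adjust paths for your site.
DRIVER_SRC=/opt/hp/sfs/ofed/mlnx_en-1.4.1
STAGE_DIR=/tmp/mlnx_en-1.4.1

# Build the commands first so they can be reviewed before running as root.
copy_cmd="scp -r root@sfs-server:${DRIVER_SRC} ${STAGE_DIR}"
install_cmd="cd ${STAGE_DIR} && ./install.sh"

echo "$copy_cmd"
echo "$install_cmd"
# eval "$copy_cmd" && eval "$install_cmd"   # uncomment to actually run
```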

Configure the selected Ethernet interface with an IP address that can access the HP SFS G3.2-0 server using one of the methods described in “Configuring Ethernet and InfiniBand or 10 GigE Interfaces” (page 34).

4.2 Installation Instructions

The following installation instructions are for a CentOS 5.3/RHEL5U3 system. Other systems are similar, but use the correct Lustre client RPMs for your system type from the /opt/hp/sfs/lustre/client directory of the HP SFS G3.2-0 software tarball.

The Lustre client RPMs provided with HP SFS G3.2-0 are for use with CentOS 5.3/RHEL5U3 kernel version 2.6.18-128.1.6.el5. If your client is not running this kernel, you need to either update your client to this kernel or rebuild the Lustre RPMs to match your kernel using the instructions in “CentOS 5.3/RHEL5U3 Custom Client Build Procedure” (page 43).

You can determine what kernel you are running by using the uname -r command.
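The kernel check can be scripted; a minimal sketch, where the required version string 2.6.18-128.1.6.el5 is the one given above:

```shell
# Compare the running kernel against the version required by the
# HP SFS G3.2-0 client RPMs. Prints "match" or a mismatch message.
check_kernel() {
    required="$1"
    running="$2"
    if [ "$running" = "$required" ]; then
        echo "match"
    else
        echo "mismatch: running $running, need $required"
    fi
}

check_kernel "2.6.18-128.1.6.el5" "$(uname -r)"
```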

1.  Install the required Lustre RPMs for kernel version 2.6.18-128.1.6.el5. Enter the following
    command on one line:

    # rpm -Uvh lustre-client-1.8.0.1-2.6.18_128.1.6.el5_lustre.1.8.0.1smp.x86_64.rpm \
    lustre-client-modules-1.8.0.1-2.6.18_128.1.6.el5_lustre.1.8.0.1smp.x86_64.rpm

    For custom-built client RPMs, the RPM names are slightly different. In this case, enter the
    following command on one line:

    # rpm -Uvh lustre-1.8.0.1-2.6.18_53.1.21.el5.6hp_*.x86_64.rpm \
    lustre-modules-1.8.0.1-2.6.18_53.1.21.el5.6hp_*.x86_64.rpm \
    lustre-tests-1.8.0.1-2.6.18_53.1.21.el5.6hp_*.x86_64.rpm
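The two stock RPM names in step 1 differ only in their package prefix, so the install command can be assembled from a single version string; a sketch (review the echoed command, then run it as root from the directory containing the RPM files):

```shell
# Assemble the rpm install command for the stock HP SFS G3.2-0 client RPMs.
# The version string below is taken from the RPM names in step 1.
VER="1.8.0.1-2.6.18_128.1.6.el5_lustre.1.8.0.1smp.x86_64.rpm"
CMD="rpm -Uvh lustre-client-${VER} lustre-client-modules-${VER}"

echo "$CMD"      # review first
# $CMD           # uncomment to install (requires root and the RPM files)
```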

2.  Run the depmod command to update the module dependency information so that the Lustre
    modules can be loaded at boot.

3.  For InfiniBand systems, add the following line to /etc/modprobe.conf:

    options lnet networks=o2ib0

    For 10 GigE systems, add the following line to /etc/modprobe.conf:

    options lnet networks=tcp(eth2)

    In this example, eth2 is the Ethernet interface that is used to communicate with the HP SFS
    system.
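The interconnect-specific line from step 3 can be generated by a small helper; a sketch, where the "ib"/"10gige" keywords are an assumption of this example and eth2 is the interface name from the step above (substitute your own):

```shell
# Emit the LNET options line for /etc/modprobe.conf, given the
# interconnect type ("ib" or "10gige") and, for 10 GigE, the interface name.
lnet_line() {
    case "$1" in
        ib)     echo "options lnet networks=o2ib0" ;;
        10gige) echo "options lnet networks=tcp($2)" ;;
        *)      echo "unknown interconnect: $1" >&2; return 1 ;;
    esac
}

lnet_line ib
lnet_line 10gige eth2
# lnet_line ib >> /etc/modprobe.conf   # uncomment to apply (as root)
```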

4.  Create the mount-point to use for the file system. The following example uses a Lustre file
    system called testfs, as defined in “Creating a Lustre File System” (page 45), and a client
    mount-point called /testfs. For example:

    # mkdir /testfs

    NOTE: The file system cannot be mounted by the clients until the file system is created
    and started on the servers. For more information, see Chapter 5 (page 45).

5.  For InfiniBand systems, to automatically mount the Lustre file system after reboot, add the
    following line to /etc/fstab:

    172.31.80.1@o2ib:172.31.80.2@o2ib:/testfs /testfs lustre _netdev,rw,flock 0 0
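A sketch of appending the entry and mounting it; the NIDs and file system name are the examples from step 5, and `mount /testfs` is the standard way to mount a file system defined in /etc/fstab (subject to the NOTE above: the servers must be up first):

```shell
# Build the fstab entry for the testfs example from step 5.
ENTRY="172.31.80.1@o2ib:172.31.80.2@o2ib:/testfs /testfs lustre _netdev,rw,flock 0 0"

echo "$ENTRY"                   # review before committing
# echo "$ENTRY" >> /etc/fstab   # uncomment to apply (as root)
# mount /testfs                 # then mount via the fstab entry
```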


Installing and Configuring HP SFS Software on Client Nodes
