
Improving NFS Performance on HPC Clusters with Dell Fluid Cache for DAS


2.4.1. Storage

• 3TB NL SAS disks are selected for large capacity at a cost-effective price point.

• Virtual disks are created using a RAID 60 layout. Each RAID 6 span comprises 10 data disks and 2 parity disks, and the stripe runs across all four storage enclosures. This RAID configuration provides a good balance between capacity, reliability (tolerance of multiple disk failures), and performance [4].
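
As a rough capacity sketch (assuming four fully populated 12-slot MD1200 enclosures, which is consistent with the layout above):

# 4 RAID 6 spans x (10 data + 2 parity) = 48 disks in total
# usable data disks: 4 x 10 = 40; with 3 TB drives:
echo "usable capacity: $((40 * 3)) TB"    # ~120 TB before file system overhead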

• The segment size used for the RAID stripe is 512 KB to maximize performance [4]. This value should be set based on the expected application I/O profile for the cluster.
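
To illustrate how the segment size relates to the virtual disk geometry (these numbers follow from the RAID layout above and match the su/sw values passed to mkfs.xfs later in this section):

# full stripe per RAID 6 span: 512 KB segment x 10 data disks
echo "$((512 * 10)) KB per span"         # 5120 KB = 5 MB
# full stripe across all 40 data disks:
echo "$((512 * 40)) KB per virtual disk" # 20480 KB = 20 MB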

2.4.2. NFS server

• The default OS I/O scheduler is changed from cfq to deadline to maximize I/O performance [4].
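
A minimal sketch of making this change on RHEL 6 (the device name is illustrative; /dev/sdc matches the mkfs.xfs example later in this section):

# switch the scheduler for a single block device at runtime
echo deadline > /sys/block/sdc/queue/scheduler
cat /sys/block/sdc/queue/scheduler    # the active scheduler is shown in brackets
# to make deadline the system-wide default, append elevator=deadline
# to the kernel line in /boot/grub/grub.conf and reboot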

• The number of concurrent nfsd threads is increased to 256 from the default of 8 to maximize performance [4].
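
On RHEL 6 this is typically done by setting RPCNFSDCOUNT in /etc/sysconfig/nfs; a sketch:

# persistent setting, picked up when the nfs service restarts
grep RPCNFSDCOUNT /etc/sysconfig/nfs    # set RPCNFSDCOUNT=256
# runtime change and verification
rpc.nfsd 256
cat /proc/fs/nfsd/threads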

• RHEL 6.3 errata kernel version 2.6.32-279.14.1 fixes some important XFS bugs [6] and is recommended for this solution.
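
A quick check that the recommended errata kernel is the one actually running:

uname -r    # expect 2.6.32-279.14.1 (el6) or later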

• The PowerEdge R720 is a dual-socket system that uses the Intel Xeon E5-2600 series processors. On these processors, the PCI-E controller is integrated on the processor chip, making some slots ‘closer’ to one socket and farther from the other. This makes card-to-PCI slot mapping an important factor to consider in performance tuning. The solution presented in this white paper balances the three PCI cards across the two processors based on server design and engineering best practices. Refer to Table 1 and Table 3 for card-to-slot recommendations.
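
One way to confirm card placement is to read each PCI device’s NUMA node from sysfs (the PCI address below is purely illustrative; locate the actual adapters with lspci first):

lspci | grep -i -e raid -e infiniband               # find the adapters' PCI addresses
cat /sys/bus/pci/devices/0000:42:00.0/numa_node     # 0 or 1 = local socket; -1 = unknown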

• Two internal disks on the NFS server are configured in a RAID 0 stripe and used as swap space for the operating system. This provides a large swap space in case there is a need for the XFS repair program to run after an ungraceful system shutdown.
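
A software-RAID sketch of such a swap stripe using mdadm (the device names are assumptions; the reference configuration may instead build the stripe on the server’s internal RAID controller):

# stripe the two internal disks and use the result as swap
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb
mkswap /dev/md0
swapon /dev/md0
swapon -s    # verify the new swap device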

• XFS create options are optimized as well. By default, XFS tries to query the underlying storage device and optimize the settings accordingly. When LVM is used, this works fine; however, when presenting a raw virtual disk to XFS, it is important to specify the stripe unit (su) and stripe width (sw). The stripe unit is the stripe element size that was used to format the virtual disk. The stripe width is the number of data drives in the virtual disk.

By default, there are 8 log buffers, and the -l size option sets how large the log section can become. A larger log can improve metadata performance; however, the larger the log, the longer it may take to recover a file system that was not unmounted cleanly.

In this solution the mkfs.xfs command used was:

mkfs.xfs -d su=512k,sw=40 -l size=128m /dev/sdc
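
Here sw=40 corresponds to the 40 data drives in the RAID 60 virtual disk (4 spans × 10 data disks), and su=512k matches the 512 KB segment size chosen above. As a sketch, the geometry can be re-checked once the file system is mounted (the mount point is an assumption; with a 4 KiB block size, XFS reports the stripe values in file system blocks):

xfs_info /mnt/xfs_data
# in the data section, expect sunit=128 (512 KiB / 4 KiB) and swidth=5120 (128 x 40)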

• There are XFS mount options to optimize the file system as well. The options used for this solution are similar to the Dell NSS and are noatime, allocsize=1g, nobarrier, inode64, logbsize=262144, attr2. Details of these mount options are provided in [4].
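
A sketch of the corresponding mount command (the mount point is an assumption; the device and options come from this section):

mount -t xfs -o noatime,allocsize=1g,nobarrier,inode64,logbsize=262144,attr2 /dev/sdc /mnt/xfs_data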

• The “sync” NFS export option is used when exporting the XFS file system, at the expense of lower performance. This is an added layer of protection to ensure data reliability, as the NFS server will not acknowledge the write until it has written the data to disk.
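
An illustrative /etc/exports entry using the sync option (the export path and client specification are assumptions):

/mnt/xfs_data *(rw,sync,no_root_squash)
exportfs -ra    # apply the change after editing /etc/exports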
