HP MSR Encryption Accelerators User Manual


NUMA configuration

This configuration creates a load-balancing problem in the system when the IO Accelerator devices are under heavy traffic. During these periods of high use, half of the processors in the system sit idle while the other half are 100% utilized, limiting the throughput of the IO Accelerator devices.

To avoid this situation, you must manually configure the affinity of the IO Accelerator devices using the FIO_AFFINITY configuration parameter to distribute the workload across all NUMA nodes. This parameter overrides the default behavior of the IO Accelerator driver. For more information about the FIO_AFFINITY configuration parameter, see the syntax explanation below.
Syntax:
The following is an example of how to manually configure 10 HP IO Accelerator ioDrive Duo devices (each with two IO Accelerator devices) in an HP DL580 G7 system, as described in the preceding paragraphs. Slot 1 is a Generation 1 PCIe slot, so it is not compatible with an ioDrive Duo device. Therefore, you can fill slots 2-11 with ioDrive Duo devices. Because each ioDrive Duo device has two IO Accelerator devices, each ioDrive Duo device has two FCT device numbers (one for each IO Accelerator device); that is, each slot has two device numbers.
The following tables list the default BIOS NUMA node assignments and the manually balanced assignments.

BIOS-assigned NUMA node | PCIe slots | FCT device numbers                      | Processor Affinity
0                       | 7-11       | 8,9,13,14,18,19,23,24,28,29             | All processors in the node
1                       | None       | None                                    | None
2                       | 2-6        | 135,136,140,141,145,146,150,151,155,156 | All processors in the node
3                       | None       | None                                    | None

Assigned NUMA node | PCIe slots | FCT device numbers      | Processor Affinity
0                  | 7-9        | 8,9,13,14,18,19         | All processors in the node (no hex mask)
1                  | 10-11      | 23,24,28,29             | All processors in the node (no hex mask)
2                  | 2-3        | 135,136,140,141         | All processors in the node (no hex mask)
3                  | 4-6        | 145,146,150,151,155,156 | All processors in the node (no hex mask)
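Comparing the two tables: the BIOS default places all 20 IO Accelerator devices on nodes 0 and 2, while the manual assignment spreads them across all four nodes. The following minimal Python sketch (illustrative only, not part of the HP VSL software) tallies the devices per node for each assignment, using the FCT device numbers from the tables above:

# Device-to-node assignments taken from the two tables above.
bios_default = {
    0: [8, 9, 13, 14, 18, 19, 23, 24, 28, 29],
    1: [],
    2: [135, 136, 140, 141, 145, 146, 150, 151, 155, 156],
    3: [],
}
manual_override = {
    0: [8, 9, 13, 14, 18, 19],
    1: [23, 24, 28, 29],
    2: [135, 136, 140, 141],
    3: [145, 146, 150, 151, 155, 156],
}

for name, assignment in (("BIOS default", bios_default),
                         ("manual override", manual_override)):
    counts = {node: len(devices) for node, devices in assignment.items()}
    print(f"{name}: {counts}")

# Prints:
#   BIOS default: {0: 10, 1: 0, 2: 10, 3: 0}    (two nodes carry all the traffic)
#   manual override: {0: 6, 1: 4, 2: 4, 3: 6}   (work spread across all four nodes)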

In this example, the BIOS creates a load imbalance by assigning the cards to only two NUMA nodes in the system. To balance the workload, manually configure the VSL driver with the override settings from the second table by setting the numa_node_override parameter to the following string:
numa_node_override=fct8:0,fct9:0,fct13:0,fct14:0,fct18:0,fct19:0,fct23:1,fct24:1,fct28:1,fct29:1,fct135:2,fct136:2,fct140:2,fct141:2,fct145:3,fct146:3,fct150:3,fct151:3,fct155:3,fct156:3
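The override string is a comma-separated list of fct<device>:<node> pairs, one pair per IO Accelerator device. As a cross-check, the following minimal Python sketch (illustrative only, not part of the HP VSL software) rebuilds the same string from the manual assignment in the second table:

# Manual device-to-node assignment from the second table above.
manual_override = {
    0: [8, 9, 13, 14, 18, 19],
    1: [23, 24, 28, 29],
    2: [135, 136, 140, 141],
    3: [145, 146, 150, 151, 155, 156],
}

# Build one "fct<device>:<node>" pair per device, ordered by node and device number.
pairs = [f"fct{device}:{node}"
         for node in sorted(manual_override)
         for device in manual_override[node]]
print("numa_node_override=" + ",".join(pairs))

# The printed value matches the numa_node_override string shown above.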
