Wait for link – Dell Intel PRO Family of Adapters User Manual


Distributed Multi-Root PCI/Memory Architecture

Example 4: The number of available NUMA node CPUs is not sufficient for queue allocation. If your platform has a processor that does not support an even power-of-2 CPU count (for example, it supports 6 cores), then during queue allocation, if the software runs out of CPUs on one socket, it will by default reduce the number of queues to a power of 2 until allocation succeeds. For example, with a 6-core processor and only a single NUMA node, the software will allocate only 4 FCoE queues. If there are multiple NUMA nodes, the NUMA node count can be changed to a value greater than or equal to 2 in order to have all 8 queues created.
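The fallback described above can be sketched as follows. This is an illustrative model only, not the driver's published logic; the function name and parameters are assumptions.

```python
def fcoe_queue_count(requested_queues, cpus_per_node, numa_node_count):
    """Hypothetical sketch: reduce the queue count to the next lower
    power of 2 until it fits in the CPUs available across the
    configured NUMA nodes (illustrative, not the driver's actual code)."""
    available_cpus = cpus_per_node * numa_node_count
    queues = requested_queues
    while queues > available_cpus and queues > 1:
        queues //= 2  # drop to the next lower power of 2
    return queues
```

With a 6-core processor and one NUMA node, this model yields 4 queues from a request of 8; raising the NUMA node count to 2 allows all 8.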

Determining Active Queue Location

A user of these performance options will want to determine the affinity of FCoE queues to CPUs in order to verify their actual effect on queue allocation. This is easily done with a small-packet workload and an I/O application such as IoMeter. IoMeter monitors the utilization of each CPU using the performance monitor built into the operating system. The CPUs supporting the queue activity should stand out: they should be the first non-hyperthread CPUs available on the processor, unless the allocation has been specifically shifted via the performance options discussed above.

To make the locality of the FCoE queues even more obvious, the application's affinity can be assigned to an isolated set of CPUs on the same or another processor socket. For example, IoMeter can be set to run only on a finite number of hyperthread CPUs on any processor. If the performance options direct queue allocation to a specific NUMA node, the application affinity can be set to a different NUMA node. The FCoE queues should not move: their activity should remain on the same CPUs even though the application's CPU activity moves to the other processor's CPUs.
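The verification step above amounts to spotting the CPUs whose load stands out while the workload runs. A minimal sketch, assuming you have already captured average per-CPU utilization percentages from the OS performance monitor (the threshold value is an assumption for illustration, not a documented figure):

```python
def standout_cpus(per_cpu_utilization, threshold=25.0):
    """Hypothetical helper: given average per-CPU utilization (percent)
    sampled during a small-packet IoMeter run, return the CPU indices
    whose load stands out -- the likely homes of the FCoE queues."""
    return [cpu for cpu, util in enumerate(per_cpu_utilization)
            if util >= threshold]
```

If the queue CPUs reported this way do not change when the application is pinned elsewhere, the queue affinity is behaving as described.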

Wait for Link

Determines whether the driver waits for auto-negotiation to succeed before reporting the link state. If this feature is Off, the driver does not wait for auto-negotiation; if it is On, the driver does wait.

If this feature is On and the speed is not set to auto-negotiation, the driver will wait a short time for link to complete before reporting the link state.

If the feature is set to Auto Detect, it is automatically set to On or Off, depending on speed and adapter type, when the driver is installed. The setting is:

•   Off for copper Intel gigabit adapters with a speed of "Auto".
•   On for copper Intel gigabit adapters with a forced speed and duplex.
•   On for fiber Intel gigabit adapters with a speed of "Auto".

Default:  Auto Detect

Range:
•   On
•   Off
•   Auto Detect
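The Auto Detect rule above can be expressed as a small decision function. This is a sketch of the three documented cases only; the function and parameter names are illustrative, and combinations the manual does not list (such as fiber with a forced speed) are deliberately left unhandled.

```python
def wait_for_link_setting(media, speed_is_auto):
    """Illustrative mapping of the documented Auto Detect rules:
    copper + auto speed -> Off; copper + forced speed/duplex -> On;
    fiber + auto speed -> On. Other combinations are not documented."""
    if media == "copper" and speed_is_auto:
        return "Off"
    if media == "copper" and not speed_is_auto:
        return "On"
    if media == "fiber" and speed_is_auto:
        return "On"
    raise ValueError("combination not covered by the documented rules")
```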
