
Dell™ PowerVault MD3000 and MD3000i Array Tuning Best Practices

December 2008 – Revision A01 

Page 16 

to how tightly the variance of sequential or random data access is contained
within the volume. Access can be purely random across an entire virtual disk,
or random within some bounds: for example, a large file stored within a virtual
disk, or large non-contiguous bursts of sequential access randomly distributed
within some bounds. Each of these is a different I/O pattern, and each calls
for a distinct approach when tuning the storage.

The data in the stateCaptureData.txt file can help determine these
characteristics. The sequential read percentage can be estimated from the
percentage of total cache hits. If both the cache hit and read percentages are
high, first assume the I/O pattern tends toward sequential I/O. However,
because cache hits are not broken out statistically into reads and writes, some
experimentation with a representative data set may be required if the pattern
is unknown. For single-threaded host I/O streams, this behavior can be
confirmed by comparing the magnitude of reads to the read pre-fetch statistics.
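The two indicators described above can be computed directly from the captured counters. A minimal sketch, assuming `total_ios` is the combined read/write count and that the pre-fetch counter (for example, the RPA requests figure) represents read pre-fetch activity; both interpretations are assumptions, since the file does not label the counters this way:

```python
def sequential_read_indicators(cache_hits, total_ios, reads, prefetch_requests):
    """Two rough sequential-read indicators from captured counters:
    the overall cache-hit percentage, and pre-fetch requests per read.
    Values near 1.0 for the latter suggest most reads triggered pre-fetch,
    i.e. largely sequential behavior for a single-threaded stream."""
    hit_pct = 100.0 * cache_hits / total_ios if total_ios else 0.0
    prefetch_per_read = prefetch_requests / reads if reads else 0.0
    return hit_pct, prefetch_per_read
```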

In cases where many sequential read operations are expected, enabling read
pre-fetch in cache is recommended. If the cache hit percentage is low, the
application tends to be more random and read-ahead should be disabled. Mid-
range percentages may indicate bursts of sequential I/O, but do not reveal
whether those bursts are reads or writes. Again, testing with read-ahead on
and off is required.
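The guidance above amounts to a simple decision rule. A minimal sketch, where the `high` and `low` hit-rate thresholds are illustrative values, not Dell-published figures:

```python
def read_ahead_advice(cache_hit_pct, read_pct, high=70.0, low=30.0):
    """Suggest a read-ahead setting from cache-hit and read percentages.

    Thresholds are hypothetical; any choice should be confirmed by
    testing with a representative data set, as the text advises."""
    if cache_hit_pct >= high and read_pct >= high:
        return "enable read pre-fetch (likely sequential reads)"
    if cache_hit_pct <= low:
        return "disable read-ahead (likely random access)"
    return "mid-range: test with read-ahead on and off"
```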

In the second generation firmware, the segment, stripe, and pre-fetch
statistics have been reorganized, as shown in Figure 4, compared with the
lower half of Figure 2.

4.7.4 Stripe Size

For best performance, the stripe size should always be larger than the maximum
I/O size performed by the host. As noted previously, stripes should be sized
as even powers of two. The average block size can be identified from the
collected data. Additionally, I/Os over 2 MiB are considered large and are
broken out separately from smaller I/Os in the statistics. While all RAID
levels benefit from careful tuning of stripe and segment size, RAID 5 and
RAID 6, with their parity calculations, are the most dependent on it.
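The sizing rule above (stripe size an even power of two and at least as large as the largest host I/O) can be checked mechanically. A sketch using hypothetical sizes in KiB; the segment and disk counts are example values, not recommendations:

```python
def check_stripe(segment_kib, data_disks, max_host_io_kib):
    """Return (stripe_kib, ok): ok means the stripe is a power of two
    and covers the largest host I/O, per the sizing rule above."""
    stripe_kib = segment_kib * data_disks
    is_pow2 = stripe_kib > 0 and (stripe_kib & (stripe_kib - 1)) == 0
    return stripe_kib, is_pow2 and stripe_kib >= max_host_io_kib

# e.g. 128 KiB segments across 4 data disks, 256 KiB max host I/O
stripe, ok = check_stripe(128, 4, 256)  # 512 KiB stripe, rule satisfied
```

Note that with RAID 5 or RAID 6, only the data disks (excluding parity) contribute to the usable stripe width.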

*** Performance stats ***

Cluster Reads      Cluster Writes     Stripe Reads
6252626            3015009            5334257
Stripe Writes      Cache Hits         Cache Hit Blks
2040493            4685032            737770040
RPA Requests       RPA Width          RPA Depth
982036             3932113            418860162
Full Writes        Partial Writes     RMW Writes
653386             29                 328612
No Parity Writes   Fast Writes        Full Stripe WT
0                  0                  0

Figure 4: Second Generation Firmware - Performance Statistics Broken Out.
File: stateCaptureData.txt
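A stats block like the one in Figure 4 can be pulled into a dictionary for the calculations discussed earlier. A sketch assuming the section alternates lines of counter names (separated by two or more spaces) with lines of integer values, as in the figure:

```python
import re

def parse_perf_stats(text):
    """Parse alternating name/value lines from a performance stats block."""
    lines = [ln for ln in text.strip().splitlines() if ln.strip()]
    stats = {}
    # Lines alternate: one line of column names, one line of values.
    for names_line, values_line in zip(lines[0::2], lines[1::2]):
        names = re.split(r"\s{2,}", names_line.strip())
        values = values_line.split()
        stats.update(zip(names, (int(v) for v in values)))
    return stats

sample = """\
Cluster Reads      Cluster Writes     Stripe Reads
6252626            3015009            5334257
Stripe Writes      Cache Hits         Cache Hit Blks
2040493            4685032            737770040
"""
stats = parse_perf_stats(sample)  # stats["Cache Hits"] == 4685032
```

Real stateCaptureData.txt files contain many other sections, so in practice the performance stats block would need to be located first; this sketch handles only the block itself.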
