15.9.2 Large File Concurrent

For the concurrent testing, 16 libraries were built, each containing a single 320 GB file with 80 4 GB
members. The file size was chosen to sustain a data flow across the HSL, system bus, processors, memory,
and tape drives for about an hour. We were not interested in peak performance here but in sustained
performance. Measurements were done to show scaling from 1 to 16 tape drives, knowing that near the
top number of tape drives the system, rather than the tape drives, would become the limiting factor.
Customers can use these results to estimate what a reasonable number of tape drives might be for their
situation.
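
As a rough check on that sizing, the short Python sketch below is purely illustrative (it is not part of the original test setup); it uses the single-drive save and restore rates reported in Table 15.9.2.1 below to estimate how long one 320 GB library keeps one drive busy, which is what allows each of the 16 concurrent jobs to run for roughly an hour.

# Illustrative sizing check only; rates are the 1-drive values from Table 15.9.2.1 (GB/HR).
library_gb = 320                                  # one library: a single 320 GB file with 80 x 4 GB members
single_drive_rates = {"save": 365, "restore": 340}

for op, gb_per_hr in single_drive_rates.items():
    minutes = library_gb / gb_per_hr * 60
    print(f"{op}: one 320 GB library keeps one drive busy for about {minutes:.0f} minutes")
# Each concurrent job works on its own library with its own drive, so adding drives
# adds parallel streams of roughly the same hour-long duration rather than shortening them.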

320 GB DB file with 80 4 GB members

# 3580.002 Tape drives      S             R
16                          5.21 TB/HR    4.68 TB/HR
15                          5.14 TB/HR    4.54 TB/HR
14                          4.90 TB/HR    4.28 TB/HR
13                          4.63 TB/HR    4.02 TB/HR
12                          4.15 TB/HR    3.73 TB/HR
8                           2.88 TB/HR    2.56 TB/HR
4                           1.45 TB/HR    1.33 TB/HR
3                           1.09 TB/HR    1.01 TB/HR
2                           730 GB/HR     680 GB/HR
1                           365 GB/HR     340 GB/HR

Table 15.9.2.1 iV5R2 16 - 3580.002 Fiber Channel Tape Device Measurements (Concurrent)
(Save = S, Restore = R)

In the table above, you will notice that the 16th drive starts to lose value. Even though there is still some
gain, we feel the system saturation points are starting to factor in. Unfortunately, we did not have any
more drives to add, but we believe the total data throughput would remain roughly the same even if more
drives were added.
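
To make that saturation point concrete, the sketch below is an illustrative calculation (not from the original document) using the save rates from Table 15.9.2.1, with 1 TB/HR taken as 1000 GB/HR for simplicity. It computes the throughput gained per additional drive and the overall scaling efficiency relative to a single drive.

# Aggregate save rates from Table 15.9.2.1, converted to GB/HR (assumes 1 TB/HR = 1000 GB/HR).
save_rates = {1: 365, 2: 730, 3: 1090, 4: 1450, 8: 2880,
              12: 4150, 13: 4630, 14: 4900, 15: 5140, 16: 5210}
single_drive = save_rates[1]

drives = sorted(save_rates)
for prev, cur in zip(drives, drives[1:]):
    gain_per_drive = (save_rates[cur] - save_rates[prev]) / (cur - prev)
    efficiency = save_rates[cur] / (cur * single_drive)
    print(f"{prev:>2} -> {cur:>2} drives: +{gain_per_drive:4.0f} GB/HR per added drive, "
          f"scaling efficiency {efficiency:.0%}")
# Going from 15 to 16 drives adds only about 70 GB/HR, roughly a fifth of what a lone
# drive delivers, which is consistent with the system rather than the drives becoming
# the limiting factor.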


[Chart: Save and Restore Rates, Large File Concurrent Runs. X-axis: # 3580 model 002 Tape Drives (1 to 16); Y-axis: throughput in TB/HR (0 to 6); two series, Save and Restore, with the Save series topping out at about 5.2 TB/HR at 16 drives.]
