
First, using raw empirical data implies that we are only simulating the past; by using data
from one year, we replicate the performance of that year but not necessarily of future
years. When sampling directly from historical data, the only events possible are those that
transpired during the period when the data was gathered. It is one thing to assume that the
basic form of the distribution will remain unchanged with time; it is quite another to
assume that the idiosyncrasies of a particular year will always be repeated.
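To make the distinction concrete, here is a minimal Python sketch (not Arena syntax, and with invented inter-arrival times). It contrasts resampling the historical values, which can only replay the observed year, with drawing from an exponential distribution fitted to the same data, which can produce values that never actually occurred:

    import random

    # Hypothetical inter-arrival times (minutes) recorded during one year
    historical = [0.8, 1.2, 2.5, 0.9, 3.1, 1.7, 2.2, 1.4]

    # Empirical sampling: only these eight values can ever be generated,
    # so the simulation reproduces the idiosyncrasies of that year
    empirical_draw = random.choice(historical)

    # Theoretical sampling: fit an exponential distribution to the sample
    # mean; it can generate inter-arrival times never observed in the data
    mean_gap = sum(historical) / len(historical)
    theoretical_draw = random.expovariate(1.0 / mean_gap)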

Second, theoretical random-variate generation makes it much easier to change certain
aspects of the input; i.e., it offers greater flexibility. For example, if we want
to determine what happens if inputs increase by 10% per week, we need only increase the
mean arrival rate of the theoretical distribution by the required 10%. On the other hand, if
we are sampling directly from the empirical data, it is not clear how we increase the
contact arrival rate by the required amount.
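As an illustration (again a Python sketch rather than Arena, with an assumed base rate), the what-if scenario reduces to changing a single parameter when a theoretical distribution drives the arrivals:

    import random

    base_rate = 5.0                 # assumed: 5 contacts per hour
    scaled_rate = base_rate * 1.10  # the required 10% increase

    # One parameter change yields the new arrival process
    interarrival_hours = random.expovariate(scaled_rate)

    # With raw empirical data there is no rate parameter to adjust;
    # every recorded observation would have to be manipulated instead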

Third, it is highly desirable to test the sensitivity of the system to changes in the
parameters. For example, we may want to know how much the contact arrival rate can
increase before system performance deteriorates to an unacceptable degree. Again,
sensitivity analysis is easier with theoretical distributions than with sampling directly
from empirical data.
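One way to frame such a sensitivity study is sketched below, with the closed-form M/M/1 waiting-time result standing in for a full simulation run (the service and arrival rates are assumed values, not taken from this guide):

    service_rate = 10.0  # assumed: contacts handled per hour
    base_arrivals = 6.0  # assumed: base arrival rate per hour

    for pct in range(0, 60, 10):
        arrival_rate = base_arrivals * (1 + pct / 100)
        rho = arrival_rate / service_rate            # utilization
        if rho >= 1.0:
            print(f"+{pct}%: unstable queue (utilization {rho:.2f})")
            continue
        # Mean wait in queue for an M/M/1 system: Wq = rho / (mu - lambda)
        wait_min = 60 * rho / (service_rate - arrival_rate)
        print(f"+{pct}%: utilization {rho:.2f}, mean wait {wait_min:.1f} min")

Running the sweep shows the mean wait climbing from 9 minutes at the base rate to 54 minutes at +50%, which is exactly the kind of deterioration a sensitivity analysis is meant to expose.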

The problem is exacerbated when no historical behavioral data exist (either because the
system has not yet been built or because the data cannot be gathered). In these cases, we
must estimate both the distribution and the parameters based on theoretical considerations.

Verification and validation

After the model is functionally complete, we should ask ourselves a question: Does it
work? There are two aspects to this question. First, does it do what the
analyst expects it to do? Second, does it do what the user expects it to do? We find the
answers to these questions through model verification and validation. Verification seeks
to show that the computer program performs as expected and intended, thus providing a
correct logical representation of the model. Validation, on the other hand, establishes that
model behavior validly represents that of the real-world system being simulated. Both
processes involve system testing that demonstrates different aspects of model accuracy.

Verification can be viewed as rigorous debugging with one eye on the model and the other
eye on the model requirements. In addition to simply debugging any model development
errors, it also examines whether the code reflects the description found in the conceptual
model. One of the goals of verification is to show that all parts of the model work, both
independently and together, and use the right data at the right time.

The greatest aid to program verification is correct program design, followed by clarity,
style, and ease of understanding. Very often, simulation models are poorly documented,
especially at the model statement level. Verification becomes much easier if the analyst
comments the model liberally. This includes comments wherever Arena Contact Center
