Kernel Real-time Characterization – Comtrol eCos User Manual


Kernel Real-time Characterization

Name

tm_basic — Measure the performance of the eCos kernel

Description

When building a real-time system, care must be taken to ensure that the system can perform properly within
its constraints. One of these constraints may be how fast certain operations can be performed.
Another might be how deterministic the overall behavior of the system is. Finally, the memory footprint (size) and
unit cost may be important.

One of the major problems encountered while evaluating a system is how to compare it with possible alterna-
tives. Most manufacturers of real-time systems publish performance numbers, ostensibly so that users can compare
the different offerings. However, what these numbers mean and how they were gathered is often not clear. The
values are typically measured on a particular piece of hardware, so a true comparison requires measurements
for exactly the same hardware, gathered in a similar fashion.

Two major items need to be present in any given set of measurements. First, the raw values for the various opera-
tions; these are typically quite easy to measure and will be available for most systems. Second, the determinacy of
the numbers; in other words how much the value might change depending on other factors within the system. This
value is affected by a number of factors: how long interrupts might be masked, whether or not the function can
be interrupted, even very hardware-specific effects such as cache locality and pipeline usage. It is very difficult to
measure the determinacy of any given operation, but that determinacy is fundamentally important to proper overall
characterization of a system.

In the discussion and numbers that follow, three key measurements are provided. The first measurement is an
estimate of the interrupt latency: this is the length of time from when a hardware interrupt occurs until its Inter-
rupt Service Routine (ISR) is called. The second measurement is an estimate of overall interrupt overhead: this
is the average length of time interrupt processing takes, as measured by the real-time clock interrupt (other in-
terrupt sources will certainly take a different amount of time, but this data cannot be easily gathered). The third
measurement consists of the timings for the various kernel primitives.

Methodology

Key operations in the kernel were measured by using a simple test program which exercises the various kernel
primitive operations. A hardware timer, normally the one used to drive the real-time clock, was used for these
measurements. In most cases this timer can be read with quite high resolution, typically in the range of a few
microseconds. For each measurement, the operation was repeated a number of times. Time stamps were obtained
directly before and after the operation was performed. The data gathered for the entire set of operations was then
analyzed, generating average (mean), maximum and minimum values. The sample variance (a measure of how
close most samples are to the mean) was also calculated. The cost of obtaining the real-time clock timer values was
also measured, and was subtracted from all other times.

Most kernel functions can be measured separately. In each case, a reasonable number of iterations are performed.
Where the test case involves a kernel object, for example creating a task, each iteration is performed on a different
object. There is also a set of tests which measures the interactions between multiple tasks and certain kernel
primitives. Most functions are tested in such a way as to determine the variations introduced by varying numbers
