Rainbow Electronics AT89C2051 User Manual


Processor Simulation

The concept of microprocessor simulation is widely used and well understood. Simulation is often used for development purposes, where a PC program models a specific processor's architecture and interprets and executes its binary instruction set. Using this technique enables one to develop, test, and debug algorithms that will ultimately be combined into a larger program. Such a program will eventually run on a standalone microprocessor or microcontroller. Using simulation early in the design cycle is attractive because it allows you to start developing code long before the actual target hardware is available.

Processor simulation has also been applied to simulate entire computing systems. In this context, existing application programs, in their native binary format, have been coerced to run on various computers powered by completely different processors. For obvious reasons, the performance resulting from such an approach often proves disappointing. This does not necessarily have to be the case if the implementation is designed for a specific purpose. Factors affecting performance efficiency include the host processor's strengths and limitations, the specific types of operations that are to be simulated, and, to an extent, the language the original program is written in.

Virtual Processor Simulation

Many developmental simulators have been produced that emulate the functions of popular processors and microcontrollers using standard desktop computers. The same principles can be applied at the other end of the spectrum; there are cases where running a simulation on a small microcontroller can be turned to advantage. In this case, however, the benefit is not derived from simulating a known processor, but one that offers inherent advantages tailored to solving the specific problem at hand. The implication, of course, points to the design of a virtual processor. The idea is based on the premise of using a real processor to implement a virtual device specifically designed to suit the special needs of a particular application. In other words, designing the tool set for a particular job.

The fact is that adopting such a methodology can ultimately result in an architecture that can be pressed into service as an efficient vehicle for a number of specialized tasks. Details including the fundamental architecture, instruction set, and memory model can be approached with total freedom. But can such an approach provide the level of performance demanded by embedded applications?

Efficiency and Overhead

To illustrate that efficiency is a subjective matter, consider what happens when a typical C program is compiled to run on an 8051 processor. It is inconceivable that, on such an architecture, a C statement will compile down to a single corresponding 8051 instruction; a single C statement invariably results in the execution of multiple instruction steps. It follows that, given an efficient simulated instruction set, the simulation overhead might account for only a small percentage of the overall execution time.
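
As a purely illustrative example (the exact output depends on the compiler, memory model, and where the variables are placed), even a trivial C statement expands into several native instructions:

/* Hypothetical fragment; the 8051 sequences in the comments are
   illustrative and are not the output of any particular compiler. */
unsigned char x, y;      /* assumed to reside in directly addressable internal RAM */

void example(void)
{
    x += y;              /* roughly: MOV A,x / ADD A,y / MOV x,A -- three instructions */
}

A 16-bit addition, a pointer dereference, or a function call grows the sequence further, which is precisely why the per-instruction dispatch cost of a well-matched virtual machine can be a small fraction of the total.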

The key to making this premise work is to devise an instruction set and processor architecture that is conducive to performing the types of operations a C compiler naturally generates. In such an implementation, the contrived instruction set essentially amounts to an intermediate language. The op codes merely serve as a vehicle for succinctly conveying the compiler's directives to the target processor for execution.

The target processor, while performing the functions of a simulator, interprets the intermediate instructions to carry out the operations specified in the original high-level language source statements. The resulting efficiency can be quite tolerable, since the bulk of the instructions would execute regardless of whether they were emitted directly by the compiler or invoked by the simulation kernel.
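
A minimal sketch of such a simulation kernel is shown below. The opcode set, register layout, and vm_fetch() routine are assumptions made for the sake of illustration; they do not describe the actual kernel presented in this application note.

#include <stdint.h>

/* vm_fetch() is assumed to return one byte of virtual program memory,
   for example by reading it over the serial bus described below. */
extern uint8_t vm_fetch(uint16_t addr);

enum { OP_LDI, OP_ADD, OP_STA, OP_JNZ, OP_HALT };  /* contrived opcodes */

static uint8_t  acc;        /* virtual accumulator      */
static uint8_t  vram[64];   /* virtual data memory      */
static uint16_t pc;         /* virtual program counter  */

void vm_run(void)
{
    for (;;) {
        switch (vm_fetch(pc++)) {
        case OP_LDI: acc  = vm_fetch(pc++);          break; /* load immediate      */
        case OP_ADD: acc += vram[vm_fetch(pc++)];    break; /* add from data space */
        case OP_STA: vram[vm_fetch(pc++)] = acc;     break; /* store to data space */
        case OP_JNZ: {                                      /* branch if acc != 0  */
            uint8_t lo = vm_fetch(pc++);
            uint8_t hi = vm_fetch(pc++);
            if (acc != 0)
                pc = (uint16_t)((hi << 8) | lo);
            break;
        }
        case OP_HALT:
        default:
            return;
        }
    }
}

Each case does real work of the kind the compiler would have emitted anyway; only the fetch and the switch dispatch constitute pure simulation overhead.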

It turns out the performance penalty of such an approach is, to a great extent, dependent on the way the program memory itself is implemented. Since the AT89C2051 has no external bus structure, it makes sense to use a serial bus to access the program memory. Using I²C for this purpose provides the required flexibility along with reasonable throughput.

Selecting I²C as a memory bus presents the potential of choosing from a wide variety of EEPROM memory devices. The most favorable configuration is Atmel's AT24C64, which offers 8K bytes of storage in an 8-pin package. Utilizing extended 16-bit addressing, the AT24C64 provides linear access to the entire internal memory array. And although a lot of functionality can be crammed into a single chip, additional devices can easily be added in 8K increments to handle very complex applications. Up to eight AT24C64s can simultaneously reside on the I²C bus, providing a full 64K bytes of storage while using just two wires.
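
The byte-read transaction itself is straightforward. The sketch below assumes software (bit-banged) I²C primitives i2c_start(), i2c_stop(), i2c_write(), and i2c_read() driving the two bus pins; these routines and the function names are assumptions for illustration, not code taken from this application note.

#include <stdint.h>

void    i2c_start(void);
void    i2c_stop(void);
uint8_t i2c_write(uint8_t b);          /* returns nonzero if the slave ACKed */
uint8_t i2c_read(uint8_t ack);         /* ack = 1: ACK, 0: NACK (last byte)  */

/* dev = 0..7 selects one of up to eight AT24C64s sharing the bus */
uint8_t eeprom_read(uint8_t dev, uint16_t addr)
{
    uint8_t data;

    i2c_start();
    i2c_write(0xA0 | (dev << 1));          /* device select, write mode  */
    i2c_write((uint8_t)(addr >> 8));       /* address high byte          */
    i2c_write((uint8_t)(addr & 0xFF));     /* address low byte           */
    i2c_start();                           /* repeated START             */
    i2c_write(0xA1 | (dev << 1));          /* device select, read mode   */
    data = i2c_read(0);                    /* read one byte, NACK to end */
    i2c_stop();
    return data;
}

When consecutive program bytes are needed, the same transaction can be continued as a sequential read (ACK every byte except the last), which amortizes the addressing overhead discussed below.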

Of course, serial memory access does come at a cost. In this case the expense comes in the form of access time. To an extent, this is moderated by the fact that the AT24C64 can operate at a 400 kHz clock rate (standard I²C is specified at a maximum of 100 kHz). Remember, however, that I²C can exact a significant performance penalty because a substantial percentage of its bandwidth can be consumed by control functions.
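
To put rough numbers on this (assuming a 400 kHz clock and neglecting start and stop timing), each byte transferred occupies nine clock periods, or about 22.5 µs. A random single-byte read involves roughly five bytes of bus traffic: a device-select byte, two address bytes, a second device-select byte, and the data byte itself, so fetching one isolated program byte costs on the order of 110 to 120 µs. A sequential read, by contrast, delivers subsequent bytes at about 22.5 µs apiece, which is why keeping program fetches sequential has the largest effect on the usable memory bandwidth.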
