
Configuring the Execution Queues

Within the Multiprocess view, the Execution Queue Management section allows the user to configure the desired number of parallel processes used to consume the queue of selected implementations. Simply set the value of Parallel Job Count to the desired number of parallel processes. Using the minimum value of 1 will cause all queued implementations to be executed sequentially, one after another.

ACE may be configured to execute the parallel processes in the background on the host workstation running the ACE GUI, or ACE may submit each implementation as an independent executable job to an external GridEngine (such as OGE, the Oracle Grid Engine [2]) via the qsub command. Configuration of the qsub command is handled on the Multiprocess View Preference Page.
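When GridEngine execution is selected, ACE builds and issues the qsub submissions itself; the following sketch only illustrates the general shape of such a submission for readers unfamiliar with Grid Engine. The qsub options shown (-N, -b y, -cwd, -o, -e, -l mem_free) are standard Grid Engine options, but the ACE batch executable name (acx_batch) and per-implementation script name are hypothetical placeholders, not documented ACE command lines.

```python
# Illustrative sketch only: ACE performs this submission internally when
# configured for GridEngine execution via the Multiprocess View Preference Page.
import subprocess

def submit_implementation(impl_name: str, mem_gb: int) -> None:
    """Submit one implementation to Grid Engine as an independent job."""
    qsub_cmd = [
        "qsub",
        "-N", f"ace_{impl_name}",      # job name shown by qstat
        "-b", "y",                     # submit the command as a binary, not a job script
        "-cwd",                        # run in the current working directory
        "-o", f"{impl_name}.out.log",  # stdout log
        "-e", f"{impl_name}.err.log",  # stderr log
        "-l", f"mem_free={mem_gb}G",   # request enough free memory for the flow
        "acx_batch",                   # hypothetical ACE batch executable (placeholder)
        f"run_{impl_name}.tcl",        # hypothetical per-implementation flow script (placeholder)
    ]
    subprocess.run(qsub_cmd, check=True)

# Example: queue two implementations, each expected to need ~16 GB.
for impl in ("impl_1", "impl_2"):
    submit_implementation(impl, mem_gb=16)
```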

Important Considerations When Using Background Execution on the Local Host Workstation

Be aware that if the configured number of parallel processes is too high, total execution time will actually be longer than it would be at lower values. The constraints are available memory and available processor cores, as well as the load from other processes running on the host workstation.

When choosing how many parallel background implementations to allow, it is very important that users ensure they do not exhaust the physical memory (RAM) available on the executing workstation; otherwise, flow execution times will quickly increase (due to memory page swapping to disk). Don't forget to take into account any other users on the same workstation, as well as the memory already in use by the running ACE GUI and its associated back-end acx process.

Each additional background ACE process will take multiple gigabytes of memory; the exact amount varies with the size of the design and the size of the target Achronix device. (Smaller designs and smaller devices will, of course, take less memory.) A guesstimate for large designs on the 22iHD device is around 16 GB of memory per background process. Large 22iHP designs may take up to 10 GB of memory per background process. Large RDR1000 designs may take up to 2.0 GB of memory per background process. Again, these are guesstimates; designs nearing 100% device utilization may require even more memory.

Be aware that on modern multi-core, hyperthreading workstations, memory limits are usually the reason to constrain the parallel process count. It is not unusual to find workstations capable of running 8 simultaneous threads while having only 16 GB of RAM. On such a workstation, if the ACE user is running the flow on a large 22iHP design (where the guesstimate was up to 10 GB per background process), the most efficient parallel process count would likely be 1 or 2; the best choice depends upon the operating system, how much memory ACE and other currently-running processes are already using, and whether the user plans to continue using the workstation interactively while the background processes execute. Since multiple iterations through the flow are likely, it may be worth the user's time to track the total multiprocess duration at several parallel process counts, so that the most efficient settings for that workstation can be reused in future runs.

In the majority of cases, the parallel process count should at most be the lesser of the following two values (though even lower values may be faster):

processor constraint: 1 + T

    where
    T = the total number of simultaneous threads supported by the workstation,
        T = P * ( C * H ), where
    P = the total number of processors in the workstation
    C = the number of cores per processor
    H = 2 if the cores are hyperthreaded, 1 if not

memory constraint: A / D

    where
    D = the amount of memory needed by the design, as reported in ACE log files (or the Tcl Console) during a prior flow execution
    A = the total available (unused) RAM
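As a quick sanity check, the two constraints can be combined into a single upper bound. The following is a minimal sketch (not an ACE feature) that applies the formulas above; the function name is an illustrative assumption, and the example numbers come from the 8-thread, 16 GB workstation scenario described earlier, with roughly 10 GB needed per large 22iHP implementation.

```python
# Sketch of the recommendation above: the parallel process count is at most
# the lesser of the processor constraint (1 + T) and the memory constraint (A / D).

def max_parallel_jobs(processors: int, cores_per_processor: int,
                      hyperthreaded: bool, available_ram_gb: float,
                      design_mem_gb: float) -> int:
    """Return an upper bound on Parallel Job Count from the two constraints."""
    threads = processors * cores_per_processor * (2 if hyperthreaded else 1)  # T
    processor_constraint = 1 + threads                        # 1 + T
    memory_constraint = available_ram_gb // design_mem_gb     # A / D, rounded down
    return max(1, int(min(processor_constraint, memory_constraint)))

# Example workstation: 1 processor with 4 hyperthreaded cores (8 threads),
# 16 GB of RAM, and a large 22iHP design needing ~10 GB per background process.
print(max_parallel_jobs(1, 4, True, 16.0, 10.0))  # -> 1
```

In practice, even this bound may be optimistic: it does not account for memory already consumed by the ACE GUI, the acx back-end process, or other users of the workstation, which is why even lower values may turn out to be faster.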

[2] With Oracle's recent purchase of Sun, the free, open-source GridEngine project is in some flux. The Wikipedia article is probably the safest site long-term to find the latest GridEngine-related information. The http://www.gridengine.info/ site may also be relevant.
