Achronix ACE Version 5.0 User Manual
Page 263
Running the Flow
Chapter 4. Tasks
A = R - ( O + G + B + U ), where

R = total RAM installed in the workstation
O = amount of memory required by the operating system
G = amount of memory required by the currently-running ACE GUI
B = amount of memory required by the currently-running ACE backend process (named acx or acx.exe in process lists)
U = amount of memory required by all other user processes expected to execute while the background processes are running
Continuing the example of the 8-thread, 16GB workstation: if the workstation is running Linux, estimate that the OS requires 0.5GB, the ACE GUI requires 1GB, the backend requires 3GB, and no other user processes are running; the available memory A = ( 16GB - ( 0.5GB + 1GB + 3GB + 0GB ) ) = 11.5GB. If the log files report the user's design taking 7GB, then the memory constraint value is ( 11.5GB / 7GB ) = about 1.6. The processor constraint would be ( 8 threads + 1 ) = 9. The lesser of the two values is 1.6, so following the guidelines, the ideal parallel process count would be between 1 and 2. To completely balance the two constraints for this design, the example user would need 7GB * 9 = 63GB of available memory before they could expect optimal performance running 9 parallel processes.
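The arithmetic above can be sketched as a small helper. This is an illustrative example, not part of ACE; the function name and all figures are taken from the worked example:

```python
# Illustrative sketch (not an ACE API): compute the available memory and
# the two constraints from the guidelines above.

def ideal_process_count(total_ram_gb, os_gb, gui_gb, backend_gb,
                        other_gb, design_gb, threads):
    """Return (available_gb, memory_constraint, processor_constraint)."""
    # A = R - ( O + G + B + U )
    available = total_ram_gb - (os_gb + gui_gb + backend_gb + other_gb)
    memory_constraint = available / design_gb  # how many design copies fit in RAM
    processor_constraint = threads + 1         # per the guideline above
    return available, memory_constraint, processor_constraint

# The 8-thread, 16GB Linux workstation from the example:
avail, mem_c, cpu_c = ideal_process_count(16, 0.5, 1, 3, 0, 7, 8)
print(avail)              # 11.5 (GB available)
print(round(mem_c, 1))    # 1.6, the binding constraint
print(cpu_c)              # 9
# The lesser of the two constraints governs the parallel process count.
```

The lesser value, about 1.6, matches the example's conclusion that between 1 and 2 parallel processes is ideal.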
Tip:
ACE logs the amount of memory (RAM) used by the backend as a design proceeds through
the flow. This number is reported at the end of every flow step in the log files and (when the
GUI is running the flow in single-process mode) in the Tcl Console. It is also possible to directly
query ACE to find out the peak backend memory usage in KB with the Tcl command shown in
the Tcl Console example below. These features should allow the user to make an educated
decision as to how much memory each parallel background process will require for their
design, and thus how many processes may be executed in parallel within the current
memory constraints.
Example from log:
Flow step "report_timing_final" completed in 1 seconds.
Peak memory usage is 4917 MB.
Example from Tcl Console View query:
cmd> get ace peak memory usage
5035008
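When sizing parallel runs, it can help to scan the log for the per-step peak figures. The helper below is a hypothetical sketch, not an ACE utility; the log line format is assumed from the example above:

```python
import re

# Hypothetical helper (not an ACE API): extract the peak backend memory
# figures that ACE prints at the end of each flow step, assuming the
# "Peak memory usage is NNNN MB." line format shown in the example above.
PEAK_RE = re.compile(r'Peak memory usage is (\d+) MB\.')

def peak_memory_mb(log_text):
    """Return the largest 'Peak memory usage' value found in the log, in MB."""
    values = [int(m.group(1)) for m in PEAK_RE.finditer(log_text)]
    return max(values) if values else None

sample = ('Flow step "report_timing_final" completed in 1 seconds.\n'
          'Peak memory usage is 4917 MB.\n')
print(peak_memory_mb(sample))  # 4917
```

Note that the Tcl query reports KB while the log reports MB, so the example's 5035008 KB query result corresponds to roughly the 4917 MB logged figure.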
Configuring ACE to Use an External GridEngine
The GridEngine job submissions are via the GridEngine command-line executable "qsub". The qsub command must be in the path, the ACE executable must be reachable from the grid's execution hosts, and the GridEngine environment variables must already be configured[3] before ACE's qsub job submissions will be successful. Optionally, some GridEngine configuration options may be passed in via command-line options on qsub; these custom command-line options may be configured via the Multiprocess View Preference Page, reached by following the (configured in Preferences) hyperlink. By default the ACE qsub calls inherit the complete execution environment[4] used to start ACE. If the user is able to successfully use qsub from the command-line, ACE's qsub calls should also succeed.
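Since working command-line qsub is the stated precondition for working ACE submissions, a quick preflight check can be sketched as follows. This is an illustrative script, not part of ACE; the `-b y` option (run a binary directly) is standard GridEngine qsub usage, but verify against the local installation:

```python
import shutil
import subprocess

# Illustrative preflight check (not part of ACE): confirm qsub is on the
# PATH and that a trivial non-ACE job can be submitted, mirroring the
# guidance that working command-line qsub implies working ACE qsub calls.

def qsub_available():
    """True if the qsub executable can be found on the current PATH."""
    return shutil.which("qsub") is not None

def submit_test_job():
    """Submit a trivial binary job; returns qsub's exit status (0 on success)."""
    # '-b y' tells GridEngine to run /bin/true as a binary, not a job script.
    result = subprocess.run(["qsub", "-b", "y", "/bin/true"],
                            capture_output=True, text=True)
    return result.returncode

if not qsub_available():
    print("qsub not found in PATH; configure the GridEngine environment first")
```

If this script succeeds from the same environment used to start ACE, the inherited execution environment should satisfy ACE's qsub calls as well.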
For a complete list of qsub command-line options, please see the GridEngine documentation, the qsub man pages, or speak with a local System Administrator. Note that some of the command-line options used by ACE are hard-coded and may not be disabled. These hard-coded options are necessary for ACE's correct interpretation of the qsub results.
Debugging qsub configurations: If the GridEngine and qsub are properly configured[5] on the host machine,
[3] either as custom qsub command-line options on the Multiprocess View Preference Page, or in the execution environment used when starting ACE
[4] via the optional command-line switch "-V"
[5] meaning the user is able to successfully execute non-ACE tasks using qsub from the command-line
UG001 Rev. 5.0 - 5th December 2012