Running Jobs
The A*CRC pool of computers allows users to run a very large number of interactive and batch jobs of varied footprints and durations, with management, scheduling, control and status-monitoring capabilities built into the entire system.
Submitting jobs to A*CRC resources implies full agreement with the terms and conditions of A*CRC systems and resources usage.
To maximise the use of each HPC resource, a scheduler selects jobs based on a set of pre-defined parameters such as the CPU time required, the memory required, the number of cores or nodes required, and several others. The job in a queue with the highest priority, as defined by a set of flags and the queue policy of a given machine, is the one the scheduler starts next.

The Platform LSF scheduler is installed on all A*CRC systems.
Interactive-use machines have nodes specifically allocated to handle user logins, as well as other nodes configured into an interactive-use pool. The login nodes' main purposes are editing files, compiling and linking code, and interacting with the batch system (for example, submitting or querying jobs). In addition, they can start both single-node applications and parallel jobs that run in the compute pool. The login node is usually not selected by the user but is assigned automatically by a distribution system.
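Since Platform LSF is the scheduler on these systems, a batch job is typically described by a small job script whose #BSUB directives request resources. A minimal sketch follows; the job name, resource values and the placeholder command body are assumptions, not site defaults:

```shell
#!/bin/sh
# Minimal LSF job-script sketch. The #BSUB lines are directives that
# bsub reads at submission time; to the shell they are plain comments.
#BSUB -J demo            # job name (hypothetical)
#BSUB -n 4               # number of cores requested
#BSUB -W 01:00           # wall-clock limit (hh:mm)
#BSUB -o demo.%J.out     # stdout file; %J expands to the job ID
echo "job body runs here"   # replace with the real application command
```

Submit the script with bsub < jobscript.sh; bjobs lists your jobs and their states, bqueues lists the queues, and bkill cancels a job.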

You can query the system and follow your job status using the following commands. See the man page for each command for details.

Interactive Job Commands

The following commands are available (machine availability in parentheses):

- Show current status of processes.
- Display and update information about the top CPU processes (Aurora and Fuji).
- Display and update information about the IBM AIX system events.
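On most Unix systems the first two descriptions above correspond to the standard ps and top utilities; the command names here are an assumption, since the list itself does not name them:

```shell
# Show current status of processes (standard Unix `ps`):
ps aux | head -n 5
# Display and update information about the top CPU processes, one batch
# iteration (`top -b -n 1` is the GNU/Linux form; AIX flags differ):
top -b -n 1 | head -n 5 || true
```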
These jobs can be run without terminal access (the default), with terminal access via run/proxy, or using the specific utility for each of the A*CRC systems. The run and proxy utilities allow connection to the standard input, standard output, and standard error channels of jobs running in batch or elsewhere. The run utility must be used when starting the job to be connected to; proxy can then be used in an interactive environment to handle the messages. See the man pages for run and proxy for more information.
Job Limits
Limits on job size and run duration are imposed on interactive and batch jobs. These limits can be viewed per machine by invoking machine-specific commands; for example, use qstat -Q on Aurora or llclass on Cirrus.

Machine Status
The queue status of each machine can be found on the Machine Status page.

Ganglia is a utility that provides detailed status information for each of the A*CRC machines. It is a scalable distributed monitoring system for high-performance computing systems such as clusters and grids, based on a hierarchical design targeted at federations of clusters. It leverages widely used technologies such as XML for data representation, XDR for compact, portable data transport, and RRDtool for data storage and visualisation. Carefully engineered data structures and algorithms give it very low per-node overhead and high concurrency. The implementation is robust, has been ported to an extensive set of operating systems and processor architectures, and is in use on thousands of clusters around the world; it has been used to link clusters across university campuses and can scale to clusters of 2000 nodes.
More information at http://ganglia.sourceforge.net/

Tuning and Optimisation

The most important goal of performance tuning and optimisation is to reduce a program's wall-clock execution time. Reducing resource usage in other areas, such as memory or disk requirements, may also be a tuning goal. Performance analysis tools are essential to optimising an application's performance. A*CRC has a range of tuning and optimisation tools installed on our systems; please refer to the Code Development section for more information.
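Because wall-clock time is the target metric, measuring a baseline before and after each change is the usual first step. A portable sketch using date (sleep 1 stands in for the real application):

```shell
# Record wall-clock seconds around the command being tuned.
start=$(date +%s)
sleep 1                 # replace with the application under test
end=$(date +%s)
echo "elapsed: $((end - start))s"
```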
Shells and Scripting
The main function of a shell is to interpret UNIX commands. The interactive and scripting-language options differ between the various shells. Each user has an initial login shell, recorded in an entry in the /etc/passwd file. The entry can be changed by the system administrator or by A*CRC support staff.
The most commonly invoked shells include:

- csh: The standard C shell, based on the syntax and commands of the C programming language.
- sh: Available in several flavours, either as the standard POSIX shell or, alternatively, the Korn or Bourne shell.
- ksh: The Korn shell, largely backward compatible with the Bourne shell and often integrated with the POSIX shell into one common shell.
- tcsh: A superset of the C shell adding features such as file-name completion and command-line editing; tcsh is also compatible with the Berkeley Unix C shell versions.
- bash: The GNU Bourne-Again shell, sh-compatible with extra features added from the Korn and C shells. It is the default shell on most Linux flavours; an IBM AIX version of bash is available on the A*CRC Cirrus system.
- zsh: The Z shell, created with interactive use in mind as well as a scripting language. Similar to ksh but with many additions, such as the command-line editor and customisation options.
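The login shell mentioned above can be checked directly, since it is the seventh field of the account's /etc/passwd entry. A minimal sketch:

```shell
# Print the current account's login shell from its /etc/passwd entry.
grep "^$(id -un):" /etc/passwd | cut -d: -f7
```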
Storage and Purge
A*CRC has a major archival storage system available on all its systems. Users are strongly urged to store vital files in archival storage, because online files can be lost during a machine crash, not all directories are backed up, and files on some machines are purged. If you have an A*CRC account, you also have a storage account. To connect to storage, type ftp

Purge policies are subject to change and, when revised, are announced in news postings and status e-mails. Once files are purged, there is no possibility of recovering them.
The Visualisation group at the Advanced Computing Program of IHPC manages visualisation resources and offers an interactive, real-time, realistic 3-D visualisation capability for A*CRC users. More information can be found on the Visualisation page.

Fault Reporting

For more information, please refer to the Fault Reporting page.
User Login – Online
Most A*CRC resources are available from our login nodes on each machine. Access the host via ssh.
The command to log in as user is:
% ssh user@
An example is:
Once you are logged in, you can find your files or data through the UNIX file system.
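A hedged sketch of the login step; the hostname below is a placeholder for illustration only, not an actual A*CRC address:

```shell
# Confirm an OpenSSH client is available (prints its version to stderr):
ssh -V
# Then log in to the machine's login node; replace the placeholder host:
# ssh user@login.hostname.example
```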
Last Updated - 29th Nov 2016