Code Development  
 
Parallel Compilation call examples
 
MPI
Message-passing programming coordinates multiple computing elements (processes) through primitives such as sending a message to one or more other processes, receiving a message from a process, and synchronizing with other processes, so that one process can exchange information (such as an array) with another. The synchronization primitives allow two or more processes to ensure that each is "ready" for the next step of a parallel algorithm. Besides the target system, the actual invocation call also depends on the scheduler and resource manager in use.
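As an illustration only (a generic sketch, not an A*CRC-specific code), the following C program uses the basic MPI (Message Passing Interface) send, receive and barrier primitives described above: process 0 sends a small array to process 1, and all processes synchronize at a barrier before exiting.

EXAMPLE – illustrative MPI sketch

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;
    int data[4] = {1, 2, 3, 4};

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size >= 2) {
        if (rank == 0) {
            /* Send four integers to process 1 with message tag 0. */
            MPI_Send(data, 4, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* Receive the four integers sent by process 0. */
            MPI_Recv(data, 4, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("Process 1 received %d %d %d %d\n",
                   data[0], data[1], data[2], data[3]);
        }
    }

    /* Synchronize all processes before finalizing. */
    MPI_Barrier(MPI_COMM_WORLD);
    MPI_Finalize();
    return 0;
}

On most installations such a program is compiled with an MPI compiler wrapper (for example mpicc) and launched through the local scheduler or with a command such as mpirun -np 2 ./a.out; the exact invocation depends on the system.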



OpenMP
OpenMP has emerged as an important model and language extension for shared-memory parallel programming. OpenMP is a collection of compiler directives and library routines used to write portable parallel programs for shared-memory architectures. Writing efficient parallel programs for NUMA architectures, which have characteristics of both shared-memory and distributed-memory architectures, requires that a programmer control the placement of data in memory and the placement of computations that operate on that data. Optimal performance is obtained when computations occur on processors that have fast access to the data needed by those computations.
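A minimal illustration of these directives in C is sketched below (a generic example, not an A*CRC-specific code). The parallel for directive divides the loop iterations among the threads, whose number is normally set with the OMP_NUM_THREADS environment variable, and the reduction clause combines each thread's partial sum.

EXAMPLE – illustrative OpenMP sketch

#include <omp.h>
#include <stdio.h>

int main(void)
{
    const int n = 1000000;
    double sum = 0.0;

    /* Distribute the iterations across the available threads; each thread
       accumulates a private partial sum that is combined at the end. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < n; i++)
        sum += 1.0 / (i + 1);

    printf("sum = %f (up to %d threads)\n", sum, omp_get_max_threads());
    return 0;
}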

EXAMPLE – valid on all OpenMP systems
export OMP_NUM_THREADS=n {where n is the number of threads you wish to spawn}
./[your_executable_file-name].out
 

Shmem
SHMEM is the ultrafast native ‘shared virtual memory’ communication library available on several Cray and SGI multiprocessor machines, as well as on Quadrics and Dolphin cluster interconnects, and offers substantial performance gains over MPI. A shmem_put() call allows a node to write data directly into user space on another node, and a shmem_get() call allows it to read data from another node. Both occur without the cooperation of the second node. A*Star has several systems, both SMPs and Quadrics-interconnected clusters, that support accelerated shmem in hardware. A Shmem programming manual is available here.
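A minimal one-sided put in C, written against the OpenSHMEM-style interface, is sketched below. This is a generic, hypothetical example (older Cray/SGI SHMEM libraries start with start_pes(0) rather than shmem_init()/shmem_finalize()): PE 0 writes a value directly into PE 1's memory without any action by PE 1.

EXAMPLE – illustrative SHMEM sketch

#include <shmem.h>
#include <stdio.h>

/* Symmetric variable: allocated at the same address on every PE. */
static long dest = 0;

int main(void)
{
    shmem_init();
    int me = shmem_my_pe();
    long src = 42;

    if (me == 0 && shmem_n_pes() > 1) {
        /* One-sided write into PE 1's copy of dest; PE 1 does not participate. */
        shmem_long_put(&dest, &src, 1, 1);
    }

    /* Ensure the put is globally complete before PE 1 reads dest. */
    shmem_barrier_all();

    if (me == 1)
        printf("PE 1 received %ld from PE 0\n", dest);

    shmem_finalize();
    return 0;
}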
 
EXAMPLE
Here is the command line interface for the commonly used remote read shmem program, sping.
 
sping -n number[k|K|m|M] -eh nwords [maxWords [incWords]]
 
The options for the program are:

-n number[k|K|m|M]
Specifies the number of times to ping. The number may have a k or an m appended to it (or their upper-case equivalents) to denote multiples of 1024 and 1,048,576 respectively. By default, the program pings 10,000 times.

-e
Instructs every process to print its timing statistics.

-h
Displays the list of options.

nwords [maxWords [incWords]]
nwords specifies how many words there are in each packet. If maxWords is given, it specifies the maximum number of words to send in each packet and invokes the following behavior: after each n repetitions (as specified with the -n option), the packet size is increased by incWords (by default the size doubles) and another set of repetitions is performed, until the packet size exceeds maxWords.
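
Following this syntax, a run that pings 1,000 times, starting with 8-word packets and doubling the packet size until it exceeds 1024 words, would for example be invoked as:

sping -n 1k 8 1024

(The exact output format and the availability of sping depend on the installation.)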
Last Updated - 15th Feb 2012
 
     