Past Events  
 
Event | Period | No. of Participants

2016
Software engineering best practices in scientific programming – Workshop #1 | 30 November | 65
Hyperworks Workshop | 30 August | 20
Chemistry and materials with the ADF Modeling Suite – a hands-on workshop | 8 July | 25
Python for Finance 2-Day Intensive Workshop | 11-12, 13-16 May | 60
- | 10 May | 13
- | 19 April | 11
Supercomputing Frontiers 2016 | 15-18 March | 300

2015
IBM OpenPower | 13 October | 35
A*CRC Workshop for Cluster Management and OpenStack Private Cloud | 10 April | 27
Supercomputing Frontiers 2015 | 17-20 March | 314

2014
ParaGauss – An HPC DFT program package for molecules and clusters | 25 November | 32
VisNOW Visualization Workshop | 13-17 October | 11
HPC Advisory Council Singapore Conference 2014 | 7 October | 80
Advanced Computational and Data-centric Science at Stony Brook University and Brookhaven National Laboratory | 29 August | 13
Tackling problems on Blue Waters, the fastest supercomputer on a university campus | 9 July | 27
Prediction of physical and chemical materials properties by using MedeA® software | 7 April | 23
National Cheng Kung University, Taiwan: Three Lectures mini-series | 31 March & 1 April | 15, 16, 11
Supercomputing Activities in Korea | 10 March | 24
Mathematical and Computational Modelling in Multidisciplinary Areas of Science and Engineering: Three Lectures mini-series | 24, 25, 26 February | 11, 13, 19
- | 17 February | 63
- | 6 February | 29
- | 24 January | 6
- | 22 January | 43
- | 14 January | 38

2013
- | 17 December | 10
IBM-A*STAR Workshop for Collaborative Research Projects | 2-3 December | 50
- | 28 November | 18
100 days of Cumulus (BG/Q) Workshop | 27 November | 15
High Performance Computing Technologies in Finances | 15-16 July | 98
Big Data and CFD Simulation on TSUBAME 2.0 | 2 July | 99
HPC Academy | 28 February to 13 June | 36
Hardware Accelerated HPC | 7 June | 9
GPU-Acceleration of MCAE Applications | 5 June | 16
Bluegene/Q Workshop: Getting Started & Initial Optimizing, Tuning, Scaling | 3-5 June | 11
NAG Seminar | 28, 29, 30 May | 42, 24, 18
Parallel Programming Using CUDA | 9 May | 40
Introduction to OpenFOAM | 19/22 April | 45
Introduction to Computational Fluid Dynamics | 25 March | 27
The Art of Differentiating Computer Programs: An Introduction to Algorithmic Differentiation | 18 February | 21

2012
NWChem Tutorial | 23-25 October | 80
Advanced High Performance Scientific Computing Workshop | 18 September | 33
ATIP/A*CRC Workshop | 7-10 May | 115
HPC and Big Data Workshop | 25-27 April | 54
NAG training workshop | 16-19 April | 65

Software engineering best practices in scientific programming – Workshop #1

Date: 30 November 2016, Wednesday
Time: 2.00 pm - 5.00 pm
Venue: Level 17, Charles Babbage Room, 1 Fusionopolis Way, Connexis South, Singapore 138632

Bio
Łukasz Orłowski is a computational scientist on the A*CRC Software Team, where he develops, ports and optimises scientific codes. He holds a Master's degree in Computing in Business and Economy and is currently pursuing a PhD in Computational Applied Mathematics. Previously he worked at Intel Corporation in the Data Centre Software Group, designing and developing software for cloud orchestration, data storage acceleration, redundant data storage and storage disaster recovery. His domain of expertise covers parallel and distributed programming; cloud, data centre and data storage software; development operations (DevOps); and software release engineering.

Objectives
This series of workshops aims at familiarising scientists without formal background in computer engineering with best practices in software development, including:
  • Understanding of compilation and linking
  • Code modularisation and portability
  • Application of external libraries
  • Automation of build process using script languages
  • Unit testing
  • Code versioning
Prerequisites
Basics of the C programming language

The series of workshops will cover:
  • Workshop #1 – pre-processor, compilation, linking and building libraries – 30/11/2016
  • Workshop #2 – build automation and unit testing - TBD
  • Workshop #3 – introduction to BLAS, PETSc, Trilinos - TBD
  • Workshop #4 – version control and repository management - TBD
Workshop #1 
The first workshop will cover the following topics:
  • Understanding compilation and object files
  • Understanding linker
  • Separating code into compilation units
  • Building static and dynamic libraries
  • Understanding pre-processor
  • Using the pre-processor to write portable code (see the sketch below)
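
As a brief illustration of the last topics, here is a minimal sketch, not taken from the workshop materials, of how the pre-processor can hide platform differences behind a single portable interface; the file name and the helper current_time_ms are illustrative assumptions:

    /* portable_time.c - illustrative sketch (not from the workshop):
     * the pre-processor selects a platform-specific implementation at
     * compile time, while callers see one portable function. */
    #include <stdio.h>

    #if defined(_WIN32)
    #include <windows.h>
    /* Windows branch: GetTickCount64() returns milliseconds since boot. */
    static unsigned long long current_time_ms(void) {
        return (unsigned long long)GetTickCount64();
    }
    #else
    #include <time.h>
    /* POSIX branch: monotonic clock via clock_gettime(). */
    static unsigned long long current_time_ms(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (unsigned long long)ts.tv_sec * 1000ULL
             + (unsigned long long)ts.tv_nsec / 1000000ULL;
    }
    #endif

    int main(void) {
        /* Callers never see the platform-specific code above. */
        printf("monotonic time: %llu ms\n", current_time_ms());
        return 0;
    }

Compiled as a single translation unit (e.g. cc portable_time.c), the example also exercises the compile-and-link pipeline discussed in the workshop.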

Hyperworks Workshop

Date: 30 August 2016, Tuesday
Time: 9.30 am - 12.30 pm
Venue: Level 17, Charles Babbage Room, 1 Fusionopolis Way, Connexis South, Singapore 138632

Abstract:
Historically, industry practices have limited virtual prototypes to design verification and validation, but computational simulation can serve far more of a purpose than virtual testing alone. Rather than making design decisions that rely on guesswork, simulation can lead to informed design decisions based on both functional and performance requirements, thereby leading to product innovation.

There are three enablers for simulation-driven innovation: optimization technologies, a broad portfolio of physics, and computational performance. All three need to be considered and present as part of an organization's CAE and overall product development innovation strategy. As engineers and product designers, you have a professional imperative to use the best tools possible in order to achieve results undreamt of just a few decades ago.

With the computational performance now possible through the HPC infrastructure at A*STAR, the optimization technologies and physics available from Altair can be leveraged for new-age product innovation in Singapore.

This short seminar will cover the aspects of optimization and multi-physics, introducing participants to the entire range of technologies that enable:
  • Design Innovation
  • Reducing engineering time considerably
  • Designing and Engineering for 3D Printing
  • Analyst Solutions - Solvers
  • FE Modelling (Pre-Processor)
  • Structural Analysis for Static & Dynamic (implicit solver)
  • Highly Non-linear analysis (explicit solver)
  • Multibody Dynamics (MBD)
  • Computational Fluid Dynamic (CFD)
  • Electromagnetic Analysis (EM)
  • Control System (1D simulation) 


Chemistry and materials with the ADF Modeling Suite  - a hands-on workshop

Date: 8 July 2016, Friday
Time: 9.30 am - 5 pm
Venue: Level 17, Charles Babbage Room, 1 Fusionopolis Way, Connexis South, Singapore 138632

Abstract:
The foundations for ADF (Amsterdam Density Functional) were laid in the 1970s, when Prof. Baerends pursued his Ph.D. at the VU University in Amsterdam. Prof. Ziegler joined the development efforts early on as a post-doc, as did the Snijders group in Groningen. These theoretical chemists actively advocated the use of DFT methods to gain insight into chemistry and materials at a time when this was not yet accepted in the chemical community.

Their PhD students and post-docs, as well as many other academic groups, continue to contribute to the development of ADF as well as its periodic companion, BAND. More recently a tight-binding (DFTB) module has been added to deal with both molecular and periodic systems of thousands of atoms. In collaboration with the van Duin group, ReaxFF, employing reactive atomistic potentials, has been implemented and optimized, enabling the study of larger, more complex systems at longer time scales. The post-QM method COSMO-RS for thermodynamic properties of fluids and solutions has been implemented with academic groups in collaboration with industry.

From its inception in the 1970s, ADF and BAND have targeted chemical bonding analysis and spectroscopic properties, in particular for inorganic systems, where the fragment-based approach, Slater orbitals and efficient relativistic treatment (ZORA) are particularly suitable. In an overview talk, we will briefly discuss the technicalities of ADF, BAND and the other modules and show applications from recent research papers in the literature. A short demo will be given of the integrated graphical interface for the ADF Modeling Suite, after which attendees get started with hands-on examples and exercises.

Participants should bring their own laptop with ADF pre-installed. Besides the standard tutorials and introductory examples, everyone is encouraged to get started studying their own system of interest.

About the Speaker:
Dr. Fedor Goumans has a broad background applying computational chemistry in many different fields. After obtaining his Master's in organic photochemistry, where he first combined theory with experiment, Fedor undertook his PhD research in computational chemistry at the VU University in Amsterdam within the experimental group of Prof. Koop Lammertsma. During his PhD he studied many different topics: scrutinizing the factors influencing ring strain with high-level calculations and conceptual quantum chemistry, QM/MM calculations on transition metal catalyzed metathesis to understand chiral induction, understanding photochemical dissociation from quantum dynamics on a TDDFT-generated PES, rationally designing catalytic polymerization of phospha-alkynes, and the electronic properties of organic polymers.

He received his PhD in 2005 and went on to a post-doc at University College London to computationally study astrochemical reactions on surfaces in the experimental group of Dr. Wendy Brown. He developed and tested transition state search methods within DL-FIND / ChemShell for studying reaction mechanisms on surfaces within the embedded cluster formalism (QM/MM). He also studied electron attachment processes and cluster nucleation rates in the interstellar medium.

In London, Fedor also started working on quantum tunneling, with the instanton formalism, to obtain more accurate reaction rates involving hydrogen atoms at 10-20 K in dense molecular clouds. During a 3-year independent fellowship at Leiden University (2008-2011) he further developed and applied these methodologies. Fedor joined SCM in 2012 as business developer, talking to scientists interested in developing and/or applying computational chemistry.

Please click here for the presentation slides.


Sequences to Systems: Creating the CompGen Engine at Illinois
The growth of sequencing data now far outstrips computer technology: genomic data quadruples every year while compute power at best doubles. Many bioinformatics algorithms rely on direct comparisons of nucleotide sequences and on optimization combined with statistical techniques that do not scale to massive datasets. To realize the biological and healthcare innovations and breakthroughs promised by advances in genome sequencing, new and disruptive algorithms and computational models must be invented. To this end, working with a multi-disciplinary team of computer engineers and scientists, biologists, clinicians and statisticians, we have designed the CompGen engine, a software framework supported by custom FPGA-based accelerators that significantly speeds up variant calling workflows while maintaining their accuracy.

While accelerating variant calling is often an essential first step, it is the subsequent analysis of the determined variants, jointly with clinical and other data, that is necessary to derive patient-specific actionable intelligence. Described as predictive hypotheses, such information can then be used by clinicians and biologists for accurate diagnoses or discovery testing in the labs, thereby reducing costs and time to scientific and clinical breakthroughs. Some examples of our work include: unsupervised learning to infer drug mechanisms in breast cancer, a game-theoretic approach to predicting lung adenocarcinoma, probabilistic graphical models to study diabetic populations in Singapore, and prediction of brain trauma.

This talk will outline the features and performance of our NSF-funded and industry-supported project aimed at building the CompGen Engine, as well as the subsequent analysis, in collaboration with the Mayo Clinic and with NUH in Singapore, to derive actionable intelligence from sequenced and clinical data.

Speaker:
Ravishankar K. Iyer
Coordinated Science Laboratory & Institute for Genomic Biology
Department of Electrical and Computer Engineering
University of Illinois at Urbana-Champaign

Ravishankar Iyer is the George and Ann Fisher Distinguished Professor of Engineering at the University of Illinois at Urbana-Champaign. He holds joint appointments in the Department of Electrical and Computer Engineering, the Coordinated Science Laboratory (CSL), and the Department of Computer Science. He serves as Chief Scientist of the Information Trust Institute and is affiliate faculty of the National Center for Supercomputing Applications (NCSA) and the Carl R. Woese Institute for Genomic Biology at Illinois.

Iyer has led several large successful projects funded by the National Aeronautics and Space Administration (NASA), Defense Advanced Research Projects Agency (DARPA), National Science Foundation (NSF), and industry. He currently co-leads the CompGen Initiative at Illinois. Funded by NSF and partnering with industry leaders, hospitals, and research laboratories, CompGen aims to build a new computational platform to address both accuracy and performance issues for a range of genomics applications.

Professor Iyer is a Fellow of the American Association for the Advancement of Science, the Institute of Electrical and Electronics Engineers (IEEE), and the Association for Computing Machinery (ACM). He has received several awards, including the American Institute of Aeronautics and Astronautics (AIAA) Information Systems Award, the IEEE Emanuel R. Piore Award, and the 2011 Outstanding Contributions award from the ACM Special Interest Group on Security for his fundamental and far-reaching contributions to secure and dependable computing. Professor Iyer is also the recipient of a Doctor Honoris Causa degree from Toulouse Sabatier University in France.

The Python for Finance workshop is addressed to everyone who wishes to learn programming in the Python language (Day 1) and begin coding a variety of financial models and ideas effortlessly (Day 2).

The workshop will cover the fundamentals of Python 3.5+, numerical aspects of coding and over 100 individually crafted examples covering various applications from finance, risk management, data analysis, statistics, and machine learning techniques in finance and beyond.

Date: 11-12 May 2016, Wednesday and Thursday (Workshop 1)
Date: 13-16 May 2016, Friday and Monday (Workshop 2)
Time: 8:00 AM to 5:30 PM
Venue: Charles Babbage Room (Fusionopolis, level 17, Connexis South)

Title: Python for Finance 2-Day Intensive Workshop

Every participant will receive a free copy of the e-book Python for Quants, Volume I by Pawel Lachowicz, PhD (235 pages).

None-to-some prior programming experience is the baseline. Bring your own laptop and a lot of enthusiasm to learn plenty of new and amazing things! Wi-Fi will be available during the course.

More information about the course is available at http://www.quantatrisk.com/SingaporePy/

Instructor: Dr. Pawel Lachowicz
Dr. Pawel Lachowicz (Sydney, Australia) received his PhD from the Polish Academy of Sciences in 2007 for applying novel signal-processing techniques in astrophysics, and subsequently worked at Temasek Laboratories and NUS in Singapore. He is a leading expert in data analysis covering financial markets, an educator, and an author of books on finance, data processing, and applied programming. He is also the founder of and a writer at QuantAtRisk.com. He specializes in Python and Matlab programming for finance.
Introduction to CENTRA
This talk describes a recently initiated funded international partnership, called CENTRA, to facilitate research collaborations on transnational cyberinfrastructure and its applications. The rationale, goals, progress and opportunities of CENTRA are presented. A brief introduction is made to the scientific advances being sought by CENTRA, the important societal problems it targets and the objective of creating international networks of scientists working on cyberinfrastructure and its applications. The CENTRA framework, including mechanisms to engage new institutions and researchers, is also discussed briefly.

Speaker:
José A.B. Fortes, PhD
Professor and AT&T Eminent Scholar
Director, Advanced Computing and Information Systems (ACIS) Laboratory

José A.B. Fortes is the AT&T Eminent Scholar and Professor of Electrical and Computer Engineering and Computer Science at the University of Florida where he founded and is the Director of the Advanced Computing and Information Systems Laboratory.

He received the B.S. degree in Electrical Engineering (Licenciatura em Engenharia Electrotécnica) from the Universidade de Angola in 1978, the M.S. degree in Electrical Engineering from Colorado State University, Fort Collins in 1981 and the Ph.D. degree in Electrical Engineering from the University of Southern California, Los Angeles in 1984. From 1984 until 2001 he was on the faculty of the School of Electrical Engineering of Purdue University at West Lafayette, Indiana. In 2001 he joined both the Department of Electrical and Computer Engineering and the Department of Computer and Information Science and Engineering of the University of Florida as Professor and BellSouth Eminent Scholar. From July 1989 through July 1990 he served at the National Science Foundation as director of the Microelectronics Systems Architecture program. From June 1993 until January 1994 he was a Visiting Professor at the Computer Architecture Department of the Universitat Politecnica de Catalunya in Barcelona, Spain.

His research interests are in the areas of distributed computing, autonomic computing, computer architecture, parallel processing and fault-tolerant computing. He has authored or coauthored over 200 technical papers and has led the development and deployment of cloud and grid computing software used in several cyberinfrastructures for e-Science and digital government. His research has been funded by the Office of Naval Research, AT&T Foundation, IBM, General Electric, Intel, Northrop-Grumman, Army Research Office, NASA, Semiconductor Research Corporation and the National Science Foundation.

Supercomputing Frontiers 2016 (SCF2016) is back again and will be held from March 15 – 18, 2016 at Matrix Building, Biopolis. Organised by A*STAR Computational Resource Centre (A*CRC), SCF2016 is a platform for thought leaders from both academia and industry to interact and discuss visionary ideas, important global trends and substantial innovations in supercomputing.

CONFERENCE THEMES
The conference themes for this year will focus on the following:
• Towards Exascale
• Novel non-standard processor architectures – Automata Processor, Rex, Neuromorphic Processor, Quantum Annealer
• Convolution of Supercomputing, AI and biological brain
• Languages for Exascale & for human-computer interaction
• Supercomputing Applications – Computational Fluid Dynamics, Molecular Dynamics, Genomics, Bioinformatics etc.

KEYNOTE SPEAKERS
We have a stellar lineup of speakers including the following keynote speakers:
BARONESS SUSAN GREENFIELD, Neuroscientist and Senior Research Fellow, Oxford University, United Kingdom
SRINIVAS ALURU, Professor, School of Computational Science & Engineering, Georgia Institute of Technology
HORST SIMON, Deputy Laboratory Director & Chief Research Officer, Lawrence Berkeley National Laboratory, USA
BRONIS R. DE SUPINSKI, Chief Technology Officer, Livermore Computing, Lawrence Livermore National Laboratory, USA

For more information, please visit our website - http://supercomputingfrontiers.com/2016/ .

IBM OpenPower

Date: 13 October 2015, Tuesday
Time: 10 am - 12.30 pm and 1.30 pm - 3 pm
Venue: Aspiration @ MATRIX Level 2M

Topic | Title | Speaker | Duration
1 | Overview of the POWER8 Processor and Systems | Peter Hofstee, IBM | 45 mins
2 | (Open)POWER-based Big Data Systems from Watson to Spark | Peter Hofstee, IBM | 45 mins
3 | NVIDIA's GPU accelerator for OpenPOWER | Simon See, NVIDIA | 45 mins
4 | Reconfigurable acceleration for Big Data and Genomics | Peter Hofstee, IBM | 45 mins
5 | OpenPOWER in Academia and Research | Ganesan Narayanasamy, IBM | 45 mins


Title: Overview of the POWER8 Processor and Systems

This session will provide an overview of the POWER8 processor and the systems built with POWER8 technology. With 96 hardware threads over twelve cores running at up to 4 GHz, 96 MB of on-chip L3 cache, 128 MB of L4 cache, up to 230 GB/s of main memory bandwidth, and numerous architectural innovations such as transactional memory, the POWER8 processor provides leading throughput performance. Processor performance is matched by a powerful I/O subsystem with additional innovations such as the Coherent Accelerator Processor Interface, and a high-bandwidth SMP fabric that scales to a very flat 16-socket, 1,536-thread, 16 TB SMP. We will also describe how POWER8 technology is leveraged in OpenPOWER systems, by combining POWER8 technology with that of our OpenPOWER partners such as NVIDIA, Xilinx and Altera, Mellanox, and others for a wide variety of systems from IBM and other system vendors.

Title: (Open)POWER-based Big Data Systems from Watson to Spark

This session will address how we are using POWER8 technology to create leading solutions for analytics and Big Data. We provide a short introduction to the types of problems we aim to solve with Big Data and analytics, and then discuss how POWER8 is used in systems for Big Data frameworks such as Hadoop and Spark. Storage architecture determines a large fraction of the cost of these systems, and we show how POWER8 technology and accelerators are used in cost-effective storage systems as well as a flash-based Redis key-value store. We also show how the technology can be used to create a high-performance implementation of write logging in Cassandra.

Title: Reconfigurable acceleration for Big Data and Genomics

This session dives a bit deeper into the use of reconfigurable acceleration leveraging the CAPI interface on POWER8. We start by describing a high-performance implementation of gzip compression, an application that at first sight seems difficult to accelerate. Our implementation of gzip in reconfigurable logic achieves up to 4 GB/s encode and decode speeds (at the macro level) and a latency of just a few microseconds per 4 KB page. Next we discuss how CAPI has been leveraged to accelerate gene sequencing. We describe the benefits of acceleration for each of the three major stages of the pipeline (alignment, duplicate removal, and variant calling). We also describe the CAPI-flash implementation in a bit more detail, and discuss its various APIs. Next we touch on some image processing work, and look ahead at our next set of opportunities and challenges for acceleration. We end with a description of accelerated cloud-based infrastructure that can be freely leveraged by OpenPOWER members (and academic membership is free).
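
As a quick plausibility check (our arithmetic, not a figure from the talk): at the quoted 4 GB/s throughput, streaming a single 4 KB page takes

$$t_{\text{page}} \approx \frac{4\ \text{KB}}{4\ \text{GB/s}} = 1\ \mu\text{s},$$

so an end-to-end latency of a few microseconds per page is consistent with the quoted bandwidth once transfer and setup overheads are included.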

Title: OpenPOWER in Academia and Research

We will share up-to-date information on universities involved in OpenPOWER-related projects and research collaborations, and on the resources available for research and collaboration at universities around the world that help develop and build skills. We will also share a list of ongoing academic OpenPOWER projects, the details of the operating model by which academic members contribute to the projects, and details about current Academia Workgroup members and their various activities.


Speaker: H. Peter Hofstee (Ph.D., California Institute of Technology, 1995)

H. Peter Hofstee is a Distinguished Research Staff Member at the IBM Austin Research Laboratory, USA, and a part-time professor in Big Data Systems at Delft University of Technology, Netherlands. Peter is best known for his contributions to heterogeneous computer architecture as the chief architect of the Synergistic Processor Elements in the Cell Broadband Engine processor, used in the Sony PlayStation 3 and in the first supercomputer to reach sustained petaflop operation. Since returning to IBM Research in 2011 he has focused on optimizing the system roadmap for big data, analytics, and cloud, including the use of accelerated compute. His early research work on coherently attached reconfigurable acceleration on POWER7 paved the way for the new coherent accelerator processor interface on POWER8. Peter is an IBM Master Inventor with more than 100 issued patents and a member of the IBM Academy of Technology.

Speaker: Ganesan Narayanasamy

Ganesan Narayanasamy is the OpenPOWER leader for academia and research at the IBM Lab. Ganesan is best known for his contributions to high performance computing as a senior leader for nearly 15 years. He leads the worldwide Academia Workgroup for OpenPOWER and drives OpenPOWER ecosystem development activities such as setting up OpenPOWER centers of excellence and labs, and curriculum development. Ganesan is passionate about working with universities and research institutes and providing all kinds of technical support.

A*CRC Workshop for Cluster Management and OpenStack Private Cloud

Date: 10 April 2015, Friday
Time: 9.30 am – 4.00 pm
Venue: Level 17, Charles Babbage Room, 1 Fusionopolis Way, Connexis South, Singapore 138632

Information
Bright Cluster Manager (BCM) is one of the leading tools in the area of cluster management. In this workshop, we will learn how to set up an HPC cluster with BCM through demonstration and hands-on activities. Another new and important feature of BCM is that it can integrate with OpenStack to provision virtual nodes (a private cloud) on an HPC cluster (an 'HPC cloud'). These virtual nodes support InfiniBand connectivity and are ready to accept jobs dispatched by the workload manager installed on the HPC cluster.

Contents
  • Bright Cluster Manager Introduction
  • Network Topology
  • Installing a Head Node, Compute Nodes
  • Fine-tuning Cluster Configuration
  • Validating a Cluster
  • Deploying an OpenStack Private Cloud 
  • Maintaining and Monitoring a Cluster with/without OpenStack Private Cloud
Schedule
Morning Session (9:30am-12:30pm): demonstration and hands-on
  1. Install Bright Cluster Manager
  2. Create a fully operational cluster from bare metal hardware
  3. Configure the cluster for delivery to system administrator
  4. Verify that a cluster is working properly
  5. Diagnose and troubleshoot problems
Afternoon Session (2:00pm-4:00pm): demonstration and discussion
  1. Create an OpenStack private cloud
  2. Maintain and monitor the cluster

Biography 
Mr. Stober has extensive experience in HPC. Since 1997, Robert has worked first as a systems engineer at Westinghouse Power Generation and eventually as a solutions architect at Platform Computing. Robert joined Bright Computing in January 2011 and hasn't looked back.

Robert Stober has over 18 years of experience in HPC. Over the years he's worn many hats. He started as a UNIX system administrator in the technology development division at Westinghouse Power Generation, where he specialized in application integration and workload management. He then went to work for Platform Computing, where he was a Technical Consultant and Systems Engineer focused on sales and implementation of Platform LSF at major customer locations. For the last four years Robert has been building HPC clusters with Bright Cluster Manager. Robert's extensive background in workload management, application integration and solutions design brings a unique understanding of the requirements for HPC cluster management.

Specialties: HPC Application Integration, Workload Management, Solution Architecture

Supercomputing Frontiers 2015

Date: 17 - 20 March 2015, Tuesday - Friday
Venue: Biopolis: Matrix Building, Levels 2, 3 & 4

Supercomputing Frontiers 2015 is Singapore’s inaugural conference on trends and innovations in the world of high performance computing.
It will be held on March 17 – 20, 2015 at Biopolis’ Matrix Building in Singapore.

Organised by A*STAR Computational Resource Centre, Supercomputing Frontiers 2015 will explore global trends and innovations in high performance computing in convolution of the following important areas:

  • Supercomputing applications in domains of critical impact in economic and human terms, and especially those requiring computer resources approaching Exascale;
  • Big Data science merging with Supercomputing with associated issues of I/O, high bandwidth networking, storage, workflows and real time processing;
  • Architectural complexity of Exascale systems with special focus on supercomputing interconnects, interconnect topologies and routing, and interplay of interconnect topologies with algorithmic communication patterns for both numerically intensive computations and Big Data; and
  • Any other topics that push the boundaries of Supercomputing to Exascale and beyond.

More details are found at the conference website: http://supercomputingfrontiers2015.com
ParaGauss – An HPC DFT program package for molecules and clusters. Methods, computational strategies, algorithms

Date: 25 November 2014, Tuesday 
Time: 2.30pm - 4.30pm
Venue: Level 17, Charles Babbage Room, 1 Fusionopolis Way, Connexis South, Singapore 138632
Speaker: Alexei Matveev, Technische Universität München, Department Chemie, Germany

Quantum chemistry relies on a wide spectrum of methods and algorithms implemented as computer programs. ParaGauss offers the most popular density functional methods, including local density and generalized gradient approximations (LDA, GGA), meta-GGA and hybrid density functionals. For systems containing heavy elements, effective core potentials and all-electron relativistic methods with or without inclusion of spin-orbit effects are available. Point group symmetry and extensions to double groups are fully exploited. Empirical van der Waals corrections, DFT+U self-interaction correction schemes, and QM/MM embedding schemes for solid and liquid environments enlarge the applicability of ParaGauss to complex systems. For most method combinations both first- and second-order energy derivatives are available. Efficient parallelization strategies allow the use of more than 2000 CPU cores for computationally demanding hybrid DFT calculations.

This presentation will provide an overview of features and computational strategies implemented in ParaGauss and discuss recent developments, like efforts to improve the scalability of hybrid DFT and matrix operations, a toolbox for exploring potential energy surfaces, and an embedding scheme for liquid environments beyond the polarizable continuum approach.

About the Speaker
Dr. Alexei Matveev is a researcher at the theoretical chemistry department of the Technical University of Munich, with 15 years of experience in the field of quantum chemistry and more than 30 publications in peer-reviewed journals and conference volumes.

In 2003 Alexei Matveev received a PhD (Dr. rer. nat.) from the Technical University of Munich for his work on extending the Douglas-Kroll-Hess method of treating relativistic effects in heavy elements with a variational treatment of electron-electron interactions in the framework of Density Functional Theory, and on a method to exploit point group symmetry for constructing spinor bases that allow efficient representations of spin-free Hamiltonian terms in spin-orbit calculations. After that Alexei Matveev continued to work as a researcher at the same institution. He is the author of the first implementation of second-order energy derivatives for the Douglas-Kroll-Hess approach and of one of the first efficient local unitary transformation schemes, which are becoming increasingly popular as an approximation for the relativistic treatment of large systems. During this time his responsibilities included supervising local computing facilities and assisting user operations.

He contributed as a co-author to the design and implementation of a Python toolbox to build and explore potential energy surfaces, a high-level parallel library for matrix algebra, a static scheduler for malleable tasks and a generic dynamic load-balancing library. He co-authored works exploring an empirical correction scheme for spin-orbit interactions, a practical scheme for g-tensor evaluation, characterization of excitation spectra, an empirical self-interaction correction scheme, and various applications of Density Functional Theory. He is currently working on a parallel implementation of a statistical method for describing molecular solvents in combination with a quantum chemical description of the solute.
VisNOW Visualization Workshop

Date: 13 - 17 October 2014, Monday - Friday
Venue: Level 17, Charles Babbage Room, 1 Fusionopolis Way, Connexis South, Singapore 138632
Speaker: Bartosz Borucki, Interdisciplinary Centre for Mathematical and Computational Modelling, University of Warsaw

VisNow is a generic visualization framework in Java technology, developed by the Interdisciplinary Centre for Mathematical and Computational Modelling at the University of Warsaw.

It is a modular, data-flow-driven platform enabling users to create schemes for data visualization, visual analysis, data processing and simple simulations. Motivated by the 'Read and Watch' idea, VisNow shows the data as soon and as fast as possible, giving further opportunity for processing and more in-depth visualization. In a few steps it can create professional images and movies, as well as discover unknown information hidden in datasets.

Part 1: VisNow introduction (Monday – Tuesday)
Prerequisites: general knowledge of scientific data and visualization
Abstract: In Part 1 of the VisNow training we will introduce the general concepts of scientific visualization and visual analysis, especially in HPC environments. The VisNow platform will be described, including its underlying philosophy, generic data structures and data flow network. User interfaces will be introduced and several case studies presented in hands-on sessions to familiarize participants with the VisNow interface and the most common visualization schemes.
  1. Introduction to Visual Analysis
  2. Visualization systems and paradigms
  3. Generic data structures
  4. Introduction to VisNow
  5. Case Study #1 – 2D data visualization
  6. Case Study #2 – 3D data visualization
  7. Case Study #3 – Vector data visualization
  8. Case Study #4 – Unstructured data visualization
  9. Data computations
Part 2: VisNow advanced topics and programming (Wednesday – Friday)
Prerequisites: VisNow introduction, Java SE programming
Abstract: In Part 2 of the VisNow training we will cover the more sophisticated and problem-specific visualization capabilities of VisNow, including an overview of time-dependent data, applications of visualization in debugging and data exploration, and various visualization techniques. Participants will be introduced to the generic data header for I/O of their own regular datasets. For the most advanced users and programmers, the functionality of VisNow plugins will be introduced in a hands-on module programming course.
  1. Introduction to time dependent data in VisNow
  2. Case Study #5 – Time dependent data visualization
  3. Case Study #6 – Visual debugging and analysis
  4. Various visualization techniques and modules
  5. VisNow data I/O
  6. VisNow plugins and module programming
Detailed Information about the VisNOW workshop is available here.

A VisNOW paper submitted to the Conference on Computer Graphics, Visualization and Computer Vision 2014 is available here.

About the Speaker
Bartosz Borucki is currently a researcher and scientific project manager at the Interdisciplinary Centre for Mathematical and Computational Modelling (ICM) at the University of Warsaw.

Bartosz Borucki graduated from the Faculty of Physics at the University of Warsaw in the fields of Computational Methods of Physics and Fourier Optics and Optical Information Processing. He has been an employee of ICM UW since 2006. He is a specialist in the fields of scientific visualisation and visual analysis, image data processing and High Performance Computing.


HPC Advisory Council Singapore Conference 2014

Date: 7 October 2014, Tuesday
Venue: Biopolis, Matrix, Level 4, 30 Biopolis Street, Singapore 138671

The HPC Advisory Council, in association with the A*STAR Computational Resource Centre, will hold the HPC Advisory Council Singapore Conference 2014 on October 7, 2014. The conference will focus on High-Performance Computing (HPC) usage models and benefits, the future of supercomputing, the latest technology developments, best practices and advanced HPC topics. The conference is open to the public and will bring together system managers, researchers, developers, computational scientists and industry affiliates.

More information can be obtained here.

Advanced Computational and Data-centric Science at Stony Brook University and Brookhaven National Laboratory

Date: 29 August 2014, Friday
Time: 11.00 am – 12.00 pm
Venue: Charles Babbage Room (Fusionopolis, level 17, Connexis South)
Speaker: Dr Robert Harrison, Executive Director, Institute for Advanced Computational Science, Stony Brook University

He is the co-author of MADNESS (Multiresolution Adaptive Numerical Environment for Scientific Simulation).  Please refer to the following link for more information on MADNESS: https://code.google.com/p/m-a-d-n-e-s-s/

Biography:
Professor Robert Harrison is a distinguished expert in high-performance computing. Through a joint appointment with Brookhaven National Laboratory, Professor Harrison has also been named Director of the Computational Science Center at Brookhaven National Laboratory. Dr. Harrison comes to Stony Brook from the University of Tennessee and Oak Ridge National Laboratory, where he was Director of the Joint Institute of Computational Science, Professor of Chemistry and Corporate Fellow. He has a prolific career in high-performance computing with over one hundred publications on the subject, as well as extensive service on national advisory committees.

Tackling problems on Blue Waters, the fastest supercomputer on a university campus

This ADSC – A*CRC joint seminar aims to present NCSA's and A*CRC's facilities, the applications running on the machines and their impact on the HPC scene in the US and Singapore.

Date: 9 July 2014, Wednesday
Time: 2.30 pm – 5.00 pm
Venue: Level 17, Charles Babbage Room, 1 Fusionopolis Way, Connexis South, Singapore 138632
Speaker: Dr Cristina Beldica, Associate Director at the National Center for Supercomputing Applications (NCSA) of University of Illinois at Urbana-Champaign (UIUC), United States of America

Abstract
Science and engineering research has been revolutionized by computation, and opportunities abound for revolutionizing data-driven science. A new generation of supercomputers is providing scientists and engineers with the ability to simulate a broad range of natural and engineered systems with unprecedented fidelity. Established in 1986, the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign has a rich tradition in leading the transformation of all research disciplines and society itself through a comprehensive set of hardware, software, and algorithmic tools and environments supporting research, education, and collaboration across disciplines. NCSA is the home of the Blue Waters computing system that is capable of sustained performance exceeding one quadrillion calculations per second on a broad range of science and engineering applications. This computer has been configured to enable it to solve the most compute-, memory- and data-intensive problems in science and engineering. NCSA also leads NSF’s Extreme Science and Engineering Discovery Environment (XSEDE), which unites skilled staff and powerful resources across the country. XSEDE assists digital research at all levels, creating on-ramps for more campuses, more investigators and scholars, and more students. The National Center for Supercomputing Applications has strategic investments in data and information science and technology. NCSA works with researchers to aggregate distributed, highly heterogeneous datasets at all scales, analyze them to obtain reproducible results with scalable, maintainable software pipelines on optimized computational data platforms in highly efficient, secure collaborative environments. Through its Private Sector Program, NCSA plays a leading role in transferring knowledge to the private sector and helping industry partners take advantage of advanced computing and digital ecosystems to enhance their competitiveness.

Agenda
2:30 pm – 2:45 pm Registration
2:45 pm – 3:30 pm “Tackling problems on Blue Waters, the fastest supercomputer on a university campus” - Dr Cristina Beldica, Associate Director at NCSA
3:30 pm – 4:00 pm “A*STAR Computational Resource Center (A*CRC)” - Dr Marek Michalewicz, Senior Director at A*CRC
4:00 pm – 4:30 pm Q & A
4:30 pm – 5:00 pm Guided tour of A*CRC facilities

Biography
Dr. Cristina Beldica is Associate Director at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign. She serves as the Executive Director and co-principal investigator of the $360M NSF-funded Blue Waters petascale computer project. The Blue Waters system is capable of sustained petaflop performance and designed to meet the compute-intensive, memory-intensive, and data-intensive needs of a wide range of science and engineering applications. Prior to joining the Blue Waters project, she worked on other large cyberinfrastructure projects such as the Dark Energy Survey (DES), the Large Synoptic Survey Telescope (LSST), and the Network for Earthquake Engineering Simulation (NEES). Dr. Beldica began her career at NCSA as a research scientist for computational structural mechanics. Her research interests include characterization of finite elements, meshless methods for computational mechanics, viscoelastic materials, linear and nonlinear stress analysis, and fracture properties of reinforced composites. Dr. Beldica holds a Ph.D. degree in Engineering Mechanics, a Master of Science degree in Aeronautical Engineering and a Master of Business Administration.

Prediction of physical and chemical materials properties by using MedeA® software

Date: 7 April 2014, Monday
Time: 9 am – 5 pm
Venue: Level 17, Charles Babbage Room, 1 Fusionopolis Way, Connexis South, Singapore 138632
Speaker: Dr Hannes Schweiger, Senior Scientist, Materials Design Inc

Abstract
The MedeA® software package predicts materials properties using simulations based on quantum mechanics, statistical thermodynamics, classical mechanics and electrodynamics as well as correlation methods involving empirical data.

Integration of advanced computational approaches based on density functional theory and forcefield methods with comprehensive experimental databases provides high predictive power.

MedeA® is designed for materials engineers and scientists who want rapid and reliable answers for a range of materials issues related to application areas such as electrical power generation, automotive applications, energy storage, alloy design, microelectronics, the chemical industry and petrochemicals.

Academic researchers rely on MedeA® for interpretation of experimental data, for gaining a deep understanding of materials properties, and as a basis for research in computational materials science. As such it also has unique value as a tool for learning and teaching.

Biography
Dr Hannes Schweiger is Senior Scientist and Director of Support at Materials Design Inc. He holds a Ph.D. in physics (computational materials science) from the University of Vienna. He has worked with VASP and FLAPW, followed by research positions at the French Petroleum Institute (IFP Energies nouvelles), ISMANS (Le Mans, France), and Case Western Reserve University (USA), working on heterogeneous catalysis and fuel cells.

In addition to his scientific work, Dr Hannes Schweiger is adept in making sure that MedeA installs on various platforms and integrates with queuing systems. Proud of working with outstanding scientific colleagues, he is responsible for creating documentation and training workshops for the whole suite of programs integrated in MedeA, from DFT to atomistic simulations of millions of atoms.

During the presentation and workshop on 7 April, he will introduce the MedeA software package to the audience and show how it can help predict physical and chemical materials properties.

National Cheng Kung University, Taiwan: Three Lectures mini-series.

The Supercomputing Research Center at National Cheng Kung University (NCKU-SRC), Taiwan, aims to boost research energy by collaborating with domestic and foreign high performance computing institutes. The vision of the SRC is to help research institutes in Taiwan, in fields including hazard and disaster mitigation, renewable energy innovation, biomedicine, and photo-electronics technology, achieve world-class research excellence. The SRC also plans to provide learning and training opportunities for domestic professionals in supercomputer hardware and software development. NCKU-SRC developed the world's first switchless cluster supercomputer in November 2014, and in the long run it expects to act as a mediator in collaborations between domestic high-tech firms and overseas HPC teams to design and develop next-generation supercomputers. Finally, NCKU-SRC currently pays special attention to HPC applications in genetic/genomic medicine and related topics in computer-aided drug design.

Lecture | Title | Date | Time
Lecture 1 | Market Trend, Market of Supercomputer Based Gene Sequencing & Genetic Testing | 31 March 2014, Monday | 2.00 pm
Lecture 2 | Liberal Communications, Democracy Topologies, Fraternity Software Engineering: Approach to the Exascale Supercomputer | 1 April 2014, Tuesday | 10.00 am
Lecture 3 | Core Technologies of Genetic Testing & Disease Diagnostic Services | 1 April 2014, Tuesday | 2.00 pm

Venue: Level 17, Charles Babbage Room, 1 Fusionopolis Way, Connexis South, Singapore 138632
Speakers: Prof Huang Chi Chuan, Frank Yeh, Dr Chen Yu Tin, Dr Liang Chi Hsiu

ABSTRACT
Lecture 1: Market Trend, Market of Supercomputer Based Gene Sequencing & Genetic Testing
Based on NCKU SRC's new HPC invention, we have evaluated its potential market value and influence. We have implemented the invention for specific gene sequencing applications, and we find that the advantages supercomputing brings to gene sequencing will also change the genetic testing market. Beyond gene sequencing, the NCKU SRC HPC invention will give end users more efficiency in computing and energy, making applications more effective as well.

Lecture 2: Liberal Communications, Democracy Topologies, Fraternity Software Engineering: Approach to the Exascale Supercomputer
The switchless cluster CK-STAR, established at NCKU, is so far the only one of its kind in the world, and its performance data prove the achievement. A configuration of eight 2-way servers reached 98.41% efficiency, and MPI applications such as LAMMPS (molecular dynamics), WRF (Weather Research and Forecasting) and ABySS (de novo assembly) performed well on this architecture. Furthermore, with eight Xeon Phi coprocessors the system reached 80.2% and 77.5% efficiency on four and eight nodes, respectively. A switchless design removes the interconnect bottleneck that appears as the number of nodes grows; CK-STAR resolves this dilemma and extends the cluster computing model.

Democracy topologies achieve competitive average hop distances. Constructed with 10 interconnections per node, a 1024-node network has an average hop distance of only 3.63, beating the IBM 5D torus. Furthermore, with 16 and 8 interconnections per node, a large supercomputer with 100K nodes reaches average hop distances of 5.365 and 8.665, respectively.
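
For context, here is the standard quantity these figures refer to (our notation, not the speakers'): for a network on $N$ nodes with shortest-path distance $d(i,j)$, the average hop distance is

$$\bar{h} = \frac{1}{N(N-1)} \sum_{i \neq j} d(i,j).$$

As a plausibility check (our arithmetic): a degree-10 network can reach at most $10 \cdot 9^{h-1}$ new nodes at exactly $h$ hops, so within 3 hops at most $1 + 10 + 90 + 810 = 911 < 1024$ nodes are reachable. Some pairs in a 1024-node, degree-10 network are therefore at least 4 hops apart, which makes an average of 3.63 close to the theoretical optimum.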

Fraternity software engineering is a challenging task, and it is necessary for using accelerators in HPC. A practical course on Intel coprocessors and parallel software engineering tasks is in progress at SRC, NCKU. We propose a software engineering workflow for porting applications to accelerators.

In conclusion, this is a prototype design in which a smart post-office protocol (liberal communication), a quasi-optimal interconnection network (democracy topology) and parallel software porting in a heterogeneous environment (fraternity software engineering) are three practical methods for approaching the exascale challenge.

Lecture 3: Core Technologies of Genetic Testing & Disease Diagnostic Services
This presentation covers recent developments in genetic risk assessment core technologies and our proposal for utilising qualitative information in nucleotide polymorphisms. Fairly introductory information about gene assessment and disease diagnostics will be given in this talk, together with our strategies. The data collected and analysed with our developing expert system will eventually be used to uncover missing links in current healthcare services. In all, genetic assessment as a tool can be used to improve rational drug selection and early treatment suggestions after disease prognosis; furthermore, it can be considered part of routine tasks to broaden the facets of healthcare services.

Biography
Prof. Huang Chi Chuan
Prof Huang Chi-Chuan is Chair Professor of the Department of Engineering Science, NCKU, and previously served as Dean of Academic Affairs of NCKU. Prof Huang specialises in nonlinear chaos theory, microscopic mechanics, semiconductor engineering, quantum information, quantum system control theory, bioinformatics, and neuroscience.

Frank Yeh
Frank Yeh worked for Cray Inc. for the past four years, where he was in charge of business development across Taiwan, Hong Kong and the Philippines. He has now been recruited as Chief Consultant of the Supercomputing Research Center, NCKU. At the same time, Frank is also the representative of the pre-opening office of "Taiji Gene and Life Company". This start-up will work very closely with NCKU SRC to obtain technical support in gene and disease analysis. Before Cray, Frank held management and senior sales and marketing roles at other MNCs, including EMC, SGI, McAfee and HP.

Dr Chen Yu Tin
Dr. Chen Yu-Tin received his PhD in Computer Science from National Tsing Hua University, Hsin-Chu in 2012. He is currently a post-doctoral fellow at the Supercomputing Research Center, NCKU, Tainan (2013-now); his research fields are comparative genomics and evolutionary biology. He was previously a post-doctoral fellow in Life Science, NCKU (2012-2013), and before that spent four years in bioinformatics and pharmacogenomics (2000-2004) as a research assistant at National Yang Ming University and a research associate at Vita Genomics, Inc.

Dr Liang Chi Hsiu
Dr Liang Chi-Hsiu is specialized in biology and the physical sciences. He received his PhD from the University of Manchester, UK. In his first position he contributed to MESMER, an object-oriented code describing statistical aspects of gas phase chemistry. He is now working on genetic risk assessment of diseases and several other projects related to high-performance computing.
Supercomputing Activities in Korea

Date: 10 March 2014, Monday
Time: 10 am – 12 pm
Venue: Level 17, Charles Babbage Room, 1 Fusionopolis Way, Connexis South, Singapore 138632
Speaker: Jysoo Lee, National Institute of Supercomputing and Networking, Korea

Abstract
An overview of supercomputing activities in Korea will be presented. Emphasis will be given to the National Supercomputing Promotion Act passed by the National Assembly of Korea in 2011. With the law, increased investment in the nation's supercomputing ecosystem and coordination among government agencies are expected. The five-year master plan for national supercomputing, established in 2012, will be described in such areas as applications, infrastructure, and R&D. The talk will conclude with a brief description of the newly created National Institute of Supercomputing and Networking.

Biography
Dr. Jysoo Lee is the founding director general of NISN (National Institute of Supercomputing and Networking) in Korea. He received a BS degree from Seoul National University in 1985 and a Ph.D. degree from Boston University in 1992, both in physics. He was a visiting scholar at the Jülich Supercomputing Centre in Germany, and a research associate at the Levich Institute of the City College of New York. He joined KISTI (Korea Institute of Science and Technology Information) in 2001, was director of its supercomputing center in 2004-2006 and 2009-2012, and assumed his present position in 2013.

He has been leading several national-scale high-performance computing projects, such as the K*Grid National Grid Project, the National e-Science Project, and the National Supercomputing Infrastructure Initiative. He played the leading role in the legislation of the Utilization and Promotion of National Supercomputing Act, similar to the HPC Act of the United States, for broader adoption of high-performance computing in Korea. He also played a key role in establishing Korea's national supercomputing master plan.

He established the Grid and Supercomputing Program at UST (University of Science and Technology), and has been its chief professor since its inception. He is also a vice-chair of the KSCSE (Korea Society of Computational Sciences and Engineering), and a member of the National Supercomputing Committee of Korea.

Mathematical and Computational Modelling in Multidisciplinary Areas of Science and Engineering. Three Lectures mini-series.
The Interdisciplinary Centre for Mathematical and Computational Modelling (ICM) was founded in 1993 as an institution new to the then-existing Polish system of science. ICM's interdisciplinary attitude has been a driving force in all its forms of activity: research, research infrastructures, education and promotion. Owing to this attitude, a large number of research and R&D projects have been undertaken, with quite a few rather spectacular results, in areas ranging from simulation of enzymatic reaction dynamics at a mixed quantum-classical scale, computational design of biomolecular machines, and the search for new classes of advanced functional materials, to computational modelling of complex operational networks for air transportation applications, via optimization of technological processes in the food industry (chocolate manufacturing and espresso coffee making).

As a natural consequence of its interdisciplinary character, ICM thrives on partnerships, offering high added value in several respects. In particular, there is no close analogue of its harmonized combination of interdisciplinarity in research and education, capability in ICT research infrastructures, and a wide range of contributions to national and broader programs.

Lecture | Title | Date | Time
Lecture 1 | The concept of interdisciplinarity as driving constitutive idea for ICM research & development activities | 24 February 2014, Monday | 2.30 pm – 3.30 pm
Lecture 2 | ICM's contributions and role in national and international research e-infrastructures | 25 February 2014, Tuesday | 10 am – 11 am
Lecture 3 | Computational modelling and management of complex operational dynamical networks | 26 February 2014, Wednesday | 2.30 pm – 3.30 pm

Venue: Level 17, Charles Babbage Room, 1 Fusionopolis Way, Connexis South, Singapore 138632
Speaker: Professor Marek Niezgódka PhD, DSc

ABSTRACT
Lecture 1: The concept of interdisciplinarity as the driving constitutive idea for ICM research & development activities
In this talk, a selection of representative projects will be overviewed, with the aim of reflecting today's interdisciplinary capacity of ICM. The specific problems addressed will include:
• Modelling of the dynamics of systems with complex geometry and topology:
  o operational networks
  o structured populations
  o multi-scale porous media
  o cosmology of dark matter
• Applications to life sciences, biotechnology and personalized medicine:
  o non-invasive diagnostics in cardiology and cardiosurgery planning
  o new protocols in traumatology
  o high-throughput solutions for multiscale modelling of disease agents
  o dynamics, aggregation and diffusion in macromolecular complexes
  o impact of external stresses and shocks on biological systems
  o enzymatic reactions in biosystems (quantum-classical scale of resolution)
• Applications to the natural environment:
  o high-resolution numerical weather prediction
  o decision support for systems dependent on atmospheric developments in the public sector, industry and agriculture
  o detection of clear-air turbulence
  o air transportation systems
• Materials science and technology:
  o new SiC-based materials
  o quantum semiconducting nanostructures for biosensing
A few of the wide range of new algorithmic solutions, and their implementations optimized for the latest (and forthcoming) computer architectures, will also be covered:
• scientific visualization and visual data analysis
• specific highly-scalable software solutions for next-generation architectures and distributed environments

Lecture 2: ICM’ contributions and role in national and international research e-infrastructures
Some of the ICM contributions to large research infrastructures and their development:
• national networked HPC infrastructure: applications software, capability computing
• national virtual library of science: content, software system
• unified national academic information infrastructure: integrated system
• Polish Research Bibliography and Polish Citation Index
• EU: DRIVER and OpenAIRE open repository infrastructures
• EU: EuDML (European Digital Mathematics Library)
• EU: UNICORE grid infrastructure (security functionalities)

In this talk, a synthetic overview will be given, focused on:
• main underlying ideas and implementation concepts of ICM's mission
• positioning of ICM in the national and international research and e-infrastructure landscape
• current activities and the 2020-oriented action plan

Lecture 3: Computational modelling and management of complex operational dynamical networks
Research challenges in the mathematical modelling of processes in systems with complex geometry/topology arise at:
• the basic mathematical level, where assemblies of standard models exhibit features that prevent any use of existing formalisms; this will be addressed for nonlinear diffusion in non-homogeneous systems over complex domains
• the computational level, where natural multiscale set-ups lead to extreme range gaps in the driving variables, contributing to high sensitivity and a lack of numerical stability
At ICM, problems of such complexity are explored equally in the context of theoretical foundations and of applied aspects, ranging up to computer-based implementations of the developed solutions. All of this refers to process dynamics on complex operational networks with intrinsic structure.
Specific solutions were proposed within a stochastic framework, with a wide range of applications to the prediction, planning and control of such systems.
Among the applications, a number of problems arising in various areas of air transportation will be reported in the talk. These problems concern:
• air traffic management, in the context of the forthcoming Single European Sky policy
• airport operations planning and optimization
• multi-horizon airline operations planning, scheduling and management
Other associated application areas that will be reviewed include the modelling of epidemic spreading, specifically addressing various types of influenza. Recent large-scale computer simulation solutions developed at ICM for modelling multiscale biological developments will be presented, exhibiting high scalability of the implementations on BlueGene/Q systems.

Biography
Professor Marek Niezgódka PhD, DSc
Main Scientific Achievement
  • Mathematical results on the uniqueness in 2-phase Stefan-like problems with nonlinearities in source and boundary flux terms.
  • Underlying contributions to the construction of mathematical models for the dynamics of structural transformations of martensitic type activated by coupled physical mechanisms, with application to modelling thermomechanically-driven processes in shape memory alloys; construction of effective computational approaches to the coupled dynamical systems of mixed-type balance laws with degenerations.
Those contributions have had a visible impact on the research of several groups of applied mathematicians and computational scientists, in particular within the Free Boundary community, to this day.
  • Construction of mathematical models for the dynamics of non-isothermal diffusion-driven phase separation phenomena, accounting for multiscale mechanisms of phase separation and coupled driving mechanisms: characterization results on large-time developments and structure stabilization; basis for the development of effective computational approaches to multiscale systems controlled by external forcing.
Those results have influenced the mathematical research of several groups, and numerous further developments continue to open challenging perspectives.
Expertise
  • parabolic free boundary problems (Stefan problems and their control),
  • inverse parabolic problems,
  • nonlinear degenerate parabolic problems,
  • nonlinear evolutionary systems (incl. variational inequalities),
  • mathematical models of dynamic phase transitions, including diffusion-driven processes, phase change phenomena, and structural displacive transformations in shape memory alloys,
  • numerical analysis and computational aspects of models for dynamics of phase transitions and structure formation phenomena.

Quantum Computing - A new resource for HPC

Date: 17 February 2014, Monday
Time: 3 pm – 5 pm
Venue: Level 17, Charles Babbage Room, 1 Fusionopolis Way, Connexis South, Singapore 138632
Speaker: Dr Colin Williams and Mr Bill Trestrail, D-wave Systems

Abstract
D-Wave is the first company to claim to have built a quantum computer. We will be hosting D-Wave experts at A*CRC on the 17th and 18th of February.
D-Wave Two, a 512-qubit quantum computer, has already been bought by NASA Jet Propulsion Laboratory, Google and Lockheed Martin, and there are reports of its unprecedented superiority over traditional digital computers for special classes of complex problems.
Dr Colin Williams, Director of Business Development & Strategic Partnerships, and Mr Bill Trestrail, General Manager – Asia Pacific, from D-Wave Systems will present how the D-Wave Two quantum computer can be used in conjunction with existing HPC resources to solve complex computational problems. The presentation will give an understanding of how the D-Wave adiabatic quantum machine works, how it can be integrated into existing HPC workflows, and the sorts of use cases in which the technology could be employed.
The D-Wave Two system has a wide variety of potential use cases in discrete optimisation, machine learning and sampling.
A second, in-depth discussion session is planned on 18 February at 10 am, where Dr Williams will talk about specific use cases of quantum computing algorithms and application areas. All interested are welcome.

Biography
Dr Williams is Director of Business Development & Strategic Partnerships at D-Wave Systems Inc. where he works with corporate and government clients to explore how D-Wave quantum computing technology could enhance their products and services.

Dr Williams holds a Ph.D. in artificial intelligence from the University of Edinburgh, an M.Sc. and D.I.C. in atmospheric physics and dynamics from Imperial College, University of London, and a B.Sc. (with Hons.) in mathematical physics from the University of Nottingham. He was formerly a research assistant in general relativity & quantum cosmology to Prof. Stephen W. Hawking at the University of Cambridge, a research scientist at Xerox PARC, and an Associate Professor of Computer Science at Stanford University.

Accelerating CFD/CAE/CWO Applications with GPU

Date: 6 February 2014, Thursday
Time: 2 pm – 5 pm
Venue: Level 17, Charles Babbage Room, 1 Fusionopolis Way, Connexis South, Singapore 138632
Speaker: Stan Posey, HPC Applications and Industry Development Lead, NVIDIA

Seminar
Graphics processing units (GPUs) contain hundreds of arithmetic units and can be harnessed to provide tremendous acceleration for scientific applications such as computational fluid dynamics, climate-weather modeling and computational mechanics.

This seminar will feature a talk by Stan Posey, NVIDIA lead for HPC Applications and Industry Development.  He will provide the most recent and relevant updates on GPU-acceleration for the following applications:

Computer-Aided Engineering (CAE): ANSYS Mechanical, Abaqus, MSC Nastran, Altair solutions
Fluid Dynamics (FD): ANSYS Fluent, OpenFOAM
Climate/Weather/Ocean (CWO): WRF, UM, and others 

Agenda

2:00 pm – 2:30 pm Registration / coffee break
2:30 pm – 3:10 pm “GPU-Accelerated CAE Applications” – Stan Posey, HPC Applications and Industry Development, NVIDIA
3:10 pm – 3:50 pm “GPU-Accelerated CFD Applications” - Stan Posey, HPC Applications and Industry Development, NVIDIA
3:50 pm – 4:00 pm Q & A
4:00 pm – 4:15 pm Tea Break
4:15 pm – 4:55 pm “GPU-Accelerated Climate/Weather/Ocean Applications” - Stan Posey, HPC Applications and Industry Development, NVIDIA
4:55 pm – 5:10 pm Q & A

Biography
Stan Posey currently manages the NVIDIA strategy of HPC applications and industry development for a variety of disciplines, with special focus on the computational mechanics and climate-weather domains. Prior to joining NVIDIA in 2009, Mr. Posey contributed for more than 20 years in applied HPC, including vendor roles at Panasas, SGI, and Control Data Corporation, and engineering roles at CD-adapco and the US DOE Oak Ridge National Laboratory. Mr. Posey earned a B.Sc. and M.Sc. in Mechanical Engineering from the University of Tennessee, Knoxville, TN, USA.

Accelerating Life Science Applications with GPU

Date: 22 January 2014, Wednesday
Time: 2.30 pm – 5 pm
Venue: Level 17, Charles Babbage Room, 1 Fusionopolis Way, Connexis South, Singapore 138632
Speaker: Dr Simon See, High Performance Computing Technology Director and Chief Solution Architect, Nvidia Inc, Asia

Lecture
Modern graphics processing units (GPUs) contain hundreds of arithmetic units and can be harnessed to provide tremendous acceleration for scientific applications such as molecular modeling, computational biology & chemistry and bioinformatics.

Learn about Life Science applications that leverage the power of the GPU, and hear case studies in which GPU acceleration shortened research cycles and sped up the discovery process.

Agenda

2:30 pm – 2:45 pm Registration / coffee break
2:45 pm – 3:15 pm “GPU-Accelerated Molecular Dynamics Applications Overview” - Dr Simon See, Chief Solution Architect, NVIDIA
3:15 pm – 3:45 pm “Life Science research at Beijing Genome Institute” - Dr Simon See, Chief Scientific Computing Advisor, Beijing Genome Institute
3:45 pm – 4:00 pm Q&A with Dr See 
4:00 pm – 4:15 pm Tea Break
4:15 pm – 4:30 pm "Benchmarks of GROMACS on GPUs:  a Case Study" - Li Jianguo, Bioinformatics Institute, A*Star 
4:30 pm – 4:40 pm Q&A with Li Jianguo 
4:40 pm – 4:55 pm "Case Study: Exploring conformational space of Biomolecules with Amber on GPUs" –Dr. Srinivasaraghavan Kannan, Bioinformatics Institute, A*Star
4:55 pm – 5:00 pm Q&A with Dr Kannan 

Biography
Dr Simon See is currently the High Performance Computing Technology Director and Chief Solution Architect for Nvidia Inc, Asia and also the Chief Scientific Computing Advisor for BGI (China).  Concurrently Dr See is also Professor and Chief Scientific Computing Officer in Shanghai Jiao Tong University.  His research interests are in the area of High Performance Computing, computational science, Applied Mathematics and simulation methodology.  He has published over 100 papers in these areas and has won various awards.  Dr. See graduated from University of Salford (UK) with a Ph.D. in electrical engineering and numerical analysis in 1993.  Prior to joining NVIDIA, Dr See worked for SGI, DSO National Lab. of Singapore, IBM, International Simulation Ltd (UK), Sun Microsystems and Oracle.  He is also providing consultancy to a number of national research and supercomputing centers.

OpenStack in the High Performance Computing context

Date: 20 January 2014, Monday
Time: 10 am – 12pm
Venue: Level 17, Charles Babbage Room, 1 Fusionopolis Way, Connexis South, Singapore 138632
Speaker: Jakub Chrzeszczyk, Cloud Architect, National Computational Infrastructure, The Australian National University

Lecture
The talk will cover the differences between traditional cloud setups, which focus on low cost and high workload density, and the HPC approach, which aims for top performance. This will cover differences in general system design, interconnect, local and shared storage, and the configuration of cloud software for optimal performance in high performance computing.

Biography
Mr Jakub Chrzeszczyk is the Cloud Architect at the National Computational Infrastructure of the Australian National University, where he is implementing a cloud infrastructure for high-performance computing. As the Technical Project Manager, he has been involved in many aspects of the project, including making recommendations to the OpenStack consortium, architecting the hardware and software platforms and making purchasing recommendations.
Jakub Chrzeszczyk is a certified Red Hat Architect with numerous Certificates of Expertise. His Master's Thesis, entitled "Designing and Implementing a High Availability and High Performance Cluster for Web Applications using Linux", was published in 2008.

Scaling I/O Beyond 100,000 Cores using ADIOS

Date: 14 January 2014, Tuesday
Time: 9 am – 12pm
Venue: Level 17, Charles Babbage Room, 1 Fusionopolis Way, Connexis South, Singapore 138632
Speaker: Scott A. Klasky, Oak Ridge National Laboratory

Lecture
As concurrency continues to increase on high-end machines, in both the number of cores and storage devices, we must look for a revolutionary way to treat Input/Output (I/O). One of the major roadblocks to exascale is how to write and read big datasets quickly and efficiently on high-end machines. At the same time, applications often want to process data in an efficient and flexible manner, in terms of both data formats and the operations performed (e.g., files, data streams). We will show how users can do that and get high performance with ADIOS on 100,000+ cores. Part I will introduce parallel I/O and the ADIOS framework. Specifically, we will discuss the concept of the ADIOS I/O abstraction, the binary-packed (BP) file format, and I/O methods, along with the benefits to applications. Since version 1.4.1, ADIOS can operate on both files and data streams. Part II will include a session on how to write/read data and how to use different I/O componentizations inside of ADIOS. Part III will show users how to take advantage of the ADIOS framework to do compression/indexing. Finally, we will discuss how to run in-situ visualization using VisIt/Paraview + ADIOS.
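
To give a flavour of the programming model, the sketch below shows the canonical ADIOS 1.x write path in C. It is an illustration assembled from the public ADIOS 1.x API (adios_init, adios_open, adios_group_size, adios_write, adios_close), not code from the talk; it assumes an XML configuration file, here named config.xml, that declares a group called "temperature", and exact signatures vary slightly between ADIOS releases (the communicator argument to adios_init, for instance, appeared in later 1.x versions).

  #include <stdint.h>
  #include <mpi.h>
  #include "adios.h"

  int main(int argc, char **argv)
  {
      int rank;
      int64_t fd;            /* ADIOS output handle */
      uint64_t total_size;
      double t[10] = {0};    /* per-process data to dump */

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      /* The XML file selects the I/O method (POSIX, MPI, ...) and
         declares the variables, so switching methods needs no
         recompilation. ("config.xml" is a placeholder name.) */
      adios_init("config.xml", MPI_COMM_WORLD);

      adios_open(&fd, "temperature", "output.bp", "w", MPI_COMM_WORLD);
      adios_group_size(fd, sizeof(t), &total_size);
      adios_write(fd, "t", t);   /* name must match the XML declaration */
      adios_close(fd);           /* data lands in the BP file */

      adios_finalize(rank);
      MPI_Finalize();
      return 0;
  }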

Biography
Scott A. Klasky is the group leader for Scientific Data in the Computer Science and Mathematics Division at the Oak Ridge National Laboratory. He holds a Ph.D. in Physics from the University of Texas at Austin (1994), and has previously worked at the University of Texas at Austin, Syracuse University, and the Princeton Plasma Physics Laboratory. Dr. Klasky is a co-author on over 180 papers, and is the team leader of the Adaptable I/O System (ADIOS), which won an R&D 100 Award in 2013.

Recent Advances in eChemistry Research Laboratory at Swinburne University, Australia

Date: 17 December 2013, Tuesday
Time: 11AM - 1PM
Venue: Level 17, Charles Babbage Room, 1 Fusionopolis Way, Connexis South, Singapore 138632
Speaker: Professor Feng Wang, Swinburne University of Technology, Australia

Lecture
Molecular spectroscopy is the physical description of chemical systems. With the advancement of instrumental techniques, such as synchrotron-sourced spectroscopy with better detectors and more powerful lasers, our knowledge of matter, molecules and their interactions has been constantly improving, and earlier conclusions can sometimes even be overturned. The role of theory is not only the interpretation of experimental results, but also the power of prediction. Computational spectroscopy powered by supercomputers has been integrated into the discovery process.

In this presentation, computational chemistry methods applied to study the electronic structures of molecules, in close collaboration with world-class experimental groups in IR/Raman spectroscopy, electron momentum spectroscopy, synchrotron-sourced X-ray spectroscopy and, most recently, gamma-ray spectroscopy, will be discussed. Ionization spectra of biomolecules including amino acids, DNA bases, cyclic dipeptides and other bioactive compounds will be highlighted, together with recent TD-DFT studies of organic dye sensitizers for solar cells (DSCs), IR spectral studies of ferrocene in the gas phase and in solvents, micro-hydration of phenylalanine using CPMD simulations, and recent developments in gamma-ray spectra of positron annihilation in molecules.

Biography
Feng Wang obtained her PhD in theoretical chemistry in 1994 from the University of Newcastle (Australia), followed by the prestigious NSERC Canada International Postdoctoral Fellowship (1994-1996) at the University of Waterloo. In 1996, she joined the School of Chemistry, The University of Melbourne as a Research Fellow. Feng took up a Senior Lectureship in Computational Science at Swinburne University of Technology in 2003. She was promoted to Associate Professor in 2005 and to Professor in 2009. Since 2011, Feng has been Professor of Chemistry in the Faculty of Life and Social Sciences, which will form the Faculty of Science, Engineering and Technology from 2014.

Feng has been an elected Fellow of The Royal Australian Chemical Institute (FRACI, C Chem) since 2001 and a Fellow of the Australian Institute of Physics (FAIP) since 2010. She has held visiting professor appointments at Sichuan University (China) since 2001 and Xihua University (China) since 2011. Prof Wang's research interests lie in (bio)molecular materials modelling, computational spectroscopy, and atomic and molecular physics.

100 days of Cumulus ( BG/Q ) Workshop

Date: 27 November 2013, Wednesday
Time: 10AM - 12PM
Venue: Level 17, Charles Babbage Room, 1 Fusionopolis Way, Connexis South, Singapore 138632

10:00 am - The Enabling Power of Supercomputers like BlueGene/Q by Prof Deng Yuefan
10:20 am - Performance of Density Functional Theory (DFT) Codes on Cumulus by Dr Michael Sullivan  
10:45 am - Unveiling the Origin of Intrinsic Brittleness in Magnesium by Cumulus by Dr Wu Zhao Xuan
11:00 am - Density Functional Theory Calculations using CPMD by Dr William Yim
11:15 am - Cumulus's Usage Patterns and Trends by Damien Leong
11:30 am - A Short Note for Compilation and Numerical Libraries in Cumulus by Łukasz Orłowski 
11:45 am - FAQ and Discussions

Architectures, Algorithms, and Applications of Supercomputers: Challenges and Solutions

Date: 28 November 2013, Thursday
Time: 10AM - 12PM
Venue: Level 17, Charles Babbage Room, 1 Fusionopolis Way, Connexis South, Singapore 138632
Speaker: Yuefan Deng, Visiting Professor of A*CRC

Lecture
Supercomputers are capable of performing more than 10^16 floating-point operations per second (34 PFlops). The greatest challenges facing computer and computational scientists are to further increase computer speeds and, more challengingly, to develop programming models that realize the potential of such massive systems.

We will discuss three related topics. First, we review the latest innovations in interconnection networks and processor technologies for achieving ever-increasing raw computing speeds. Second, we analyse a parallel computing algorithm, task mapping, for helping minimize and balance data movement over a complex network of processors. Third, we discuss several computational science projects, including a study of the mechanisms of human blood platelet activation, which causes heart attacks and strokes, by multi-scale discrete particle dynamics and molecular dynamics.

Biography
Yuefan Deng is Professor of Applied Mathematics at Stony Brook University, a Visiting Scientist at Brookhaven National Laboratory, and a Distinguished Visiting Professor at the National Supercomputer Center in Jinan. He worked at IBM on the design of the BlueGene supercomputers. His research is in parallel computing, molecular dynamics, Monte Carlo methods, and computational science. He has published more than 80 papers and supervised nearly 30 doctoral theses. He is the architect of the Galaxy Beowulf Supercomputer at Stony Brook, built in 1997, and of the NankaiStars Supercomputer, which was China's fastest computer when it was completed in 2004. He has lectured widely in the US, Germany, Russia, Brazil, South Korea, Saudi Arabia, Turkey, and the Greater China region. His research is supported by the US DOE, NSF, and NIH, as well as China's Ministry of Science and Technology and Shanghai's Commission of Science and Technology. Professor Deng earned his BA in Physics from Nankai University in 1983 and his PhD in Theoretical Physics from Columbia University in 1989.

Big Data and CFD Simulation on TSUBAME 2.0

Date: 2 July 2013, Tuesday
Time: 10 AM
Venue: Exploration Theatre, Level 4, Matrix Building, Biopolis, Singapore

Welcome Address by Prof Tan Tin Wee, Chairman, A*CRC

Title: Extreme Big Data and Resilience in Tsubame2.0 towards 3.0

Speaker: Prof Satoshi Matsuoka
              Tokyo Institute of Technology

Abstract:
Supercomputers often stress their FLOPS as their primary benefit. While this is true in a classical sense, there is a growing need for very fast I/O capabilities in handling large quantities of data, namely "Big Data". Many are led to believe that current-day cloud infrastructures are more suitable for big data processing than supercomputers, but this is simply not true, considering current-day technologies as well as future technological trajectories. Tsubame2.0, Tokyo Tech's petascale supercomputer, is often touted for its FLOPS and greenness, but another highlighted characteristic is that it is likely the world's first supercomputer to facilitate fast I/O for both resilience and big data processing. Currently, various research efforts are being undertaken in our group to further enhance these properties, such that Tsubame2.5, a 6-petaflop update to Tsubame2.0 in mid-2013, and Tsubame3.0, a 25-petaflop machine planned for late 2015, can be considered "big data supercomputers".

Biography:
Satoshi Matsuoka is a Professor at the Global Scientific Information and Computing Center (GSIC) of Tokyo Institute of Technology. He is the leader of the TSUBAME series of supercomputers, which became the 4th fastest in the world on the Top500 and was awarded the "Greenest Production Supercomputer in the World" by the Green500 in November 2010 and June 2011. He also co-led the Japanese national grid project NAREGI during 2003-2007, and is currently leading various projects such as the JST-CREST Ultra Low Power HPC and the JSPS Billion-Scale Supercomputer Resilience projects. He has authored over 500 papers according to Google Scholar, and has chaired many ACM/IEEE conferences, including as Technical Papers Chair, Community Chair, and Program Chair for the Supercomputing Conferences 09, 11 and 13 respectively. He is a fellow of the ACM and European ISC, and has won many awards, including the JSPS Prize from the Japan Society for the Promotion of Science in 2006, awarded by His Highness Prince Akishinomiya, the ACM Gordon Bell Prize for 2011, and the Commendation for Science and Technology by the Minister of Education, Culture, Sports, Science and Technology in 2012.


Title: A Turbulent Air Flow Simulation in Metropolitan Tokyo for 10km x 10km area with 1m resolution and Several Peta-scale Real-world Simulations on TSUBAME 2.0

Speaker: Prof Takayuki Aoki
              Global Scientific Information and Computing Center, Tokyo Institute of Technology

Abstract:
Turbulence modeling is a key issue in CFD (Computational Fluid Dynamics), since most flow phenomena become turbulent at higher Reynolds numbers. We have developed a CFD code based on the Lattice Boltzmann Method with an LES (Large-Eddy Simulation) model. The dynamic Smagorinsky model is often used; however, it requires costly averaging operations over a wide area to determine the model constant. We applied the coherent-structure Smagorinsky model, which is able to determine the model constant locally. We study turbulent air flow over a 10km x 10km area of metropolitan Tokyo at 1-m resolution, taking into account real building data.

We also demonstrate several stencil applications carried out on the whole TSUBAME 2.0 system at the Tokyo Institute of Technology, which currently has a peak performance of 2.4 PFLOPS. One of them is the high-resolution meso-scale atmosphere model ASUCA, which is being developed by the Japan Meteorological Agency (JMA) for its next-generation weather forecasting service. We have succeeded in a weather prediction with 500-m resolution (cf. the current JMA weather forecast uses a 5-km mesh). We also present a phase-field simulation for developing new materials by studying the dendritic solidification of Al-Si alloy, which achieved 2.0 PFLOPS in single precision, 44.5% of the peak performance.

Biography:
Takayuki Aoki received a BSc in Applied Physics (1983), an MSc in Energy Science, and a Dr.Sci. (1989) from Tokyo Institute of Technology. He has been a professor at Tokyo Institute of Technology since 2001 and the deputy director of the Global Scientific Information and Computing Center since 2009. He received Achievement Awards from the Japan Society of Mechanical Engineers and the Japan Society for Industrial and Applied Mathematics, and many awards and honors in GPU computing, scientific visualization, and other areas. His team won the Gordon Bell Prize, Special Achievement in Scalability and Time-to-Solution, in 2011. He was also recognized as a CUDA Fellow (one of 11 in the world at the time) by NVIDIA in 2012.

Prof Aoki Presentation File


Hardware Accelerated HPC

Date: 7 June 2013 (Friday)
Time: 2-5pm
Venue: Level 17, Charles Babbage Room, 1 Fusionopolis Way, Connexis South, Singapore 138632
Speaker: Jeff Adie, Principal Systems Engineer, SGI
              Kenny Sng, Regional Solution Architect Manager, Asia Pacific, Intel Corporation

Abstract:
Hardware-assisted acceleration is a well-known, cost-effective way of obtaining performance gains on HPC systems. While various kinds of hardware accelerators exist (most notably FPGAs and GP-GPUs), this seminar will focus on the Intel Xeon Phi co-processor. Jeff Adie from SGI will talk about his experience with the Intel Xeon Phi accelerator, and Kenny Sng from Intel will give a brief outline of the Phi architecture and roadmap.

Topics:
- Hardware acceleration in HPC – Jeff Adie
- Intel Xeon Phi Architecture and Roadmap overview – Kenny Sng
- How to port applications to Xeon Phi – Jeff Adie

Biography:
Jeff Adie
Jeff Adie joined SGI in 2000 as a Systems Engineer supporting the sales team in the ASEAN region. In 2002, Jeff was promoted to Principal and his role expanded to cover the Asia Pacific region. Jeff's work primarily involves assisting with solution design of HPC systems, as well as providing assistance to customers with HPC deployments, including installation, porting, training and tuning of HPC codes. Jeff's primary area of expertise is in CAE, specifically in structures, having previously worked at Toyota and on FEA/CFD analysis for America's Cup class yachts for Team New Zealand. Jeff is also an expert in the visualization of HPC data.

Jeff has a post graduate diploma from the University of Auckland in Computer Science, specializing in Parallel programming and Computer graphics.

Kenny Sng
Kenny Sng is the Regional Solution Architect Manager in Intel's Enterprise Solution Sales (ESS) team covering the APAC region. Kenny works with enterprise customers in various industries and with government organizations to develop IT and business strategies through the use of Cloud Computing, Networking, Data Center and Big Data Analytics solutions. This includes developing business value assessments, energy-efficient designs and infrastructure solutions to solve business problems. Kenny participates in the Singapore Infocomm Development Authority workgroup to develop Data Center best practices.

Prior to joining ESS, Kenny worked in Intel Information Technology for 15 years. He started by leading various IT teams in Singapore, India and South-East Asia, and went on to lead the Asia network team. He was the Global Operations Manager managing site resource personnel worldwide. He managed the Global Communication Services team for Intel from 2007, when he was based in Shanghai and subsequently Folsom, CA. Kenny then headed the Intel Data Centers Engineering team. He was a key member of Intel's data center efficiency program, which formulated the development and implementation of the Intel data center strategy.

Presentation File

GPU-Acceleration of MCAE Applications

Date: 5 June 2013 (Wednesday)
Time: 2pm - 5pm
Venue: Seminar Room, Level 15, 1 Fusionopolis Way, Connexis North, Singapore 138632
Speaker: Dr Simon See, Director and Chief Solution Architect, Nvidia Inc. Asia Pacific
              Stan Posey, HPC Industry Development, NVIDIA Inc.

Seminar Overview
NVIDIA Graphics Processing Unit (GPU) technology is increasingly used to accelerate compute-intensive HPC applications across various disciplines in the scientific and engineering communities. OpenFOAM® simulations can require a significant amount of computing time, which can potentially lead to higher simulation costs. Enabling faster research and discovery using CFD is of key importance, and GPU technology can help speed up simulations and accelerate science.

Biography
Stan Posey
Stan Posey currently manages the NVIDIA strategy of HPC applications and industry development for a variety of disciplines, with special focus on computational mechanics. Prior to joining NVIDIA in 2009, Mr. Posey contributed for more than 20 years in applied HPC, including vendor roles at Panasas, SGI, and Control Data Corporation, and engineering roles at CD-adapco and the US DOE Oak Ridge National Laboratory. Mr. Posey earned a B.Sc. and M.Sc. in Mechanical Engineering from the University of Tennessee, Knoxville, TN, USA.

Dr Simon See
Dr Simon See is currently the High Performance Computing Technology Director and Chief Solution Architect for Nvidia Inc, Asia and also a Professor and Chief Scientific Computing Officer in Shanghai Jiao Tong University. Concurrently A/Prof See is also the Chief Scientific Computing Advisor for BGI (China). His research interests are in the area of High Performance Computing, computational science, Applied Mathematics and simulation methodology. He has published over 100 papers in these areas and has won various awards. Dr. See graduated from University of Salford (UK) with a Ph.D. in electrical engineering and numerical analysis in 1993. Prior to joining NVIDIA, Dr See worked for SGI, DSO National Lab. of Singapore, IBM, International Simulation Ltd (UK), Sun Microsystems and Oracle. He is also providing consultancy to a number of national research and supercomputing centers.

Presentation File 1
Presentation File 2
Presentation File 3
Presentation File 4


Bluegene/Q Workshop: Getting Started & Initial Optimizing, Tuning, Scaling
 
Purpose: The purpose of this workshop is to provide an opportunity for users (application developers) to learn how to get started on the BG/Q and how to take advantage of it for their codes. This will be accomplished through brief lectures introducing the different topics, and as much hands-on effort as possible, with IBM consultants guiding participants on their own codes. This allows participants to gain direct experience using the compilers, tools and techniques on their own codes.

The focus will be on getting started on the system; profiling, tracing communications and debugging; on-node compute performance; and scaling, tuning and libraries to improve performance. There will be an overview of the compilers and the opportunity to experiment with compiler options. Some discussion of the overall hardware and software design philosophy, which is needed to understand performance issues, will be given. If time permits, we will discuss the performance counters. The tools will help participants understand which modifications improve overall performance and scalability.
  • Introduction on how to use the local systems (Presented by A*Star System Admin staff)
    • Login and local file systems
    • Scheduling and submitting jobs
    • Policies for allocation of partitions, timing runs and scaling runs
    • I/O and file access
  • Blue Gene/Q: Brief system and environment overview needed to understand optimization, tuning and scaling.
    • Hardware overview
      • Systems architecture
      • Host systems
    • Software overview
      • Compute Node Kernel
      • Execution process modes
      • Message Passing Interface on Blue Gene/Q
      • Memory considerations
      • Other considerations
        • Input/output
        • Miscellaneous
    • I/O Node Software
  • Blue Gene/Q Compiler considerations
    • Overview of the IBM compilers and linker
      • Consideration on various compiler flags
      • Where to start for optimization
    • What options for what impact
  • HPCTool Kit or other Performance Tools
    • Overall discussion of the HPCToolkit, if available
    • MPI performance: MPI Profiler/Tracer
    • CPU performance: Xprofiler, HPM
    • Threading performance: OpenMP profiling 
    • I/O performance: I/O profiling
    • Visualization and analysis: PeekPerf
  • IBM MPITrace/MPIHPM libraries 
    • MPI profiling
    • Performance counter profiling of whole code and code sections
    • MPI tracing (Selective)
    • vprof/cprof statistical profiling
  • Universal Performance Counters overview (Optional – for the hardcore if time permits) 
    • Hardware Description
    • Low Level Software API
    • Example Library and Program
    • Documentation & Discussion
  • Debuggers overview 
    • Compiler and linker flags required for debugging
    • GNU GDB – how to use it on Blue Gene/Q
    • Other debuggers at A*STAR (DDT)
  • On-Compute Node optimization 
    • Information from the compiler 
    • Issues related to SIMD on BG/Q
    • Performance Discussion – On-Core, On-Node 
      • Profiling to identify performance issues
      • Remarks on performance inhibitors
      • Importance of Mapping for Torus
  • Discussion of performance and performance gains through the use of library routines 
    • Overview of MASS and MASSV libraries for intrinsic and math functions
    • Performance improvements using ESSL routines including BLAS (see the sketch after this outline)
  • Discussion of Scaling Inhibitors 
    • Understanding performance and improvement through the use of good I/O techniques including MPI I/O and GPFS Considerations
    • Trade-offs between OpenMP and MPI on BG/Q
    • Other depending on the needs and interest of audience
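
As a concrete illustration of the library item above: ESSL exports the standard BLAS interface, so dense kernels such as matrix-matrix multiplication can be delegated to tuned routines instead of hand-written loops. The sketch below calls the standard BLAS dgemm from C using the conventional Fortran calling sequence; it is our generic example, not workshop material, and header and link details differ between ESSL and other BLAS implementations.

  #include <stdio.h>

  /* Standard Fortran BLAS: C = alpha*op(A)*op(B) + beta*C, column-major. */
  extern void dgemm_(const char *transa, const char *transb,
                     const int *m, const int *n, const int *k,
                     const double *alpha, const double *a, const int *lda,
                     const double *b, const int *ldb,
                     const double *beta, double *c, const int *ldc);

  int main(void)
  {
      /* 2x2 matrices in column-major order: A = [1 3; 2 4], B = [5 7; 6 8]. */
      double A[4] = {1, 2, 3, 4};
      double B[4] = {5, 6, 7, 8};
      double C[4] = {0, 0, 0, 0};
      int n = 2;
      double one = 1.0, zero = 0.0;

      dgemm_("N", "N", &n, &n, &n, &one, A, &n, B, &n, &zero, C, &n);

      /* Expect C = A*B = [23 31; 34 46]. */
      printf("C = [%g %g; %g %g]\n", C[0], C[2], C[1], C[3]);
      return 0;
  }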

NAG Seminar

A series of three NAG seminars will be conducted by Dr Jonathan Gibson, Technical Consultant, Numerical Algorithms Group. The venue is the Charles Babbage Room, Level 17, 1 Fusionopolis Way, Connexis South, Singapore 138632.

Title: An Introduction to NAG's Numerical Libraries & the NAG Fortran Compiler
Date: 28 May 2013
Time: 2.30 pm

In this talk we introduce NAG's numerical libraries, services and the NAG Fortran Compiler. We give an overview of the content of the libraries and show how they can be used with many languages (including Fortran, C/C++) and in many environments. We will discuss the advantages of numerical stability, choice of appropriate algorithm and the extensive NAG documentation. We also discuss the benefits of the NAG Fortran compiler.

Title: The NAG Toolbox for MATLAB
Date: 29 May 2013
Time: 2.30 pm

Here we will show how to use the NAG Toolbox for MATLAB, including some elements of MATLAB that you need to know. We will give demonstrations and show how easy it is to get help with the Toolbox fully embedded inside the MATLAB environment. We will also show some functionality and performance comparisons.

There will be time to experience the Toolbox for yourself by trying some simple exercises or looking at specific areas you are interested in. You do not need any prior knowledge of MATLAB to attend, although it would be very helpful.

Title: Multicore Demystified: An Introduction to Multicore Programming and the NAG Library for SMP & multicore
Date: 30 May 2013
Time: 10.00 am

In this lecture we aim to demystify programming your multicore machine. We give an introduction to the terminology and what it really means. We show how you can get the most out of your machine with a brief introduction to the programming language OpenMP. We also show you how to get the most out of the NAG Library for SMP and multicore with performance hints and tips.
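
For readers new to the topic, here is a minimal OpenMP example in C (our illustration, not seminar material): a parallel loop with a reduction, the kind of construct the lecture introduces. It must be built with an OpenMP-enabled compiler (e.g. with a flag such as -fopenmp).

  #include <omp.h>
  #include <stdio.h>

  int main(void)
  {
      const long n = 100000000L;
      double sum = 0.0;

      /* Iterations are divided among threads; reduction(+:sum) gives
         each thread a private partial sum and combines them at the
         end, avoiding a data race on sum. */
      #pragma omp parallel for reduction(+:sum)
      for (long i = 1; i <= n; i++)
          sum += 1.0 / (double)i;   /* partial sum of the harmonic series */

      printf("H(%ld) ~ %.10f, using up to %d threads\n",
             n, sum, omp_get_max_threads());
      return 0;
  }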

Biography: DR JONATHAN GIBSON
Jonathan Gibson works as a Technical Consultant for the Numerical Algorithms Group (NAG) in the UK and has been working there for five years. He was taken on to work on HECToR, the current national academic HPC service for the UK, having previously worked for CSAR, an earlier national service run by the University of Manchester.

He was awarded a PhD in Applied Mathematics from the University of Liverpool in 1999 and has a total of twenty years’ experience in scientific computing, having worked on a number of codes during that time. He regularly teaches at universities throughout the UK, including courses in MPI and Parallel I/O. He is involved in the NAG library development process and was the principal developer in the last release of the NAG Parallel library.

NAG Toolbox for MATLAB

NAG's Numerical Libraries & the NAG Fortran Compiler

Multicore Demystified

Parallel Programming Using CUDA
Date:  09 May 2013, Thursday
Time: 6:00PM – 9:00PM 
Venue: Level 17, Charles Babbage Room, 1 Fusionopolis Way, Connexis South, Singapore 138632
Lecture Outline
Parallel computing with GPUs is becoming more and more widely used in demanding general-purpose scientific and engineering applications. CUDA has been widely adopted throughout the world as the most accessible and intuitive way to achieve massive parallelism, as reflected by the large number of universities that include CUDA in their standard curricula, and by hundreds of technical papers and parallel programming textbooks.
The following topics will be covered in this lecture:
  • Introduction to GPU computing
  • CUDA programming basics
  • CUDA API and data allocation
  • Matrix multiplication in CUDA
  • CUDA memory model and tiled parallel algorithm
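
As a preview of the last topic, the sketch below shows the tiling idea in plain C (our illustration, not lecture material): the multiplication proceeds in T x T blocks so that each block of data is reused many times while it is resident in fast memory. In CUDA, each tile maps to a thread block that stages its tile in shared memory, which is exactly the reuse pattern this loop structure expresses.

  #include <stdio.h>

  #define N 64   /* matrix dimension (illustrative) */
  #define T 16   /* tile size; must divide N here */

  /* C += A*B, tile by tile. */
  static void matmul_tiled(const double A[N][N], const double B[N][N],
                           double C[N][N])
  {
      for (int i0 = 0; i0 < N; i0 += T)
          for (int j0 = 0; j0 < N; j0 += T)
              for (int k0 = 0; k0 < N; k0 += T)
                  /* multiply one T x T tile pair */
                  for (int i = i0; i < i0 + T; i++)
                      for (int j = j0; j < j0 + T; j++) {
                          double s = C[i][j];
                          for (int k = k0; k < k0 + T; k++)
                              s += A[i][k] * B[k][j];
                          C[i][j] = s;
                      }
  }

  int main(void)
  {
      static double A[N][N], B[N][N], C[N][N];
      for (int i = 0; i < N; i++)
          for (int j = 0; j < N; j++) {
              A[i][j] = (i == j);   /* identity, so C should equal B */
              B[i][j] = i + j;
              C[i][j] = 0.0;
          }
      matmul_tiled(A, B, C);
      printf("C[3][5] = %g (expected %g)\n", C[3][5], (double)(3 + 5));
      return 0;
  }
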
Speaker Biography
Kyle Rupnow is an Assistant Professor at Nanyang Technological University (Singapore) and a Research Scientist at the Advanced Digital Sciences Center, a University of Illinois research center (http://www.adsc.com.sg). He received a BS in Computer Engineering and Mathematics from the University of Wisconsin-Madison in 2003, and his PhD in Electrical Engineering in 2010, working with Prof. Katherine Compton on operating system support for reconfigurable computing systems. During his PhD studies, he was supported as a Sandia National Laboratories Excellence in Engineering Fellow. In addition, he received the Gerald Holdridge tutorial development award and the UW-Madison PhD Capstone teaching award for his work as a teaching assistant and lecturer during his time at UW-Madison.

Introduction to OpenFOAM

Date:  19 April 2013, Friday
          22 April 2013, Monday
Time: 2:00PM – 5:00PM
Venue: Level 17, Charles Babbage Room, 1 Fusionopolis Way, Connexis South, Singapore 138632
Speaker: Jeff Adie, Principal Systems Engineer, SGI

Abstract:
OpenFOAM is a free, open source CFD software package developed by OpenCFD Ltd at ESI Group and distributed by the OpenFOAM Foundation. It has a large user base across most areas of engineering and science, from both commercial and academic organisations. OpenFOAM has an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics.

This lecture will introduce OpenFOAM, its uses, its solvers and case structure. Other topics to be covered include: running a case, samplers and function objects, parallel operation, and PyFOAM. The session will consist of a 90-minute lecture, followed by a hands-on portion.

Attendees are strongly encouraged to bring a laptop computer with an SSH client and X11 installed (for more information, please visit the Access Information page).

Biography:
Jeff Adie joined SGI in 2000 as a Systems Engineer supporting the sales team in the ASEAN region. In 2002, Jeff was promoted to Principal and his role expanded to cover the Asia Pacific region. Jeff's work primarily involves assisting with solution design of HPC systems, as well as providing assistance to customers with HPC deployments, including installation, porting, training and tuning of HPC codes. Jeff's primary area of expertise is in CAE, specifically in structures, having previously worked at Toyota and on FEA/CFD analysis for America's Cup class yachts for Team New Zealand. Jeff is also an expert in the visualization of HPC data.

Jeff has a post graduate diploma from the University of Auckland in Computer Science, specializing in Parallel programming and Computer graphics.


Introduction to Computational Fluid Dynamics

Date: 25 March 2013, Monday
Time: 10:00AM – 1:00PM
Venue: Level 17, Charles Babbage Room, 1 Fusionopolis Way, Connexis South, Singapore 138632
Speaker: Jeff Adie, Principal Systems Engineer, SGI

Abstract
Computational Fluid Dynamics (CFD) is the application of numerical methods and algorithms to problems involving fluid flows. CFD is famously one of the most difficult and computationally intensive areas of classical mechanics, and this two-part class will provide an introduction for neophytes. Topics to be covered during the first part of the lecture include: what CFD is, fluid characteristics, fluid modelling, turbulence modelling, and the CFD workflow. The second part of the lecture will discuss CFD optimization, including the challenges faced in CFD, how to use CFD in conjunction with MPI to harness parallel computing power, and Multidisciplinary Design Optimization (MDO).

Presentation File


The Art of Differentiating Computer Programs. An Introduction to Algorithmic Differentiation

Date: 18 February 2013
Time: 10.00 am - 12 noon
Venue: Charles Babbage Room (Fusionopolis, level 17, Connexis South)
Speaker: Prof. Dr. Uwe Naumann, Aachen University

About the Seminar:
“How sensitive are the values of the outputs of my computer program with respect to changes in the values of the inputs? How sensitive are these first-order sensitivities with respect to changes in the values of the inputs? How sensitive are the second-order sensitivities with respect to changes in the values of the inputs? ”

Computational scientists, engineers, and economists as well as quantitative analysts tend to ask these questions on a regular basis. They write computer programs in order to simulate diverse real-world phenomena. The underlying mathematical models often depend on a possibly large number of (typically unknown or uncertain) parameters. Values for the corresponding inputs of the numerical simulation programs can, for example, be the result of (typically error-prone) observations and measurements. If very small perturbations in these uncertain values yield large changes in the values of the outputs, then the feasibility of the entire simulation becomes questionable. Nobody should make decisions based on such highly uncertain data.

Quantitative information about the extent of this uncertainty is crucial. First- and higher-order sensitivities of outputs of numerical simulation programs with respect to their inputs (also first and higher derivatives) form the basis for various approximations of uncertainty. They are also crucial ingredients of a large number of numerical algorithms ranging from the solution of (systems of) nonlinear equations to optimization under constraints given as (systems of) partial differential equations. This talk describes a set of techniques for modifying the semantics of numerical simulation programs such that the desired first and higher derivatives can be computed accurately and efficiently. Computer programs implement algorithms. Consequently, the subject is known as Algorithmic (also Automatic) Differentiation (AD).
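
To give a concrete taste of the technique (our sketch, not the speaker's code): forward-mode AD can be implemented by replacing every number with a value/derivative pair, a "dual number", and propagating both components through each operation via the product and chain rules.

  #include <stdio.h>
  #include <math.h>

  /* A dual number carries a value and its derivative with respect
     to a chosen input variable. */
  typedef struct { double v, d; } dual;

  static dual d_mul(dual a, dual b)
  {   /* product rule: (ab)' = a'b + ab' */
      dual r = { a.v * b.v, a.d * b.v + a.v * b.d };
      return r;
  }

  static dual d_sin(dual a)
  {   /* chain rule: (sin a)' = cos(a) * a' */
      dual r = { sin(a.v), cos(a.v) * a.d };
      return r;
  }

  int main(void)
  {
      /* Differentiate f(x) = x * sin(x) at x = 2; seed dx/dx = 1. */
      dual x = { 2.0, 1.0 };
      dual f = d_mul(x, d_sin(x));

      printf("f(2)  = %f\n", f.v);
      printf("f'(2) = %f (exact: %f)\n", f.d, sin(2.0) + 2.0 * cos(2.0));
      return 0;
  }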

NWChem Tutorial

Date: 23-25 October 2012
Time: 9.00am - 5.00pm
Venue:  Charles Babbage Room (Level 17, Connexis South, Fusionopolis) 

Instructors: Dr. Edoardo Apra from Pacific Northwest National Laboratory, USA
Dr Karol Kowalski, Pacific Northwest National Laboratory

Syllabus summary (NWChem):
NWChem is a computational quantum chemistry package for studies of the electronic structure, geometry and properties of molecules and periodic systems. It also includes classical and quantum (Car-Parrinello) molecular dynamics simulations. The package exhibits excellent parallel scaling and has been shown to run on hundreds of thousands of cores on Jaguar and other top supercomputing systems. Computations performed using NWChem have been awarded several Gordon Bell prizes for the best supercomputer programs. NWChem is open source and has a well-designed user interface for building systems, launching jobs and analysing results. It is designed to run on high-performance parallel supercomputers as well as conventional workstation clusters.

NWChem has been installed on all A*CRC computers and is ready to be used. It may be a very attractive alternative to more popular, but older, packages that do not scale well on massively parallel systems.

NWChem is developed and maintained by the EMSL at the Pacific Northwest National Laboratory, USA.

This workshop is aimed at new and experienced users of NWChem. Basic knowledge of computational chemistry is desirable. The workshop will be a mix of morning lectures and afternoon hands-on tutorials, where participants will have the opportunity to explore the various capabilities and to interact with NWChem developers. Users and developers interested in developing and implementing new capabilities in NWChem are also welcome.

Day one
Morning
*  Basic Introduction of Computational Chemistry
*  Basic Introduction of NWChem software
*  Ground and Excited States with DFT and TDDFT 

Afternoon: 
*  Hands-on

Day two
Morning:
*  Correlated Methods for Ground and Excited states
*  Relativity, Spectroscopy

Afternoon:
*  Discussion of recent NWChem papers
*  Hands-on

Day three
Morning:
*  QM/MM
*  Solid-state applications

Afternoon:
*  Introduction of software development in NWChem
*  Hands-on

Additional Notes on NWChem
NWChem in CygWin
NWChem Tutorial
NWChem Slides
NWChem Readme File




Advanced High Performance Scientific Computing Workshop

Date: 18th September 2012
Time: 9.00am - 5.00pm
Venue: Charles Babbage Room (Level 17, Connexis South, Fusionopolis)

Instructor: Prof. Serge G. Petiton, Université de Lille, Laboratoire d’Informatique Fondamentale de Lille (LIFL/CNRS)

About the Workshop:
The goal of this day-long series of talks is to introduce advanced high-performance scientific computing, including parallel and distributed algorithm and method design, regular and irregular data structure adaptations, and programming paradigms and methodologies. The course will cover the main existing high-performance execution and programming paradigms: flux parallelism (pipelined vector computing), data parallelism, control-flow parallelism, SIMD, SPMD, MSPMD, and so on, along with linear algebra examples. Emerging multicore and many-core programming will be considered, together with algorithmic optimizations for existing GPUs and processors. We will also survey iterative Krylov-based methods for linear algebra problems and discuss how they are well adapted to future post-petascale hypercomputers.

The course begins with an overview, following the history of supercomputing, of parallel and distributed architectures and programming paradigms, including the associated concepts and terminology. Then, classical programming paradigms such as pipelined-vector and data-parallel computing will be explored and illustrated with linear algebra examples, from basic matrix-vector operations to hybrid Krylov methods. Large-granularity parallel and distributed programming will be considered using the same examples. The last part of the course will focus on the future challenges scientists will have to solve during this decade to achieve adequate computational efficiency on hypercomputers. We will discuss unsolved problems on the road to exascale programming and computing.

A key aim of the course is for participants to acquire state-of-the-art knowledge of high-performance scientific computing by understanding how some classical linear algebra methods are programmed in such environments. Participants will become acquainted with the challenges at the forefront of post-petascale computing.

Part I (18 September, am):
A Brief history and survey of supercomputing; from vector to GRID and Cloud Computing; toward exascale computing. 
o    Main HPC architectures and programming paradigms, from pipelined vector computing to GRID and Cloud computing
o    Vector and data parallel computing; dense and sparse linear algebra data structures and algorithms
o    Vector and data parallel programming, with examples such as y = A(Ax + x) + x, Gauss and Gauss-Jordan elimination, and the Conjugate Gradient method with polynomial preconditioning
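
To make the data-parallel style concrete, here is a small sketch (illustrative only, not taken from the course material) that evaluates the Part I example y = A(Ax + x) + x with OpenMP-parallelised loops in C; built without OpenMP, it simply runs serially:

    /* Sketch: y = A(Ax + x) + x with data-parallel loops (OpenMP). */
    #include <stdio.h>
    #include <stdlib.h>

    /* out = A*v for a dense row-major n-by-n matrix, parallelised over rows. */
    static void matvec(int n, const double *A, const double *v, double *out)
    {
        #pragma omp parallel for
        for (int i = 0; i < n; i++) {
            double s = 0.0;
            for (int j = 0; j < n; j++)
                s += A[(size_t)i * n + j] * v[j];
            out[i] = s;
        }
    }

    int main(void)
    {
        const int n = 4;
        double *A = calloc((size_t)n * n, sizeof *A);
        double *x = malloc(n * sizeof *x);
        double *t = malloc(n * sizeof *t);
        double *y = malloc(n * sizeof *y);

        for (int i = 0; i < n; i++) {  /* toy data: A = 2I, x = (1,...,1) */
            A[(size_t)i * n + i] = 2.0;
            x[i] = 1.0;
        }

        matvec(n, A, x, t);
        for (int i = 0; i < n; i++) t[i] += x[i];  /* t = Ax + x        */
        matvec(n, A, t, y);
        for (int i = 0; i < n; i++) y[i] += x[i];  /* y = A(Ax + x) + x */

        printf("y[0] = %g\n", y[0]);               /* expect 7 */
        free(A); free(x); free(t); free(y);
        return 0;
    }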

Part II (18 September, pm):
Krylov methods; from basic parallel methods to smart hybrid auto-tuned methods
o    The parallel Arnoldi’s (ERAM, IRAM) and GMRES restarted subspace iterative methods
o    Vector orthogonalisation methods
o    The hybrid MERAM and GMRES-ERAM/LS methods
o    Auto-tuned (subspace size, orthogonalisation, sparse patterns) hybrid Krylov methods for large sparse non-symmetric linear algebra problems on clusters of GPUs.
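
The orthogonalisation step at the heart of these Arnoldi-type methods is easy to sketch in C. The routine below is an illustration only (the name and signature are invented, not course material): a modified Gram-Schmidt step that orthogonalises a candidate vector w against k orthonormal basis vectors and normalises the remainder, as one would inside an Arnoldi iteration:

    /* Sketch: modified Gram-Schmidt step, as used inside Arnoldi/GMRES.
     * Q holds k orthonormal basis vectors of length n, stored row by row;
     * h receives the projection coefficients (a Hessenberg column in Arnoldi).
     * Returns the norm of w after orthogonalisation. */
    #include <math.h>
    #include <stddef.h>

    double mgs_orthogonalise(int n, int k, const double *Q, double *w, double *h)
    {
        for (int j = 0; j < k; j++) {
            const double *qj = Q + (size_t)j * n;
            double dot = 0.0;
            for (int i = 0; i < n; i++)
                dot += qj[i] * w[i];
            h[j] = dot;
            for (int i = 0; i < n; i++)
                w[i] -= dot * qj[i];   /* subtract the projection onto qj */
        }
        double nrm = 0.0;
        for (int i = 0; i < n; i++)
            nrm += w[i] * w[i];
        nrm = sqrt(nrm);
        if (nrm > 0.0)
            for (int i = 0; i < n; i++)
                w[i] /= nrm;           /* normalise the new basis vector */
        return nrm;
    }

In restarted methods such as GMRES(m) and ERAM, this step is repeated until the basis reaches the chosen subspace size m, after which the iteration is restarted from the current approximation.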

Part III (18 September, pm):
Toward exascale programming and computing
o    Large-granularity parallel task computing: from SPMD and MSPMD to YML/XMP programming. The same examples as previously, adapted for SPMD and MSPMD programming. Block versions of classical linear algebra methods. Large-granularity hybrid asynchronous methods.
o    Toward post-petascale programming, on the road to the exascale frontier. How the studied programming paradigms would have to be hybridized to obtain efficient post-petascale applications. Problems to address include sparse matrices, GPU programming, energy consumption, asynchronous processes and other issues. Dynamic multi-parameter auto-tuning.

About the speaker:
Prof. Serge G. Petiton received the B.S. degree in mathematics in 1982, the M.S. degree in applied mathematics in 1984, the M.E. degree in operating systems in 1985, the Ph.D. degree in computer science in 1988, and the “Habilitation à diriger des recherches” in 1993, all from Pierre and Marie Curie University (Paris 6). He was a post-doctoral researcher at Yale University in 1989-1990 and a researcher at the “Site Experimental en Hyperparallelisme” (supported by CNRS, CEA, and the French DoD) from 1991 to 1994. He was also an affiliate research scientist at Yale and a visiting research fellow in several US laboratories, especially NASA-ICASE and the AHPCRC, during the period 1991-1994.

Since 1994, Serge G. Petiton has been a tenured Professor at the Scientific and Technical University of Lille, where he leads the “Methodology and Algorithmic Parallel Programming” group of the CNRS “Laboratoire d’Informatique Fondamentale de Lille”. He participates in several projects of the INRIA laboratory in Saclay and of the CNRS Japanese-French Laboratory on Informatics (JFLI) in Tokyo. He is a senior lecturer at the University of Paris 6, the University of Versailles, and the University of Tsukuba in Japan. He is director of the board of the ORAP association (launched in 1994 by CNRS, INRIA and CEA) to promote HPC, and he participates in several French and international HPC committees.

He has been scientific supervisor of more than 20 Ph.D. candidates and has authored more than 100 articles in international journals and conferences. His main current research interests are “Parallel and Distributed Computing”, “Post-Petascale Smart-tuned Dense and Sparse Linear Algebra”, and “Language and Programming Paradigm for Extreme Modern Scientific Computing”.

Presentation File Part 1a
Presentation File Part 1b
Presentation File Part 2
Presentation File Part 3


HPC and Big Data Workshop

Date: 25-27 April 2012
Venue: Potential 1 & 2 (Fusionopolis level 13)
Instructors: Dr. John Feo, Oreste Villa and Sinan Al-Saffar, Pacific Northwest National Laboratory

The Challenge: Big Data - Technology advances have made data storage relatively inexpensive and bandwidth abundant, resulting in voluminous datasets from modeling and simulation, high-throughput instruments, and system sensors. Such data stores exist in a diverse range of application domains, including scientific research (e.g., bioinformatics, climate change), national security (e.g., cyber security, ports-of-entry), environment (e.g., carbon management, subsurface science) and energy (e.g., power grid management).

As technology advances, the list grows. This challenge of extracting valuable knowledge from massive datasets is made all the more daunting by multiple types of data, numerous sources, and various scales -- not to mention the ultimate goal of achieving it in near-real time. To dissect the problem, the science and technology drivers can be grouped into three primary categories:
1. Managing the explosion of data
2. Extracting knowledge from massive datasets
3. Reducing data to facilitate human understanding and response.

Transformational Solution - Aggressive work to solve this big-data challenge through data intensive computing.

Data Intensive Computing - Data Intensive Computing (DIC) is concerned with capturing, managing, analyzing, and understanding data at volumes and rates that push the frontiers of current technologies. Addressing the demands of ever-growing data volume and complexity requires epochal advances in software, hardware, and algorithm development. Effective solution technologies must also scale to handle the amplified data rates and simultaneously accelerate timely, effective analysis results.

About the course:

Day 1 (25/4/12)
Leadership Class Systems (Instructor: Oreste Villa)
1. Notable HPC Systems
   * Cray XK6
   * IBM Blue Gene/Q
   * K computer
2. Programming Models
   * MPI
   * Global Arrays
3. Program Exercises
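
As a taste of the MPI programming model listed above, a minimal SPMD example (illustrative only, not workshop material) looks like this: every rank runs the same program, and the ranks cooperate through a collective reduction.

    /* Sketch: SPMD with MPI -- each rank contributes rank+1 to a global sum. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        long local = rank + 1, total = 0;
        MPI_Allreduce(&local, &total, 1, MPI_LONG, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)  /* with P ranks, expect P(P+1)/2 */
            printf("sum over %d ranks = %ld\n", size, total);

        MPI_Finalize();
        return 0;
    }

Compile with an MPI wrapper such as mpicc and launch with mpirun; the exact commands depend on the site's MPI installation and scheduler.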

Day 2 (26/4/12)
Multithreaded Systems (Instructor: John Feo)
1. Cray XMT/2 system
2. Programming Models
   * Data parallelism
   * Recursion
   * Dataflow
3. Program Exercises
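
The Cray XMT expresses its fine-grained multithreading through its own compiler directives, which we do not reproduce here; as a portable stand-in for the recursion-based parallelism mentioned above, here is a sketch (illustrative only) using OpenMP tasks:

    /* Sketch: recursive divide-and-conquer sum parallelised with OpenMP tasks. */
    #include <stdio.h>

    static long psum(const long *a, int lo, int hi)
    {
        if (hi - lo < 1024) {              /* small range: sum serially */
            long s = 0;
            for (int i = lo; i < hi; i++)
                s += a[i];
            return s;
        }
        int mid = lo + (hi - lo) / 2;
        long left, right;
        #pragma omp task shared(left)      /* left half runs as a child task */
        left = psum(a, lo, mid);
        right = psum(a, mid, hi);          /* right half on the current thread */
        #pragma omp taskwait               /* wait for the child task */
        return left + right;
    }

    int main(void)
    {
        enum { N = 100000 };
        static long a[N];
        for (int i = 0; i < N; i++)
            a[i] = 1;

        long s = 0;
        #pragma omp parallel
        #pragma omp single                 /* one thread seeds the recursion */
        s = psum(a, 0, N);

        printf("%ld\n", s);                /* expect 100000 */
        return 0;
    }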

Day 3 (27/4/12)
Data Intensive Science (Instructor: Sinan Al-Saffar)
1. Introduction to semantic graphs and ontologies
2. Datasets as graphs: concepts and implementations
3. Graph algorithms for semantic graph querying and mining
4. INSPIRE: visualizing large data sets
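
To make the "datasets as graphs" idea concrete, here is a toy sketch (the data and names are invented for illustration) of a semantic graph held as subject-predicate-object triples, with a single-predicate query over it:

    /* Sketch: a semantic graph as subject-predicate-object triples. */
    #include <stdio.h>
    #include <string.h>

    struct triple { const char *subj, *pred, *obj; };

    static const struct triple graph[] = {
        { "geneA", "regulates", "geneB"   },
        { "geneB", "locatedIn", "nucleus" },
        { "geneA", "regulates", "geneC"   },
    };

    int main(void)
    {
        /* In miniature: SELECT ?s ?o WHERE { ?s regulates ?o } */
        for (size_t i = 0; i < sizeof graph / sizeof graph[0]; i++)
            if (strcmp(graph[i].pred, "regulates") == 0)
                printf("%s regulates %s\n", graph[i].subj, graph[i].obj);
        return 0;
    }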


Event: NAG training workshop
Dates: 16-19th April 2012
Venue: Potential 1 & 2 at level 13, (#13-01 Connexis North), Fusionopolis
Instructor: Craig Lucas, Senior Technical Consultant, NAG
The purpose of the workshop is to introduce A*CRC users and Singapore researchers to the Numerical Algorithms Group (NAG) libraries, compilers and tools.
NAG produces numerical, data-mining, statistical and visualisation software components, compilers and application development tools for the solution of problems in a wide range of areas such as science, engineering, financial analysis and research. Produced by experts for use in a variety of applications, the NAG Library is the largest commercially available collection of numerical and statistical algorithms in the world. With over 1,600 tried and tested routines that are both flexible and portable, it remains at the core of thousands of programs and applications spanning the globe.
The NAG Library is widely used and trusted because of its unrivalled quality, reliability and portability. It is written by leading world experts in numerical analysis and is tailored to the entire range of computers, from a single PC or workstation to the world's largest supercomputers. The NAG Library is available for use with many programming languages and for many platforms and operating systems.
The entire NAG Library of 1,600 mathematical routines and other tools has been installed on A*CRC computers (Fuji, Axle and Aurora).
The entire stack consists of:
•         NAG Library for SMP & Multi-core (Fortran)
•         NAG Fortran Library
•         NAG C Library
•         NAG Fortran Compiler
A*CRC is happy to announce a special license agreement with NAG which covers:
1. All users from A*STAR Biomedical Research Council and Science and Engineering Research Council Institutes. This license covers use of NAG software on personal machines as well as on A*CRC supercomputers.
2. Guest users from the National University of Singapore (NUS), Nanyang Technological University (NTU), the National Environment Agency (NEA) of Singapore, Singapore Management University (SMU), the Singapore University of Technology and Design (SUTD), Duke-NUS, and the Campus for Research Excellence and Technological Enterprise (CREATE) may use NAG software on A*CRC supercomputers.
About the course:
Monday 16th April - 09.00 - 12.00
An Introduction to NAG’s Numerical Libraries and NAG Fortran Compiler
In this talk we introduce NAG's numerical libraries, services and the NAG Fortran Compiler.
We give an overview of the content of the libraries and show how they can be used with many languages (including Fortran, C/C++) and in many environments. We will discuss the advantages of numerical stability, choice of appropriate algorithm and the extensive NAG documentation. We also discuss the benefits of the NAG Fortran compiler.
All attendees are invited to install NAG Libraries and/or the NAG Fortran Compiler on their local machine or laptop prior to the course. The NAG software can also be found installed on IHPC supercomputers.
Monday 16th April - 13.00 - 17.00
The NAG Toolbox for MATLAB  
Here we will show how to use the NAG Toolbox for MATLAB, including some elements of MATLAB that you need to know. We will give demonstrations and show how easy it is to get help with the Toolbox fully embedded inside the MATLAB environment. We will also show some functionality and performance comparisons.
There will be time to experience the Toolbox for yourself by trying some simple exercises or looking at specific areas you are interested in. You do not need any prior knowledge of MATLAB to attend, although it would be very helpful. If you are running MATLAB on your machine we invite you to get the NAG Toolbox for MATLAB installed on your local machine or laptop prior to attending the course.
Tuesday 17th April - 09.00-12.00 and 13.00-16.00
and Wednesday 18th - 09.00-12.00
Introduction to Fortran 95
This course will teach you the main concepts and syntax of Fortran 95; we assume no prior knowledge of programming. We cover basic data types, mathematical operations, arrays and dynamic storage, IF statements, loops, functions, subroutines and modules, input and output, and the many built-in functions of Fortran.
Throughout the course we will emphasize good programming practice and each section of the course is supported by practical exercises.
Prerequisites: Attendees should be familiar with editing files in a Linux environment.
Wednesday 18th April - 13.00-17.00 
and Thursday 19th - 09.00-12.00 and 13.00-17.00
Introduction to OpenMP
OpenMP is the standard for writing parallel codes to run in a shared memory environment. It mainly involves adding compiler directives to an existing serial code. This course will introduce the concepts and essential syntax of OpenMP.
We review the shared memory environment and discuss the OpenMP execution model. We look at how work can be shared amongst cores and how load balancing is achieved through scheduling. OpenMP tasks are introduced, including how they have been embedded in the language since version 3.0 and how they are used to parallelize recursive algorithms and producer/consumer schemes. The course is supported by practical exercises.
Prerequisites: Attendees should be able to program in either Fortran or C and be familiar with editing and compiling in a Linux environment.
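
As a small preview of the scheduling material, the sketch below (illustrative only, not course material) load-balances a loop whose iterations have very uneven costs using schedule(dynamic); compile with an OpenMP-enabled compiler, e.g. gcc -fopenmp:

    /* Sketch: load balancing uneven loop iterations with schedule(dynamic). */
    #include <stdio.h>
    #include <omp.h>

    int main(void)
    {
        enum { N = 16 };
        double work[N];

        /* Iteration i costs roughly (i+1) units, so a static split would be
         * unbalanced; dynamic scheduling hands out iterations one at a time. */
        #pragma omp parallel for schedule(dynamic, 1)
        for (int i = 0; i < N; i++) {
            double s = 0.0;
            for (long k = 0; k < (long)(i + 1) * 1000000; k++)
                s += 1.0 / (double)(k + 1);
            work[i] = s;
        }

        printf("ran with up to %d threads; work[N-1] = %f\n",
               omp_get_max_threads(), work[N - 1]);
        return 0;
    }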