CfP: High Performance Computing Symposium

November 8th, 2012

The 21st High Performance Computing Symposium (HPC 2013) is devoted to the impact of high performance computing and communications on computer simulations. Advances in multicore and many-core architectures, networking, high-end computers, large data stores, and middleware capabilities are ushering in a new era of high performance parallel and distributed simulations. Along with these new capabilities come new challenges in computing and system modeling. The goal of HPC 2013 is to encourage innovation in high performance computing and communication technologies and to promote synergistic advances in modeling methodologies and simulation. It will promote the exchange of ideas and information between universities, industry, and national laboratories about new developments in system modeling, high performance computing and communication, and scientific computing and simulation.

Symposium on Personal High-Performance Computing

September 20th, 2012

The Vrije Universiteit Brussel, Erasmus Hogeschool Brussel and Lessius Hogeschool are pleased to invite you to a symposium on Personal High-Performance Computing. The symposium aims at bringing together academia and industry to discuss recent advances in using accelerators such as GPUs or FPGAs to speed up computationally intensive applications. We target single systems such as PCs, laptops or processor boards, hence the name ‘personal’ HPC.

Scientists are encouraged to submit abstracts to be presented at the poster session. All information can be found at https://sites.google.com/site/phpc2012bxl.

CfP: 3rd International Workshop on Performance Modeling, Benchmarking and Simulation of High Performance Computing Systems (PMBS12)

August 11th, 2012

This workshop is concerned with the comparison of high-performance computing systems through performance modeling, benchmarking or the use of tools such as simulators. We are particularly interested in research which reports the ability to measure and make tradeoffs in software/hardware co-design to improve sustained application performance. We are also keen to capture the assessment of future systems, for example through work that ensures continued application scalability through peta- and exa-scale systems.


5th Workshop on UnConventional High Performance Computing 2012

June 3rd, 2012

Together with EuroPar-12, the 5th Workshop on UnConventional High Performance Computing 2012 (UCHPC 2012) will take place on August 27/28 on Rhodes Island, Greece. The workshop tries to capture solutions for HPC which are unconventional today but could become conventional and significant tomorrow. While GPGPU is already widely used in HPC, there are still all kinds of issues around best exploitation and programmer productivity. Submission deadline: June 6, 2012. For more details, see
http://www.lrr.in.tum.de/~weidendo/uchpc12. UPDATE: Submission deadline extended to June 11.

HOOMD-blue 0.10.0 release

December 19th, 2011

HOOMD-blue performs general-purpose particle dynamics simulations on a single workstation, taking advantage of NVIDIA GPUs to attain a level of performance equivalent to many cores on a fast cluster. Flexible and configurable, HOOMD-blue is currently being used for coarse-grained molecular dynamics simulations of nano-materials, glasses, and surfactants, dissipative particle dynamics (DPD) simulations of polymers, and crystallization of metals.

HOOMD-blue 0.10.0 adds many new features.

On the Acceleration of Wavefront Applications using Distributed Many-Core Architectures

December 14th, 2011

Abstract:

In this paper we investigate the use of distributed graphics processing unit (GPU)-based architectures to accelerate pipelined wavefront applications—a ubiquitous class of parallel algorithms used for the solution of a number of scientific and engineering applications. Specifically, we employ a recently developed port of the LU solver (from the NAS Parallel Benchmark suite) to investigate the performance of these algorithms on high-performance computing solutions from NVIDIA (Tesla C1060 and C2050) as well as on traditional clusters (AMD/InfiniBand and IBM BlueGene/P).

Benchmark results are presented for problem classes A to C and a recently developed performance model is used to provide projections for problem classes D and E, the latter of which represents a billion-cell problem. Our results demonstrate that while the theoretical performance of GPU solutions will far exceed that of many traditional technologies, the sustained application performance is currently comparable for scientific wavefront applications. Finally, a breakdown of the GPU solution is conducted, exposing PCIe overheads and decomposition constraints. A new k-blocking strategy is proposed to improve the future performance of this class of algorithm on GPU-based architectures.

(Pennycook, S.J., Hammond, S.D., Mudalige, G.R., Wright, S.A. and Jarvis, S.A.: “On the Acceleration of Wavefront Applications using Distributed Many-Core Architectures”,  The Computer Journal (in press) [DOI] [PREPRINT])
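To make the dependency structure concrete: in a pipelined wavefront sweep over a 2D grid, each cell depends on already-computed west and south neighbours, so only the cells on a single anti-diagonal can be processed in parallel at any one time. The following is a minimal CUDA sketch of that pattern (an illustration only, not the LU benchmark port used in the paper); the grid size and the update formula are placeholders.

```cuda
// Minimal sketch of a pipelined wavefront sweep on a GPU (illustrative only,
// not the paper's LU benchmark code). Cells on the anti-diagonal i + j = d
// depend only on neighbours from earlier diagonals, so each diagonal is one
// parallel kernel launch while the diagonals themselves run in order.
#include <cstdio>
#include <cuda_runtime.h>

#define N 1024   // placeholder grid size

__global__ void sweep_diagonal(float *grid, int d)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int j = d - i;                               // stay on anti-diagonal i + j = d
    if (i < 1 || i >= N || j < 1 || j >= N) return;
    // Placeholder update: each cell uses its west and south neighbours,
    // which were produced by earlier kernel launches.
    grid[i * N + j] = 0.5f * (grid[(i - 1) * N + j] + grid[i * N + (j - 1)]);
}

int main()
{
    float *grid;
    cudaMalloc(&grid, N * N * sizeof(float));
    cudaMemset(grid, 0, N * N * sizeof(float));

    // Sweep one diagonal at a time. Available parallelism grows and then
    // shrinks with the diagonal length, which is one reason sustained GPU
    // performance for wavefront codes lags the theoretical peak.
    int threads = 256, blocks = (N + threads - 1) / threads;
    for (int d = 2; d <= 2 * (N - 1); ++d)
        sweep_diagonal<<<blocks, threads>>>(grid, d);

    cudaDeviceSynchronize();
    cudaFree(grid);
    printf("wavefront sweep complete\n");
    return 0;
}
```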

rCUDA 3.1 Released

October 20th, 2011

The new version 3.1 of rCUDA (Remote CUDA), the open-source package that allows CUDA calls to be executed on remote GPUs, is now available. Release highlights:

  • Fully updated API to CUDA 4.0 (added support for modules “Peer Device Memory Access” and “Unified Addressing”).
  • Fixed low level Surface Reference management functions.

For further information, please visit the rCUDA webpage  at http://www.gap.upv.es/rCUDA.
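For context on the two CUDA 4.0 modules named above, the sketch below shows the corresponding host-side calls on a plain, local CUDA installation: unified addressing lets cudaMemcpy infer the transfer direction from its pointers, and peer device memory access lets one GPU address another GPU's memory directly. This is a hypothetical two-GPU example using only the standard CUDA runtime API, not rCUDA-specific code; with rCUDA the same calls would be forwarded to remote GPUs.

```cuda
// Hypothetical two-GPU example of the CUDA 4.0 features covered by the
// updated rCUDA API: Unified Addressing and Peer Device Memory Access.
// Standard CUDA runtime calls only; nothing here is rCUDA-specific.
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int ndev = 0;
    cudaGetDeviceCount(&ndev);
    if (ndev < 2) { printf("this sketch assumes two GPUs\n"); return 0; }

    const size_t bytes = 1 << 20;
    float *src, *dst;
    cudaSetDevice(0);
    cudaMalloc(&src, bytes);                 // buffer on device 0
    cudaSetDevice(1);
    cudaMalloc(&dst, bytes);                 // buffer on device 1

    // Peer Device Memory Access: allow the current device (1) to read and
    // write device 0's memory directly, if the hardware supports it.
    int can_access = 0;
    cudaDeviceCanAccessPeer(&can_access, 1, 0);
    if (can_access)
        cudaDeviceEnablePeerAccess(0, 0);

    // Unified Addressing: host and device pointers share one address space,
    // so cudaMemcpyDefault lets the runtime infer the transfer direction.
    cudaMemcpy(dst, src, bytes, cudaMemcpyDefault);

    cudaDeviceSynchronize();
    printf("peer access %s\n", can_access ? "enabled" : "not supported");
    return 0;
}
```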

CfP: 20th High Performance Computing Symposium 2012

October 7th, 2011

The 2012 Spring Simulation Multi-conference will feature the 20th High Performance Computing Symposium (HPC 2012), devoted to the impact of high performance computing and communications on computer simulations. Topics of interest include:

  • high performance/large scale application case studies,
  • GPUs for general purpose computations (GPGPU),
  • multicore and many-core computing,
  • power aware computing,
  • large scale visualization and data management,
  • tools and environments for coupling parallel codes,
  • parallel algorithms and architectures,
  • high performance software tools,
  • component technologies for high performance computing.

Important dates: Paper submission due: December 2, 2011; Notification of acceptance: January 13, 2012; Revised manuscript due: January 27, 2012; Symposium: March 26–29, 2012.

rCUDA 3.0a released

July 17th, 2011

A new alpha release of rCUDA 3.0 (Remote CUDA), the open-source package that allows CUDA calls to be executed on remote GPUs, is now available. Major improvements in this version are:

  • Partially updated API to 4.0
  • Added compatibility support with CUDA 4.0 environment
  • Updated CUBLAS API to 4.0 for the most common CUBLAS routines
  • Fixed some bugs
  • General performance improvements

For further information, please visit the rCUDA webpage.

CheCL: Transparent Checkpointing and Process Migration of OpenCL Applications

June 26th, 2011

Abstract:

We propose a new transparent checkpoint/restart (CPR) tool, named CheCL, for high performance and dependable GPU computing. CheCL can perform CPR on an OpenCL application program without any modification or recompilation of its code. A conventional checkpointing system fails to checkpoint a process if the process uses OpenCL. Therefore, in CheCL, every API call is forwarded to another process called an API proxy, and the API proxy invokes the API function; two processes, an application process and an API proxy, are launched for an OpenCL application. In this case, as the application process is not an OpenCL process but a standard process, it can be safely checkpointed. While intercepting all API calls, CheCL records the information necessary for restoring OpenCL objects. The application process does not hold any OpenCL handles, but instead holds CheCL handles that keep such information. Those handles are automatically converted to OpenCL handles and then passed to API functions. Upon restart, OpenCL objects are automatically restored based on the recorded information. This paper demonstrates the feasibility of transparent checkpointing of OpenCL programs including MPI applications, and quantitatively evaluates the runtime overheads. We also discuss how CheCL can enable process migration of OpenCL applications among distinct nodes, and among different kinds of compute devices such as a CPU and a GPU.

(Hiroyuki Takizawa, Kentaro Koyama, Katuto Sato, Kazuhiko Komatsu, and Hiroaki Kobayashi: “CheCL: Transparent Checkpointing and Process Migration of OpenCL Applications”, Proceedings of International Parallel and Distributed Processing Symposium (IPDPS11), 2011. [PDF])
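The key idea is that the application process never holds raw OpenCL objects: every API call is intercepted, the arguments needed to restore the corresponding objects are recorded, and the real call is carried out elsewhere (in CheCL's case, by a separate API-proxy process). The snippet below is a much-simplified, hypothetical illustration of the interception half of that idea as an LD_PRELOAD-style shim around clCreateBuffer; it is not CheCL's code, and it forwards to the local OpenCL library rather than to a proxy process.

```cpp
// Hypothetical LD_PRELOAD-style shim illustrating API-call interception
// (a simplified stand-in for CheCL's approach, not its actual code).
// Build: g++ -std=c++11 -shared -fPIC shim.cpp -o libshim.so -ldl
// Run:   LD_PRELOAD=./libshim.so ./my_opencl_app
#ifndef _GNU_SOURCE
#define _GNU_SOURCE            // for RTLD_NEXT
#endif
#include <CL/cl.h>
#include <dlfcn.h>
#include <cstdio>
#include <vector>

struct BufferRecord {          // just enough to re-create the buffer on restart
    cl_mem_flags flags;
    size_t       size;
};
static std::vector<BufferRecord> g_log;

extern "C" cl_mem clCreateBuffer(cl_context ctx, cl_mem_flags flags,
                                 size_t size, void *host_ptr,
                                 cl_int *errcode_ret)
{
    // Locate the real implementation. CheCL instead forwards the call to a
    // separate API-proxy process so the application stays checkpointable.
    using fn_t = cl_mem (*)(cl_context, cl_mem_flags, size_t, void *, cl_int *);
    static fn_t real = (fn_t)dlsym(RTLD_NEXT, "clCreateBuffer");

    g_log.push_back({flags, size});          // record for later restoration
    std::fprintf(stderr, "[shim] clCreateBuffer(%zu bytes)\n", size);
    return real(ctx, flags, size, host_ptr, errcode_ret);
}
```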

 
