Fast Visualization of Gaussian Density Surfaces for Molecular Dynamics and Particle System Trajectories

August 1st, 2012

Abstract:

We present an efficient algorithm for computation of surface representations enabling interactive visualization of large dynamic particle data sets. Our method is based on a GPU-accelerated data-parallel algorithm for computing a volumetric density map from Gaussian weighted particles. The algorithm extracts an isovalue surface from the computed density map, using fast GPU-accelerated Marching Cubes. This approach enables interactive frame rates for molecular dynamics simulations consisting of millions of atoms. The user can interactively adjust the display of structural detail on a continuous scale, ranging from atomic detail for in-depth analysis, to reduced detail visual representations suitable for viewing the overall architecture of molecular complexes. The extracted surface is useful for interactive visualization, and provides a basis for structure analysis methods.

(Michael Krone, John E. Stone, Thomas Ertl, and Klaus Schulten, “Fast visualization of Gaussian density surfaces for molecular dynamics and particle system trajectories”, In EuroVis – Short Papers 2012, pp. 67-71, 2012. [WWW])
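At its core, the density map step described in the abstract amounts to evaluating, at every grid voxel, the sum of Gaussian contributions from nearby particles; the resulting scalar field is the input to the Marching Cubes isosurface extraction. The CUDA kernel below is a minimal illustrative sketch of that step, not the authors' implementation; all names and parameters are hypothetical, and a real implementation would bin particles spatially rather than loop over all of them per voxel.

__global__ void gaussian_density_map(const float4 *particles,  // xyz = position, w = radius
                                     int nparticles,
                                     float3 grid_origin, float voxel_size,
                                     int3 grid_dim, float *density)
{
    // One thread per voxel of the density grid (hypothetical layout).
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    int z = blockIdx.z * blockDim.z + threadIdx.z;
    if (x >= grid_dim.x || y >= grid_dim.y || z >= grid_dim.z) return;

    float3 p = make_float3(grid_origin.x + x * voxel_size,
                           grid_origin.y + y * voxel_size,
                           grid_origin.z + z * voxel_size);

    float rho = 0.0f;
    // Brute-force accumulation of Gaussian-weighted particle contributions.
    for (int i = 0; i < nparticles; ++i) {
        float4 a = particles[i];
        float dx = p.x - a.x, dy = p.y - a.y, dz = p.z - a.z;
        float r2 = dx * dx + dy * dy + dz * dz;
        float sigma2 = a.w * a.w;                 // Gaussian width tied to particle radius
        rho += expf(-r2 / (2.0f * sigma2));
    }
    density[(z * grid_dim.y + y) * grid_dim.x + x] = rho;
}

Varying the Gaussian width (or the isovalue handed to Marching Cubes) is what allows the displayed level of structural detail to change continuously, from per-atom surfaces to a coarse overall shape.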

A GPU-Based Multi-Swarm PSO Method for Parameter Estimation in Stochastic Biological Systems Exploiting Discrete-Time Target Series

August 1st, 2012

Abstract:

Parameter estimation (PE) of biological systems is one of the most challenging problems in Systems Biology. Here we present a PE method that integrates particle swarm optimization (PSO) to estimate the values of kinetic constants, and a stochastic simulation algorithm to reconstruct the dynamics of the system. The fitness of candidate solutions, corresponding to vectors of reaction constants, is defined as the point-to-point distance between a simulated dynamics and a set of experimental measures, carried out using discrete-time sampling and various initial conditions. A multi-swarm PSO topology with different modalities of particle migration is used to account for the different laboratory conditions under which the experimental data are usually sampled. The whole method has been specifically designed for, and is entirely executed on, the GPU to reduce computational costs. We show the effectiveness of our method and discuss its performance on an enzymatic kinetics model and a prokaryotic gene expression network.

(M. Nobile, D. Besozzi, P. Cazzaniga, G. Mauri and D. Pescini: “A GPU-based multi-swarm PSO method for parameter estimation in stochastic biological systems exploiting discrete-time target series”, in M. Giacobini, L. Vanneschi, W. Bush, editors, Evolutionary Computation, Machine Learning and Data Mining in Bioinformatics, LNCS vol. 7246, Springer, pp. 74-85, 2012. [DOI])
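As a point of reference, the fitness evaluation described above (point-to-point distance between a simulated dynamics and the discrete-time target series) could be laid out on the GPU roughly as follows. This is a hypothetical CUDA sketch with invented names, not the authors' code; it assumes the stochastic simulation has already produced, for each candidate parameter vector, species amounts sampled at the same time points as the experimental data.

__global__ void fitness_kernel(const float *simulated,  // [ncandidates][nsamples][nspecies]
                               const float *target,     // [nsamples][nspecies]
                               int nsamples, int nspecies,
                               int ncandidates, float *fitness)
{
    // One thread per PSO particle, i.e. per candidate vector of reaction constants.
    int c = blockIdx.x * blockDim.x + threadIdx.x;
    if (c >= ncandidates) return;

    const float *sim = simulated + (size_t)c * nsamples * nspecies;
    float dist = 0.0f;
    // Point-to-point distance between simulated and measured species amounts.
    for (int t = 0; t < nsamples; ++t)
        for (int s = 0; s < nspecies; ++s)
            dist += fabsf(sim[t * nspecies + s] - target[t * nspecies + s]);

    fitness[c] = dist;  // lower distance means a better candidate; PSO minimizes this value
}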

OpenACC Compilers Now Available from PGI

July 27th, 2012

PGI Release 12.6 is now out. New in this release:

  • PGI Accelerator compilers — first release of the Fortran and C compilers to include comprehensive support for the OpenACC 1.0 specification including the acc cache construct and the entire OpenACC API library. See the PGI Accelerator page for a complete list of supported features.
  • CUDA Toolkit — PGI Accelerator compilers and CUDA Fortran now include support for CUDA Toolkit version 4.2; version 4.1 is now the default.

Download a free trial from the PGI website at http://www.pgroup.com/support/download_pgi2012.php?view=current. An upcoming PGI webinar with Michael Wolfe, “Using OpenACC Directives with the PGI Accelerator Compilers”, sponsored by NVIDIA, takes place at 9:00 AM PDT on July 31st. Register at http://www.pgroup.com/webinar212.htm?clicksource=gpgpu712.

Policy-based Tuning for Performance Portability and Library Co-optimization

July 22nd, 2012

Abstract:

Although modular programming is a fundamental software development practice, software reuse within contemporary GPU kernels is uncommon. For GPU software assets to be reusable across problem instances, they must be inherently flexible and tunable. To illustrate, we survey the performance-portability landscape for a suite of common GPU primitives, evaluating thousands of reasonable program variants across a large diversity of problem instances (microarchitecture, problem size, and data type). While individual specializations provide excellent performance for specific instances, we find no variants with universally reasonable performance. In this paper, we present a policy-based design idiom for constructing reusable, tunable software components that can be co-optimized with the enclosing kernel for the specific problem and processor at hand. In particular, this approach enables flexible granularity coarsening which allows the expensive aspects of communication and the redundant aspects of data parallelism to scale with the width of the processor rather than the problem size. From a small library of tunable device subroutines, we have constructed the fastest, most versatile GPU primitives for reduction, prefix and segmented scan, duplicate removal, reduction-by-key, sorting, and sparse graph traversal.

(Duane Merrill, Michael Garland and Andrew Grimshaw, “Policy-based Tuning for Performance Portability and Library Co-optimization”, Innovative Parallel Computing 2012. [WWW])
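To make the idiom concrete, the sketch below shows one hypothetical way a tunable, policy-based reduction component can be expressed in CUDA C++: the tuning knobs live in a compile-time policy type, so the same component can be re-specialized per data type, problem size, and GPU generation, and the ITEMS_PER_THREAD knob implements the granularity coarsening mentioned in the abstract. This is an illustration of the general idiom, not the authors' library code.

template <int _BLOCK_THREADS, int _ITEMS_PER_THREAD>
struct ReductionPolicy {
    static const int BLOCK_THREADS    = _BLOCK_THREADS;
    static const int ITEMS_PER_THREAD = _ITEMS_PER_THREAD;
    static const int TILE_SIZE        = _BLOCK_THREADS * _ITEMS_PER_THREAD;
};

// The kernel is specialized by a policy chosen for the target GPU and problem
// instance; work per thread scales with ITEMS_PER_THREAD (granularity coarsening)
// while the cooperative part scales with processor width.
template <typename Policy, typename T>
__global__ void reduce_tiles(const T *in, T *block_sums, int n)
{
    __shared__ T smem[Policy::BLOCK_THREADS];

    // Each thread serially accumulates ITEMS_PER_THREAD elements.
    T sum = 0;
    int base = blockIdx.x * Policy::TILE_SIZE + threadIdx.x;
    for (int i = 0; i < Policy::ITEMS_PER_THREAD; ++i) {
        int idx = base + i * Policy::BLOCK_THREADS;
        if (idx < n) sum += in[idx];
    }

    // Cooperative tree reduction within the block (BLOCK_THREADS assumed a power of two).
    smem[threadIdx.x] = sum;
    __syncthreads();
    for (int stride = Policy::BLOCK_THREADS / 2; stride > 0; stride >>= 1) {
        if (threadIdx.x < stride) smem[threadIdx.x] += smem[threadIdx.x + stride];
        __syncthreads();
    }
    if (threadIdx.x == 0) block_sums[blockIdx.x] = smem[0];
}

// Example specializations for different targets (illustrative values only):
//   reduce_tiles< ReductionPolicy<128, 8>, float  ><<<grid, 128>>>(d_in, d_partial, n);
//   reduce_tiles< ReductionPolicy<256, 4>, double ><<<grid, 256>>>(d_in, d_partial, n);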

MC# 3.0 with GPU support

July 22nd, 2012

Version 3.0 of the MC# programming system has been released. MC# is a universal parallel programming language aimed at any parallel architecture: multicore processors, systems with GPUs, or clusters. It is an extension of the C# language and supports a high-level parallel programming style.

Optimization of a Broadband Discone Antenna Design and Platform Installed Radiation Patterns Using a GPU-Accelerated Savant/WIPL-D Hybrid Approach

July 20th, 2012

Abstract:

Traditional design guidelines for broadband antennas do not always produce satisfactory performance for the desired frequency range of interest. In addition, the accurate prediction of the free-space antenna performance is not sufficient to determine if the antenna will meet a larger system requirement because the performance of the antenna can change significantly when it is installed on a platform. Antenna design software, such as WIPL-D, addresses the difficulties of designing antennas with broadband performance by providing optimization software that can automatically resize the various antenna dimensions until a desired performance criterion is met. At high frequencies, the electrically large size of the platform makes it computationally difficult, or impossible, to directly consider the interactions between the antenna and the platform when designing the antenna in a full-wave solver. This paper describes an approach for the design and optimization of a discone antenna and its subsequent installation on a large commercial aircraft. The antenna design is optimized across a wide frequency range using the WIPL-D Optimizer. The resulting discone antenna design is then imported into Savant-Hybrid, a hybrid asymptotic and full-wave solver, and the installed antenna performance is simulated using GPU acceleration at multiple potential antenna locations to determine the location that provides the least-degraded installed antenna performance.

(Tod Courtney, Matthew C. Miller, John E. Stone, and Robert A. Kipp: “Optimization of a Broadband Discone Antenna Design and Platform Installed Radiation Patterns Using a GPU-Accelerated Savant/WIPL-D Hybrid Approach”, Proceedings of the Applied Computational Electromagnetics Symposium (ACES 2012), Columbus, Ohio, April 2012. [PDF])

Virtual School of Computational Science and Engineering

July 20th, 2012

The Virtual School of Computational Science and Engineering (VSCSE) helps graduate students, post-docs and young professionals from all disciplines and institutions across the country gain the skills they need to use advanced computational resources to advance their research. The VSCSE deploys conventional collaboration technologies in unconventional ways to create a national-scale virtual classroom that provides multiple high-quality audio and video channels for speakers, remote audiences, and various forms of content of immediate educational value to students.


Facing the Multicore Challenge III

July 14th, 2012

Update: The submission deadline has been extended to July 28, 2012.

Submissions are cordially invited for MCC-III, to be held in Stuttgart, Germany, September 19-21, 2012. This conference is the third in a series that started in 2010 at the Heidelberg Academy of Sciences (HAW) and continued in 2011 at the Karlsruhe Institute of Technology (KIT) and the Engineering Mathematics and Computing Lab (EMCL). It aims to combine new aspects of multi-/manycore microprocessor technologies, parallel applications, numerical simulation, software development and tools. Contributions are welcome from all participating disciplines. Particular emphasis is placed on the support and advancement of young scientists, in addition to high-quality invited keynote talks and tutorials. More information, including the full call for papers, topics of interest and submission instructions, is available at http://www.multicore-challenge.org

Implementing a code generator for fast matrix multiplication in OpenCL on the GPU

July 11th, 2012

Abstract:

This paper presents results of an implementation of a code generator for fast general matrix multiply (GEMM) kernels. Given a set of parameters, the code generator produces the corresponding GEMM kernel written in OpenCL. The produced kernels are optimized for high performance on GPUs from AMD. Access latency to GPU global memory is the main obstacle to high performance. This study shows that storing matrix data in a block-major layout increases the performance and stability of GEMM kernels. On the Tahiti GPU (Radeon HD 7970), our DGEMM (double-precision GEMM) and SGEMM (single-precision GEMM) kernels achieve performance of up to 848 GFlop/s (90% of peak) and 2646 GFlop/s (70% of peak), respectively.

(K. Matsumoto, N. Nakasato, S. G. Sedukhin: “Implementing a code generator for fast matrix multiplication in OpenCL on the GPU”, accepted for Special Session: Auto-Tuning for Multicore and GPU (ATMG), IEEE 6th International Symposium on Embedded Multicore SoCs (MCSoC-12), Sep. 2012. [PDF])
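The paper's kernels are written in OpenCL; purely as an illustration of the block-major idea, and in CUDA-style C for consistency with the other sketches on this page, the address of element (row, col) of an n x n matrix stored as contiguous B x B blocks can be computed as below. All names are hypothetical and not taken from the paper.

__host__ __device__ inline
size_t block_major_index(int row, int col, int n, int B)
{
    // Which B x B block the element falls in, and its offset inside that block.
    int block_row = row / B, in_row = row % B;
    int block_col = col / B, in_col = col % B;
    int blocks_per_row = n / B;                  // assumes n is a multiple of B
    size_t block_id = (size_t)block_row * blocks_per_row + block_col;
    // Blocks are stored contiguously, so a work-group reading one tile touches a
    // single contiguous region of memory instead of B widely strided rows.
    return block_id * (size_t)(B * B) + (size_t)in_row * B + in_col;
}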

A Study of Persistent Threads Style GPU Programming for GPGPU Workloads

July 4th, 2012

Abstract:

In this paper, we characterize and analyze an increasingly popular style of programming for the GPU called Persistent Threads (PT). We present a concise formal definition for this programming style, and discuss the differences between the traditional GPU programming style (nonPT) and PT, why PT is attractive for some high-performance usage scenarios, and when using PT may or may not be appropriate. We identify limitations of the nonPT style and four primary use cases that PT could be useful in addressing: CPU-GPU synchronization, load balancing/irregular parallelism, producer-consumer locality, and global synchronization. Through micro-kernel benchmarks we show that the PT approach can achieve up to an order-of-magnitude speedup over nonPT kernels, but can also result in performance loss in many cases. We conclude by discussing the hardware and software fundamentals that will influence the development of Persistent Threads as a programming style in future systems.

(Kshitij Gupta, Jeff A. Stuart and John D. Owens: “A Study of Persistent Threads Style GPU Programming for GPGPU Workloads”, Proceedings of Innovative Parallel Computing, May 2012. [WWW])
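For readers unfamiliar with the pattern: a persistent-threads kernel launches only as many thread blocks as the GPU can keep resident, and each block repeatedly claims work from a global queue instead of the runtime launching one thread per work item. The sketch below is a minimal, hypothetical illustration of that structure, not code from the paper.

__device__ int work_counter;  // index of the next unclaimed work item

__global__ void persistent_kernel(const float *in, float *out, int nitems)
{
    while (true) {
        __shared__ int base;
        // One thread claims a block-sized chunk of work for the whole block.
        if (threadIdx.x == 0)
            base = atomicAdd(&work_counter, blockDim.x);
        __syncthreads();

        if (base >= nitems) break;              // queue drained: every thread exits together

        int item = base + threadIdx.x;
        if (item < nitems)
            out[item] = 2.0f * in[item];        // placeholder for real per-item work
        __syncthreads();                        // keep the block converged before the next claim
    }
}

// Host side (sketch): zero work_counter with cudaMemcpyToSymbol, then launch only
// enough blocks to fill the device (e.g. number of SMs times blocks per SM),
// rather than nitems / blockDim blocks as in the nonPT style.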
