A Flexible Kernel for Adaptive Mesh Refinement on GPU

April 1st, 2008

This paper by Boubekeur (TU Berlin) and Schlick (INRIA) presents a flexible GPU kernel for adaptive on-the-fly refinement of meshes with arbitrary topology. By simply reserving a small amount of GPU memory to store a set of adaptive refinement patterns, on-the-fly refinement is performed by the GPU, without any preprocessing or additional topology data structure. The level of adaptive refinement can be controlled by specifying a per-vertex depth tag, in addition to the usual position, normal, color and texture coordinates. This depth tag is used by the kernel to instantiate the correct refinement pattern. Finally, the refined patch produced for each triangle can be displaced by the vertex shader, using any kind of geometric refinement, such as Bézier patch smoothing, scalar-valued displacement, procedural geometry synthesis or subdivision surfaces. This refinement engine requires no multi-pass rendering, fragment processing, or special preprocessing of the input mesh structure, and it can be implemented on any GPU with vertex shading capabilities. (A Flexible Kernel for Adaptive Mesh Refinement on GPU, Tamy Boubekeur and Christophe Schlick, Computer Graphics Forum, 2008.)
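
The pattern-instancing idea can be sketched outside the graphics pipeline as well. The following CUDA kernel is only an illustration, not the authors' vertex-shader code: it assumes a hypothetical precomputed table of barycentric refinement patterns, selects a pattern per triangle from a depth tag, and emits refined vertices by barycentric interpolation.

    // Illustrative CUDA sketch of pattern-based refinement (assumed data layout,
    // not the paper's vertex-shader implementation).
    struct Tri { float3 v0, v1, v2; int depth; };   // depth tag chosen per triangle

    // patternBary holds barycentric coordinates of refined vertices for all depths;
    // patternOffset/patternSize index the sub-array used for a given depth tag.
    __global__ void refineTriangles(const Tri* tris, int numTris,
                                    const float3* patternBary,
                                    const int* patternOffset, const int* patternSize,
                                    float3* outVerts, const int* outOffset)
    {
        int t = blockIdx.x * blockDim.x + threadIdx.x;
        if (t >= numTris) return;

        Tri tri  = tris[t];
        int base = patternOffset[tri.depth];
        int n    = patternSize[tri.depth];
        int dst  = outOffset[t];

        for (int i = 0; i < n; ++i) {
            float3 b = patternBary[base + i];          // (u, v, w), u + v + w = 1
            outVerts[dst + i] = make_float3(
                b.x * tri.v0.x + b.y * tri.v1.x + b.z * tri.v2.x,
                b.x * tri.v0.y + b.y * tri.v1.y + b.z * tri.v2.y,
                b.x * tri.v0.z + b.y * tri.v1.z + b.z * tri.v2.z);
            // A displacement function (Bézier smoothing, displacement map, ...)
            // would be applied here, as the paper does in the vertex shader.
        }
    }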

Accelerating Resolution-of-the-Identity Second-Order Møller-Plesset Quantum Chemistry Calculations with Graphical Processing Units

February 11th, 2008

In this paper we describe a modification of a general purpose code for quantum mechanical calculations of molecular properties (Q-Chem) to use a graphical processing unit. We report a 4.3x speedup of the resolution-of-the-identity second-order Møller-Plesset perturbation theory execution time for single point energy calculation of linear alkanes. Furthermore, we obtain the correlation and total energy for n-octane conformers as the torsional angle of central bond is rotated to show that precision is not lost for these types of calculations. This code modification is accomplished using the NVIDIA CUDA Basic Linear Algebra Subprograms (CUBLAS) library for an NVIDIA Quadro FX 5600 graphics card. Finally, we anticipate further speedups of other matrix algebra based electronic structure calculations using a similar approach. (Accelerating Resolution-of-the-Identity Second-Order Møller-Plesset Quantum Chemistry Calculations with Graphical Processing Units. Vogt, L., Olivares-Amaya, R., Kermes, S., Shao, Y., Amador-Bedolla, C., and Aspuru-Guzik, A. J. Phys. Chem. A, 2008, DOI: 10.1021/jp0776762)
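
The speedup comes from routing large matrix multiplications in the RI-MP2 evaluation through CUBLAS. The snippet below is a generic sketch of that offload pattern using the CUDA 1.x-era CUBLAS API; it is not code from the Q-Chem modification, and the actual RI-MP2 contractions are more involved.

    #include <cublas.h>   // legacy CUBLAS API shipped with early CUDA releases

    // Compute C = A * B on the GPU for column-major m x k and k x n matrices.
    void gemm_on_gpu(int m, int n, int k, const float* A, const float* B, float* C)
    {
        float *dA, *dB, *dC;
        cublasInit();

        cublasAlloc(m * k, sizeof(float), (void**)&dA);
        cublasAlloc(k * n, sizeof(float), (void**)&dB);
        cublasAlloc(m * n, sizeof(float), (void**)&dC);

        cublasSetMatrix(m, k, sizeof(float), A, m, dA, m);
        cublasSetMatrix(k, n, sizeof(float), B, k, dB, k);

        // Single-precision GEMM: C = 1.0 * A * B + 0.0 * C
        cublasSgemm('N', 'N', m, n, k, 1.0f, dA, m, dB, k, 0.0f, dC, m);

        cublasGetMatrix(m, n, sizeof(float), dC, m, C, m);

        cublasFree(dA); cublasFree(dB); cublasFree(dC);
        cublasShutdown();
    }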

High-throughput sequence alignment using Graphics Processing Units

February 11th, 2008

The recent availability of new, less expensive high-throughput DNA sequencing technologies has yielded a dramatic increase in the volume of sequence data that must be analyzed. This paper by University of Maryland researchers Michael Schatz, Cole Trapnell, Art Delcher, and Amitabh Varshney describes MUMmerGPU, an open-source high-throughput parallel pairwise local sequence alignment program that runs on GPUs. MUMmerGPU uses the new Compute Unified Device Architecture (CUDA) from NVIDIA to align multiple query sequences against a single reference sequence stored as a suffix tree. By processing the queries in parallel on the graphics card, MUMmerGPU achieves more than a 10-fold speedup over a serial CPU version of the sequence alignment kernel, despite the very low arithmetic intensity of the task. (High-throughput sequence alignment using Graphics Processing Units, Schatz, M.C., Trapnell, C., Delcher, A.L., Varshney, A. (2007), BMC Bioinformatics 8:474.)
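
The parallelization pattern is one query per GPU thread, with all threads reading the same reference index. The kernel below is only a schematic of that pattern, using a hypothetical flat reference string and naive exact matching; MUMmerGPU itself traverses a suffix tree of the reference held in texture memory.

    // Schematic only: one query per thread against a shared reference string.
    // MUMmerGPU instead walks a suffix tree of the reference stored on the card.
    __global__ void naiveMatchKernel(const char* reference, int refLen,
                                     const char* queries, int queryLen, int numQueries,
                                     int* bestStart)   // match start per query, -1 if none
    {
        int q = blockIdx.x * blockDim.x + threadIdx.x;
        if (q >= numQueries) return;

        const char* query = queries + q * queryLen;
        int found = -1;
        for (int s = 0; s + queryLen <= refLen && found < 0; ++s) {
            int i = 0;
            while (i < queryLen && reference[s + i] == query[i]) ++i;
            if (i == queryLen) found = s;      // exact match at position s
        }
        bestStart[q] = found;
    }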

Parallel Implementation of the 2D Discrete Wavelet Transform on Graphics Processing Units: Filter Bank versus Lifting

February 11th, 2008

Abstract: “The widespread usage of the Discrete Wavelet Transform (DWT) has motivated the development of fast DWT algorithms and their tuning on all sorts of computer systems. Several studies have compared the performance of the most popular schemes, known as Filter Bank (FBS) and Lifting (LS), and have always concluded that Lifting is the most efficient option. However, there is no such study on streaming processors such as modern Graphic Processing Units (GPUs). Current trends have transformed these devices into powerful stream processors with enough flexibility to perform intensive and complex floating-point calculations. The opportunities opened up by these platforms, as well as the growing popularity of the DWT within the computer graphics field, make a new performance comparison of great practical interest. Our study indicates that FBS outperforms LS in current generation GPUs. In our experiments, the actual FBS gains range between 10% and 140%, depending on the problem size and the type and length of the wavelet filter. Moreover, design trends suggest higher gains in future generation GPUs.” (Parallel Implementation of the 2D Discrete Wavelet Transform on Graphics Processing Units: Filter Bank versus Lifting. Christian Tenllado, Javier Setoain, Manuel Prieto, Luis Piñuel, and Francisco Tirado. IEEE Transactions on Parallel and Distributed Systems, vol. 19, no. 3, pp. 299-310, March 2008.)
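
As a reminder of what the filter-bank scheme computes, one 1D decomposition level convolves the signal with a low-pass and a high-pass analysis filter and downsamples by two. The CUDA kernel below is a minimal sketch of that step with assumed filter arrays in constant memory; the paper's 2D GPU implementations are organized quite differently and target the shader hardware of the time.

    #define MAX_FILTER 16
    __constant__ float c_lo[MAX_FILTER];   // low-pass analysis filter taps
    __constant__ float c_hi[MAX_FILTER];   // high-pass analysis filter taps

    // One 1D filter-bank decomposition level: each thread produces one
    // approximation and one detail coefficient (downsampling by two).
    __global__ void fbsLevel1D(const float* in, int n, int filterLen,
                               float* approx, float* detail)
    {
        int k = blockIdx.x * blockDim.x + threadIdx.x;
        if (k >= n / 2) return;

        float a = 0.0f, d = 0.0f;
        for (int t = 0; t < filterLen; ++t) {
            int idx = (2 * k + t) % n;          // periodic boundary extension
            a += c_lo[t] * in[idx];
            d += c_hi[t] * in[idx];
        }
        approx[k] = a;
        detail[k] = d;
    }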

Dynamic Warp Formation and Scheduling for Efficient GPU Control Flow

January 18th, 2008

Abstract: “Recent advances in graphics processing units (GPUs) have resulted in massively parallel hardware that is easily programmable and widely available in commodity desktop computer systems. GPUs typically use single-instruction, multiple-data (SIMD) pipelines to achieve high performance with minimal overhead incurred by control hardware. Scalar threads are grouped together into SIMD batches, sometimes referred to as warps. While SIMD is ideally suited for simple programs, recent GPUs include control flow instructions in the GPU instruction set architecture and programs using these instructions may experience reduced performance due to the way branch execution is supported by hardware. One approach is to add a stack to allow different SIMD processing elements to execute distinct program paths after a branch instruction. The occurrence of diverging branch outcomes for different processing elements significantly degrades performance. In this paper, we explore mechanisms for more efficient SIMD branch execution on GPUs. We show that a realistic hardware implementation that dynamically regroups threads into new warps on the fly following the occurrence of diverging branch outcomes improves performance by an average of 20.7% for an estimated area increase of 4.7%.” (Wilson W. L. Fung, Ivan Sham, George Yuan, and Tor M. Aamodt, Dynamic Warp Formation and Scheduling for Efficient GPU Control Flow, 40th IEEE/ACM International Symposium on Microarchitecture (MICRO-40), Chicago, IL, December 1-5, 2007.)
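
The performance problem being addressed is branch divergence within a warp: when threads of the same SIMD batch take different paths, current hardware serializes both paths with parts of the warp masked off. A tiny CUDA kernel is enough to show the pattern; the kernel is illustrative only, since the paper's contribution is a hardware scheduler, not software.

    // Data-dependent branch: threads of one warp may take different paths.
    // On a SIMD pipeline both paths execute with part of the warp masked off,
    // which is the loss that dynamic warp formation targets in hardware.
    __global__ void divergentKernel(const int* flags, float* data, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;

        if (flags[i] > 0) {
            // "Taken" path runs first while the other threads in the warp idle...
            data[i] = sqrtf(data[i]) * 2.0f;
        } else {
            // ...then the "not taken" path runs with the first group masked off.
            data[i] = data[i] * data[i] + 1.0f;
        }
    }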

Toward efficient GPU-accelerated N-body simulations

January 18th, 2008

Abstract: “N-body algorithms are applicable to a number of common problems in computational physics including gravitation, electrostatics, and fluid dynamics. Fast algorithms (those with better than O(N²) performance) exist, but have not been successfully implemented on GPU hardware for practical problems. In the present work, we introduce not only best-in-class performance for a multipole-accelerated treecode method, but a series of improvements that support implementation of this solver on highly data-parallel graphics processing units (GPUs). The greatly reduced computation times suggest that this problem is ideally suited for the current and next generations of single and cluster CPU-GPU architectures. We believe that this is an ideal method for practical computation of large-scale turbulent flows on future supercomputing hardware using parallel vortex particle methods.” (Mark J. Stock and Adrin Gharakhani, “Toward efficient GPU-accelerated N-body simulations,” in 46th AIAA Aerospace Sciences Meeting and Exhibit, AIAA 2008-608, January 2008, Reno, Nevada.)
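
For context, the baseline that fast treecodes improve on is the direct O(N²) interaction sum, which itself maps very well to the GPU. The kernel below is a minimal sketch of direct gravitational summation with softening, one body per thread and no shared-memory tiling; it is a generic reference point, not the treecode of the paper.

    // Direct O(N^2) gravitational acceleration, one target body per thread.
    // pos[j] = (x, y, z, mass). Softening eps2 avoids the r -> 0 singularity.
    __global__ void directNBody(const float4* pos, float3* acc, int n, float eps2)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;

        float4 pi = pos[i];
        float ax = 0.0f, ay = 0.0f, az = 0.0f;
        for (int j = 0; j < n; ++j) {
            float4 pj = pos[j];
            float dx = pj.x - pi.x, dy = pj.y - pi.y, dz = pj.z - pi.z;
            float r2 = dx * dx + dy * dy + dz * dz + eps2;
            float invR = rsqrtf(r2);
            float s = pj.w * invR * invR * invR;   // m_j / r^3
            ax += dx * s; ay += dy * s; az += dz * s;
        }
        acc[i] = make_float3(ax, ay, az);
    }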

Acceleration of a 3D Euler Solver Using Commodity Graphics Hardware

January 18th, 2008

Abstract:

The porting of two- and three-dimensional Euler solvers from a conventional CPU implementation to the novel target platform of the Graphics Processing Unit (GPU) is described. The motivation for such an effort is the impressive performance that GPUs offer: typically 10 times more floating point operations per second than a modern CPU, with over 100 processing cores and all at a very modest financial cost. Both codes were found to generate the same results on the GPU as the FORTRAN versions did on the CPU. The 2D solver ran up to 29 times quicker on the GPU than on the CPU; the 3D solver 16 times faster.

(Tobias Brandvik and Graham Pullan, Acceleration of a 3D Euler Solver Using Commodity Graphics Hardware. 46th AIAA Aerospace Sciences Meeting and Exhibit. January, 2008.)
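
The data parallelism being exploited here is that an explicit finite-volume Euler update applies the same arithmetic independently to every cell. The kernel below sketches that per-cell update pattern on a structured 2D grid in CUDA purely for illustration; the arrays, indexing, and single conserved variable are assumptions, and the paper's own solvers are organized around their specific numerical scheme.

    // One explicit update step on a structured 2D grid: every cell is updated
    // independently from the fluxes on its four faces. Illustrative of the
    // per-cell data parallelism only, not the paper's solver.
    __global__ void eulerUpdate(const float* u, const float* fluxX, const float* fluxY,
                                float* uNew, int nx, int ny, float dtOverVol)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;   // cell index in x
        int j = blockIdx.y * blockDim.y + threadIdx.y;   // cell index in y
        if (i >= nx || j >= ny) return;

        int c = j * nx + i;
        // Net flux = (flux through east/north faces) - (flux through west/south faces).
        float net = fluxX[j * (nx + 1) + i + 1] - fluxX[j * (nx + 1) + i]
                  + fluxY[(j + 1) * nx + i]     - fluxY[j * nx + i];
        uNew[c] = u[c] - dtOverVol * net;
    }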

Interactive Simulation of Large Scale Agent-Based Models (ABMs) on the GPU

January 16th, 2008

This article by D’Souza et al. explores large-scale Agent-Based Model (ABM) simulation on the GPU. Agent-based modeling is a technique which has become increasingly popular for simulating complex natural phenomena such as swarms and biological cell colonies. An ABM describes a dynamic system by representing it as a collection of communicating, concurrent objects. Current ABM simulation toolkits and algorithms use discrete event simulation techniques and are executed serially on a CPU, which limits the size of the models that can be handled efficiently. The paper presents a series of efficient data-parallel algorithms for simulating ABMs, including methods for handling environment updates, agent interactions and replication. Important techniques presented in this work include a novel stochastic allocator which enables parallel agent replication in O(1) average time and an iterative method to handle collisions among agents in the spatial domain. These techniques have been implemented on a modern GPU (GeForce 8800 GTX), resulting in a substantial performance increase. The authors believe that their system is the first completely GPU-based ABM simulation framework. (D’Souza, R., Lysenko, M., Rahmani, K., SugarScape on steroids: simulating over a million agents at interactive rates. Proceedings of the Agent 2007 conference, Chicago, IL, 2007.)
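
One way to picture a stochastic allocator is agents claiming random free slots in a fixed-size agent array, retrying on collision; if the array is kept sparsely populated, the expected number of retries is constant. The kernel below is a hedged reconstruction of that idea using atomicCAS, not the authors' code: names and data layout are assumptions, and the paper's own scheme targets hardware of its era, so its collision handling may differ.

    // Sketch of O(1)-expected-time parallel agent replication: each replicating
    // agent probes random slots of a fixed-size array and claims a free one.
    // Assumed layout: slots[k] == -1 means slot k is free.
    __device__ unsigned int lcg(unsigned int x) {    // simple per-thread PRNG
        return x * 1664525u + 1013904223u;
    }

    __global__ void replicateAgents(int* slots, int numSlots,
                                    const int* parents, int numParents,
                                    unsigned int seed)
    {
        int p = blockIdx.x * blockDim.x + threadIdx.x;
        if (p >= numParents) return;

        unsigned int state = lcg(seed ^ (unsigned int)p);
        for (int attempt = 0; attempt < 32; ++attempt) {        // bounded retries
            state = lcg(state);
            int k = (int)(state % (unsigned int)numSlots);      // random probe
            if (atomicCAS(&slots[k], -1, parents[p]) == -1) {   // claim free slot
                break;  // offspring placed; expected attempts are O(1) if sparse
            }
        }
    }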

Displacement Mapping on the GPU: State of the Art

January 14th, 2008

This paper reviews the latest developments of displacement mapping algorithms implemented on the vertex, geometry, and fragment shaders of graphics cards. Displacement mapping algorithms are classified as per-vertex and per-pixel methods. Per-pixel approaches are further categorized as safe algorithms that aim at correct solutions in all cases, unsafe techniques that may fail in extreme cases but are usually much faster than safe algorithms, and combined methods that exploit the robustness of safe and the speed of unsafe techniques. The paper discusses the possible roles of vertex, geometry, and fragment shaders in implementing these algorithms. Then the particular GPU-based bump, parallax, relief, sphere, horizon mapping, cone stepping, local ray tracing, pyramidal and view-dependent displacement mapping methods, as well as their numerous variations, are reviewed, along with implementation details of the shader programs. The paper presents these methods using uniform notation and also points out when different authors referred to similar concepts differently. In addition to basic displacement mapping, self-shadowing and silhouette processing are also reviewed. Based on the authors’ experience gained from re-implementing these methods, their performance and quality are compared, and the advantages and disadvantages are fairly presented. (Displacement Mapping on the GPU – State of the Art. László Szirmay-Kalos and Tamás Umenhoffer. Computer Graphics Forum, 2008.)
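
As a concrete example from the per-pixel family, basic parallax mapping shifts the texture coordinate along the tangent-space view direction in proportion to the sampled height. The device function below writes out that offset in CUDA-flavored C purely for illustration; in practice this is a few lines of a fragment shader, and the scale and bias parameters are scene-dependent choices.

    // Basic parallax mapping offset (illustrative, CUDA-flavored C; real
    // implementations live in a fragment shader). viewTS is the normalized
    // tangent-space view direction, h the height sampled at uv in [0,1],
    // scale and bias are user-chosen parameters.
    __device__ float2 parallaxOffset(float2 uv, float3 viewTS, float h,
                                     float scale, float bias)
    {
        float hsb = h * scale + bias;                       // scaled, biased height
        float2 shift = make_float2(viewTS.x / viewTS.z,     // project view ray onto
                                   viewTS.y / viewTS.z);    // the texture plane
        return make_float2(uv.x + hsb * shift.x,
                           uv.y + hsb * shift.y);
    }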

AstroGPU 2007 Presentations Posted

January 14th, 2008

Slides from the 2007 AstroGPU conference, held at the Institute for Advanced Study in Princeton last November, have been posted to the AstroGPU Website.
