Abstract: “Recent advances in graphics processing units (GPUs) have resulted in massively parallel hardware that is easily programmable and widely available in commodity desktop computer systems. GPUs typically use single-instruction, multiple-data (SIMD) pipelines to achieve high performance with minimal overhead incurred by control hardware. Scalar threads are grouped together into SIMD batches, sometimes referred to as warps. While SIMD is ideally suited for simple programs, recent GPUs include control flow instructions in the GPU instruction set architecture, and programs using these instructions may experience reduced performance due to the way branch execution is supported in hardware. One approach is to add a stack that allows different SIMD processing elements to execute distinct program paths after a branch instruction. However, the occurrence of diverging branch outcomes for different processing elements significantly degrades performance. In this paper, we explore mechanisms for more efficient SIMD branch execution on GPUs. We show that a realistic hardware implementation that dynamically regroups threads into new warps on the fly following the occurrence of diverging branch outcomes improves performance by an average of 20.7% for an estimated area increase of 4.7%.” (Wilson W. L. Fung, Ivan Sham, George Yuan, and Tor M. Aamodt, “Dynamic Warp Formation and Scheduling for Efficient GPU Control Flow,” to appear in the 40th IEEE/ACM International Symposium on Microarchitecture, Chicago, IL, December 1-5, 2007.)
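To see why divergence hurts, consider a toy CPU model of a SIMD warp: when lanes disagree at a branch, the hardware must execute both sides under an active mask, one after the other. The sketch below (illustrative names only; this models the baseline serialization cost, not the paper's dynamic warp formation hardware) counts the serialized passes a four-lane warp needs for a simple if/else.

```cpp
#include <array>
#include <bitset>

// Toy model of SIMD branch divergence: a 4-lane "warp" executes both
// sides of a branch under an active mask, serializing divergent paths.
constexpr int LANES = 4;

// Executes "if (cond) y = a; else y = b;" per lane and returns the
// number of serialized passes required: 1 if all lanes agree, 2 if not.
int executeBranch(const std::array<bool, LANES>& cond,
                  std::array<int, LANES>& y, int a, int b) {
    std::bitset<LANES> taken, notTaken;
    for (int i = 0; i < LANES; ++i) (cond[i] ? taken : notTaken).set(i);

    int passes = 0;
    if (taken.any()) {      // pass for lanes where the condition is true
        ++passes;
        for (int i = 0; i < LANES; ++i) if (taken[i]) y[i] = a;
    }
    if (notTaken.any()) {   // pass for lanes where it is false
        ++passes;
        for (int i = 0; i < LANES; ++i) if (notTaken[i]) y[i] = b;
    }
    return passes;
}
```

A uniform warp costs one pass; a divergent one costs two, halving SIMD utilization. The paper's idea is to regroup threads from different warps that took the same path so each pass runs with a fuller mask.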
Abstract: “N-body algorithms are applicable to a number of common problems in computational physics including gravitation, electrostatics, and fluid dynamics. Fast algorithms (those with better than O(N²) performance) exist, but have not been successfully implemented on GPU hardware for practical problems. In the present work, we introduce not only best-in-class performance for a multipole-accelerated treecode method, but a series of improvements that support implementation of this solver on highly data-parallel graphics processing units (GPUs). The greatly reduced computation times suggest that this problem is ideally suited for the current and next generations of single and cluster CPU-GPU architectures. We believe that this is an ideal method for practical computation of large-scale turbulent flows on future supercomputing hardware using parallel vortex particle methods.” (Mark J. Stock and Adrin Gharakhani, “Toward efficient GPU-accelerated N-body simulations,” in 46th AIAA Aerospace Sciences Meeting and Exhibit, AIAA 2008-608, January 2008, Reno, Nevada.)
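For context, the O(N²) baseline that treecodes improve on is direct summation: every particle accumulates the influence of every other. The generic gravitational version below (a CPU illustration with a Plummer softening term, not the authors' GPU solver) makes the quadratic cost concrete.

```cpp
#include <cmath>
#include <vector>

// Direct O(N^2) gravitational summation: the baseline that
// multipole-accelerated treecodes replace with O(N log N) work.
struct Vec3 { double x, y, z; };

// Softening `eps` regularizes the force at zero separation.
std::vector<Vec3> directAccel(const std::vector<Vec3>& pos,
                              const std::vector<double>& mass,
                              double G = 1.0, double eps = 1e-3) {
    const size_t n = pos.size();
    std::vector<Vec3> acc(n, {0.0, 0.0, 0.0});
    for (size_t i = 0; i < n; ++i) {
        for (size_t j = 0; j < n; ++j) {   // all pairs: N*(N-1) interactions
            if (i == j) continue;
            double dx = pos[j].x - pos[i].x;
            double dy = pos[j].y - pos[i].y;
            double dz = pos[j].z - pos[i].z;
            double r2 = dx * dx + dy * dy + dz * dz + eps * eps;
            double inv_r3 = 1.0 / (r2 * std::sqrt(r2));
            acc[i].x += G * mass[j] * dx * inv_r3;
            acc[i].y += G * mass[j] * dy * inv_r3;
            acc[i].z += G * mass[j] * dz * inv_r3;
        }
    }
    return acc;
}
```

Direct summation maps trivially onto GPUs, which is why making the harder-to-parallelize fast algorithms competitive there, as this paper does, matters for large N.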
The porting of two- and three-dimensional Euler solvers from a conventional CPU implementation to the novel target platform of the Graphics Processing Unit (GPU) is described. The motivation for such an effort is the impressive performance that GPUs offer: typically 10 times more floating-point operations per second than a modern CPU, with over 100 processing cores, all at a very modest financial cost. Both codes were found to generate the same results on the GPU as the FORTRAN versions did on the CPU. The 2D solver ran up to 29 times faster on the GPU than on the CPU; the 3D solver, up to 16 times faster.
(Tobias Brandvik and Graham Pullan, “Acceleration of a 3D Euler Solver Using Commodity Graphics Hardware,” 46th AIAA Aerospace Sciences Meeting and Exhibit, January 2008.)
This article by D’Souza et al. explores large-scale agent-based model (ABM) simulation on the GPU. Agent-based modeling is a technique that has become increasingly popular for simulating complex natural phenomena such as swarms and biological cell colonies. An ABM describes a dynamic system by representing it as a collection of communicating, concurrent objects. Current ABM simulation toolkits and algorithms use discrete event simulation techniques and are executed serially on a CPU, which limits the size of the models that can be handled efficiently. The authors present a series of efficient data-parallel algorithms for simulating ABMs, including methods for handling environment updates, agent interactions, and replication. Important techniques presented in this work include a novel stochastic allocator, which enables parallel agent replication in O(1) average time, and an iterative method to handle collisions among agents in the spatial domain. These techniques have been implemented on a modern GPU (GeForce 8800 GTX), resulting in a substantial performance increase. The authors believe that their system is the first completely GPU-based ABM simulation framework. (D’Souza, R., Lysenko, M., and Rahmani, K., “SugarScape on Steroids: Simulating Over a Million Agents at Interactive Rates,” Proceedings of the Agent 2007 Conference, Chicago, IL, 2007.)
This paper reviews the latest developments in displacement mapping algorithms implemented on the vertex, geometry, and fragment shaders of graphics cards. Displacement mapping algorithms are classified as per-vertex and per-pixel methods. Per-pixel approaches are further categorized as safe algorithms that aim at correct solutions in all cases; unsafe techniques that may fail in extreme cases but are usually much faster than safe algorithms; and combined methods that exploit the robustness of safe and the speed of unsafe techniques. The paper discusses the possible roles of vertex, geometry, and fragment shaders in implementing these algorithms. It then reviews the particular GPU-based bump, parallax, relief, sphere, horizon mapping, cone stepping, local ray tracing, pyramidal, and view-dependent displacement mapping methods, as well as their numerous variations, providing implementation details of the shader programs. The paper presents these methods using uniform notation and also points out when different authors referred to similar concepts differently. In addition to basic displacement mapping, self-shadowing and silhouette processing are also reviewed. Based on the authors’ experience gained in re-implementing these methods, their performance and quality are compared, and the advantages and disadvantages are fairly presented. (László Szirmay-Kalos and Tamás Umenhoffer, “Displacement Mapping on the GPU – State of the Art,” Computer Graphics Forum, 2008.)
Slides from the 2007 AstroGPU conference, held at the Institute for Advanced Study in Princeton last November, have been posted to the AstroGPU Website.
In conjunction with The 2008 International Conference on Computational Science and Its Applications (ICCSA 2008), UCHPC ’08 is a technical session on UnConventional High Performance Computing. This session focuses on uses of hardware for HPC that was not originally intended for HPC. UCHPC invites papers on all aspects of unconventional HPC and its related areas, describing either proven and tested solutions or novel ideas and concepts. Topics for submissions include but are not limited to the following areas: cluster solutions; performance issues and scalability; innovative use of hardware and software; and HPC on GPUs, Cell BE, FPGAs, and other hardware. Please see the Call for Papers for more information.
ARCS 2008, the 21st Conference on Architecture of Computing Systems, is proud to announce a full day GPGPU tutorial, covering concepts, building blocks and case studies with a special focus on NVIDIA CUDA GPU Computing technology. ARCS is held in Dresden, Germany, on February 25-28, 2008. For more details, please visit The ARCS 2008 Website.
This Ph.D. thesis by Jansen describes a GPGPU development system that is embedded in the C++ programming language using ad-hoc polymorphism (i.e., operator overloading). While this technique is already known from the Sh library and the RapidMind Development Platform, GPU++ uses a more generic class interface and requires no knowledge of GPU programming at all. Furthermore, there is no separation between the different computation units of the CPU and GPU – the appropriate computation unit is automatically chosen by the GPU++ system using several optimization algorithms. (“GPU++: An Embedded GPU Development System for General-Purpose Computations,” Thomas Jansen, Ph.D. thesis, University of Munich, Germany.)
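The core trick behind embedded systems like GPU++, Sh, and RapidMind is that overloaded C++ operators do not compute immediately; they record an expression tree that the library can later compile into a GPU kernel. The toy below shows only that recording mechanism (it renders the tree as a string rather than generating shader code, and its class names are illustrative, not GPU++'s interface).

```cpp
#include <string>
#include <utility>

// Operator overloading as expression recording: "a + b * c" builds a
// tree describing the computation instead of evaluating it. An embedded
// GPU language would walk such a tree to emit shader/kernel code.
struct Expr {
    std::string repr;                       // recorded expression tree
    explicit Expr(std::string r) : repr(std::move(r)) {}
};

Expr operator+(const Expr& a, const Expr& b) {
    return Expr("(" + a.repr + " + " + b.repr + ")");
}
Expr operator*(const Expr& a, const Expr& b) {
    return Expr("(" + a.repr + " * " + b.repr + ")");
}
```

Because the user writes ordinary C++ arithmetic, no GPU knowledge is needed at the call site; the library sees the whole expression and can decide where and how to execute it, which is exactly the property the thesis exploits.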
Sure, you know GPUs, but have you heard of GPGPUs? The concept is simple: Use the massively parallel architecture of the graphics processor for general-purpose computing tasks. Because of that parallelism, ordinary calculations can be dramatically sped up. To create the Tesla, its powerful new entry into this market, NVIDIA has bundled multiple GPUs (without video connectors!) into either a board or a desk-side box that offers near-supercomputer levels of single-precision floating-point operations. The general-purpose GPU (thus the acronym GPGPU) is being used as a high-performance coprocessor for climate modeling, oil and gas exploration, and other applications, and it’s much cheaper than a supercomputer. The Tesla even comes complete with its own C compiler and tools.