A Memory Model for Scientific Algorithms on Graphics Processors

October 4th, 2006

This Supercomputing 2006 paper by Govindaraju et al. presents a memory model to analyze and improve the performance of scientific algorithms on graphics processing units (GPUs). The memory model is based on texturing hardware, which uses a 2D block-based array representation to perform the underlying computations. The model incorporates key characteristics of GPU architectures, such as smaller cache sizes and 2D block representations, and applies the 3C's model to analyze cache misses. The paper also presents techniques to improve the performance of nested loops on GPUs. To demonstrate the effectiveness of the model, the authors evaluate it on three memory-intensive scientific applications: sorting, Fast Fourier Transform, and dense matrix multiplication. Their cache-efficient algorithms for these applications achieve memory throughput of 30-50 GB/s on an NVIDIA 7900 GTX GPU, and compared with prior GPU-based and CPU-based implementations on high-end processors, they deliver a 2x-5x performance improvement. (A Memory Model for Scientific Algorithms on Graphics Processors)
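
The blocking idea behind the paper's nested-loop optimizations can be illustrated with an ordinary tiled matrix multiplication. The sketch below is only a CPU-side analogy, assuming row-major n x n matrices and a hypothetical block size B chosen to fit the cache; the paper's actual implementations map such tiles onto the GPU's 2D texture blocks rather than CPU cache lines.

```c
#include <stddef.h>

/* Illustrative loop blocking (tiling), assuming row-major n x n matrices
 * and a tunable block size B. This is a CPU-side sketch of the 2D blocking
 * idea, not the paper's texture-based GPU implementation. */
#define B 32

void matmul_blocked(size_t n, const double *a, const double *b, double *c)
{
    /* Initialize the output matrix. */
    for (size_t i = 0; i < n; i++)
        for (size_t j = 0; j < n; j++)
            c[i * n + j] = 0.0;

    /* Iterate over B x B tiles so that each tile of a, b, and c can stay
     * resident in cache while it is reused, reducing capacity misses. */
    for (size_t ii = 0; ii < n; ii += B)
        for (size_t kk = 0; kk < n; kk += B)
            for (size_t jj = 0; jj < n; jj += B)
                for (size_t i = ii; i < ii + B && i < n; i++)
                    for (size_t k = kk; k < kk + B && k < n; k++) {
                        double aik = a[i * n + k];
                        for (size_t j = jj; j < jj + B && j < n; j++)
                            c[i * n + j] += aik * b[k * n + j];
                    }
}
```

The same reasoning carries over to the paper's setting: choosing tile dimensions that match the hardware's 2D block layout keeps accesses within a cached block, which is what the 3C's analysis of compulsory, capacity, and conflict misses is used to tune.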