Welcome to the GPGPU.org Developer Resources page. Here you will find a wealth of information on GPGPU programming, including tutorials, utility code, examples, useful links and much more. All of the code here is open source.
GPGPU Programming
The GPGPU programming landscape has evolved rapidly over the past several years, and there are now several approaches to programming GPUs. Recently, convergence towards standardization has begun. Readers are encouraged to browse the available material and decide for themselves which approach to use.
NVIDIA CUDA, AMD Stream and OpenCL
GPU computing really took off when CUDA and Stream arrived in late 2006. These programming interfaces and languages, designed by the GPU vendors to map closely onto their hardware, constitute a major step towards a usable, scalable and future-proof programming model. Learn more about AMD Stream. Learn more about NVIDIA CUDA.
The Open Computing Language (OpenCL) is designed to provide a unified API for heterogeneous computing on several kinds of parallel devices, including GPUs, multicore CPUs and the Cell Broadband Engine. Learn more about OpenCL.
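To give a feel for the data-parallel model these interfaces expose, here is a minimal CUDA sketch of element-wise vector addition, with one GPU thread per element. The kernel name, array size and block size below are illustrative choices, not taken from any of the linked material.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Illustrative kernel: each thread computes one element of c = a + b.
__global__ void vecAdd(const float* a, const float* b, float* c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;                  // arbitrary example size: 1M elements
    const size_t bytes = n * sizeof(float);

    // Host allocations and initialization.
    float *ha = (float*)malloc(bytes);
    float *hb = (float*)malloc(bytes);
    float *hc = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device allocations and host-to-device copies.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(da, db, dc, n);

    // Copy the result back and spot-check it.
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f (expected 3.0)\n", hc[0]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```

OpenCL expresses essentially the same idea, with the kernel written in OpenCL C and launched through a portable host API rather than vendor-specific syntax.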
Sh and Brook for GPUs
High-level languages and programming environments for GPUs, in particular BrookGPU from Stanford University and Sh from the University of Waterloo, were precursors to today's solutions such as CUDA and OpenCL. Sh has been commercialized by its developers as RapidMind, and BrookGPU served as the basis for AMD's Stream. Learn more about BrookGPU and Sh.
Legacy GPGPU: Graphics APIs
In the early days, GPGPU programming was a bit of a hack. Algorithms had to be cast in terms of graphics APIs such as OpenGL and Direct3D; the underlying hardware was not fully exposed or documented; and programming was often unproductive. Despite all this, a great deal of ground-breaking research was accomplished that helped pave the way to GPU computing as it is today. Despite their legacy status, these older tutorials and sample applications still have some value. Learn more about legacy GPGPU.
Conference Tutorials
Over the years, quite a few GPGPU tutorial sessions have been hosted at various conferences.
- PPAM 2011 GPU Tutorial
- Supercomputing 2009 CUDA Tutorial
- PPAM 2009 GPU and OpenCL Tutorial
- ISC 2009 CUDA Tutorial
- UNSW 2009: Workshop on High Performance Computing with NVIDIA CUDA
- Supercomputing 2008 CUDA Tutorial
- SIGGRAPH 2008 Beyond Programmable Shading Tutorial
- ASPLOS 2008 CUDA Tutorial
- ARCS 2008 GPGPU Tutorial
- Supercomputing 2007 CUDA Tutorial
- SIGGRAPH 2007 GPGPU Tutorial
- Supercomputing 2006 GPGPU Tutorial
- ICCS 2006 GPGPU Workshop and Tutorial
- SIGGRAPH 2005 GPGPU Tutorial
- Visualization 2005 GPGPU Tutorial
- SIGGRAPH 2004 GPGPU Tutorial
- Visualization 2004 GPGPU Tutorial
Other recommended tutorials include:
- SIGGRAPH 2009 Beyond Programmable Shading Tutorial
- SIGGRAPH ASIA 2008 Parallel Computing for Graphics: Beyond Programmable Shading Tutorial
Recommended Reading Material
GPGPU.org does not currently provide a full paper archive or bibliography; instead, the history of articles posted on the news page is a good entry point for detailed searches.
For an excellent overview, the survey articles by John Owens et al. are highly recommended:
- John D. Owens, Mike Houston, David Luebke, Simon Green, John E. Stone, and James C. Phillips: GPU Computing, Proceedings of the IEEE, 96(5):879–899, May 2008.
- John D. Owens, David Luebke, Naga Govindaraju, Mark Harris, Jens Krüger, Aaron E. Lefohn, and Tim Purcell: A Survey of General-Purpose Computation on Graphics Hardware, Computer Graphics Forum, 26(1):80–113, March 2007.
For details on GPU hardware and the underlying programming models, the following articles are relevant:
- Kayvon Fatahalian and Mike Houston: A closer look at GPUs, Communications of the ACM, 51(10), October 2008.
- Erik Lindholm, John Nickolls, Stuart Oberman, and John Montrym: NVIDIA Tesla: A unified graphics and computing architecture, IEEE Micro, 28(2):39–55, March 2008.
Please refer to the subpages linked at the top of this page for more specific material.