Webinar: How to Improve Performance using the CUDA Memory Model and Features of the Kepler Architecture

December 20th, 2013

This webinar explores the GPU memory model and the memory enhancements available in the Kepler architecture, and how they affect performance optimization. It begins with an essential overview of GPU architecture and thread cooperation before focusing on the different memory types available on the GPU. We define shared, constant, and global memory and discuss the best locations to store your application data for optimal performance. The shuffle instruction, new shared memory configurations, and the Read-Only Data Cache of the Kepler architecture are introduced, and optimization techniques are discussed. Click here to view the webinar recording.
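As a taste of the shuffle instruction covered in the webinar, below is a minimal sketch of a warp-level sum reduction that exchanges register values directly between threads in a warp, avoiding a round trip through shared memory. The kernel name, launch configuration, and test data are illustrative only; on Kepler-era toolkits the intrinsic was `__shfl_down`, while current CUDA toolkits require the `_sync` variant shown here.

```cuda
// Minimal sketch: warp-level sum reduction with the shuffle instruction.
// Names and launch parameters are illustrative, not from the webinar.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void warpReduceSum(const float *in, float *out, int n)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    float val = (idx < n) ? in[idx] : 0.0f;

    // Each step pulls a value from a lane `offset` positions higher
    // in the same warp: 16, 8, 4, 2, 1.
    for (int offset = 16; offset > 0; offset >>= 1)
        val += __shfl_down_sync(0xffffffff, val, offset);

    // Lane 0 of each warp now holds that warp's partial sum.
    if ((threadIdx.x & 31) == 0)
        atomicAdd(out, val);
}

int main()
{
    const int n = 1024;
    float h_in[n];
    for (int i = 0; i < n; ++i) h_in[i] = 1.0f;

    float *d_in, *d_out;
    cudaMalloc(&d_in, n * sizeof(float));
    cudaMalloc(&d_out, sizeof(float));
    cudaMemcpy(d_in, h_in, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemset(d_out, 0, sizeof(float));

    warpReduceSum<<<(n + 255) / 256, 256>>>(d_in, d_out, n);

    float h_out = 0.0f;
    cudaMemcpy(&h_out, d_out, sizeof(float), cudaMemcpyDeviceToHost);
    printf("sum = %f (expected %d)\n", h_out, n);

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```

Because the partial sums never leave registers until the final `atomicAdd`, this pattern reduces shared memory traffic and synchronization, which is one of the performance benefits of the Kepler shuffle instruction discussed in the webinar.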
