August 31st, 2009
August 30th, 2009
GPUs are evolving into massively threaded vector machines. While the primary design goal of GPUs is efficient processing of the graphics stack, the massive parallelism available in these chips has lately opened up the possibility of carrying out general-purpose computing on them. This computing paradigm is called GPGPU. Although manually mapping regular data-parallel applications to GPUs has been explored quite extensively, making truly general-purpose computing feasible on GPUs requires answering a number of important questions. This half-day workshop aims to bring together researchers and practitioners in this rapidly evolving area to address issues in programming languages, programming models, compiler optimizations, and architecture that will make GPGPU a conducive execution environment for regular as well as irregular applications.
The topics of interest include, but are not limited to, the following.
- New GPU architecture features to enhance GPGPU
- Memory system innovations to enhance GPGPU
- Implications of GPGPU on memory consistency models
- Architecture support for single-chip CPU-GPU integration
- Programming models and language support for GPGPU
- Compiler optimizations for GPGPU
- Debugging/Performance visualization tools for GPGPU
- Efficient synchronization support for GPGPU
- Performance evaluation of irregular applications on GPUs
- Energy-efficiency studies on GPGPU
- GPGPU benchmarks
This half-day workshop will be held in conjunction with HPCA 2010 in Bangalore in January 2010.
August 23rd, 2009
Registration is now open for the Workshop on Non-Traditional Programming Models for High-Performance Computing (part of The Los Alamos Computer Science Symposium). The symposium and workshop will be held in Santa Fe, New Mexico on October 13-14, 2009.
The goals of the workshop are two-fold:
- To identify, specify, and capture in writing the problematic issues and barriers inherent in today’s scientific software construction process.
- To expose attendees to non-traditional programming models with the express purpose of igniting thought and discussion on the future of large-scale scientific programming.
The one-day workshop will consist of three sequential tracks, each led by a moderator/facilitator. The tracks will include a small number of speakers who will each present a short position paper outlining their thoughts on current problems and how specific non-traditional techniques may be applied to address these issues. Following the presentations, the moderator will lead a discussion with the audience on the ideas presented by the speakers. Both the position papers and the captured discussion will be published on the workshop web site. It is the organizers’ hope that the output of this workshop, perhaps refined, can act as input to a future meeting or workshop on this topic.
August 6th, 2009
Ke-Sen Huang has assembled a web page with links to all papers presented at these two important conferences, High Performance Graphics (a synthesis of the Graphics Hardware and Interactive Ray Tracing conferences) and SIGGRAPH. Both conferences had quite a number of GPGPU-related publications. Highlights from HPG include a paper on computing minimum spanning trees on the GPU, one on optimizing stream compaction on GPUs, and a study from NVIDIA on understanding the efficiency of GPUs and of wide-SIMD architectures in general on inherently imbalanced workloads like ray tracing (among others).
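For readers unfamiliar with the technique mentioned above, stream compaction filters a data stream so that only elements satisfying a predicate remain, packed contiguously; on a GPU it is typically built from a parallel prefix sum (scan) followed by a scatter. The sketch below shows the scan-plus-scatter structure serially (names are illustrative, not taken from the HPG paper):

```python
# Scan-based stream compaction: keep elements satisfying a predicate.
# On a GPU the scan and scatter each run in parallel; this serial
# sketch only illustrates the structure of the algorithm.

def exclusive_scan(flags):
    """Exclusive prefix sum of 0/1 flags -> output slot for each kept item."""
    out, total = [], 0
    for f in flags:
        out.append(total)
        total += f
    return out, total

def compact(data, pred):
    flags = [1 if pred(x) else 0 for x in data]      # 1 = keep, 0 = discard
    slots, count = exclusive_scan(flags)             # destination indices
    out = [None] * count
    for i, x in enumerate(data):
        if flags[i]:
            out[slots[i]] = x                        # scatter to packed position
    return out

print(compact([3, -1, 4, -1, 5, -9, 2], lambda x: x > 0))  # [3, 4, 5, 2]
```

The exclusive scan is the parallel bottleneck, which is why GPU work such as the HPG paper above focuses on optimizing it.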
Click here for SIGGRAPH papers, and here for HPG papers. Ke-Sen’s pages are also a good resource for other conferences in the field.
August 6th, 2009
The GPU Technology Conference will be held Sept 30-Oct 2, 2009 in San Jose, Calif. This event will focus on the latest breakthroughs that developers, engineers and researchers are achieving through the use of the GPU. Learn more at www.nvidia.com/gtc
Session abstracts and speakers can be found at www.nvidia.com/gtc under the Agenda page. Sessions announced to date include
- Advanced C for CUDA
- CUDA Fortran Programming for NVIDIA GPUs
- What Every CUDA Programmer Needs to Know about OpenGL
- Debugging tools for CUDA
- Using CUDA within Mathematica
- The TotalView Debugger for CUDA
- OPLib: A GPL Library of Elementary Pricing Functions in CUDA/OpenCL and OpenMP
- Par4All: Auto-Parallelizing C and Fortran for the CUDA Architecture
More sessions are to be announced.
July 16th, 2009
The course notes and supplementary material for “Beyond Programmable Shading”, a full-day course held at SIGGRAPH 2009 on August 6, are now available online.
This course is presented in two parts, Beyond Programmable Shading I and Beyond Programmable Shading II.
There are strong indications that the future of interactive graphics programming is a more flexible model than today’s OpenGL/Direct3D pipelines. Graphics developers need a basic understanding of how to combine emerging parallel programming techniques and more flexible graphics processors with the traditional interactive rendering pipeline. The first half of the course introduces the trends and directions in this emerging field. Topics include: parallel graphics architectures, parallel programming models for graphics, and game-developer investigations of the use of these new capabilities in future rendering engines.
The second half of the course has leaders from graphics hardware vendors, game development, and academic research present case studies that show how general parallel computation is being combined with the traditional graphics pipeline to boost image quality and spur new graphics algorithm innovation. Each case study discusses the mix of parallel programming constructs used, details of the graphics algorithm, and how the rendering pipeline and computation interact to achieve the technical goals.
June 25th, 2009
This 3-day workshop, to be held September 30, 2009 to October 2, 2009 in Lugano, Switzerland, will explore the use of GPUs, Cell BE processors, FPGAs, and special-purpose hardware for large-scale scientific computing.
Much as the 1990s shift in mainstream scientific software development from structured programming to object-oriented programming was the greatest change of the past three decades, we now stand at the beginning of an entirely new revolution in algorithm engineering.
We are now at a hardware/software technology inflection point driven by large-scale parallelism, including parallel operations on the contents of a single register, pipelining, memory pre-fetch, single-core simultaneous multithreading (“hyper-threading”), and superscalar instruction issue. New processor options have emerged, such as the Cell BE processor and GPUs, which are extremely aggressive in their use of parallelism while retaining general-purpose programmability. Other devices, such as FPGAs and special-purpose hardware, also build on on-chip parallelism but achieve their efficiency through extreme specialization for particular tasks.
The main objective is to demonstrate how some of the most challenging problems in the computational sciences have already been ported to modern non-conventional computing platforms. Speakers come from a wide computational community (physicists, chemists, engineers, computer scientists, and biologists) active in re-engineering algorithms for these new architectures.
June 15th, 2009
A tutorial on High Performance Computing with CUDA was held at the International Conference on Supercomputing in Hamburg on Monday, June 22nd 2009. The tutorial included an introduction to the CUDA programming model and C for CUDA, along with details on the CUDA Toolkit, Libraries, and optimization. The tutorial also provided an introduction to OpenCL, and finished with a case study on Computational Fluid Dynamics by Dr. Graham Pullan from Cambridge University. Slides from the tutorial are now posted here on GPGPU.org.
(Massimiliano Fatica, Timo Stich, and Graham Pullan. High Performance Computing with CUDA. Tutorial. International Conference on Supercomputing 2009. Hamburg, Germany.)
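For readers new to the programming model the tutorial introduces: in C for CUDA, a data-parallel loop becomes a kernel executed by one thread per element. SAXPY (y = a·x + y) is the canonical introductory example; the serial Python sketch below shows the computation and notes, in comments, how it maps to a CUDA thread grid (the sketch is ours, not taken from the tutorial slides):

```python
# SAXPY (y = a*x + y): every iteration is independent, so on a GPU each
# iteration maps to one thread. In C for CUDA the loop body becomes the
# kernel, and the loop index is recovered per thread as
#   i = blockIdx.x * blockDim.x + threadIdx.x
# This serial sketch shows only the computation itself.

def saxpy(a, x, y):
    return [a * xi + yi for xi, yi in zip(x, y)]

print(saxpy(2.0, [1.0, 2.0, 3.0], [10.0, 20.0, 30.0]))  # [12.0, 24.0, 36.0]
```

Because there are no dependencies between elements, the kernel needs no synchronization, which is what makes this the standard first example in CUDA teaching material.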
June 8th, 2009
This competition focuses on applications of genetic and evolutionary computation that can maximally exploit the parallelism provided by low-cost consumer graphics cards. The competition will award the best applications in terms of degree of parallelism obtained, overall speed-up, and programming style.
Submissions should be mailed to firstname.lastname@example.org no later than June 23, 2009. The final scores will be announced during GECCO. More information is available at the following sites.
June 4th, 2009
NVIDIA is offering a series of free GPU computing webinars covering a range of topics from a basic introduction to the CUDA architecture to advanced topics such as data structure optimization and multi-GPU usage.
There are several webinars scheduled already; attendees are encouraged to pick the date and time which best suits their schedule. Visit the NVIDIA GPU Computing Online Seminars webpage for webinar registration and further information. Additional webinars will be scheduled throughout the next few months so check for future alerts and visit the NVIDIA online seminar schedule page often.
The goal of this workshop, held at the National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign, was to help computational scientists in the geosciences, computational chemistry, and astronomy and astrophysics communities take full advantage of emerging high-performance computing resources based on computational accelerators, such as clusters with GPUs and Cell processors.
Slides are now available online and cover a wide range of topics including
- GPU and Cell programming tutorials
- GPU and Cell technology
- Accelerator programming, clusters, frameworks and building blocks such as sparse matrix-vector products, tree-based algorithms and in particular accelerator integration into large-scale established code bases
- Case studies and posters from the geosciences, computational chemistry, and astronomy/astrophysics, such as the simulation of earthquakes, molecular dynamics, solar radiation, tsunamis, weather prediction, climate modeling, and n-body systems, as well as Monte Carlo, Euler, Navier-Stokes, and Lattice-Boltzmann types of simulations
(National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign: Path to Petascale workshop presentations, organized by Wen-mei Hwu, Volodymyr Kindratenko, Robert Wilhelmson, Todd Martínez and Robert Brunner)