Intel Ct Tera-Scale Whitepaper

November 5th, 2007

From the introduction: “Processor architecture is evolving towards more software-exposed parallelism through two features: more cores and wider SIMD ISA. At the same time, graphics processors (GPUs) are gradually adding more general purpose programming features. Several software development challenges arise from these trends. First, how do we mitigate the increased software development complexity that comes with exposing parallelism to the developer? Second, how do we provide portability across (increasing) core counts and SIMD ISA? Ct is a deterministic parallel programming model intended to leverage the best features of emerging general-purpose GPU (GPGPU) programming models while fully exploiting CPU flexibility. A key distinction of Ct is that it comprises a top-down design of a complete data parallel programming model, rather than being driven bottom-up by architectural limitations, a flaw in many GPGPU programming models.” (Flexible Parallel Programming for Terascale Architectures with Ct)

Toward Acceleration of RSA Using 3D Graphics Hardware

November 5th, 2007

This paper by Moss et al. shows an implementation of multi-precision arithmetic running on an NVIDIA 7800 GTX. The paper shows how to compute the modular exponentiation of large integers (a central operation in the RSA cryptosystem) using the restricted control flow available on a DX9 card. Both the background number theory used to express the problem in a form suitable for a streaming architecture and the program transformation techniques used to generate the GLSL code are described in detail. Surprisingly (given the unusual nature of the problem for GPGPU), the GPU is capable of outperforming the CPU by a factor of 2-3x over a large enough dataset, depending on the CPU implementation. Unfortunately, the immature state of the GLSL compiler, which allocates too many registers, prevents a further 2x improvement, and the large latency of setting the problem up means that over 800 exponentiations need to be performed to break even against the CPU. (Andrew Moss, Dan Page and Nigel Smart. Toward Acceleration of RSA Using 3D Graphics Hardware. In: LNCS 4887, pages 369–388. Springer, December 2007.)
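At the heart of RSA is modular exponentiation, typically computed with a square-and-multiply loop over the bits of the exponent. The sketch below is purely illustrative and is not the authors' code: it batches one exponentiation per CUDA thread using machine-word integers (moduli limited to 32 bits so products fit in 64 bits), whereas the paper works with multi-precision operands and GLSL on a DX9-class card.

// Illustrative sketch (not the authors' implementation): right-to-left
// square-and-multiply modular exponentiation, batched over many independent
// inputs, one per thread. Moduli must fit in 32 bits so that intermediate
// products fit in 64 bits; the paper itself handles multi-precision integers.
#include <cstdint>

__host__ __device__ uint64_t modexp(uint64_t base, uint64_t exp, uint64_t mod)
{
    uint64_t result = 1 % mod;
    base %= mod;
    while (exp > 0) {
        if (exp & 1)                  // multiply step when the current bit is set
            result = (result * base) % mod;
        base = (base * base) % mod;   // square step
        exp >>= 1;
    }
    return result;
}

// One exponentiation per thread; only profitable for large batches,
// mirroring the paper's observation about the break-even point.
__global__ void modexp_batch(const uint64_t* bases, const uint64_t* exps,
                             const uint64_t* mods, uint64_t* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = modexp(bases[i], exps[i], mods[i]);
}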

Graphics-based Acoustic Simulations

November 5th, 2007

Physically correct acoustic simulations of complex and dynamic environments remain a difficult and computationally expensive task. Here, graphics hardware is used to simulate sound wave propagation. Two different methods have been implemented: one uses ray tracing techniques, while the other is based on difference equations and waveguide meshes. Both techniques can be implemented efficiently within a real-time environment by concentrating on the similarities between sound and light wave propagation, and by exploiting the possibilities of using graphics hardware for non-graphics computations. Applications are discussed for real-time room acoustics and virtual reality, as well as for virtual HRIR measurements based on polygonal meshes.

(Ray Acoustics using Computer Graphics Technology. Niklas Röber, Ulrich Kaminski, and Maic Masuch. Proceedings of DAFx 2007.)
(Waveguide-based Room Acoustics through Graphics Hardware. Niklas Röber, Martin Spindler, and Maic Masuch. Proceedings of ICMC 2006.)
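As a rough illustration of the difference-equation approach (this is not the authors' implementation; the kernel and parameters below are assumptions made for the sketch), one explicit finite-difference time step of the 2D wave equation maps naturally to one GPU thread per grid node:

// Illustrative sketch: one explicit finite-difference time step of the 2D
// wave equation, p_next = 2*p - p_prev + c2 * laplacian(p), with
// c2 = (c*dt/dx)^2 <= 0.5 for stability. One thread updates one grid node;
// boundary nodes are simply left untouched (held at zero here).
__global__ void fdtd_step(const float* p, const float* p_prev, float* p_next,
                          int nx, int ny, float c2)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x <= 0 || y <= 0 || x >= nx - 1 || y >= ny - 1) return;

    int i = y * nx + x;
    float lap = p[i - 1] + p[i + 1] + p[i - nx] + p[i + nx] - 4.0f * p[i];
    p_next[i] = 2.0f * p[i] - p_prev[i] + c2 * lap;
}

The host rotates the p_prev, p, and p_next buffers after each step; the ray acoustics method of the first paper takes an entirely different, geometric approach.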

GPGPU Workshop October 4th

September 18th, 2007

The Workshop on General Purpose Processing on Graphics Processing Units will be held October 4, 2007 at Northeastern University, Boston, MA. This meeting will include a keynote talk by Prof. Wen-mei Hwu on “GP Computing: Hardware, Architecture Tools and Education”.

The program will include three invited talks from NVIDIA, ATI, and IBM Research, demos by GPU hardware and software vendors, and 12 refereed papers. Registration is free, but attendees must register for the Workshop on GPGPU at: http://censsis-db3.ece.neu.edu/RICC2007/regist.aspx

Commercial companies interested in presenting at the Workshop on GPGPU should contact the organizing committee at gpgpu@ece.neu.edu.

NVIDIA and Addison-Wesley Release GPU Gems 3 Book

September 10th, 2007

GPU Gems 3, the third volume of the best-selling GPU Gems series, provides a snapshot of today’s latest Graphics Processing Unit (GPU) programming techniques. The programmability of modern GPUs allows developers not only to distinguish themselves from one another but also to use this awesome processing power for non-graphics applications such as physics simulation, financial analysis, and even virus detection, particularly with the CUDA architecture. Graphics remains the leading application for GPUs, and readers will find that the latest algorithms create ultra-realistic characters, better lighting, and post-rendering compositing effects. This third volume is certain to appeal not just to the many fans of the first two, but to a whole new group of programmers as well. (GPU Gems 3 Page at Addison-Wesley)

Genome Technology Article about GPGPU: “Not Just for Kids Anymore”

September 10th, 2007

This article at Genome Technology gives a brief overview of GPGPU, with a focus on biological information processing using NVIDIA CUDA Technology. The article discusses the results from UIUC’s NAMD / VMD project and neurological simulation company Evolved Machines.

Quantum Monte Carlo on GPUs

September 10th, 2007

This paper by Anderson et al. at Caltech describes a method for accelerating Quantum Monte Carlo (QMC) on GPUs. QMC is among the most accurate (and most expensive) methods in the quantum chemistry zoo. The work primarily investigates tricks available to this algorithm for speeding up matrix multiplication: because QMC is a statistical algorithm, the authors study the performance gains available when many matrices are multiplied simultaneously. Additionally, the paper explores the Kahan summation formula to improve the accuracy of GPU matrix multiplication. (Quantum Monte Carlo on Graphical Processing Units. Amos G. Anderson, William A. Goddard III, Peter Schröder. Computer Physics Communications)
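Kahan summation itself is straightforward to express. The sketch below is a generic compensated-summation routine, not the authors' GPU matrix-multiplication code, but it shows the accumulation trick the paper relies on for accuracy:

// Kahan (compensated) summation: carries a running correction term so that
// low-order bits lost in each addition are fed back into the next one.
// Usable on host or device; note that aggressive compiler optimizations
// (e.g. fast-math) can eliminate the compensation and may need to be disabled.
__host__ __device__ float kahan_sum(const float* x, int n)
{
    float sum = 0.0f;
    float c   = 0.0f;              // running compensation for lost low-order bits
    for (int i = 0; i < n; ++i) {
        float y = x[i] - c;        // apply the correction to the next term
        float t = sum + y;         // low-order bits of y may be lost here
        c = (t - sum) - y;         // recover exactly what was lost
        sum = t;
    }
    return sum;
}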

gDEBugger Linux – Public Beta Available!

September 4th, 2007

gDEBugger is an OpenGL debugger and profiler. It provides the application behavior information a developer needs to find bugs and to optimize application performance. gDEBugger Linux brings all of gDEBugger’s debugging and profiling abilities to the world of Linux OpenGL developers. gDEBugger Linux is now available as a final beta version. This version includes all of gDEBugger’s features and supports the Linux i386 and x86_64 architectures. The official gDEBugger Linux version will be released shortly after Graphic Remedy receives feedback from the field and fixes any reported issues. (http://www.gremedy.com/gDEBuggerLinux.php)

Graphic processors to speed-up simulations for the design of high performance solar receptors

September 4th, 2007

This paper by Collange et al. at Université de Perpignan, France, describes a prototype to be integrated into simulation codes that estimate temperature, velocity, and pressure for the design of next-generation solar receptors. Such codes delegate the computation of radiative heat transfer to GPUs. The authors use Monte Carlo line-by-line ray tracing through finite volumes, which amounts to data-parallel arithmetic transformations on large data structures. Performance on two recent graphics cards (an NVIDIA 7800 GTX and an ATI RX1800XL) shows speedups higher than 400 compared to CPU implementations, while leaving most of the CPU’s computing resources available. Because some questions remain about the accuracy of the operators implemented on GPUs, the authors begin the report with a survey and some contributed tests of the various floating-point units available on GPUs. (Graphic processors to speed-up simulations for the design of high performance solar receptors. S. Collange, M. Daumas, D. Defour. Proceedings of the IEEE 18th International Conference on Application-specific Systems, Architectures and Processors.)
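As a small illustration of why such an accuracy survey matters (this example is not from the paper), a long single-precision reduction of the kind produced by data-parallel Monte Carlo codes drifts noticeably from a double-precision reference:

// Host-side sketch: sum many terms that are not exactly representable in
// binary floating point, once in single precision and once in double
// precision, and report the accumulated relative error of the float result.
#include <cstdio>

int main()
{
    const int n = 1 << 24;           // 16M terms
    float  sum_f = 0.0f;
    double sum_d = 0.0;
    for (int i = 0; i < n; ++i) {
        float term = 1.0f / 3.0f;    // not exactly representable
        sum_f += term;               // rounding error accumulates here
        sum_d += (double)term;       // reference accumulation
    }
    printf("float sum = %.7g, double reference = %.15g, relative error = %.3g\n",
           sum_f, sum_d, (sum_d - (double)sum_f) / sum_d);
    return 0;
}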

CUDA Tutorial at Supercomputing 2007

August 22nd, 2007

On Sunday, November 11, 2007, at SC07 in Reno, NVIDIA will host a full-day tutorial on CUDA. In this tutorial, NVIDIA engineers will partner with academic and industrial researchers to present CUDA and discuss its advanced use in science and engineering domains. The morning session will introduce CUDA programming and the execution and memory models at its heart, motivate the use of CUDA with many brief examples from different HPC domains, and discuss fundamental algorithmic building blocks in CUDA. The afternoon session will cover advanced issues such as optimization and “tips & tricks”, and will include real-world case studies from domain scientists using CUDA (the VMD and NAMD molecular dynamics codes, and oil and gas applications).
Follow this link for more information: http://sc07.supercomputing.org/schedule/event_detail.php?evid=11034.
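For readers new to CUDA, the following minimal SAXPY kernel (not taken from the tutorial materials) gives the flavor of the execution and memory models the morning session introduces: a grid of threads, each processing one element of arrays held in GPU global memory.

// Minimal CUDA example: each thread computes one element of y = a*x + y.
__global__ void saxpy(int n, float a, const float* x, float* y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // global thread index
    if (i < n)
        y[i] = a * x[i] + y[i];
}

// Host-side launch: one thread per element, 256 threads per block.
// saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, d_x, d_y);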
