This paper by Robert et al. at the University of Bern, Switzerland, describes the object intersection buffer (OIB), a GPU-based visibility preprocessing algorithm for accelerating ray tracing. Building on this approach, the authors propose a hybrid ray tracer that distributes ray tracing work across the GPU and CPU in parallel. (Hybrid Ray Tracing – Ray Tracing Using GPU-Accelerated Image-Space Methods. Philippe C.D. Robert, Severin Schoepke, and Hanspeter Bieri. Proceedings of GRAPP 2007.)
Radio wave propagation predictions are of great interest for cellular radio networks. Ray tracing is an established technique for wave propagation; however, such approaches must be extended to include diffraction, a predominant effect at common mobile radio frequencies. The authors demonstrate how to exploit the GPU to accelerate wave propagation predictions by orders of magnitude, making them available at interactive frame rates. The paper presents a GPU implementation of their diffraction technique, which can easily be extended to simulate the diffraction of water waves by obstacles in complex three-dimensional scenarios in a physically correct manner. (Fast Edge-Diffraction-Based Radio Wave Propagation Model for Graphics Hardware. Tobias Rick, Rudolf Mathar. Proceedings of ITG INICA 2007.)
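For readers unfamiliar with the physics being accelerated: the paper's GPU pipeline is not reproduced here, but the quantity an edge-diffraction predictor computes can be sketched on the CPU with the standard single knife-edge approximation from ITU-R P.526. The geometry and frequency in the usage line below are hypothetical examples, and this sketch is not the authors' image-space method:

```python
import math

def knife_edge_loss_db(h, d1, d2, wavelength):
    """Single knife-edge diffraction loss in dB (ITU-R P.526 approximation).

    h:  height of the obstructing edge above the direct transmitter-
        receiver line (metres; negative if the edge is below the line)
    d1, d2: distances from transmitter and receiver to the edge (metres)
    """
    # Fresnel-Kirchhoff diffraction parameter.
    v = h * math.sqrt(2.0 * (d1 + d2) / (wavelength * d1 * d2))
    if v <= -0.78:
        return 0.0  # edge well below the path: negligible diffraction loss
    return 6.9 + 20.0 * math.log10(math.sqrt((v - 0.1) ** 2 + 1.0) + v - 0.1)

# Hypothetical 900 MHz link (wavelength ~0.333 m), edge 20 m above the path.
loss = knife_edge_loss_db(h=20.0, d1=500.0, d2=500.0, wavelength=0.333)
```

At grazing incidence (h = 0) the formula gives the classic ~6 dB loss; a GPU predictor evaluates such terms for every receiver point in the scene, which is what makes the orders-of-magnitude speedup worthwhile.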
This work approaches the fundamental problem of accelerating FFT computation with GPUs in order to apply it to adaptive optics, the key to obtaining maximum performance from the planned ground-based Extremely Large Telescopes. A method to efficiently adapt the FFT to the underlying architecture of GPUs is given. The authors derive a novel FFT method that alternates base-2 and base-4 decompositions of the two-dimensional domain, elaborating an unusual Pease-style 8-data “butterfly” to take full advantage of the Multiple Render Target extension. (Modal Fourier wavefront reconstruction using GPUs. J.G. Marichal-Hernandez, J.M. Rodriguez-Ramos, F. Rosa. La Laguna University. To appear in Journal of Electronic Imaging.)
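The mixed-radix idea can be illustrated in scalar CPU form (not the paper's GPU/MRT formulation) by a recursive decimation-in-time FFT that applies a radix-4 step whenever the length is divisible by four and falls back to radix-2 otherwise. This is a simplified sketch of the base-2/base-4 decomposition, not the authors' Pease-style kernel:

```python
import cmath

def fft(x):
    """Recursive DIT FFT mixing radix-4 and radix-2 stages."""
    n = len(x)
    if n == 1:
        return list(x)
    if n % 4 == 0:
        # Radix-4 step: four interleaved sub-transforms of length n/4.
        s = [fft(x[q::4]) for q in range(4)]
        out = [0j] * n
        for k in range(n // 4):
            w1 = cmath.exp(-2j * cmath.pi * k / n)
            t = [s[q][k] * w1 ** q for q in range(4)]  # twiddled inputs
            out[k]              = t[0] + t[1] + t[2] + t[3]
            out[k + n // 4]     = t[0] - 1j * t[1] - t[2] + 1j * t[3]
            out[k + n // 2]     = t[0] - t[1] + t[2] - t[3]
            out[k + 3 * n // 4] = t[0] + 1j * t[1] - t[2] - 1j * t[3]
        return out
    # Radix-2 step for a remaining factor of two.
    even, odd = fft(x[0::2]), fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + w
        out[k + n // 2] = even[k] - w
    return out
```

A radix-4 butterfly combines four values per step instead of two, halving the number of passes over the data; on a GPU each pass is a render, so fewer, wider butterflies map well onto multiple render targets.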
GPUCV is a free GPU-accelerated library for image processing and computer vision. It offers an Intel OpenCV-like programming interface for easily porting existing applications. A one-page description is available. A longer presentation and discussion was published at IEEE ICME 2006. (J.-P. Farrugia, P. Horain, E. Guehenneux, Y. Allusse, “GPUCV: A framework for image processing acceleration with graphics processors”, CDROM proc. of the IEEE International Conference on Multimedia & Expo, July 9-12, 2006, Toronto, Ontario, Canada.)
Neoptica has recently posted a whitepaper, “Programmable Graphics – The Future of Interactive Rendering.” It introduces the coming era of programmable graphics, in which developers implement rendering algorithms using combinations of parallel CPU and GPU tasks executing cooperatively on heterogeneous multi-core architectures of the near future. By embracing both task- and data-parallel computation, this approach frees developers to use the most efficient parallel computation style for their algorithms, and makes it possible to define custom graphics pipelines built using complex algorithms and dynamic data structures. The paper argues that future graphics applications that leverage the tightly coupled capabilities of forthcoming CPUs and GPUs will generate far richer and more realistic imagery, use computational resources more efficiently, and scale to large numbers of CPU and GPU cores.
This survey paper by D. Göddeke and R. Strzodka compares native double-precision solvers for linear systems of equations, as they typically arise in finite element discretizations, with emulated- and mixed-precision schemes. Such schemes are particularly suitable for coupled hardware configurations such as GPUs and FPGAs, which serve as co-processors to the general-purpose CPU. The results demonstrate that
- accuracy is preserved even for very ill-conditioned systems,
- significant speedups can be achieved (time aspect, GPUs) and
- area requirements are reduced (space aspect, FPGA).
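The mixed-precision scheme behind these results can be sketched in a few lines: solve in low precision on the co-processor, then compute residuals and apply corrections in high precision on the CPU. The sketch below emulates float32 arithmetic in software to stand in for the low-precision device; it illustrates the general iterative-refinement pattern, not the authors' finite element solvers:

```python
import struct

def f32(x):
    """Round a Python float (double) to IEEE single precision."""
    return struct.unpack('f', struct.pack('f', x))[0]

def solve_f32(a, b):
    """Gaussian elimination with partial pivoting, every intermediate
    rounded to float32 (stands in for the GPU/FPGA co-processor solve)."""
    n = len(b)
    a = [[f32(v) for v in row] for row in a]
    b = [f32(v) for v in b]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(a[r][i]))
        a[i], a[p] = a[p], a[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, n):
            m = f32(a[r][i] / a[i][i])
            for c in range(i, n):
                a[r][c] = f32(a[r][c] - f32(m * a[i][c]))
            b[r] = f32(b[r] - f32(m * b[i]))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = b[i]
        for c in range(i + 1, n):
            s = f32(s - f32(a[i][c] * x[c]))
        x[i] = f32(s / a[i][i])
    return x

def refine(a, b, iters=10):
    """Mixed-precision iterative refinement: low-precision solves,
    double-precision residuals r = b - A x and updates x += d."""
    n = len(b)
    x = [float(v) for v in solve_f32(a, b)]
    for _ in range(iters):
        r = [b[i] - sum(a[i][j] * x[j] for j in range(n)) for i in range(n)]
        d = solve_f32(a, r)
        x = [x[i] + d[i] for i in range(n)]
    return x
```

Because only the cheap residual and update run in double precision, most of the arithmetic stays on the fast low-precision hardware, yet the refined solution reaches double-precision accuracy for reasonably conditioned systems.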
Data-parallel programming models are emerging as an extremely attractive approach to parallel programming, driven by several factors. Through deterministic semantics and constrained synchronization mechanisms, they provide race-free parallel-programming semantics. Furthermore, data-parallel programming models free programmers from reasoning about the details of the underlying hardware and software mechanisms for achieving parallel execution and facilitate effective compilation. Finally, efforts in the GPGPU movement and elsewhere have matured implementation technologies for streaming and data-parallel programming models to the point where high performance can be reliably achieved.
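As a toy illustration of the race-free semantics described above, a data-parallel map applies a pure per-element kernel across a collection; the runtime is free to schedule elements in any order on any core, yet with no shared mutable state the result always matches the sequential computation. A minimal sketch (a thread pool stands in for the many-core back end, and is not tied to any particular platform named below):

```python
from concurrent.futures import ThreadPoolExecutor

def brighten(pixel):
    """A pure, side-effect-free per-element kernel."""
    return min(pixel + 64, 255)

pixels = list(range(0, 256, 32))

# Data-parallel map: elements may be processed concurrently, but because
# brighten touches no shared state, the outcome is deterministic and
# race-free by construction.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(brighten, pixels))

assert parallel == [brighten(p) for p in pixels]  # identical to sequential
```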
This workshop gathers commercial and academic researchers, vendors, and users of data-parallel programming platforms to discuss implementation experience for a broad range of many-core architectures and to speculate on future programming-model directions. Participating institutions include AMD, Electronic Arts, Intel, Microsoft, NVIDIA, PeakStream, RapidMind, and The University of New South Wales. (Link to Call for Participation, Data-Parallel Programming Models for Many-Core Architectures)
With their upcoming publication in Computer Graphics Forum, Owens et al. have revised their comprehensive 2005 survey of the history and state of the art in GPGPU. It describes, summarizes and analyzes the latest research in mapping general-purpose computation to graphics hardware. The report begins with the technical motivations that underlie general-purpose computation on graphics processors (GPGPU) and describes the hardware and software developments that have led to the recent interest in this field. The authors describe the techniques used in mapping general-purpose computation to graphics hardware, and survey and categorize the latest developments in general-purpose application development on graphics hardware. (A Survey of General-Purpose Computation on Graphics Hardware. John D. Owens, David Luebke, Naga Govindaraju, Mark Harris, Jens Krüger, Aaron E. Lefohn, Timothy J. Purcell, in “Computer Graphics Forum”, Volume 26, number 1, pp 80-113. 2007. To appear.)
A beta of NVIDIA’s CUDA development environment, NVIDIA’s new technology for computing with GPUs, is now posted on developer.nvidia.com. This beta release of CUDA contains a C compiler for the GPU and an SDK with examples to get you started coding for the GPU. From the press release:
GPU Computing with CUDA is a new approach to computing where hundreds of on-chip processors simultaneously communicate and cooperate to solve complex computing problems. Applications that require mathematically intensive computing on large amounts of data are ideal targets for GPU Computing. NVIDIA’s CUDA technology is available in GeForce 8800 graphics products and future NVIDIA Quadro Professional Graphics solutions based on 8-series (G8X) GPUs. Developers are invited to download the beta version of the CUDA Software Developers Kit (SDK) and C compiler for Windows XP and Linux (RedHat Release 4 Update 3) from the NVIDIA Developer Web site at developer.nvidia.com/cuda. GPU Computing Forums for news, discussion and programming tips are also available at forums.nvidia.com.
The proceedings of the workshop “General-Purpose GPU Computing: Practice And Experience” held at SuperComputing 2006 are now posted. The proceedings include PDFs of the workshop presentations and posters. (http://www.gpgpu.org/sc2006/workshop/)