Radiance Cache Splatting: A GPU-Friendly Global Illumination Algorithm

June 14th, 2005

The irradiance caching algorithm is commonly used for fast global illumination since it provides high-quality rendering in reasonable time. However, the algorithm relies on a central, permanently modified spatial data structure and on complex algorithms, which prevents it from being easily implemented on GPUs. This paper proposes a novel approach to global illumination using irradiance and radiance caches: Radiance Cache Splatting. The method directly meets the processing constraints of graphics hardware since it avoids the need for complex data structures and algorithms. Moreover, the rendering quality remains identical to classical irradiance and radiance caching. This work will be presented at the Eurographics Symposium on Rendering 2005 and during the SIGGRAPH 2005 sketches. (Radiance Cache Splatting: A GPU-Friendly Global Illumination Algorithm. Pascal Gautron, Jaroslav Krivanek, Kadi Bouatouch, Sumanta Pattanaik. Proceedings of the Eurographics Symposium on Rendering 2005)

Exploring Graphics Processor Performance for General Purpose Applications

June 12th, 2005

This paper by P. Trancoso and M. Charalambous at the University of Cyprus presents a comprehensive study of the performance of general-purpose applications on the GPU, and determines the conditions under which the GPU works efficiently. Also, as the GPU is cheaper and consumes less power than a high-end CPU, the authors show the benefits of using the graphics card to extend the lifetime of an existing computer system. (Exploring Graphics Processor Performance for General Purpose Applications. P. Trancoso and M. Charalambous. Proceedings of the Eighth Euromicro Conference on Digital System Design (DSD 2005))

Stack Implementation on Programmable Graphics Hardware

June 12th, 2005

This paper by Ernst et al. describes a stack implementation for the GPU using textures for storage. For a predefined maximum stack depth k, either k data textures or a single large texture with k stack layers side by side are used. Additionally, a stack pointer texture is needed. The paper argues that both push and pop can become O(1) operations using fragment program branching. Both push and pop require separate rendering passes. The technique is demonstrated in a kd-tree traversal implementation. (gpu stack bibtex)
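The layout described above can be illustrated with a minimal CPU-side sketch, assuming one independent stack per "pixel": a list of k layers stands in for the k data textures (or stack layers), and a per-pixel pointer array stands in for the stack pointer texture. Each push or pop is written as a uniform loop over all pixels, mimicking a separate rendering pass with per-fragment branching. The class and its names are illustrative, not from the paper.

```python
class TextureStacks:
    """CPU sketch of per-pixel stacks stored in k texture layers."""

    def __init__(self, num_pixels, k):
        self.k = k
        # k "layers", one value per pixel each (the k data textures)
        self.layers = [[0.0] * num_pixels for _ in range(k)]
        # per-pixel stack pointer (the stack pointer texture)
        self.sp = [0] * num_pixels

    def push(self, values):
        # One "rendering pass": every pixel writes into the layer selected
        # by its own stack pointer, then increments it (branch per fragment).
        for p, v in enumerate(values):
            if self.sp[p] < self.k:
                self.layers[self.sp[p]][p] = v
                self.sp[p] += 1

    def pop(self):
        # A second pass: decrement the pointer and read the top value.
        out = [None] * len(self.sp)
        for p in range(len(self.sp)):
            if self.sp[p] > 0:
                self.sp[p] -= 1
                out[p] = self.layers[self.sp[p]][p]
        return out
```

Because each pass touches every pixel exactly once and only indexes by the stack pointer, push and pop stay O(1) per element regardless of stack depth, matching the paper's argument.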

DuoDecim – A Structure for Point Scan Compression and Rendering

May 26th, 2005

This paper presents a compression scheme for large point scans including per-point normals. For the encoding of such scans, the paper introduces a type of closest-sphere-packing grid, the hexagonal close packing (HCP) grid. To compress the data, linear sequences of filled cells in the HCP grid are extracted. Point positions and normals in these runs are incrementally encoded. At a grid spacing close to the point sampling distance, the compression scheme requires only slightly more than 3 bits per point position. Incrementally encoded per-point normals are quantized at high fidelity using only 5 bits per normal. The compressed data stream can be decoded on the graphics processing unit (GPU). Decoded point positions are saved in graphics memory and then used on the GPU again to render point primitives. In this way, gigantic point scans are rendered from their compressed representation in local GPU memory at interactive frame rates. (http://wwwcg.in.tum.de/Research/data/Publications/pbg05.pdf)
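The idea of incrementally encoding runs can be sketched with a hypothetical delta coder, far simpler than the paper's HCP scheme: positions along a run are quantized to a grid and stored as small integer deltas from the previous point, so most of the stream consists of tiny values that compress well. Grid spacing, the cubic grid, and the function names are all assumptions for illustration.

```python
def encode_run(points, spacing):
    """Quantize a run of 3D points to a grid and delta-encode it.

    Returns the first quantized position followed by per-point deltas.
    (Illustrative only; the paper uses HCP cells, not a cubic grid.)
    """
    q = [tuple(round(c / spacing) for c in p) for p in points]
    deltas = [q[0]]
    for prev, cur in zip(q, q[1:]):
        deltas.append(tuple(c - pc for c, pc in zip(cur, prev)))
    return deltas

def decode_run(deltas, spacing):
    """Invert encode_run: accumulate deltas, scale back to coordinates."""
    pos = deltas[0]
    out = [tuple(c * spacing for c in pos)]
    for d in deltas[1:]:
        pos = tuple(c + dc for c, dc in zip(pos, d))
        out.append(tuple(c * spacing for c in pos))
    return out
```

When the grid spacing is close to the point sampling distance, consecutive deltas are mostly small constants along the run direction, which is what makes the per-point bit counts quoted above plausible.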

GPU Simulation and Rendering of Volumetric Effects for Computer Games and Virtual Environments

May 26th, 2005

As simulation and rendering capabilities continue to increase, volumetric effects like smoke, fire, or explosions will be frequently encountered in computer games and virtual environments. This paper presents techniques for the visual simulation and rendering of such effects that meet the frame-rate demands imposed by such environments. This is achieved by leveraging functionality on recent graphics processing units (GPUs) in combination with a novel approach to model non-physics-based, yet realistic, variations in flow fields. The paper shows how to use this mechanism for simulating effects. Physics-based simulation is performed on 2D proxy geometries, and simulation results are extruded to 3D using particle- or texture-based approaches. (http://wwwcg.in.tum.de/Research/data/Publications/eg05.pdf)

A Particle System for Interactive Visualization of 3D Flows

May 26th, 2005

This paper presents a particle system for interactive visualization of steady 3D flow fields on uniform grids. For large particle systems, particle integration needs to be accelerated and the transfer of particle data to the GPU must be avoided. To fulfill these requirements, this paper exploits features of recent graphics accelerators to advect particles on the graphics processing unit (GPU), save particle positions in graphics memory, and then send these positions through the GPU again to obtain images in the frame buffer. (http://wwwcg.in.tum.de/Research/data/Publications/tvcg05.pdf)
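The advection loop can be sketched on the CPU as follows, assuming simple Euler integration in a steady flow field: all particle positions live in one array (standing in for graphics memory) and every step applies the same flow-field lookup to each particle, as a fragment program would per pixel. The rotational 2D flow field and the Euler step are assumptions; the paper targets 3D fields on uniform grids.

```python
def flow(x, y):
    """Steady 2D flow field: rotation about the origin (illustrative)."""
    return -y, x

def advect(positions, dt, steps):
    """Euler-advect all particles through the flow field.

    Positions stay in one array throughout, mirroring the GPU scheme
    where particle state never leaves graphics memory.
    """
    for _ in range(steps):
        new_positions = []
        for x, y in positions:
            vx, vy = flow(x, y)          # uniform per-particle field lookup
            new_positions.append((x + dt * vx, y + dt * vy))
        positions = new_positions
    return positions
```

On the GPU, the per-particle loop body becomes a fragment program and `positions` a render-to-texture target, so no particle data ever crosses the bus after initialization.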

FxPlug GPU Image Processing API Launched

May 26th, 2005

The FxPlug API allows Mac OS X developers to write OpenGL-based image processing plugins for Apple’s Motion video effects software. Designed to run on ARB_fragment_program-capable hardware, it allows chains of complex effects to be run entirely on the GPU. With over 100 GPU filters and generators already running within Motion, this is well worth a look. (http://developer.apple.com/appleapplications/fxplugsdk.html)

Parallel Genetic Algorithms on Programmable Graphics Hardware

May 26th, 2005

Parallel genetic algorithms are usually implemented on parallel machines or distributed systems. This paper describes how fine-grained parallel genetic algorithms can be mapped to the programmable graphics hardware found in commodity PCs. The approach stores chromosomes and their fitness values in texture memory on the graphics card. Both fitness evaluation and genetic operations are implemented entirely with fragment programs executed on the GPU in parallel. The paper demonstrates the effectiveness of this approach by comparing it with a compatible software implementation. The presented approach brings the advantages of parallel genetic algorithms to a low-cost platform. (http://www.cad.zju.edu.cn/home/yqz/)
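A toy sketch of this fine-grained layout, under assumptions not taken from the paper: the population lives in one array (the "texture"), fitness is evaluated with a uniform map over all chromosomes, and selection and mutation are likewise applied element-wise, as fragment programs would apply them per pixel. The bitstring problem (maximize the number of ones), the neighbour-selection rule, and the function name are all illustrative.

```python
import random

def evolve(pop, generations, rng):
    """Element-wise GA: uniform fitness pass, then uniform genetic-ops pass."""
    for _ in range(generations):
        # "Fitness pass": the same function is applied to every chromosome,
        # as a fragment program would be applied to every texel.
        fitness = [sum(c) for c in pop]
        # "Genetic operations pass": each slot keeps the better of itself
        # and one randomly sampled chromosome, then mutates a single bit.
        new_pop = []
        for i, c in enumerate(pop):
            j = rng.randrange(len(pop))
            parent = pop[j] if fitness[j] > fitness[i] else c
            child = parent[:]
            k = rng.randrange(len(child))
            child[k] ^= 1                 # flip one bit
            new_pop.append(child)
        pop = new_pop
    return pop
```

Both passes are embarrassingly parallel over the population array, which is exactly the property that lets the paper move them into fragment programs operating on textures.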

A new real-time video synthesis method for virtual studio environments using GPU and projected screens

May 26th, 2005

This project focused on two supportive information techniques for virtual TV studio environments, using a back-projected screen and real-time video composition on the GPU. Traditional TV studios use blue or green chroma-key backgrounds for video composition, so the actors cannot see the final composite without a preview monitor; pointing at objects on the background image is especially difficult, requiring experience and rehearsal. In this system, the actors can see and point at supportive information displays behind them, such as computer-generated backgrounds, virtual actors, reading scripts, and/or final composites. To composite the computer graphics into the free area of the screen, a special real-time GPU-based video rendering program has been developed. (http://akihiko.shirai.as/projects/LuminaStudio/)

RoboGamer: Development of robotic TV game player using haptic interface and GPU image recognition

May 26th, 2005

“RoboGamer” is a robotic system able to play a video game together with a human player. The project realized a physically connected, friendly computer player using a simple robotic system composed of a video camera, the wire-based force-feedback display SPIDAR, and fast GPU image recognition software, without any modification of the original video game system. RoboGamer has three functions: autonomous play; augmented effects, such as force feedback and/or rich graphics added to original old video games; and collaborative play between the A.I. and a human player via force feedback on the joystick. (http://akihiko.shirai.as/projects/RoboGamer/)
