DuoDecim – A Structure for Point Scan Compression and Rendering

May 26th, 2005

This paper presents a compression scheme for large point scans including per-point normals. To encode such scans, the paper introduces a class of closest-sphere-packing grids, hexagonal close packing (HCP) grids. To compress the data, linear sequences (runs) of filled cells in the HCP grid are extracted, and point positions and normals along these runs are incrementally encoded. At a grid spacing close to the point sampling distance, the compression scheme requires only slightly more than 3 bits per point position. Incrementally encoded per-point normals are quantized at high fidelity using only 5 bits per normal. The compressed data stream can be decoded on the graphics processing unit (GPU). Decoded point positions are saved in graphics memory and then used on the GPU again to render point primitives. In this way, gigantic point scans are rendered from their compressed representation in local GPU memory at interactive frame rates. (http://wwwcg.in.tum.de/Research/data/Publications/pbg05.pdf)
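The run-based incremental encoding idea can be illustrated with a toy sketch. This is not the actual DuoDecim codec (which operates on HCP grid cells and also encodes normals); it only shows the general principle of storing a run as a start index plus small deltas instead of absolute positions:

```python
# Hypothetical illustration of run-based incremental encoding (NOT the
# paper's actual scheme): filled cells along a linear run are stored as a
# start index plus small neighbor-to-neighbor deltas, which compress far
# better than absolute indices.

def encode_run(cells):
    """Encode a sorted run of integer grid indices as a start plus deltas."""
    if not cells:
        return None, []
    deltas = [b - a for a, b in zip(cells, cells[1:])]
    return cells[0], deltas

def decode_run(start, deltas):
    """Rebuild absolute grid indices from the incremental representation."""
    cells = [start]
    for d in deltas:
        cells.append(cells[-1] + d)
    return cells

run = [100, 101, 102, 104, 105]
start, deltas = encode_run(run)
assert decode_run(start, deltas) == run
```

In the paper's setting the deltas are steps between neighboring HCP cells and therefore need only a few bits each, which is what brings the cost down to roughly 3 bits per position.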

GPU Simulation and Rendering of Volumetric Effects for Computer Games and Virtual Environments

May 26th, 2005

As simulation and rendering capabilities continue to increase, volumetric effects like smoke, fire, or explosions will be frequently encountered in computer games and virtual environments. This paper presents techniques for the visual simulation and rendering of such effects that keep up with the frame rates demanded by such environments. This is achieved by leveraging functionality of recent graphics processing units (GPUs) in combination with a novel approach to modeling non-physics-based, yet realistic, variations in flow fields; the paper shows how to use this mechanism for simulating effects. Physics-based simulation is performed on 2D proxy geometries, and simulation results are extruded to 3D using particle- or texture-based approaches. (http://wwwcg.in.tum.de/Research/data/Publications/eg05.pdf)

A Particle System for Interactive Visualization of 3D Flows

May 26th, 2005

This paper presents a particle system for interactive visualization of steady 3D flow fields on uniform grids. For large particle systems, particle integration needs to be accelerated and the transfer of particle data to the GPU must be avoided. To fulfill these requirements, the paper exploits features of recent graphics accelerators to advect particles on the graphics processing unit (GPU), save particle positions in graphics memory, and then send these positions through the GPU again to obtain images in the frame buffer. (http://wwwcg.in.tum.de/Research/data/Publications/tvcg05.pdf)
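The advection step that the paper moves onto the GPU can be sketched on the CPU as follows. This is an illustrative toy only (a made-up rotational flow and plain Euler integration); in the GPU version each particle occupies a texel and the loop body runs as a fragment program:

```python
# Minimal CPU sketch of particle advection through a steady flow field.
# sample_flow is a hypothetical analytic flow standing in for the paper's
# gridded 3D velocity texture.

def sample_flow(x, y):
    # Toy steady flow: a rigid rotation about the origin.
    return -y, x

def advect(particles, dt, steps):
    """Advance each (x, y) particle along the flow with Euler steps."""
    out = []
    for x, y in particles:
        for _ in range(steps):
            u, v = sample_flow(x, y)
            x, y = x + dt * u, y + dt * v
        out.append((x, y))
    return out

# A single particle starting on the unit circle orbits the origin.
moved = advect([(1.0, 0.0)], dt=0.01, steps=10)
```

Keeping the positions in graphics memory between such steps is precisely what avoids the CPU-to-GPU transfer the paper identifies as the bottleneck.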

Parallel Genetic Algorithms on Programmable Graphics Hardware

May 26th, 2005

Parallel genetic algorithms are usually implemented on parallel machines or distributed systems. This paper describes how fine-grained parallel genetic algorithms can be mapped to programmable graphics hardware found in commodity PCs. The approach stores chromosomes and their fitness values in texture memory on the graphics card. Both fitness evaluation and the genetic operations are implemented entirely with fragment programs executed on the GPU in parallel. The paper demonstrates the effectiveness of this approach by comparing it with an equivalent software implementation. The presented approach gains the advantages of parallel genetic algorithms on a low-cost platform. (http://www.cad.zju.edu.cn/home/yqz/)
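A generic genetic-algorithm generation can be sketched as below. This is not the authors' code: the population lives in a flat Python list (standing in for texture memory), and the toy "OneMax" fitness function and parameters are illustrative choices. On the GPU, the per-chromosome evaluation and genetic operators become data-parallel fragment programs:

```python
# Illustrative sketch of one GA generation: evaluate, select, crossover,
# mutate. Fitness here is the toy "OneMax" objective (count of 1-bits).

import random

def fitness(chromosome):
    return sum(chromosome)

def step(population, rng):
    """One generation over a population of bit-string chromosomes."""
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[: len(scored) // 2]          # truncation selection
    children = []
    while len(children) < len(population):
        a, b = rng.sample(parents, 2)
        cut = rng.randrange(1, len(a))            # one-point crossover
        child = a[:cut] + b[cut:]
        i = rng.randrange(len(child))
        child[i] ^= rng.random() < 0.05           # rare bit-flip mutation
        children.append(child)
    return children

rng = random.Random(0)
pop = [[rng.randint(0, 1) for _ in range(16)] for _ in range(32)]
for _ in range(30):
    pop = step(pop, rng)
best = max(map(fitness, pop))
```

The sequential selection loop above is the part that the paper's fine-grained GPU mapping replaces with parallel, per-texel operations.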

A new real-time video synthesis method for virtual studio environments using GPU and projected screens

May 26th, 2005

This project focuses on two supportive information techniques for virtual TV studio environments, using a back-projected screen and real-time video composition on the GPU. Traditional TV studios use blue or green chroma-key backgrounds for video composition, so the actors cannot see the final composite without a preview monitor. Pointing at objects in the background image is especially difficult, requiring experience and rehearsal. In this system, the actors can see and point at supportive information displayed behind them, such as computer-generated backgrounds, virtual actors, reading scripts, and/or final composites. To composite the computer graphics into the free area of the screen, a special real-time GPU-based video rendering program has been developed. (http://akihiko.shirai.as/projects/LuminaStudio/)

RoboGamer: Development of robotic TV game player using haptic interface and GPU image recognition

May 26th, 2005

“RoboGamer” is a robotic system that can play a video game together with a human player. The project realizes a physically connected, friendly computer player with a simple robotic system composed of a video camera, the wire-based force-feedback display SPIDAR, and fast GPU image-recognition software, without any modification to the original video game system. RoboGamer has three functions: autonomous play; augmented effects, such as force feedback and/or rich graphics added to old video games; and collaborative play between an A.I. and a human player via force feedback on the joystick. (http://akihiko.shirai.as/projects/RoboGamer/)

Massive Simulation using GPU of a distributed behavioral model of a flock with obstacle avoidance

May 25th, 2005

This VMV 2004 paper by De Chiara et al. presents a massive simulation of a behavioral model using graphics hardware. A well-known flocking model is implemented on the GPU. The model is capable of managing the large-scale aggregate motion of birds in a virtual environment, including avoidance of both static and dynamic obstacles. The effectiveness of the GPU implementation is demonstrated by a comparison with a CPU implementation. (Massive Simulation using GPU of a distributed behavioral model of a flock with obstacle avoidance. Rosario De Chiara, Ugo Erra, Vittorio Scarano, Maurizio Tatafiore. In Proceedings of the 9th International Fall Workshop on Vision, Modeling, and Visualization (VMV) 2004.)
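The classic flocking ("boids") update underlying such models combines cohesion, separation, and alignment forces. The sketch below is a minimal CPU illustration (not the paper's implementation, which also handles obstacles and stores the flock state in textures):

```python
# Minimal boids update: each bird steers toward the flock center (cohesion),
# away from very close neighbors (separation), and toward the average flock
# velocity (alignment). Thresholds and gains here are illustrative.

def flock_step(pos, vel, dt=0.1):
    n = len(pos)
    new_vel = []
    for i in range(n):
        cx = sum(p[0] for p in pos) / n - pos[i][0]   # cohesion
        cy = sum(p[1] for p in pos) / n - pos[i][1]
        sx = sy = 0.0                                  # separation
        for j in range(n):
            if j != i and abs(pos[j][0] - pos[i][0]) + abs(pos[j][1] - pos[i][1]) < 1.0:
                sx += pos[i][0] - pos[j][0]
                sy += pos[i][1] - pos[j][1]
        ax = sum(v[0] for v in vel) / n - vel[i][0]    # alignment
        ay = sum(v[1] for v in vel) / n - vel[i][1]
        new_vel.append((vel[i][0] + dt * (cx + sx + ax),
                        vel[i][1] + dt * (cy + sy + ay)))
    new_pos = [(p[0] + dt * v[0], p[1] + dt * v[1]) for p, v in zip(pos, new_vel)]
    return new_pos, new_vel

# Four birds at the corners of a square drift toward the flock center.
pos = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0), (4.0, 4.0)]
vel = [(0.0, 0.0)] * 4
for _ in range(10):
    pos, vel = flock_step(pos, vel)
```

Because each bird's update reads only the previous state of the flock, every per-bird computation is independent, which is what makes the model map so naturally onto parallel fragment programs.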

Automatic Tuning Matrix Multiplication on Graphics Hardware

May 21st, 2005

The rapid pace of graphics hardware evolution has made self-adaptable software very desirable. Changhao Jiang and Marc Snir at the University of Illinois at Urbana-Champaign have developed a library generator for graphics hardware that automatically generates high-performance matrix multiplication routines whose performance is comparable to expert hand-tuned versions on various graphics hardware platforms. The paper will be published at the Fourteenth International Conference on Parallel Architectures and Compilation Techniques (PACT 2005). (Automatic Tuning Matrix Multiplication on Graphics Hardware)
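The general autotuning loop behind such generators can be sketched as follows. This is not the authors' generator: their system emits and times GPU shader variants, whereas this CPU-only illustration merely varies the block size of a blocked matrix multiply and keeps the fastest candidate:

```python
# Hedged sketch of an empirical autotuning loop: generate candidate kernel
# variants, time each one on the target hardware, keep the fastest.

import time

def matmul_blocked(A, B, block):
    """Blocked square matrix multiply; `block` is the tiling parameter."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, block):
        for kk in range(0, n, block):
            for i in range(ii, min(ii + block, n)):
                for k in range(kk, min(kk + block, n)):
                    a, row_b, row_c = A[i][k], B[k], C[i]
                    for j in range(n):
                        row_c[j] += a * row_b[j]
    return C

def autotune(A, B, candidates=(4, 8, 16, 32)):
    """Time each candidate block size and return the fastest."""
    timings = {}
    for block in candidates:
        t0 = time.perf_counter()
        matmul_blocked(A, B, block)
        timings[block] = time.perf_counter() - t0
    return min(timings, key=timings.get)

n = 32
A = [[float(i + j) for j in range(n)] for i in range(n)]
B = [[float(i * j % 7) for j in range(n)] for i in range(n)]
best_block = autotune(A, B)
```

The key point is that the best variant is chosen empirically on the installed hardware rather than fixed at library-writing time, which is how such generators track rapidly changing GPUs.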

Audio and the Graphics Processing Unit

May 16th, 2005

From the abstract: In recent years, the development of programmable graphics pipelines has placed the power of parallel computation in the hands of consumers. Systems developers are now paying attention to the general purpose computational ability of these graphics processor units, or GPUs, and are using them in novel ways. This paper examines using pixel shaders for executing audio algorithms. We compare GPU performance to CPU performance, discuss problems encountered, and suggest new directions for supporting the needs of the audio community. Source code is also available. (“Audio and the Graphics Processing Unit” by Sean Whalen)

MoXi: Digital Ink Simulation

May 13th, 2005

This paper by Chu and Tai at HKUST presents a physically-based method for simulating ink dispersion in absorbent paper for art creation purposes. The ink flow model is based on the lattice Boltzmann equation and is designed to work on the GPU efficiently. (MoXi: Real-Time Ink Dispersion in Absorbent Paper. Nelson S.-H. Chu and Chiew-Lan Tai. To appear in ACM Transactions on Graphics (SIGGRAPH 2005 issue), August 2005)
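The lattice Boltzmann method's stream-and-collide structure, which makes it so GPU-friendly, can be shown in a deliberately stripped-down form. This is only a 1D BGK toy, not the paper's model, which is a much richer 2D scheme with paper-absorption effects:

```python
# Highly simplified 1D lattice Boltzmann (BGK) sketch illustrating the
# local collide step followed by a pure memory-shift stream step — the
# structure that maps each step to independent per-cell GPU work.

W = (1 / 6, 4 / 6, 1 / 6)    # weights for lattice velocities e = (-1, 0, +1)
TAU = 1.0                    # relaxation time (illustrative value)

def lbm_step(f):
    """One collide + periodic stream step on distributions f[i][x]."""
    n = len(f[0])
    rho = [f[0][x] + f[1][x] + f[2][x] for x in range(n)]
    # Collide: relax each population toward the local equilibrium w_i * rho.
    for i in range(3):
        for x in range(n):
            f[i][x] += (W[i] * rho[x] - f[i][x]) / TAU
    # Stream: each population shifts one cell along its lattice velocity.
    f[0] = f[0][1:] + f[0][:1]       # e = -1
    f[2] = f[2][-1:] + f[2][:-1]     # e = +1
    return f

# A blob of "ink" density in the middle of a periodic 1D domain diffuses out.
n = 16
f = [[0.0] * n for _ in range(3)]
f[1][n // 2] = 1.0
for _ in range(20):
    f = lbm_step(f)
density = [sum(f[i][x] for i in range(3)) for x in range(n)]
```

Both steps touch only a cell and its immediate neighbors, so each can run as a fragment program over a texture holding the distributions, which is why the method works so well for real-time ink simulation.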
