Call For Participation for I3D 2008

July 19th, 2007

I3D 2008 (aka the Symposium on Interactive 3D Graphics and Games) will be held the weekend before GDC this year, February 15-17, in nearby Redwood City, CA. The Call For Participation is now up on the website; this year's paper deadline is October 22. This is a small conference, around 100 attendees, and a good opportunity to meet other people working on GPU-related techniques. I3D 2007 included a number of GPGPU-related papers on interactive ray tracing, mesh simplification, and histogram generation; see Ke-Sen Huang's summary page. (I3D 2008 CFP page)

Hybrid Ray Tracing: Ray Tracing Using GPU-Accelerated Image-Space Methods

April 25th, 2007

This paper by Robert et al. at the University of Bern, Switzerland, describes the object intersection buffer (OIB), a GPU-based visibility preprocessing algorithm for accelerating ray tracing. Building on this approach, the authors propose a hybrid ray tracer that performs ray tracing in parallel on both the GPU and the CPU. (Hybrid Ray Tracing – Ray Tracing Using GPU-Accelerated Image-Space Methods. Philippe C.D. Robert, Severin Schoepke, and Hanspeter Bieri. Proceedings of GRAPP 2007.)

Neoptica White Paper on Programmable Graphics

April 2nd, 2007

Neoptica has recently posted a white paper, “Programmable Graphics—The Future of Interactive Rendering.” It introduces the coming era of programmable graphics, in which developers implement rendering algorithms using combinations of parallel CPU and GPU tasks executing cooperatively on the heterogeneous multi-core architectures of the near future. By embracing both task- and data-parallel computation, this approach frees developers to use the most efficient style of parallelism for each algorithm, and makes it possible to define custom graphics pipelines built from complex algorithms and dynamic data structures. The paper argues that future graphics applications that leverage the tightly coupled capabilities of forthcoming CPUs and GPUs will generate far richer and more realistic imagery, use computational resources more efficiently, and scale to large numbers of CPU and GPU cores.

Real-Time Relativistic Optical Calculations on the GPU

August 10th, 2006

This paper by Savage, Searle, and McCalman describes a program that uses the built-in support for 4-vector and matrix operations on a programmable GPU to perform Lorentz transformations on relativistic 4-momentum vectors in real time. This allows a pixel shader to render relativistic effects such as geometric aberration, the Doppler shift, and the headlight effect in response to user interaction. A program, “Real-Time Relativity”, has been written to demonstrate these effects. (Real-Time Relativity. C. M. Savage, A. C. Searle, L. McCalman. Physics ArXiv)
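The core operation is compact enough to sketch on the CPU. The snippet below is a minimal illustration, not code from the paper (the beta/gamma parameterization and variable names are our own): it boosts a photon's 4-momentum with a 4x4 Lorentz matrix, the same matrix-vector product a shader evaluates natively, and the energy component directly exhibits the Doppler shift mentioned above.

```python
import numpy as np

def lorentz_boost(beta):
    """4x4 Lorentz boost matrix for velocity beta = v/c along the x-axis."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    return np.array([
        [ gamma,        -gamma * beta, 0.0, 0.0],
        [-gamma * beta,  gamma,        0.0, 0.0],
        [ 0.0,           0.0,          1.0, 0.0],
        [ 0.0,           0.0,          0.0, 1.0],
    ])

# 4-momentum (E/c, px, py, pz) of a photon moving along +x.
p = np.array([1.0, 1.0, 0.0, 0.0])

# Boosting into a frame moving at 0.5c shows the relativistic Doppler
# shift directly: the photon's energy (first component) changes.
p_boosted = lorentz_boost(0.5) @ p
print(p_boosted)  # E' = E * sqrt((1-beta)/(1+beta)) ≈ 0.577 for beta = 0.5
```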

Geomerics Demonstrate Real-Time Radiosity on the GPU

August 9th, 2006

Geomerics, a new R&D company based in Cambridge, UK, has recently announced a real-time radiosity simulation running entirely on the GPU. The solution runs at up to 100 Hz on common graphics hardware and allows for fully dynamic lighting, including spotlights, projected texture or video lighting, and area lights. It integrates well with traditional modeling techniques such as normal mapping, and all lighting is performed in high dynamic range. Videos, screenshots, and further details of the simulation can be found on the Geomerics website.

Caustics Mapping: An Image-space Technique for Real-time Caustics

August 11th, 2005

Caustics are complex patterns of shimmering light formed by reflective and refractive objects, such as those seen on the floor of a swimming pool. Caustics Mapping is a physically based, real-time caustics rendering algorithm. It builds on the concept of backward ray tracing, but involves none of the expensive computations generally associated with ray tracing and similar techniques. The main advantage of caustics mapping is that its high frame rates make it extremely practical for games and other interactive applications. Furthermore, the algorithm runs entirely on graphics hardware, which leaves the CPU free for other computation. There is no pre-computation involved, so fully dynamic geometry, lighting, and viewing directions are supported. In addition, there is no limitation on the topology of the receiver geometry, i.e., caustics can be formed on arbitrary surfaces. (Caustics Mapping: An Image-space Technique for Real-time Caustics. Musawir A. Shah and Sumanta Pattanaik. Technical Report, School of Engineering and Computer Science, University of Central Florida, CS TR 50-07, 07/29/2005 (Submitted for Publication))
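To make the image-space idea concrete, here is a rough CPU sketch, our own illustration rather than the authors' algorithm (the wavy height-field surface, flat receiver plane, and all names are assumptions). It refracts a grid of light rays through a surface and splats the hit points into a 2D caustic-map texture; on the GPU the same accumulation would be a render pass with additive blending.

```python
import numpy as np

def refract(d, n, eta):
    """Snell refraction of unit direction d at unit normal n, ratio eta = n1/n2."""
    cos_i = -np.dot(d, n)
    k = 1.0 - eta**2 * (1.0 - cos_i**2)
    if k < 0.0:
        return None  # total internal reflection
    return eta * d + (eta * cos_i - np.sqrt(k)) * n

res = 64
caustic_map = np.zeros((res, res))
light_dir = np.array([0.0, 0.0, -1.0])      # light shining straight down

for x in np.linspace(-1, 1, 128):
    for y in np.linspace(-1, 1, 128):
        # Normal of an illustrative height-field surface z = h(x, y).
        n = np.array([-0.3 * np.cos(3 * x), -0.3 * np.cos(3 * y), 1.0])
        n /= np.linalg.norm(n)
        t = refract(light_dir, n, 1.0 / 1.33)  # air -> water
        if t is None or t[2] >= 0.0:
            continue
        # Intersect the refracted ray (leaving the surface at height 1.0)
        # with the receiver plane z = 0 and splat into the caustic map.
        hit = np.array([x, y, 1.0]) + (1.0 / -t[2]) * t
        u, v = int((hit[0] + 2) / 4 * res), int((hit[1] + 2) / 4 * res)
        if 0 <= u < res and 0 <= v < res:
            caustic_map[v, u] += 1.0   # additive blend
```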

Dynamic LOD on the GPU

August 11th, 2005

To implement dynamic LOD on the GPU, a quadtree structure is created based on a seamless geometry image atlas, and all the nodes in the quadtree are packed into the atlas textures. The approach uses two passes. In the first pass, LOD selection is performed in fragment shaders. The resulting buffer is fed to the second pass as an input texture via vertex texturing, where node culling and triangulation are performed in vertex shaders. The LOD algorithm can generate adaptive meshes dynamically and can be implemented entirely on the GPU. It improves the efficiency of LOD selection and reduces the computing load on the CPU. (Dynamic LOD on GPU. Junfeng Ji, Enhua Wu, Sheng Li, and Xuehui Liu. Proceedings of Computer Graphics International 2005.)
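The selection logic in the first pass amounts to refining quadtree nodes until their projected error is small enough. The sketch below shows that decision on the CPU, as a minimal illustration under our own assumptions (a distance-based screen-error model and a node's geometric error equal to its size; none of the names come from the paper).

```python
import numpy as np

class QuadNode:
    """Node of a quadtree over a geometry image atlas (illustrative layout)."""
    def __init__(self, center, size, depth, geometric_error):
        self.center = np.asarray(center, dtype=float)
        self.size = size
        self.depth = depth
        self.geometric_error = geometric_error
        self.children = []

def build(center, size, depth, max_depth):
    """Build a uniform quadtree; each node's error shrinks with its size."""
    node = QuadNode(center, size, depth, geometric_error=size)
    if depth < max_depth:
        h = size / 2
        for dx in (-h / 2, h / 2):
            for dy in (-h / 2, h / 2):
                node.children.append(
                    build(node.center + np.array([dx, dy, 0.0]), h, depth + 1, max_depth))
    return node

def select_lod(node, eye, error_threshold, selected):
    """First-pass LOD selection: refine a node while its projected
    screen-space error exceeds the threshold, otherwise emit it."""
    distance = max(np.linalg.norm(node.center - eye), 1e-6)
    screen_error = node.geometric_error / distance  # simple projection model
    if screen_error > error_threshold and node.children:
        for child in node.children:
            select_lod(child, eye, error_threshold, selected)
    else:
        selected.append(node)

root = build(np.zeros(3), 8.0, 0, 3)
selected = []
select_lod(root, eye=np.array([1.0, 1.0, 4.0]), error_threshold=0.5, selected=selected)
print(len(selected), "atlas nodes selected for this view")
```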

Beyond Triangles: A Simple Framework For Hardware-Accelerated Non-Triangular Primitives

July 19th, 2004

This paper presents an extensible system for interactively rendering multiple types of ray-cast objects in a manner compatible with pre-existing rendering engines. The sample implementation includes support for general quadrics and volumetric isosurfaces, as well as a high-speed sphere renderer and, of course, a standard triangle-rendering pipeline. The system is designed so that most algorithms written for the existing raster engine can be added with minimal overhead and coding effort. The authors demonstrate shadowing using the shadow-map algorithm. (“Beyond Triangles: A Simple Framework For Hardware-Accelerated Non-Triangular Primitives”, to be submitted for publication.)
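The heart of such a sphere renderer is an analytic ray-sphere intersection evaluated per fragment, with the hit point's depth written out so ray-cast primitives composite correctly with rasterized triangles. Below is a minimal CPU sketch of that intersection test (our own illustration; the function name and interface are assumptions, not the paper's API).

```python
import numpy as np

def ray_sphere(origin, direction, center, radius):
    """Analytic ray-sphere intersection; returns the nearest hit distance t
    along the unit-length ray, or None on a miss.  In a shader, the hit
    point's depth would be written to the z-buffer so ray-cast spheres
    composite correctly with rasterized triangles."""
    oc = origin - center
    b = np.dot(oc, direction)
    c = np.dot(oc, oc) - radius**2
    disc = b * b - c
    if disc < 0.0:
        return None            # ray misses the sphere
    t = -b - np.sqrt(disc)     # nearest of the two roots
    return t if t > 0.0 else None

t = ray_sphere(np.array([0.0, 0.0, 5.0]), np.array([0.0, 0.0, -1.0]),
               np.array([0.0, 0.0, 0.0]), 1.0)
print(t)  # 4.0: the fragment's depth would be derived from this hit point
```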

Simulating Photon Mapping for Real-time Applications

June 11th, 2004

This paper by Larsen et al. at the Technical University of Denmark introduces a fast GPU-accelerated technique for simulating photon mapping. Each step of the photon mapping algorithm is executed either on the CPU or on the GPU, depending on which processor is more appropriate for the task. The indirect illumination is calculated using a new GPU-accelerated final gathering method. Caustic photons are traced on the CPU, drawn as points into the framebuffer, and finally filtered using the GPU. Both diffuse and non-diffuse surfaces are handled by calculating the direct illumination on the GPU and performing the photon tracing on the CPU. (Simulating Photon Mapping for Real-time Applications. Bent D. Larsen and Niels J. Christensen. To appear at the Eurographics Symposium on Rendering, 2004.)
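The caustics path (trace on the CPU, draw as points, filter on the GPU) can be mimicked in a few lines. The sketch below is our own illustration, with random stand-in photons and a separable Gaussian standing in for the GPU filter pass: it splats photon positions into a framebuffer-like image and then blurs it.

```python
import numpy as np

def splat_photons(photons, res):
    """Draw caustic photons as points into a framebuffer-like image.
    `photons` is an (N, 2) array of positions in [0, 1)^2."""
    image = np.zeros((res, res))
    for u, v in photons:
        image[int(v * res), int(u * res)] += 1.0
    return image

def gaussian_filter_1d(image, axis, sigma=1.5):
    """Separable Gaussian blur along one axis (stand-in for the GPU
    filter pass applied to the splatted photons)."""
    radius = int(3 * sigma)
    xs = np.arange(-radius, radius + 1)
    kernel = np.exp(-xs**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    return np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode='same'), axis, image)

photons = np.random.rand(10_000, 2)        # stand-in for CPU-traced photons
image = splat_photons(photons, res=128)
filtered = gaussian_filter_1d(gaussian_filter_1d(image, axis=1), axis=0)
```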

Isosurface Computation Made Simple: Hardware Acceleration, Adaptive Refinement and Tetrahedral Stripping

May 4th, 2004

This paper by Valerio Pascucci describes a simple technique for computing isosurfaces on programmable GPUs. Given the vertices of a tetrahedron, a simple vertex program computes the positions, normals, and connectivity of the portion of the isosurface (if any) contained in the tetrahedron: a marching-tetrahedra approach. One main advantage of this technique is that it offloads isosurface computation from the CPU and, more importantly, avoids storing the surface in main memory. Interestingly, one could compile a display list for a tetrahedral mesh and display different isosurfaces by changing an OpenGL parameter while always rendering the same list. The paper presents and comments in detail on the full source code of the vertex program. (Isosurface Computation Made Simple: Hardware Acceleration, Adaptive Refinement and Tetrahedral Stripping. V. Pascucci. Proceedings of VisSym 2004.)
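The per-tetrahedron logic is simple enough to sketch on the CPU: classify the four corners against the isovalue and interpolate an intersection point along each crossed edge. The snippet below is our own minimal illustration of marching tetrahedra, not the paper's vertex program (names and the triangle/quad handling are assumptions).

```python
import numpy as np

# The 6 edges of a tetrahedron as pairs of corner indices.
TET_EDGES = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]

def marching_tet(verts, values, iso):
    """Extract the isosurface patch inside one tetrahedron.
    verts: (4, 3) corner positions; values: (4,) scalar field samples.
    Returns the intersection points on crossed edges: 3 points form one
    triangle, 4 form a quad to split into two triangles (ordering
    omitted for brevity)."""
    points = []
    for a, b in TET_EDGES:
        if (values[a] < iso) != (values[b] < iso):     # edge crosses the surface
            t = (iso - values[a]) / (values[b] - values[a])
            points.append(verts[a] + t * (verts[b] - verts[a]))
    return np.array(points)

verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
patch = marching_tet(verts, values=np.array([0.0, 1.0, 1.0, 1.0]), iso=0.5)
print(patch)  # one triangle: midpoints of the three edges leaving corner 0
```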
