GMAC is a user-level library that implements an Asymmetric Distributed Shared Memory (ADSM) model for CUDA programs. An ADSM model builds a global memory space that allows CPU code to transparently access data hosted in accelerators’ (GPUs’) memories, with data coherence handled automatically by the library. This eliminates the need for manual memory transfers (cudaMemcpy) between host and GPU memories. Furthermore, GMAC assigns a different “virtual GPU” to each host thread, and the virtual GPUs are evenly mapped onto the physical GPUs. This is especially useful for multi-GPU programs, since each host thread can access the memory of all GPUs and GPU-to-GPU transfers can be performed with simple memcpy calls.
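The programming style this enables can be sketched as follows. This is a minimal illustration, not code from the GMAC distribution: the function names gmacMalloc, gmacThreadSynchronize, and gmacFree follow the published GMAC API, but the exact header name and signatures here are assumptions — consult the GMAC documentation before use.

```cuda
#include <gmac.h>   // assumed header name for the GMAC runtime

__global__ void scale(float *v, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) v[i] *= a;
}

int main() {
    const int n = 1 << 20;
    float *v;
    // One allocation, visible to both host and GPU code:
    // no cudaMalloc/cudaMemcpy pairs are needed.
    gmacMalloc((void **)&v, n * sizeof(float));

    for (int i = 0; i < n; ++i) v[i] = 1.0f;   // host writes directly

    scale<<<(n + 255) / 256, 256>>>(v, 2.0f, n);
    gmacThreadSynchronize();                   // GMAC keeps the data coherent

    float sum = 0.0f;
    for (int i = 0; i < n; ++i) sum += v[i];   // host reads results in place

    gmacFree(v);
    return 0;
}
```

Contrast this with plain CUDA, where the same program would need separate host and device buffers plus explicit cudaMemcpy calls in both directions around the kernel launch.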
Press release (submitted to gpgpu.org very late…):
LOS ANGELES, CA – July 26, 2010 – PEER 1 Hosting (TSX:PIX), a global online IT hosting provider, today announced the availability of the industry’s first large-scale, hosted graphics processing unit (GPU) Cloud at the 37th Annual Siggraph International Conference.
The system runs the RealityServer® 3D web application service platform, developed by mental images, a wholly owned subsidiary of NVIDIA. The RealityServer platform is a powerful combination of NVIDIA Tesla GPUs and 3D web services software. It delivers interactive and photorealistic applications over the web using the iray® renderer, which enables animators, product designers, architects and consumers to easily visualize 3D scenes with remarkable realism.
Here is a feature list for 2.0.969:
- PTX 2.2 and Fermi device support: Floating point results should be within the ULP limits in the PTX ISA manual. Over 500 unit tests verify that the behaviour matches NVIDIA devices.
- Four target device types: A functional PTX emulator. A PTX to LLVM to x86/ARM JIT. A PTX to CAL JIT for AMD devices (beta). A PTX to PTX JIT for NVIDIA devices.
- A full-featured PTX 2.2 IR: An analysis/optimization pass interface over PTX (Control flow graph, dataflow graph, dominator/postdominator trees, structured control tree). Optimizations can be plugged in as modules.
- Correctness checking tools: A memory checker (detects unaligned and out of bounds accesses). A race detector. An interactive debugger (allows stepping through PTX instructions).
- An instruction trace analyzer interface: Allows user-defined modules to receive callbacks when PTX instructions are executed. Can be used to compute metrics over applications or perform correctness checks.
- A CUDA API frontend: Existing CUDA programs can be directly linked against Ocelot. Device pointers can be shared across host threads. Multiple devices can be controlled from the same host thread (cudaSetDevice can be called multiple times).
Ocelot is available under a BSD license at http://code.google.com/p/gpuocelot.
Submissions are cordially invited for the Workshop on GPU Computing, held with PPAM 2011 — 9th International Conference on Parallel Processing and Applied Mathematics, September 11-14, 2011, Torun, Poland. This workshop is organised by Josep R. Herrero, Enrique S. Quintana-Orti, and Robert Strzodka.
GPU programming is now a much richer environment than it was a few years ago. On top of the two major programming languages, CUDA and OpenCL, libraries (e.g., cufft) and high-level interfaces (e.g., thrust) have been developed that allow fast access to the computing power of GPUs without detailed knowledge of GPU hardware or low-level GPU programming.
Annotation-based programming models (e.g., PGI Accelerator), GPU plug-ins for existing mathematical software (e.g., Jacket in Matlab), GPU script languages (e.g., PyOpenCL), and new data parallel languages (e.g., Copperhead) bring GPU programming to a new level.
We examine the problem of segmenting foreground objects in live video when background scene textures change over time. In particular, we formulate background subtraction as minimizing a penalized instantaneous risk functional, yielding a local on-line discriminative algorithm that can quickly adapt to temporal changes. We analyze the algorithm’s convergence, discuss its robustness to non-stationarity, and provide an efficient non-linear extension via sparse kernels. To accommodate interactions among neighboring pixels, a global algorithm is then derived that explicitly distinguishes objects versus background using maximum a posteriori inference in a Markov random field (implemented via graph-cuts). By exploiting the parallel nature of the proposed algorithms, we develop an implementation that can run efficiently on the highly parallel Graphics Processing Unit (GPU). Empirical studies on a wide variety of datasets demonstrate that the proposed approach achieves quality that is comparable to state-of-the-art off-line methods, while still being suitable for real-time video analysis (75 fps on a mid-range GPU).
The Pan-American Advanced Studies Institute (PASI)—”Scientific Computing in the Americas: the challenge of massive parallelism”—was held in Valparaiso, Chile on 3–14 January 2011. The event hosted 14 lecturers and 68 participants, thanks to NSF/DOE funding. Lecture materials are now publicly available: PDFs of the lecture slides on the PASI website, and screencasts (video) via an iTunes U course and on YouTube.
Exploitation of novel computer architectures, such as general-purpose GPUs, is allowing researchers to accelerate the realization of frontier models in particle-based simulation, by enabling more realistic descriptions of the particles and their interactions, and by increasing both the number of particles and the timescales simulated.
This one-day meeting focuses on the new and exciting use of GPUs and related technologies in biomolecular simulation.
In addition to a programme of national and international speakers in the field, there is the opportunity to present a poster on your research.
The Parallel Processing for Imaging Applications conference, part of IS&T/SPIE’s Electronic Imaging conference, was held on January 24–25 in San Francisco. The conference had a large number of GPU papers (SPIE digital library link):
- Using a commercial graphical processing unit and the CUDA programming language to accelerate scientific image processing applications by Broussard and Ives
- GPGPU real-time texture analysis framework by Akhloufi et al.
- A parallel implementation of 3D Zernike moment analysis by Berjón et al.
- Visualization assisted by parallel processing by Lange et al.
- GPU color space conversion by Chase and Vondran
- Acceleration of the Retinex algorithm for image restoration by GPGPU/CUDA by Wang and Huang
- Video transcoding using GPU accelerated decoder by Hsu
- Real-time image deconvolution on the GPU by Klosowski and Krishnan
- GPU-completeness: theory and implications by Lin
- A parallel error diffusion implementation on a GPU by Zhang et al.
- Evaluation of CPU and GPU architectures for spectral image analysis algorithms by Fresse et al.
- Real-time 3D flash ladar imaging through GPU data processing by Wong et al.
- Advanced MRI reconstruction toolbox with accelerating on GPU by Wu et al.
- Accelerating image recognition on mobile devices using GPGPU by López et al.
- A GPU accelerated PDF transparency engine by Recker et al.
SpeedIT Extreme 1.2 introduces support for complex numbers in single and double precision for all SpeedIT methods, including fast sparse matrix-vector multiplication and the CG and BiCGSTAB solvers.
This new report covers all the performance improvements in the latest CUDA Toolkit 3.2 release, and compares CUDA parallel math library performance vs. commonly used CPU libraries.
Learn about the performance advantages of using the CUDA parallel math libraries for FFT, BLAS, sparse matrix operations, and random number generation.