July 29th, 2010
From the paper’s abstract:
A wide class of finite element electromagnetic applications requires computing very large sparse matrix vector multiplications (SMVM). Due to the sparsity pattern and size of the matrices, solvers can run relatively slowly. The rapid evolution of graphic processing units (GPUs) in performance, architecture and programmability make them very attractive platforms for accelerating computationally intensive kernels such as SMVM. This work presents a new algorithm to accelerate the performance of the SMVM kernel on graphic processing units.
From the paper’s conclusion:
We have introduced several efficient techniques to accelerate the execution of the sparse matrix vector multiplication (SMVM) on NVIDIA graphic processing units. The proposed methods increased the performance of the SMVM kernel on GT 8800 up to 18.8 times compared to the quad core CPU and 3 times compared to previous work by Bell and Garland on accelerating SMVM for GPUs.
(M. Mehri Dehnavi, D. Fernandez and D. Giannacopoulos: “Finite element sparse matrix vector multiplication on GPUs”. IEEE Transactions on Magnetics, vol. 46, no. 8, pp. 2982-2985, August 2010. DOI 10.1109/TMAG.2010.2043511)
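For readers new to the kernel in question, the baseline that such papers improve upon is a straightforward CSR-format sparse matrix-vector product with one thread per row. The CUDA sketch below is only that baseline (names and launch configuration are illustrative); it does not reproduce the paper’s optimized algorithm.

```cuda
// Baseline sparse matrix-vector multiply y = A*x with A in compressed
// sparse row (CSR) format: one thread per matrix row. Optimized SMVM
// kernels such as the paper's restructure data layout and memory access
// far beyond this naive scheme.
__global__ void spmv_csr(int num_rows,
                         const int*   row_ptr,   // size num_rows + 1
                         const int*   col_idx,   // size nnz
                         const float* values,    // size nnz
                         const float* x,
                         float*       y)
{
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < num_rows) {
        float sum = 0.0f;
        for (int j = row_ptr[row]; j < row_ptr[row + 1]; ++j)
            sum += values[j] * x[col_idx[j]];
        y[row] = sum;
    }
}

// Illustrative launch: one thread per row, 256 threads per block.
// spmv_csr<<<(num_rows + 255) / 256, 256>>>(num_rows, row_ptr, col_idx, values, x, y);
```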
July 21st, 2010
Ocelot is a dynamic compilation framework designed to map the explicitly data parallel execution model used by NVIDIA CUDA applications onto diverse multithreaded platforms. Ocelot includes a dynamic binary translator from Parallel Thread eXecution ISA (PTX) to many-core processors that leverages the Low Level Virtual Machine (LLVM) code generator to target x86 and other ISAs. The dynamic compiler is able to execute existing CUDA binaries without recompilation from source and supports switching between execution on an NVIDIA GPU and a many-core CPU at runtime. It has been validated against over 130 applications taken from the CUDA SDK, the UIUC Parboil benchmark, the Virginia Rodinia benchmarks, the GPU-VSIPL signal and image processing library, the Thrust library, and several domain specific applications.
This paper presents a high level overview of the implementation of the Ocelot dynamic compiler highlighting design decisions and trade-offs, and showcasing their effect on application performance. Several novel code transformations are explored that are applicable only when compiling explicitly parallel applications and traditional dynamic compiler optimizations are revisited for this new class of applications. This study is expected to inform the design of compilation tools for explicitly parallel programming models (such as OpenCL) as well as future CPU and GPU architectures.
This paper identifies several key areas of research and open problems for optimizing the performance of data parallel programs (such as CUDA and OpenCL) that were encountered when designing a binary translator from PTX to LLVM/x86. The complete implementation of Ocelot is available open-source under the new BSD license at http://code.google.com/p/gpuocelot. Ongoing work involves translating PTX to AMD’s IL allowing CUDA programs to be executed on AMD GPUs, developing parallel-aware PTX to PTX optimizations, and exploring new programming and execution models that are layered on PTX.
(Gregory Diamos, Andrew Kerr, Sudhakar Yalamanchili and Nathan Clark: “Ocelot: A dynamic compiler for bulk-synchronous applications in heterogeneous systems”. 19th International Conference on Parallel Architectures and Compilation Techniques (PACT 2010), September 2010.)
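For context, the PTX that Ocelot consumes is what nvcc already embeds in every CUDA fat binary. The kernel below is a generic CUDA example (not taken from the Ocelot distribution); once compiled to PTX, a dynamic compiler like Ocelot can lower it through LLVM to x86 and execute it on CPU cores without recompiling the source.

```cuda
// A generic CUDA kernel. nvcc compiles it to PTX and embeds the PTX in
// the application binary; a PTX-level dynamic compiler such as Ocelot can
// pick up that PTX at runtime and retarget it (e.g., through LLVM) to a
// many-core CPU instead of an NVIDIA GPU.
__global__ void saxpy(int n, float a, const float* x, float* y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}
```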
July 18th, 2010
NVIDIA today announced the release of NVIDIA Parallel Nsight software, the industry’s first development environment for GPU-accelerated applications that works with Microsoft Visual Studio. “By adding functionality specifically for GPU Computing developers, Parallel Nsight makes the power of the GPU more accessible than ever before,” said Sanford Russell, GM of GPU Computing at NVIDIA. NVIDIA Parallel Nsight features a CUDA C/C++ debugger and application performance analyzer, as well as a graphics debugger and inspector. NVIDIA Parallel Nsight supports Windows HPC Server 2008, Windows 7 and Windows Vista. Download Parallel Nsight here.
July 11th, 2010
Simbios, the NIH Center for Biomedical Computation at Stanford University, is excited to announce the release of OpenMM 2.0.
OpenMM was designed to enhance the performance of almost any molecular dynamics (MD) simulation package by allowing the code to be executed on high-performance computer architectures, in particular graphics processing units (GPUs). Most molecular dynamics packages can be modified to call OpenMM, resulting in significant acceleration on such architectures without changing the way users interact with the MD package.
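To give a feel for what “calling OpenMM” amounts to, here is a hedged C++ sketch of the handoff: the host code builds a System, attaches forces, and lets OpenMM run the integration on the GPU. Particle masses, force parameters and step counts are placeholders, and class names follow the public OpenMM C++ API, which may differ in detail from the 2.0 release.

```cpp
#include <OpenMM.h>
#include <vector>

// Hedged sketch of an MD package handing its system to OpenMM.
// All physical parameters below are placeholders, not real force-field values.
void runOnGpu()
{
    OpenMM::System system;
    OpenMM::NonbondedForce* nb = new OpenMM::NonbondedForce();
    for (int i = 0; i < 2; ++i) {                // two argon-like atoms
        system.addParticle(39.95);               // mass in amu
        nb->addParticle(0.0, 0.34, 0.99);        // charge, sigma (nm), epsilon (kJ/mol)
    }
    system.addForce(nb);                         // the System takes ownership of the force

    OpenMM::VerletIntegrator integrator(0.002);  // 2 fs time step
    OpenMM::Context context(system, integrator); // selects the fastest available platform

    std::vector<OpenMM::Vec3> positions;
    positions.push_back(OpenMM::Vec3(0.0, 0.0, 0.0));
    positions.push_back(OpenMM::Vec3(0.5, 0.0, 0.0));
    context.setPositions(positions);

    integrator.step(1000);                       // force evaluation and integration run on the GPU
}
```

The MD package keeps its own file formats and user interface; only this inner loop is delegated to OpenMM.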
July 11th, 2010
A 30,000-hexahedron FEM model.
In this paper we present a GPU-based multigrid approach for simulating elastic deformable objects in real time. Our method is based on a finite element discretization of the deformable object using hexahedra. It draws upon recent work on multigrid schemes for the efficient numerical solution of partial differential equations on such discretizations. Due to the regular shape of the numerical stencil induced by the hexahedral regime, and since we use matrix-free formulations of all multigrid steps, computations and data layout can be restructured to avoid execution divergence and to support memory access patterns which enable the hardware to coalesce multiple memory accesses into single memory transactions. This enables us to effectively exploit the GPU’s parallel processing units and high memory bandwidth via the CUDA parallel programming API. We demonstrate performance gains of up to a factor of 12 compared to a highly optimized CPU implementation. By using our approach, physics-based simulation at an object resolution of 64^3 is achieved at interactive rates.
(Christian Dick, Joachim Georgii and Rüdiger Westermann: “A Real-Time Multigrid Finite Hexahedra Method for Elasticity”, http://wwwcg.in.tum.de/Research/Publications/CompMechanics)
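The core idea of matrix-free stencil evaluation on a regular grid, arranged so that neighboring threads issue coalescable loads, can be illustrated with a much-simplified CUDA sketch: a scalar 7-point Jacobi relaxation on a padded 3D grid. Grid layout, stencil weights and boundary handling here are placeholders; the paper’s elasticity operator couples three displacement components per node over a 27-node neighborhood and is embedded in a full multigrid cycle.

```cuda
// Simplified matrix-free relaxation step on a regular 3D grid, one thread
// per (x, y) column marching along z. Consecutive threads along x touch
// consecutive addresses, so global memory reads and writes coalesce.
// The stencil replaces an explicit sparse matrix row.
__global__ void jacobi7(const float* u, const float* f, float* u_new,
                        int nx, int ny, int nz)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < 1 || y < 1 || x >= nx - 1 || y >= ny - 1)
        return;                                   // skip the boundary layer

    for (int z = 1; z < nz - 1; ++z) {
        int i = (z * ny + y) * nx + x;            // linear index, x fastest
        float nbrs = u[i - 1]       + u[i + 1]
                   + u[i - nx]      + u[i + nx]
                   + u[i - nx * ny] + u[i + nx * ny];
        u_new[i] = (f[i] + nbrs) / 6.0f;          // one Jacobi sweep for -Laplace(u) = f, unit spacing
    }
}
```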
July 8th, 2010
EM Photonics announced today the general availability of CULA 2.0, its GPU-accelerated linear algebra library. The new version provides support for NVIDIA GPUs based on the latest “Fermi” architecture.
CULA provides a LAPACK interface comprising over 150 routines from LAPACK, the industry standard for computational linear algebra. EM Photonics’ CULA library includes many popular routines, including system solvers, least squares solvers, orthogonal factorizations, eigenvalue routines, and singular value decompositions. CULA offers performance up to an order of magnitude faster than highly optimized CPU-based linear algebra solvers. A variety of interfaces is available for integrating CULA directly into existing code: programmers can easily call GPU-accelerated CULA from their C/C++, FORTRAN, MATLAB, or Python codes, without any GPU programming experience. CULA is available for every system equipped with GPUs based on the NVIDIA CUDA architecture, including 32- and 64-bit versions of Linux, Windows, and OS X.
More information is available at www.culatools.com.
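For programmers already using LAPACK, the call pattern is intended to be close to a drop-in replacement. The sketch below assumes CULA’s C interface with LAPACK-style names (culaInitialize, culaSgesv, culaShutdown); argument order follows LAPACK conventions, but check the CULA reference manual for the exact prototypes in your release.

```cpp
#include <cula.h>      // CULA's C interface; header name per the CULA documentation
#include <vector>

// Hedged sketch: solve the dense single-precision system A*x = b with a
// LAPACK-style sgesv call running on the GPU. A is n x n, column-major;
// on success b is overwritten with the solution x.
int solveOnGpu(int n, std::vector<float>& A, std::vector<float>& b)
{
    std::vector<int> ipiv(n);                     // pivot indices, as in LAPACK

    if (culaInitialize() != culaNoError)          // bind to a CUDA-capable GPU
        return -1;

    culaStatus status = culaSgesv(n, 1, &A[0], n, &ipiv[0], &b[0], n);

    culaShutdown();                               // release the GPU
    return (status == culaNoError) ? 0 : -1;
}
```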
July 8th, 2010
This tutorial by Benedict R. Gaster from AMD provides a detailed introduction to OpenCL. Covered topics include:
- Using platform and device layers to build robust OpenCL™ applications
- Program compilation and kernel objects
- Managing buffers
- Kernel execution
- Kernel programming – basics
- Kernel programming – synchronization
- Matrix multiply – a case study
- Kernel programming – built-ins
July 7th, 2010
Graphic Remedy is proud to announce the release of gDEBugger Version 5.6 for Windows, Linux, Mac OS X, iPhone and iPad. This version introduces iPhone and iPad on-device debugging and profiling abilities, letting developers optimize their apps in real-time on actual iPhone and iPad hardware, while viewing invaluable inside information such as the device’s GPU, CPU, graphics driver and operating system performance counters.
gDEBugger is an OpenGL, OpenGL ES and OpenCL debugger and profiler that traces application activity on top of the OpenGL API, and lets programmers see what is happening within the graphics system implementation to find bugs and optimize OpenGL application performance. gDEBugger runs on Windows, Mac OS X, iPhone and Linux operating systems.
July 4th, 2010
For our Australian readers interested in GPU computing: next week there will be two free workshops on GPU computing with CUDA. Both workshops will include a tutorial on CUDA C/C++ programming along with additional presentations by local speakers. Topics will include an overview of NVIDIA Tesla and the latest Fermi-architecture GPUs, CUDA programming, debugging and profiling tools, and optimization strategies.
Follow the links above for full details. Space is limited, so be sure to RSVP to the addresses provided.
Recent advances in computing have led to an explosion in the amount of data being generated. Processing the ever-growing data in a timely manner has made throughput computing an important aspect for emerging applications. Our analysis of a set of important throughput computing kernels shows that there is an ample amount of parallelism in these kernels which makes them suitable for today’s multi-core CPUs and GPUs. In the past few years there have been many studies claiming GPUs deliver substantial speedups (between 10X and 1000X) over multi-core CPUs on these kernels. To understand where such a large performance difference comes from, we perform a rigorous performance analysis and find that, after applying optimizations appropriate for both CPUs and GPUs, the performance gap between an NVIDIA GTX280 processor and the Intel Core i7-960 processor narrows to only 2.5x on average. In this paper, we discuss optimization techniques for both CPU and GPU, analyze what architecture features contributed to performance differences between the two architectures, and recommend a set of architectural features which provide significant improvement in architectural efficiency for throughput kernels.
(Victor W. Lee, Changkyu Kim, Jatin Chhugani, Michael Deisher, Daehyun Kim, Anthony D. Nguyen, Nadathur Satish, Mikhail Smelyanskiy, Srinivas Chennupaty, Per Hammarlund, Ronak Singhal and Pradeep Dubey: “Debunking the 100X GPU vs. CPU myth: an evaluation of throughput computing on CPU and GPU”, SIGARCH Computer Architecture News 38(3), pp. 451-460, June 2010. DOI Link.)