The Swarm-NG package helps scientists and engineers harness the power of GPUs. In the early releases, Swarm-NG will focus on the integration of an ensemble of N-body systems evolving under Newtonian gravity. Swarm-NG does not replicate existing libraries that calculate forces for large-N systems on GPUs, but rather focuses on integrating an ensemble of many systems where N is small. This is of particular interest for astronomers who study the chaotic evolution of planetary systems. In the long term, we hope Swarm-NG will allow for the efficient parallel integration of user-defined systems of ordinary differential equations.
Ocelot is a dynamic compilation framework designed to map the explicitly data parallel execution model used by NVIDIA CUDA applications onto diverse multithreaded platforms. Ocelot includes a dynamic binary translator from Parallel Thread eXecution ISA (PTX) to many-core processors that leverages the Low Level Virtual Machine (LLVM) code generator to target x86 and other ISAs. The dynamic compiler is able to execute existing CUDA binaries without recompilation from source and supports switching between execution on an NVIDIA GPU and a many-core CPU at runtime. It has been validated against over 130 applications taken from the CUDA SDK, the UIUC Parboil benchmark, the Virginia Rodinia benchmarks, the GPU-VSIPL signal and image processing library, the Thrust library, and several domain specific applications.
This paper presents a high-level overview of the implementation of the Ocelot dynamic compiler, highlighting design decisions and trade-offs and showcasing their effect on application performance. Several novel code transformations are explored that are applicable only when compiling explicitly parallel applications, and traditional dynamic compiler optimizations are revisited for this new class of applications. This study is expected to inform the design of compilation tools for explicitly parallel programming models (such as OpenCL) as well as future CPU and GPU architectures.
This paper identifies several key areas of research and open problems for optimizing the performance of data parallel programs (such as CUDA and OpenCL) that were encountered when designing a binary translator from PTX to LLVM/x86. The complete implementation of Ocelot is available open-source under the new BSD license at http://code.google.com/p/gpuocelot. Ongoing work involves translating PTX to AMD’s IL allowing CUDA programs to be executed on AMD GPUs, developing parallel-aware PTX to PTX optimizations, and exploring new programming and execution models that are layered on PTX.
(Gregory Diamos, Andrew Kerr, Sudhakar Yalamanchili and Nathan Clark: “Ocelot: A dynamic compiler for bulk-synchronous applications in heterogeneous systems”. 19th International Conference on Parallel Architectures and Compilation Techniques (PACT 2010), September 2010.)
NVIDIA today announced the release of NVIDIA Parallel Nsight software, the industry’s first development environment for GPU-accelerated applications that works within Microsoft Visual Studio. “By adding functionality specifically for GPU Computing developers, Parallel Nsight makes the power of the GPU more accessible than ever before,” said Sanford Russell, GM of GPU Computing at NVIDIA. NVIDIA Parallel Nsight features a CUDA C/C++ debugger and application performance analyzer, as well as a graphics debugger and inspector. NVIDIA Parallel Nsight supports Windows HPC Server 2008, Windows 7 and Windows Vista. Download Parallel Nsight here.
OpenMM was designed to enhance the performance of almost any molecular dynamics (MD) simulation package by allowing the code to be executed on high-performance computer architectures, in particular graphics processing units (GPUs). Most molecular dynamics packages can be modified to call OpenMM, resulting in significant acceleration on such architectures without changing the way users interact with the MD package.
EM Photonics announced today the general availability of CULA 2.0, its GPU-accelerated linear algebra library. The new version provides support for NVIDIA GPUs based on the latest “Fermi” architecture.
CULA contains a LAPACK interface comprising over 150 mathematical routines from LAPACK, the industry standard for computational linear algebra. EM Photonics’ CULA library includes many popular routines, among them system solvers, least squares solvers, orthogonal factorizations, eigenvalue routines, and singular value decompositions. CULA offers performance up to an order of magnitude faster than highly optimized CPU-based linear algebra solvers. A variety of interfaces is available to integrate directly into existing code: programmers can easily call GPU-accelerated CULA from their C/C++, FORTRAN, MATLAB, or Python code, with no GPU programming experience required. CULA is available for every system equipped with GPUs based on the NVIDIA CUDA architecture, including 32- and 64-bit versions of Linux, Windows, and OS X.
More information is available at www.culatools.com.
- Using platform and device layers to build robust OpenCL™ applications
- Program compilation and kernel objects
- Managing buffers
- Kernel execution
- Kernel programming – basics
- Kernel programming – synchronization
- Matrix multiply – a case study
- Kernel programming – built-ins
Graphic Remedy is proud to announce the release of gDEBugger Version 5.6 for Windows, Linux, Mac OS X, iPhone and iPad. This version introduces iPhone and iPad on-device debugging and profiling capabilities, letting developers optimize their apps in real time on actual iPhone and iPad hardware while viewing invaluable inside information such as the device’s GPU, CPU, graphics driver and operating system performance counters.
gDEBugger is an OpenGL, OpenGL ES and OpenCL debugger and profiler that traces application activity on top of the OpenGL API, and lets programmers see what is happening within the graphics system implementation to find bugs and optimize OpenGL application performance. gDEBugger runs on Windows, Mac OS X, iPhone and Linux operating systems.
For our Australian readers interested in GPU computing: next week there will be two free workshops on GPU computing with CUDA. Both workshops will include a tutorial on CUDA C/C++ programming along with additional presentations by local speakers. Topics will include an overview of NVIDIA Tesla and the latest Fermi-architecture GPUs, CUDA programming, debugging and profiling tools, and optimization strategies.
- “High-Performance GPU Computing with NVIDIA CUDA”
Wednesday, July 14
8:45 – 14:00
The University of New South Wales, Sydney
- “High-Performance GPU Computing with NVIDIA CUDA and Fermi”
Thursday, July 15
9:15 – 15:30
The Australian National University, Canberra
Follow the links above for full details. Space is limited, so be sure to RSVP to the addresses provided.
SagivTech plans to offer a three-day course on image processing with CUDA in the USA this September. This advanced course is intended for experienced CUDA developers looking for optimization methods for image processing applications implemented on NVIDIA GPUs.
The course will be held in the San Francisco area, 9am to 5pm, September 27–29.
The OpenCL 1.1 specification, including header files and documentation, has been released. It includes significant new functionality:
- Host-thread safety, enabling OpenCL commands to be enqueued from multiple host threads
- Sub-buffer objects to distribute regions of a buffer across multiple OpenCL devices
- User events to enable enqueued OpenCL commands to wait on external events
- Event callbacks that can be used to enqueue new OpenCL commands based on event state changes in a non-blocking manner
- 3-component vector data types
- Global work-offset, which enables kernels to operate on different portions of the NDRange
- Memory object destructor callback
- Read, write and copy a 1D, 2D or 3D rectangular region of a buffer object
- Mirrored repeat addressing mode and additional image formats
- New OpenCL C built-in functions such as integer clamp, shuffle and asynchronous strided copies
- Improved OpenGL interoperability through efficient sharing of images and buffers by linking OpenCL event objects to OpenGL fence sync objects
- Optional features in OpenCL 1.0 have been brought into core OpenCL 1.1, including: writes to a pointer of bytes or shorts from a kernel, and conversion of atomics to 32-bit integers in local or global memory