Thrust: a Template Library for CUDA Applications

May 31st, 2009

Thrust is an open-source template library for data-parallel CUDA applications featuring an interface similar to the C++ Standard Template Library (STL). Thrust provides a flexible, high-level interface for GPU programming that greatly enhances developer productivity without sacrificing performance. Note that Thrust supersedes Komrade, the initial release of the library; all future development will proceed under the new name.

Thrust is open source under the Apache 2.0 license and available now at http://thrust.googlecode.com. Download Thrust and check out the Thrust tutorial to get started.

The thrust::host_vector and thrust::device_vector containers simplify memory management and transfers between host and device. Thrust provides efficient algorithms for:

  • sorting – thrust::sort and thrust::sort_by_key
  • transformations – thrust::transform
  • reductions – thrust::reduce and thrust::transform_reduce
  • scans – thrust::inclusive_scan and thrust::transform_inclusive_scan
  • And many more!


MemtestG80: A Memory and Logic Tester for NVIDIA CUDA-enabled GPUs

May 25th, 2009

MemtestG80 is a software-based tester that checks for “soft errors” in the memory or logic of NVIDIA CUDA-enabled GPUs. It uses a variety of proven test patterns (some custom, some based on Memtest86) to verify the correct operation of GPU memory and logic. It is a useful tool for ensuring that a given GPU does not produce “silent errors,” which may corrupt the results of a computation without triggering an overt error.

Precompiled binaries for Windows, Linux and OSX, as well as the source code, are available for download under the LGPL license. MemtestG80 is developed by Imran Haque and Vijay Pande.

GPUmat: GPU toolbox for MATLAB

May 25th, 2009

GPUmat, developed by the GP-You Group, allows MATLAB code to benefit from the compute power of modern GPUs. It is built on top of NVIDIA CUDA. The acceleration is transparent to the user: only the declaration of variables needs to be changed, using new GPU-specific keywords, and algorithms need not be modified. A wide range of standard MATLAB functions has been implemented. GPUmat is available as freeware for Windows and Linux from the GP-You download page.

University of Melbourne Workshop: High-Performance GPU Computing with NVIDIA CUDA

May 12th, 2009

A half-day workshop and discussion forum will be held from 8:45-13:00, Wednesday May 27, in Lecture theatre 3 of the Alan Gilbert Building at The University of Melbourne, Victoria, Australia. A light lunch will be supplied afterwards from 13:00-14:00. With speakers from NVIDIA and Xenon Systems, this workshop is hosted by the ARC Centre of Excellence for Mathematics and Statistics of Complex Systems (MASCOS) and the Department of Mathematics and Statistics at the University of Melbourne.

Due to recent advances in GPU hardware and software, so-called general-purpose GPU computing (GPGPU) is rapidly expanding from niche applications into the mainstream of high-performance computing. For HPC researchers, hardware gains have increased the imperative to learn this new computing paradigm, while high-level programming languages (in particular, CUDA) have lowered the barrier to entry, so that it is now possible for new developers to rapidly port suitable applications from C/C++ running on CPUs to CUDA running on GPUs. For appropriate applications, GPUs have significant, even dramatic, advantages over CPUs in terms of both performance per dollar and performance per watt.

For more information see the workshop announcement.

Barra: A Modular Functional GPU Simulator

May 4th, 2009

Barra, developed by Sylvain Collange, Marc Daumas, David Defour and David Parello from Université de Perpignan, simulates CUDA programs at the assembly language level (NVIDIA PTX ISA). Its ultimate goal is to provide a 100% bit-accurate simulation, offering bug-for-bug compatibility with NVIDIA G80-based GPUs. It works directly with CUDA executables; neither source modification nor recompilation is required. Barra is primarily intended as a tool for research on computer architecture, although it can also be used to debug, profile and optimize CUDA programs at the lowest level. For more details and downloads, see the Barra wiki. A technical report is also available.

University of Western Australia GPU Computing Workshop

April 29th, 2009

A GPU computing workshop and discussion forum will be held at the UWA University Club on Thursday, May 7th. The workshop aims to provide a detailed introduction to GPU computing with CUDA and NVIDIA Tesla computing solutions, and to present research in GPU and heterogeneous computing being undertaken in Western Australia.

Mark Harris (NVIDIA) will present an introduction to the CUDA architecture, programming model, and the programming environment of C for CUDA, as well as an overview of the Tesla GPU architecture, a live programming demo, and strategies for optimizing CUDA applications for the GPU. To better enable the uptake of this technology, Dragan Dimitrovici from Xenon Systems will provide an overview of CUDA-enabled hardware options. The workshop will also include brief presentations of some of the projects using CUDA within Western Australia, including a presentation from Professor Karen Haines (WASP@UWA) on parallel computing strategies required for optimizing applications for GPU and heterogeneous computing.

Please see the workshop flyer for full details.

NVIDIA First to Roll out OpenCL Drivers & SDK

April 20th, 2009

From an NVIDIA Press Release:

SANTA CLARA, CA—APRIL 20, 2009—NVIDIA Corporation, the inventor of the GPU, today announced the release of its OpenCL driver and software development kit (SDK) to developers participating in its OpenCL Early Access Program. NVIDIA is providing this release to solicit early feedback in advance of a beta release which will be made available to all GPU Computing Registered Developers in the coming months.

Developers can apply to become a GPU Computing Registered Developer at: www.nvidia.com/opencl

“The OpenCL standard was developed on NVIDIA GPUs and NVIDIA was the first company to demonstrate OpenCL code running on a GPU,” said Tony Tamasi, senior vice president of technology and content at NVIDIA. “Being the first to release an OpenCL driver to developers cements NVIDIA’s leadership in GPU Computing and is another key milestone in our ongoing strategy to make the GPU the soul of the modern PC.”

At the core of NVIDIA’s GPU Computing strategy is the massively parallel CUDA™ architecture, which NVIDIA pioneered and has been shipping since 2006. Accessible today through familiar industry-standard programming environments such as C, Java, Fortran and Python, the CUDA architecture supports all manner of computational interfaces and, as such, is a perfect complement to OpenCL. Enabled on over 100 million NVIDIA GPUs, the CUDA architecture is enabling developers to innovate with the GPU and unleash never-before-seen performance across a wide range of applications.


eResearch South Australia Workshop: High Performance GPU Computing with NVIDIA CUDA

April 14th, 2009

This workshop, hosted by eResearch SA and presented by Mark Harris (NVIDIA) with Dragan Dimitrovici (Xenon Systems), aims to provide a detailed introduction to GPU computing with CUDA and NVIDIA GPUs such as the Tesla series of high-performance computing processors.

The workshop will be held from 9:00-13:00 on Tuesday 28th April, in the Henry Ayers Room, Ayers House, 288 North Terrace, Adelaide (opposite the Royal Adelaide Hospital).

CUDA is NVIDIA’s revolutionary parallel computing architecture for GPUs. The available software tools include a C compiler for developers to build applications, as well as useful libraries for high-performance computing (BLAS, FFT, etc.). Several widely-used scientific applications have been ported to run on GPUs using CUDA. This half-day workshop will provide an introduction to the CUDA architecture, programming model, and the programming environment of C for CUDA, as well as an overview of the Tesla GPU architecture, a live programming demo, and strategies for optimizing CUDA applications for the GPU. The workshop will also include a brief presentation of some of the current NVIDIA hardware offerings for GPU computing using CUDA.

The workshop is free, but space is limited. For complete details and registration, visit the workshop web page or download the brochure.

Molecular dynamics on NVIDIA GPUs with speed-ups up to two orders of magnitude

April 13th, 2009

ACEMD is a production-class bio-molecular dynamics (MD) simulation program designed specifically for GPUs. It achieves supercomputing-scale performance of 40 nanoseconds/day for all-atom protein systems of over 23,000 atoms. With GPU technology it has become possible to run a microsecond-long trajectory of an all-atom molecular system in explicit water on a single workstation equipped with just 3 GPUs; the same performance would have required over 100 CPU cores. Visit the project website for details.

(M. J. Harvey, G. Giupponi, G. De Fabritiis, ACEMD: Accelerating bio-molecular dynamics in the microsecond time-scale. Link to preprint.)

NVIDIA GPU Computing Tutorial Webinar Series

April 8th, 2009

This series of free web seminars (“webinars”), starting April 15th, 2009, will cover the basics of data-parallel computing on GPUs using NVIDIA’s CUDA architecture. Tutorials will be presented by the NVIDIA Developer Technology team and will cover many topics, including C for CUDA, programming with the OpenCL API, using DirectX Compute, and performance optimization techniques.

Webinar topics, schedules and registration information will be updated regularly. Pre-registration is required; registration details will be emailed upon successful registration.
