New rCUDA 4.1 version available

March 26th, 2014

A new version of the rCUDA middleware has been released (version 4.1). In addition to fixing some bugs related to asynchronous memory transfers, the new release provides support for:

  • CUDA 5.5 Runtime API
  • Mellanox Connect-IB network adapters
  • Dynamic Parallelism
  • cuFFT and cuBLAS libraries

The rCUDA middleware lets you seamlessly use, within your cluster, GPUs installed in compute nodes other than the one executing the CUDA application, without modifying or recompiling your program. Please visit www.rcuda.net for more details about the rCUDA technology.

GPU Boost on NVIDIA’s Tesla K40 GPUs

March 26th, 2014

This blog post explains GPU Boost, a new user controllable feature available on Tesla GPUs. Case studies and benchmarks for reverse time migration and an electromagnetic solver are discussed.

Acceleware OpenCL Training June 2-5, 2014

March 5th, 2014

This hands-on four-day course will teach you how to write applications in OpenCL that fully leverage the parallel processing capabilities of the GPU. Taught by Acceleware developers who bring real-world experience to the classroom, students will benefit from:

  • Hands-on exercises and progressive lectures
  • Individual laptops with AMD Fusion APU for student use
  • Small class sizes to maximize learning
  • 90 days of post-training support

For more information please visit: http://acceleware.com/training/1028

PARALUTION – new release 0.6.0

February 26th, 2014

PARALUTION is a library for sparse iterative methods that can run on various parallel devices, including multi-core CPUs, GPUs (CUDA and OpenCL) and the Intel Xeon Phi. The new 0.6.0 version provides the following new features:

  • Windows support (OpenMP backend)
  • FGMRES (Flexible GMRES)
  • (R)CMK (Cuthill–McKee) ordering
  • Thread-core affiliation (for Host OpenMP)
  • Asynchronous transfers (CUDA backend)
  • Pinned host memory allocation when using the CUDA backend
  • Verbose output for debugging
  • Easy-to-use timing functions in the examples

PARALUTION 0.6.0 is available at http://www.paralution.com.
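The (reverse) Cuthill–McKee ordering listed above reduces matrix bandwidth with a breadth-first traversal that visits neighbors in order of increasing degree. A minimal pure-Python sketch of the idea follows; this is an illustration of the classic algorithm, not PARALUTION's implementation:

```python
from collections import deque

def cuthill_mckee(adj, reverse=True):
    """(Reverse) Cuthill-McKee ordering for a symmetric sparsity
    pattern given as an adjacency list {node: set(neighbors)}."""
    degree = {v: len(adj[v]) for v in adj}
    visited = set()
    order = []
    # Handle disconnected graphs: restart the BFS from the
    # unvisited node of minimum degree.
    while len(order) < len(adj):
        start = min((v for v in adj if v not in visited),
                    key=lambda v: degree[v])
        visited.add(start)
        queue = deque([start])
        while queue:
            v = queue.popleft()
            order.append(v)
            # Visit unvisited neighbors by increasing degree.
            for w in sorted(adj[v] - visited, key=lambda w: degree[w]):
                visited.add(w)
                queue.append(w)
    return order[::-1] if reverse else order

# Path graph 0-1-2-3: RCM keeps connected nodes close together.
print(cuthill_mckee({0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}))
# → [3, 2, 1, 0]
```

Applying the resulting permutation to the rows and columns of a sparse matrix clusters its nonzeros near the diagonal, which typically improves cache behavior and preconditioner quality.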

PyViennaCL: Python wrapper for GPU-accelerated linear algebra

February 26th, 2014

The new free, open-source PyViennaCL 1.0.0 release provides Python bindings for the ViennaCL linear algebra and numerical computation library for GPGPU and heterogeneous systems. ViennaCL itself is a header-only C++ library, so these bindings make its fast OpenCL and CUDA algorithms available to Python programmers in a way that is idiomatic and compatible with the Python community’s most popular scientific packages, NumPy and SciPy. Support through the Google Summer of Code 2013 for the primary developer Toby St Clere Smithe is greatly appreciated.

More information and download: PyViennaCL Home

Maximizing Shared Memory Bandwidth on NVIDIA Kepler GPUs

February 17th, 2014

This tutorial by Dan Cyca outlines the shared memory configurations for NVIDIA Fermi and Kepler architectures, and demonstrates how to rewrite kernels to take advantage of the changes in Kepler’s shared memory architecture.
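For background on what the tutorial covers: Kepler kept Fermi's 32 shared-memory banks but added an optional 8-byte bank width (selectable at runtime), so a warp reading consecutive 8-byte values can be conflict-free instead of incurring two-way conflicts. The following pure-Python sketch models the address-to-bank mapping; it is a simplification for illustration, not NVIDIA's exact hardware behavior:

```python
from collections import Counter

NUM_BANKS = 32  # both Fermi and Kepler expose 32 shared memory banks

def bank_of(byte_addr, bank_width):
    """Map a shared-memory byte address to a bank index for a given
    bank width (4 bytes on Fermi; 4 or 8 bytes on Kepler)."""
    return (byte_addr // bank_width) % NUM_BANKS

def conflict_degree(addresses, bank_width):
    """Worst-case bank conflict for one warp's accesses: the largest
    number of distinct addresses that land in the same bank."""
    hits = Counter(bank_of(a, bank_width) for a in set(addresses))
    return max(hits.values())

# One warp (32 lanes) reading consecutive doubles (8-byte elements):
addrs = [8 * lane for lane in range(32)]
print(conflict_degree(addrs, 4))  # → 2 (two-way conflicts with 4-byte banks)
print(conflict_degree(addrs, 8))  # → 1 (conflict-free with 8-byte Kepler banks)
```

This is the essence of the rewrite the tutorial demonstrates: kernels operating on 64-bit data can switch the device into 8-byte bank mode (or pad their shared-memory layout) so that each lane of a warp hits a distinct bank.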

OpenCLIPP: an OpenCL library for optimized image processing primitives

February 2nd, 2014

OpenCLIPP is a library providing processing primitives (image processing primitives in the first version) implemented with OpenCL for fast execution on dedicated computing devices like GPUs. Two interfaces are provided: C (similar to the Intel IPP and NVIDIA NPP libraries) and C++. OpenCLIPP is free for personal and commercial use. It can be downloaded from GitHub.

Related publication:
M. Akhloufi, A. Campagna, “OpenCLIPP: OpenCL Integrated Performance Primitives library for computer vision applications”, Proc. SPIE Electronic Imaging 2014, Intelligent Robots and Computer Vision XXXI: Algorithms and Techniques, P. 9025-31, February 2014.

Acceleware CUDA Training Feb 25-28, 2014

January 15th, 2014

Developed in partnership with NVIDIA, this hands-on four-day course will teach you how to write and optimize applications that fully leverage the parallel processing capabilities of the GPU. Benefits include:

  • Hands-on exercises and progressive lectures
  • Individual laptops equipped with NVIDIA GPUs for student use
  • Small class sizes to maximize learning
  • 90 days of post-training support – NEW!

February 25-28, 2014, Baltimore, MD, USA, details and registration.

Webinar: HOOMD-blue for Polymer Simulations and Big Systems

January 15th, 2014

This webinar will demonstrate how real-world computational research in soft matter physics can be accelerated on a GPU-equipped desktop computer with the HOOMD-blue molecular dynamics software. It presents how to set up a simulation of a dense polymer liquid and how to analyze and visualize the results, and demonstrates how self-assembled ordered structures of block copolymers emerge from an initially disordered configuration. With external potentials, an artificially ordered phase can be produced as well. HOOMD-blue’s easy-to-use scripting interface and plug-ins are used to create a productive workflow and to extend its capabilities. As an advanced topic, the webinar discusses how the upcoming version of HOOMD-blue can be used on compute clusters running on tens to hundreds of GPUs in parallel, to boost simulations of long polymer chains or large-scale systems.

January 21, 2014, 11:00 a.m. EST, Registration required.

Javascript Library for GPGPU

December 30th, 2013

WebCLGL is a free JavaScript library for general-purpose computing using WebGL. It offers a WebCL-like coding style and translates the operations to WebGL code. The library does not match the future WebCL specification exactly, nor does it have all of WebCL’s advantages, but it is already very usable.
