Performance Portable Parallel Programming – Target CUDA and OpenMP in a Unified Codebase

August 14th, 2014

Hybrid Fortran is an open-source, directive-based extension for the Fortran language. It lets HPC programmers keep writing Fortran code the way they are used to – only now with GPGPU support. It achieves performance portability by allowing different storage orders and loop structures for the CPU and GPU versions. All computational code stays the same as in the respective CPU version; for example, it can be kept at a low dimensionality even when the GPU version needs to be privatized in more dimensions in order to achieve a speedup. Hybrid Fortran performs the necessary transformations at compile time, so there is no runtime overhead. A Python-based preprocessor parses these annotations together with the structure, declarations, accessors and procedure calls of the Fortran user code, and then writes two separate versions of the code – one for the CPU with OpenMP parallelization and one for the GPU with CUDA Fortran. More details: http://typhooncomputing.com/?p=416

CUDA 5 Production Release Now Available

October 15th, 2012

The CUDA 5 Production Release is now available as a free download at www.nvidia.com/getcuda.
This powerful new version of the pervasive CUDA parallel computing platform and programming model can be used to accelerate more applications using the following four new features (among many others).

• CUDA Dynamic Parallelism brings GPU acceleration to new algorithms by enabling GPU threads to directly launch CUDA kernels and call GPU libraries.
• A new device code linker enables developers to link external GPU code and build libraries of GPU functions.
• NVIDIA Nsight Eclipse Edition enables you to develop, debug and optimize CUDA code all in one IDE for Linux and Mac OS.
• GPUDirect support for RDMA provides direct communication between GPUs in different cluster nodes.

As a demonstration of the power of Dynamic Parallelism and device code linking, CUDA 5 includes a device-callable version of the CUBLAS linear algebra library, so threads already running on the GPU can invoke CUBLAS functions on the GPU.
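As a minimal sketch of what Dynamic Parallelism looks like in practice (an illustrative example, not taken from the CUDA 5 samples; the kernel names are invented here), a kernel running on the device can launch another kernel directly with the familiar `<<<...>>>` syntax:

```cuda
#include <cstdio>

// Child kernel: launched from the GPU, not from the host.
__global__ void childKernel(int parentThread)
{
    printf("child grid launched by parent thread %d\n", parentThread);
}

// Parent kernel: with Dynamic Parallelism (compute capability 3.5+,
// compiled with relocatable device code), device code can launch kernels.
__global__ void parentKernel()
{
    childKernel<<<1, 1>>>(threadIdx.x);
    cudaDeviceSynchronize();  // wait for the child grid to finish
}

int main()
{
    parentKernel<<<1, 4>>>();
    cudaDeviceSynchronize();
    return 0;
}
```

Building such code relies on the new device code linker as well, e.g. `nvcc -arch=sm_35 -rdc=true example.cu -lcudadevrt`.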

CUDA 4.1 Released

January 26th, 2012

Today NVIDIA released CUDA 4.1, including a new CUDA Toolkit, SDK, Visual Profiler, Parallel Nsight IDE and NVIDIA device driver.

CUDA 4.1 makes it easier to accelerate scientific research with GPUs with key features including

  • a redesigned Visual Profiler with automated performance analysis and expert guidance;
  • a new LLVM-based compiler that generates up to 10% faster code; and
  • 1000+ new imaging and signal processing functions in the NPP library.

The CUSPARSE library included with CUDA 4.1 has a new tridiagonal solver and 2x faster sparse matrix-vector multiplication using the ELL hybrid format, and the CURAND library included with CUDA 4.1 has two new random number generators.

FortranCL: An OpenCL interface for Fortran 90

December 30th, 2011

FortranCL is an interface to OpenCL from Fortran 90 programs, and it is distributed under the LGPL free software license. It allows Fortran programmers to directly execute code on GPUs or other massively parallel processors. The interface is designed to be as close to the C OpenCL interface as possible, and it is written in native Fortran 90 with type checking. FortranCL is not complete yet, but it includes enough subroutines to write GPU-accelerated code in Fortran. More information: http://code.google.com/p/fortrancl/

Microsoft Announces C++ AMP

June 26th, 2011

Microsoft has announced that the next version of Visual Studio will contain technology labeled C++ Accelerated Massive Parallelism (C++ AMP) to enable C++ developers to take advantage of the GPU for computation purposes. More information is available in the MSDN blog posts here and here.

Intel announces a high-performance SPMD compiler for the CPU

June 26th, 2011

Intel has announced ispc, the Intel SPMD Program Compiler, now available in source and binary form from http://ispc.github.com.

ispc is a new compiler for “single program, multiple data” (SPMD) programs – the same model that is used for (GP)GPU programming, but here targeted at CPUs. ispc compiles a C-based SPMD programming language to run on the SIMD units of CPUs; it frequently provides a 3x or greater speedup on CPUs with 4-wide SSE units, without any of the difficulty of writing intrinsics code. A few principles and goals were behind the design of ispc:

  • To build a small C-like language that would deliver excellent performance to performance-oriented programmers who want to run SPMD programs on the CPU.
  • To provide a thin abstraction layer between the programmer and the hardware—in particular, to have an execution and data model where the programmer can cleanly reason about the mapping of their source program to compiled assembly language and the underlying hardware.
  • To make it possible to harness the computational power of the SIMD vector units without the extremely low-programmer-productivity activity of directly writing intrinsics.
  • To explore opportunities from close coupling between C/C++ application code and SPMD ispc code running on the same processor—to have lightweight function calls between the two languages, to share data directly via pointers without copying or reformatting, and so forth.

ispc is an open source compiler with a BSD license. It uses the LLVM Compiler Infrastructure for back-end code generation and optimization and is hosted on github. It supports Windows, Mac, and Linux, with both x86 and x86-64 targets. It currently supports the SSE2 and SSE4 instruction sets, though support for AVX should be available soon.

SGC Ruby CUDA 0.1.0 Release

May 4th, 2011

SGC Ruby CUDA has been heavily updated and is now available from the standard RubyGems repository. Updates include:

  • Basic CUDA Driver and Runtime API support on CUDA 4.0rc2 with unit tests.
  • Object-Oriented API.
  • Exception classes for CUDA errors.
  • Support for Linux and Mac OS X platforms.
  • Documented with YARD.

See http://blog.speedgocomputing.com/2011/04/first-release-of-sgc-ruby-cuda.html for more details.

CUDA 4.0 Release Aims to Make Parallel Programming Easier

March 1st, 2011

Today NVIDIA announced the upcoming 4.0 release of CUDA. While most major CUDA releases have accompanied a new GPU architecture, 4.0 is a software-only release – but that doesn’t mean there aren’t a lot of new features. With this release, NVIDIA is aiming to lower the barrier to entry to parallel programming on GPUs, with new features including easier multi-GPU programming, a unified virtual memory address space, the powerful Thrust C++ template library, and automatic performance analysis in the Visual Profiler tool. Full details follow in the quoted press release below.
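As a brief sketch of the kind of code Thrust enables (assuming the Thrust headers shipped with the CUDA toolkit; this is an illustrative example, not part of the announcement), sorting a large array on the GPU becomes a few lines of STL-style C++:

```cuda
#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <thrust/sort.h>
#include <thrust/copy.h>
#include <cstdlib>

int main()
{
    // Generate random data on the host.
    thrust::host_vector<int> h_vec(1 << 20);
    for (size_t i = 0; i < h_vec.size(); ++i)
        h_vec[i] = std::rand();

    // Copy to the device; the assignment performs the host-to-device transfer.
    thrust::device_vector<int> d_vec = h_vec;

    // Sort on the GPU.
    thrust::sort(d_vec.begin(), d_vec.end());

    // Copy the sorted result back to the host.
    thrust::copy(d_vec.begin(), d_vec.end(), h_vec.begin());
    return 0;
}
```

The container and algorithm interfaces deliberately mirror the C++ standard library, so data movement and kernel launches are hidden behind familiar idioms.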


MOSIX Virtual OpenCL (VCL) Cluster Platform

December 27th, 2010

The MOSIX group announces the release of the MOSIX Virtual OpenCL (VCL) cluster platform version 1.0, which allows OpenCL applications to transparently utilize many GPU devices in clusters. In the VCL run-time environment, all the cluster devices are seen as if they are located in each hosting-node. Applications need not be aware which nodes and devices are available and where the devices are located. VCL benefits OpenCL applications that can use multiple devices concurrently.

Announcing GPU.NET from TidePowerd: “Native” GPU computing for .NET

December 14th, 2010

The “Beta 2” version of GPU.NET, a new product by TidePowerd, has recently been released. It allows developers to write GPU-based code in C# or other .NET-supported languages. The GPU.NET beta is available for public download, and the full documentation and several example projects are available online.
