Gaalop Geometric Algebra Library for HSA

May 18th, 2015

Geometric Algebra is a new, geometrically intuitive mathematical system. It provides concise, easy-to-derive algorithms for many application areas such as computer graphics, computer vision, robotics and computer simulation. The HSA Foundation (Heterogeneous System Architecture Foundation) is a not-for-profit industry standards body, founded by companies such as AMD, ARM, Samsung and Texas Instruments, that focuses on making it dramatically easier to program heterogeneous computing devices such as GPUs.

Since Gaalop (Geometric Algebra Algorithms Optimizer) focuses precisely on optimizing Geometric Algebra code and integrating it into these kinds of new parallel computing architectures, this technology, together with the new Kalmar C++ AMP compiler, provides a solution for math, science and engineering on HSA.
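
To give a flavor of the kind of code Gaalop optimizes, the sketch below implements the geometric product of the 2D algebra by hand and uses a rotor to rotate a vector. It is an illustrative stand-alone C++ example, not Gaalop input or generated output:

    #include <cmath>
    #include <iostream>

    // Multivector of the 2D geometric algebra: s + e1*E1 + e2*E2 + e12*E12.
    struct MV2 {
        double s, e1, e2, e12;
    };

    // Geometric product, using E1*E1 = E2*E2 = 1 and E12*E12 = -1.
    MV2 gp(const MV2& a, const MV2& b) {
        return {
            a.s * b.s   + a.e1 * b.e1  + a.e2 * b.e2  - a.e12 * b.e12,
            a.s * b.e1  + a.e1 * b.s   - a.e2 * b.e12 + a.e12 * b.e2,
            a.s * b.e2  + a.e2 * b.s   + a.e1 * b.e12 - a.e12 * b.e1,
            a.s * b.e12 + a.e12 * b.s  + a.e1 * b.e2  - a.e2 * b.e1
        };
    }

    int main() {
        const double pi = 3.14159265358979323846;
        const double angle = pi / 2.0;                               // rotate by 90 degrees
        MV2 R  = {std::cos(angle / 2), 0, 0, -std::sin(angle / 2)};  // rotor
        MV2 Rr = {R.s, R.e1, R.e2, -R.e12};                          // reverse of the rotor
        MV2 v  = {0, 1, 0, 0};                                       // the vector E1

        MV2 rotated = gp(gp(R, v), Rr);                              // sandwich product R v ~R
        std::cout << rotated.e1 << " " << rotated.e2 << "\n";        // prints ~0 1, i.e. E2
        return 0;
    }

Gaalop's role is to evaluate such symbolic Geometric Algebra expressions at compile time and emit only the few scalar operations that survive, which is what makes the approach attractive for GPUs and other HSA devices.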

Accelerate OpenFOAM® with Culises

April 13th, 2015

Culises significantly accelerates your OpenFOAM® application by using GPUs for the computationally most intensive tasks.

Its main features are

  • Library for GPU-based acceleration of OpenFOAM®
  • Multi-GPU support, significantly reduced computing times
  • Highly efficient state-of-the-art iterative solvers like AMG
  • Quick and easy installation, no validation necessary
  • Flexible interfaces to customer-specific software/engineering applications available

The acceleration of the linear solver by Culises is greater than 2x; the overall speedup depends on the type of application and on the share of time spent in the linear solver (a rough estimate is sketched below). Culises may be tested on FluiDyna’s purpose-built workstation to determine the acceleration potential for your individual OpenFOAM® application. Find out more on:
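
As a back-of-the-envelope check, the overall speedup follows Amdahl's law: if a fraction p of the runtime is spent in the linear solver and that part is accelerated by a factor s, the total speedup is 1 / ((1 - p) + p / s). A minimal sketch; the 70% solver fraction and the 2x solver speedup are assumed example values, not FluiDyna benchmark figures:

    #include <iostream>

    // Amdahl's law: overall speedup when only part of the runtime is accelerated.
    double overall_speedup(double solver_fraction, double solver_speedup) {
        return 1.0 / ((1.0 - solver_fraction) + solver_fraction / solver_speedup);
    }

    int main() {
        const double p = 0.7;  // assumed: 70% of runtime spent in the linear solver
        const double s = 2.0;  // assumed: linear solver accelerated 2x on the GPU
        std::cout << "overall speedup: " << overall_speedup(p, s) << "x\n";  // ~1.54x
        return 0;
    }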

SpeedIT FLOW: RANS single-phase fluid flow solver on GPU

September 4th, 2014

SpeedIT FLOW is a RANS single-phase fluid flow solver that runs fully on the GPU. Benchmark results on external aero flow and other industry-relevant OpenFOAM cases indicate approximately 3x faster time to solution on a single GPU card vs. an Intel Xeon E5649 running 12 cores. This is about two times faster than competing solutions that offer only partial acceleration on the GPU. More details are available on this blog.

New book: Numerical Computations with GPUs

July 22nd, 2014

A new book titled “Numerical Computations with GPUs” has been published:

This book brings together research on numerical methods adapted for Graphics Processing Units (GPUs). It explains recent efforts to adapt classic numerical methods, including the solution of linear equations and the FFT, for massively parallel GPU architectures. The volume consolidates recent research and adaptations, covering widely used methods that are at the core of many scientific and engineering computations. Each chapter is written by authors working on a specific group of methods; these leading experts provide mathematical background, parallel algorithms and implementation details leading to reusable, adaptable and scalable code fragments.

The book also serves as a GPU implementation manual for many numerical algorithms, sharing tips on GPU-specific optimizations that can increase application efficiency. The valuable insights into parallelization strategies for GPUs are supplemented by ready-to-use code fragments. Numerical Computations with GPUs targets professionals and researchers working in high performance computing and GPU programming. Advanced-level students focused on computer science and mathematics will also find this book useful as a secondary textbook or reference.


PARALUTION 0.7.0 released

May 27th, 2014

PARALUTION is a library for sparse iterative methods that can run on various parallel devices, including multi-core CPUs, GPUs (CUDA and OpenCL) and the Intel Xeon Phi. The new 0.7.0 version provides the following new features:

  • Windows support – full Windows support for all backends (CUDA, OpenCL, OpenMP)
  • Assembling function – new OpenMP parallel assembling function for sparse matrices (includes an update function for time-dependent problems)
  • Direct (dense) solvers (for very small problems)
  • (Restricted) Additive Schwarz preconditioners
  • MATLAB/Octave plug-in

To avoid OpenMP overhead for small problems, the library computes in serial if the size of the matrix/vector is below a pre-defined threshold. Internally, the OpenCL backend has been modified for simplified cross-platform compilation.
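
For readers new to the library, a minimal sketch of how a preconditioned CG solve is typically set up is shown below. The call names follow the PARALUTION examples as we recall them, so verify them against the 0.7.0 headers; the matrix file name is a placeholder:

    #include <paralution.hpp>

    using namespace paralution;

    int main() {
        init_paralution();                 // initialize the backends (OpenMP/CUDA/OpenCL)

        LocalMatrix<double> mat;
        LocalVector<double> rhs, x;

        mat.ReadFileMTX("matrix.mtx");     // placeholder MatrixMarket input file
        rhs.Allocate("rhs", mat.get_nrow());
        x.Allocate("x", mat.get_nrow());
        rhs.Ones();
        x.Zeros();

        // Move the data to the accelerator (GPU or Xeon Phi) if one is available.
        mat.MoveToAccelerator();
        rhs.MoveToAccelerator();
        x.MoveToAccelerator();

        CG<LocalMatrix<double>, LocalVector<double>, double> ls;
        Jacobi<LocalMatrix<double>, LocalVector<double>, double> p;

        ls.SetOperator(mat);
        ls.SetPreconditioner(p);
        ls.Build();
        ls.Solve(rhs, &x);

        stop_paralution();
        return 0;
    }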

PARALUTION – new release 0.6.0

February 26th, 2014

PARALUTION is a library for sparse iterative methods that can run on various parallel devices, including multi-core CPUs, GPUs (CUDA and OpenCL) and the Intel Xeon Phi. The new 0.6.0 version provides the following new features:

  • Windows support (OpenMP backend)
  • FGMRES (Flexible GMRES)
  • (R)CMK (Cuthill–McKee) ordering
  • Thread-core affiliation (for Host OpenMP)
  • Asynchronous transfers (CUDA backend)
  • Pinned memory allocation on the host when using CUDA backend
  • Verbose output for debugging
  • Easy to handle timing function in the examples

PARALUTION 0.6.0 is available at

Performance benchmarks on CPU/GPU/Xeon Phi

October 19th, 2013

PARALUTION is a library for sparse iterative methods that can run on various parallel devices, including multi-core CPUs and GPUs. The new 0.4.0 version also provides a backend for the Xeon Phi (MIC). With this new version, various performance benchmarks based on vector-vector routines, sparse matrix-vector multiplication and the CG method have been released for the different backends: OpenMP, CUDA and OpenCL on NVIDIA GPUs, AMD GPUs, CPUs and the Xeon Phi. More information:
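
The sparse matrix-vector product at the heart of these benchmarks is, in its simplest CSR form, the loop sketched below. This is a generic OpenMP reference implementation, not PARALUTION's tuned backend kernels:

    #include <vector>

    // y = A * x for a matrix stored in CSR format (row_ptr, col_idx, values).
    void spmv_csr(const std::vector<int>& row_ptr,
                  const std::vector<int>& col_idx,
                  const std::vector<double>& values,
                  const std::vector<double>& x,
                  std::vector<double>& y) {
        const int n = static_cast<int>(y.size());
        #pragma omp parallel for
        for (int row = 0; row < n; ++row) {
            double sum = 0.0;
            for (int j = row_ptr[row]; j < row_ptr[row + 1]; ++j)
                sum += values[j] * x[col_idx[j]];
            y[row] = sum;
        }
    }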

CfP: Numerical Computations with GPUs

September 22nd, 2013

Numerical Computations with GPUs, to be published by Springer, will contain a collection of articles on core numerical methods adapted for Graphics Processing Units (GPUs). Classical numerical methods (solution of linear equations, FFT, etc.) are at the core of many scientific and engineering computations. In recent years, substantial efforts have been undertaken to adapt these methods to the recently emerged GPU-based systems. The book is envisioned as a consolidation of such work into a single volume covering widely used methods and techniques. Each chapter will provide mathematical background, parallel algorithms, and implementation details leading to reusable, adaptable, and scalable code fragments. Each chapter will be accompanied by basic CUDA or OpenCL source code that readers can use as a starting point for adaptation in their own applications. The book will serve as a GPU implementation manual for many numerical algorithms, providing valuable insights into parallelization strategies for GPUs as well as ready-to-use code fragments, with a broad appeal to both developers and researchers interested in GPU computing.

Authors interested in contributing to this volume are asked to submit a short proposal via EasyChair by October 15, 2013. Authors of accepted/invited chapters are expected to write and submit completed chapters to the editor by January 31, 2014. For more details, see the full solicitation or contact the editor.

Towards Performance-Portable, Scalable and Convenient Linear Algebra

August 16th, 2013


The rise of multi- and many-core architectures also gave birth to a plethora of new parallel programming models. Among these, the open industry standard OpenCL addresses this heterogeneity of programming environments by providing a unified programming framework. The price to pay, however, is that OpenCL requires additional low-level boilerplate code, when compared to vendor-specific solutions, even if only simple operations are to be performed. Also, the unified programming framework does not automatically provide any guarantees on performance portability of a particular implementation. Thus, device-specific compute kernels are still required for obtaining good performance across different hardware architectures.
We address both the issue of programmability and that of portable performance in this work. On the one hand, a high-level programming interface for linear algebra routines allows for the convenient specification of the operations of interest without having to go into the details of the underlying hardware. On the other hand, we discuss the underlying generator for device-specific OpenCL kernels at runtime, which is supplemented by an auto-tuning framework for portable performance as well as by work partitioning and task scheduling for multiple devices. Our benchmark results show portable performance across hardware from major vendors: in all cases, at least 75 percent of the performance of the respective vendor-tuned library was obtained, while in some cases we even outperformed the reference. We further demonstrate the convenient and efficient use of our high-level interface in a multi-device setting with good scalability.
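
The abstract describes the approach generically, but the open-source ViennaCL library offers this style of high-level interface and runtime kernel generation; the sketch below assumes ViennaCL 1.x with an OpenCL device available and is meant only to illustrate how little boilerplate such an interface requires, not to reproduce the authors' implementation:

    #include <vector>
    #include <viennacl/vector.hpp>
    #include <viennacl/matrix.hpp>
    #include <viennacl/linalg/prod.hpp>

    int main() {
        const std::size_t n = 1024;
        std::vector<double> host_x(n, 1.0);

        viennacl::matrix<double> A(n, n);     // device matrix (left unfilled for brevity)
        viennacl::vector<double> x(n), y(n);  // device vectors

        viennacl::copy(host_x.begin(), host_x.end(), x.begin());  // host -> device

        // One line replaces the usual OpenCL kernel setup and launch code:
        y = viennacl::linalg::prod(A, x);     // y = A * x on the device

        std::vector<double> host_y(n);
        viennacl::copy(y.begin(), y.end(), host_y.begin());       // device -> host
        return 0;
    }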

(Philippe Tillet, Karl Rupp, Siegfried Selberherr, Chin-Teng Lin: “Towards Performance-Portable, Scalable, and Convenient Linear Algebra”. 5th USENIX Workshop on Hot Topics in Parallelism (HotPar '13), 2013. [PDF])

Communication-Avoiding Krylov Techniques on Graphic Processing Units

May 11th, 2013


Communicating data within the graphics processing unit (GPU) memory system and between the CPU and the GPU are major bottlenecks in accelerating Krylov solvers on GPUs. Communication-avoiding techniques reduce the communication cost of Krylov subspace methods by computing several vectors of a Krylov subspace “at once,” using a kernel called “matrix powers.” The matrix powers kernel is implemented on a recent generation of NVIDIA GPUs, and speedups of up to 5.7x over a standard sparse matrix-vector multiplication (SpMV) implementation are reported for the communication-avoiding approach.
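
Conceptually, the matrix powers kernel produces the Krylov basis [x, Ax, A^2 x, ..., A^s x] in one pass instead of s separate SpMV calls. A plain CPU reference loop showing what the kernel computes is sketched below; the communication-avoiding GPU version blocks and fuses these products to reduce data movement rather than looping naively:

    #include <vector>

    // Computes the Krylov basis V = [x, A*x, A^2*x, ..., A^s*x] for a CSR matrix.
    std::vector<std::vector<double>> matrix_powers(
            const std::vector<int>& row_ptr,
            const std::vector<int>& col_idx,
            const std::vector<double>& values,
            const std::vector<double>& x,
            int s) {
        const int n = static_cast<int>(x.size());
        std::vector<std::vector<double>> V(s + 1, std::vector<double>(n, 0.0));
        V[0] = x;
        for (int k = 1; k <= s; ++k)                 // s repeated sparse products
            for (int row = 0; row < n; ++row) {
                double sum = 0.0;
                for (int j = row_ptr[row]; j < row_ptr[row + 1]; ++j)
                    sum += values[j] * V[k - 1][col_idx[j]];
                V[k][row] = sum;
            }
        return V;
    }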

(M. Mehri Dehnavi, Y. El-Kurdi, J. Demmel and D. Giannacopoulos: “Communication-Avoiding Krylov Techniques on Graphic Processing Units”, IEEE Transactions on Magnetics 49(5):1749-1752, May 2013. [DOI])
