Accelerate OpenFOAM® with Culises

April 13th, 2015

Culises significantly accelerates your OpenFOAM® application by using GPUs for the most computationally intensive tasks.

Its main features are

  • Library for GPU-based acceleration of OpenFOAM®
  • Multi-GPU support, significantly reduced computing times
  • Highly efficient state-of-the-art iterative solvers like AMG
  • Quick and easy installation, no validation necessary
  • Flexible interfaces to customer-specific software/engineering applications available

Culises accelerates the linear solver by more than 2x. The overall speedup depends on the type of application and the fraction of total runtime spent in the linear solver. Culises may be tested on FluiDyna's purpose-built workstation to determine the acceleration potential for your individual OpenFOAM® application. Find out more at: www.culises.com
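To see how the overall number follows from the solver speedup, an Amdahl-style estimate helps. In the minimal sketch below, the 70% solver fraction is an assumed illustration, not a Culises benchmark:

```cpp
#include <cstdio>

int main() {
    // If a fraction f of total runtime is spent in the linear solver and
    // that part is accelerated by a factor s, the overall speedup is
    //     1 / ((1 - f) + f / s).
    double f = 0.7;  // assumed: 70% of runtime in the linear solver
    double s = 2.0;  // the linear solver itself is accelerated by >2x
    printf("overall speedup ~ %.2fx\n", 1.0 / ((1.0 - f) + f / s));  // ~1.54x
    return 0;
}
```

The closer the solver fraction is to 100%, the closer the overall speedup approaches the solver speedup itself.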

RapidCFD: open-source CFD for GPUs

April 13th, 2015

RapidCFD, a new open-source CFD project, has just been published. It uses NVIDIA CUDA for the entire calculation process, which gives a significant reduction in computation time.

Features:

  • most incompressible and compressible solvers on static mesh are available
  • all the calculations are done on the GPU
  • no overhead for GPU-CPU memory copy
  • can run in parallel on multiple GPUs

Visit the RapidCFD project page.

Scalable Partitioning for Parallel Position Based Dynamics

April 13th, 2015

Abstract:

We introduce a practical partitioning technique designed for parallelizing Position Based Dynamics and exploiting the ubiquitous multi-core processors present in current commodity GPUs. The input is a set of particles whose dynamics is influenced by spatial constraints. In the initialization phase, we build a graph in which each node corresponds to a constraint and two constraints are connected by an edge if they influence at least one common particle. We introduce a novel greedy algorithm for inserting additional constraints (phantoms) in the graph such that the resulting topology is q̂-colourable, where q̂ ≥ 2 is an arbitrary number. We color the graph, and the constraints with the same color are assigned to the same partition. Then, the set of constraints belonging to each partition is solved in parallel during the animation phase. We demonstrate that, by using our partitioning technique, the performance hit caused by the GPU kernel calls is significantly decreased, leaving the visual quality, robustness and speed of serial position based dynamics unaffected.

(Fratarcangeli M and Pellacini F, Scalable Partitioning for Parallel Position Based Dynamics, Computer Graphics Forum (Special Issue of Eurographics 2015 Conference). Vol. 34(2) 2015)
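The partition construction sketched in the abstract can be illustrated with a simple greedy colouring of the constraint graph. The toy constraint set below is hypothetical, and the paper's phantom-constraint insertion, which bounds the number of colours at q̂, is omitted:

```cpp
#include <cstdio>
#include <vector>

int main() {
    // Each constraint lists the particles it touches (hypothetical data).
    std::vector<std::vector<int>> cons = {{0, 1}, {1, 2}, {2, 3}, {0, 3}, {4, 5}};
    int n = (int)cons.size();

    // Two constraints are adjacent if they share at least one particle.
    auto shares = [&](int a, int b) {
        for (int p : cons[a])
            for (int q : cons[b])
                if (p == q) return true;
        return false;
    };

    // Greedy colouring: each constraint takes the smallest colour not used
    // by an already-coloured neighbour.
    std::vector<int> colour(n, -1);
    for (int a = 0; a < n; ++a) {
        std::vector<bool> used(n, false);
        for (int b = 0; b < a; ++b)
            if (shares(a, b)) used[colour[b]] = true;
        int c = 0;
        while (used[c]) ++c;
        colour[a] = c;
    }

    for (int a = 0; a < n; ++a)
        printf("constraint %d -> partition %d\n", a, colour[a]);
    return 0;
}
```

Constraints in the same colour class share no particles, so each partition can be solved in parallel without write conflicts.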

MAPS: Optimizing Massively Parallel Applications Using Device-Level Memory Abstraction

February 11th, 2015

Abstract:

GPUs play an increasingly important role in high-performance computing. While developing naive code is straightforward, optimizing massively parallel applications requires deep understanding of the underlying architecture. The developer must struggle with complex index calculations and manual memory transfers. This article classifies memory access patterns used in most parallel algorithms, based on Berkeley’s Parallel “Dwarfs.” It then proposes the MAPS framework, a device-level memory abstraction that facilitates memory access on GPUs, alleviating complex indexing using on-device containers and iterators. This article presents an implementation of MAPS and shows that its performance is comparable to carefully optimized implementations of real-world applications.

Rubin, Eri, et al. ["MAPS: Optimizing Massively Parallel Applications Using Device-Level Memory Abstraction."](http://dl.acm.org/citation.cfm?id=2680544) ACM Transactions on Architecture and Code Optimization (TACO) 11.4 (2014): 44.

[Library website](http://www.cs.huji.ac.il/~talbn/maps/)
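The flavour of a device-level container abstraction can be sketched with a hypothetical 1D window type that hides the boundary index arithmetic of a 3-point stencil. This is not the actual MAPS API (see the library website above for that); it only illustrates the idea of replacing manual index calculations with container access:

```cpp
#include <cuda_runtime.h>

// Hypothetical device-side "window" over a 1D array: neighbours are
// addressed relative to a centre index, with clamping at the borders.
struct Window1D {
    const float* data;
    int n;
    __device__ float at(int centre, int offset) const {
        int i = min(max(centre + offset, 0), n - 1);  // clamp to [0, n-1]
        return data[i];
    }
};

// A 3-point smoothing stencil written against the container instead of
// raw indices; the struct is passed to the kernel by value.
__global__ void stencil3(Window1D in, float* out) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < in.n)
        out[i] = 0.25f * in.at(i, -1) + 0.5f * in.at(i, 0) + 0.25f * in.at(i, +1);
}
```

The library itself goes much further than this sketch; the point is only that indexing and boundary handling move out of the kernel body.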

A Survey Of Techniques for Managing and Leveraging Caches in GPUs

February 10th, 2015

Abstract:

Initially introduced as special-purpose accelerators for graphics applications, graphics processing units (GPUs) have now emerged as general purpose computing platforms for a wide range of applications. To address the requirements of these applications, modern GPUs include sizable hardware-managed caches. However, several factors, such as the unique architecture of GPUs and the rise of CPU-GPU heterogeneous computing, demand effective management of caches to achieve high performance and energy efficiency. Recently, several techniques have been proposed for this purpose. In this paper, we survey several architectural and system-level techniques proposed for managing and leveraging GPU caches. We also discuss the importance and challenges of cache management in GPUs. The aim of this paper is to provide readers with insights into cache management techniques for GPUs and motivate them to propose even better techniques for leveraging the full potential of caches in the GPUs of tomorrow.

Sparsh Mittal, “A Survey Of Techniques for Managing and Leveraging Caches in GPUs”, Journal of Circuits, Systems, and Computers (JCSC), vol. 23, no. 8, 2014.

Real-time Deblocked GPU rendering of Compressed Volume Data

December 2nd, 2014

Abstract:

The wide majority of current state-of-the-art compressed GPU volume renderers are based on block-transform coding, which is susceptible to blocking artifacts, particularly at low bit-rates. In this paper the authors address the problem for the first time, by introducing a specialized deferred filtering architecture working on block-compressed data and including a novel deblocking algorithm. The architecture efficiently performs high quality shading of massive datasets by closely coordinating visibility- and resolution-aware adaptive data loading with GPU-accelerated per-frame data decompression, deblocking, and rendering. A thorough evaluation including quantitative and qualitative measures demonstrates the performance of the approach on large static and dynamic datasets, including a massive 512^4 turbulence simulation (256 GB) aggressively compressed to less than 2 GB, so that it can be fully uploaded to the graphics board and explored in real time during animation.

(Fabio Marton, José Antonio Iglesias Guitián, Jose Díaz and Enrico Gobbetti: “Real-time deblocked GPU rendering of compressed volumes”. Proc. 19th International Workshop on Vision, Modeling and Visualization (VMV), pp. 167-174, Oct. 2014.)

PARALUTION v0.8.0 released

November 14th, 2014

PARALUTION is a library of sparse iterative methods that can run on various parallel hardware, including multi-core CPUs, GPUs (CUDA and OpenCL) and Intel Xeon Phi. The new 0.8.0 release provides the following extra features:

  • Complex support
  • TNS, Variable preconditioner
  • BiCGStab(l), QMRCGStab, FCG solvers
  • RS and PairWise AMG
  • SIRA eigenvalue solver
  • Replace/Extract column/row functions
  • Stencil computation

For details, visit http://www.paralution.com.
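A minimal solver setup with the library looks roughly like the sketch below, based on the documented host API and example programs; `matrix.mtx` is a placeholder input, and the exact calls should be checked against the 0.8.0 manual:

```cpp
#include <paralution.hpp>

using namespace paralution;

int main() {
    init_paralution();

    LocalMatrix<double> mat;
    LocalVector<double> x, rhs;
    mat.ReadFileMTX("matrix.mtx");        // placeholder input file
    x.Allocate("x", mat.get_nrow());
    rhs.Allocate("rhs", mat.get_nrow());
    x.Zeros();
    rhs.Ones();

    // Move data to the accelerator (GPU or Xeon Phi); the same code
    // runs on the host if no accelerator backend is available.
    mat.MoveToAccelerator();
    x.MoveToAccelerator();
    rhs.MoveToAccelerator();

    // Conjugate Gradient; the 0.8.0 release adds further solvers such
    // as BiCGStab(l), QMRCGStab and FCG with the same interface pattern.
    CG<LocalMatrix<double>, LocalVector<double>, double> ls;
    ls.SetOperator(mat);
    ls.Build();
    ls.Solve(rhs, &x);

    stop_paralution();
    return 0;
}
```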

Massive exploration of perturbed conditions of the blood coagulation cascade through GPU parallelization

November 3rd, 2014

Abstract:

The introduction of general-purpose Graphics Processing Units (GPUs) is boosting scientific applications in Bioinformatics, Systems Biology, and Computational Biology. In these fields, the use of high-performance computing solutions is motivated by the need to perform large numbers of in silico analyses to study the behavior of biological systems under different conditions, which requires computing power that usually exceeds the capabilities of standard desktop computers. In this work we present coagSODA, a CUDA-powered computational tool that was purposely developed for the analysis of a large mechanistic model of the blood coagulation cascade (BCC), defined according to both mass-action kinetics and Hill functions. coagSODA allows the execution of parallel simulations of the dynamics of the BCC by automatically deriving the system of ordinary differential equations and then exploiting the numerical integration algorithm LSODA. We present the biological results achieved with a massive exploration of perturbed conditions of the BCC, carried out with one-dimensional and two-dimensional parameter sweep analyses, and show that GPU-accelerated parallel simulations of this model can achieve up to a 181× speedup compared to the corresponding sequential simulations.

(Cazzaniga P., Nobile M.S., Besozzi D., Bellini M., Mauri G.: “Massive exploration of perturbed conditions of the blood coagulation cascade through GPU parallelization”. BioMed Research International, vol. 2014. [DOI])
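Stripped of the GPU and LSODA machinery, the core pattern is a parameter sweep over repeated ODE integrations. The toy sketch below integrates a single mass-action reaction A + B → C with fixed-step Euler for a range of rate constants; all values are hypothetical, and coagSODA itself derives the full BCC model and integrates it with LSODA:

```cpp
#include <cstdio>

int main() {
    // Sweep the rate constant k of the toy reaction A + B -> C,
    // whose mass-action rate is r = k * [A] * [B].
    for (int i = 1; i <= 10; ++i) {
        double k = 0.1 * i;                        // the swept parameter
        double a = 1.0, b = 1.0, c = 0.0;          // initial concentrations
        const double dt = 1e-3;
        for (int step = 0; step < 10000; ++step) { // integrate to t = 10
            double r = k * a * b;
            a -= r * dt;
            b -= r * dt;
            c += r * dt;
        }
        printf("k = %.1f -> [C](t=10) = %.4f\n", k, c);
    }
    return 0;
}
```

In a GPU setting such as coagSODA's, each point of the sweep is an independent simulation, which is what makes the problem embarrassingly parallel.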

On Demand Webinar: Essential CUDA Optimization Techniques

November 3rd, 2014

This webinar provides an overview of the improved analysis and performance tools available in CUDA 6.0 and key optimization strategies for compute-, latency- and memory-bound problems. The webinar covers techniques for ensuring peak utilization of CUDA cores, improving branching efficiency, and using intrinsic functions and loop unrolling. Optimal access patterns for global and shared memory are presented, including a comparison between the Fermi and Kepler architectures. To view the webinar go to: http://acceleware.com/blog/webinar-essential-cuda-optimization-techniques
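As a taste of the techniques listed, the hypothetical kernel below combines loop unrolling with an FMA intrinsic for polynomial evaluation; it is a generic illustration, not material from the webinar:

```cpp
#include <cuda_runtime.h>

// Evaluate a degree-4 polynomial per element with Horner's scheme.
// #pragma unroll removes the loop overhead, and __fmaf_rn maps each
// step to a single fused multiply-add instruction.
__global__ void poly4(const float* x, float* y, int n) {
    const float c[5] = {1.f, 2.f, 3.f, 4.f, 5.f};  // coefficients c0..c4
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float acc = c[4];
    #pragma unroll
    for (int d = 3; d >= 0; --d)
        acc = __fmaf_rn(acc, x[i], c[d]);
    y[i] = acc;
}
```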

CUDA finance course Dec 2-5, 2014, New York

October 22nd, 2014

Developed in partnership with NVIDIA, this hands-on four-day course will teach you how to write and optimize applications that fully leverage the parallel processing capabilities of the GPU. The course has a finance focus: commonly used algorithms such as random number generation and Monte Carlo simulation are used and profiled in examples. A background in finance is not necessary. For more information please visit: http://acceleware.com/training/988
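A typical exercise in this space pairs cuRAND for random number generation with a simple Monte Carlo estimator. The sketch below estimates π from uniform samples; it is a generic illustration (not course material), with error handling omitted for brevity:

```cpp
#include <cstdio>
#include <cuda_runtime.h>
#include <curand.h>   // link with -lcurand

// Count samples falling inside the unit quarter circle.
__global__ void countInCircle(const float* xs, const float* ys, int n,
                              unsigned int* hits) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n && xs[i] * xs[i] + ys[i] * ys[i] <= 1.0f)
        atomicAdd(hits, 1u);
}

int main() {
    const int n = 1 << 20;
    float *xs, *ys;
    unsigned int *hits, hostHits = 0;
    cudaMalloc(&xs, n * sizeof(float));
    cudaMalloc(&ys, n * sizeof(float));
    cudaMalloc(&hits, sizeof(unsigned int));
    cudaMemset(hits, 0, sizeof(unsigned int));

    // Generate 2*n uniform samples in [0,1) directly on the device.
    curandGenerator_t gen;
    curandCreateGenerator(&gen, CURAND_RNG_PSEUDO_DEFAULT);
    curandSetPseudoRandomGeneratorSeed(gen, 1234ULL);
    curandGenerateUniform(gen, xs, n);
    curandGenerateUniform(gen, ys, n);

    countInCircle<<<(n + 255) / 256, 256>>>(xs, ys, n, hits);
    cudaMemcpy(&hostHits, hits, sizeof(unsigned int), cudaMemcpyDeviceToHost);
    printf("pi ~= %f\n", 4.0 * hostHits / n);

    curandDestroyGenerator(gen);
    cudaFree(xs); cudaFree(ys); cudaFree(hits);
    return 0;
}
```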
