LibBi: Bayesian State-Space Modelling and Inference on GPU

June 24th, 2013


LibBi is a software package for state-space modelling and Bayesian inference on modern computer hardware, including multi-core central processing units (CPUs), many-core graphics processing units (GPUs) and distributed-memory clusters of such devices. The software parses a domain-specific language for model specification, then optimises, generates, compiles and runs code for the given model, inference method and hardware platform. In presenting the software, this work serves as an introduction to state-space models and the specialised methods developed for Bayesian inference with them. The focus is on sequential Monte Carlo (SMC) methods such as the particle filter for state estimation, and the particle Markov chain Monte Carlo (PMCMC) and SMC^2 methods for parameter estimation. All are well-suited to current computer hardware. Two examples are given and developed throughout, one a linear three-element windkessel model of the human arterial system, the other a nonlinear Lorenz ’96 model. These are specified in the prescribed modelling language, and LibBi is demonstrated by performing inference with them. Empirical results are presented, including a performance comparison of the software with different hardware configurations.

(Lawrence M. Murray: “Bayesian state-space modelling on high-performance hardware using LibBi”, Preprint, June 2013. [arXiv])
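The bootstrap particle filter at the heart of these SMC methods is simple to sketch. Below is a minimal Python/NumPy version for an illustrative linear-Gaussian state-space model (the model and all parameters here are my own toy example, not LibBi's implementation, which runs the same steps in parallel on GPU):

```python
import numpy as np

def bootstrap_particle_filter(ys, n_particles=1000, a=0.9, q=0.5, r=0.5, rng=None):
    """Bootstrap particle filter for the toy linear-Gaussian model
        x_t = a * x_{t-1} + N(0, q^2),   y_t = x_t + N(0, r^2).
    Returns the filtered mean E[x_t | y_1..t] for each observation."""
    if rng is None:
        rng = np.random.default_rng(0)
    x = rng.normal(0.0, 1.0, n_particles)          # initial particle cloud
    means = []
    for y in ys:
        x = a * x + rng.normal(0.0, q, n_particles)  # propagate through the transition
        logw = -0.5 * ((y - x) / r) ** 2             # Gaussian observation log-weights
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(float(np.sum(w * x)))           # weighted filtering estimate
        idx = rng.choice(n_particles, n_particles, p=w)  # multinomial resampling
        x = x[idx]
    return means
```

The propagation and weighting steps are independent across particles, which is why one GPU thread per particle maps so naturally onto this class of methods.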

Physically based lighting for volumetric data with Exposure Render

October 27th, 2011

Exposure Render is a direct volume rendering application that applies progressive Monte Carlo ray tracing, coupled with physically based light transport, to heterogeneous volumetric data. Exposure Render enables the configuration of any number of arbitrarily shaped area lights, models a real-world camera, including its lens and aperture, and incorporates complex materials, whilst still maintaining interactive display updates. It features both surface and volumetric scattering, and applies noise reduction to remove the unwanted startup noise associated with progressive Monte Carlo rendering. The complete implementation is available in source and binary forms under a permissive free software license.
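The "progressive" part of progressive Monte Carlo rendering is just a running average over independent sample passes, with image noise shrinking at the usual 1/sqrt(N) rate. A minimal sketch of the accumulation scheme (illustrative only; Exposure Render's actual light transport is far more involved):

```python
import numpy as np

def progressive_accumulate(sample_pass, n_passes, rng):
    """Running average of independent Monte Carlo rendering passes.
    sample_pass(rng) returns one noisy image estimate; the accumulated
    frame converges to the true image without storing every pass."""
    acc = None
    for n in range(1, n_passes + 1):
        frame = sample_pass(rng)
        # incremental mean: acc_n = acc_{n-1} + (frame - acc_{n-1}) / n
        acc = frame.copy() if acc is None else acc + (frame - acc) / n
    return acc
```

Displaying `acc` after every pass is what gives the interactive, gradually denoising preview, and it is also where a startup-noise filter would be applied.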

Multi-GPU accelerated multi-spin Monte Carlo simulations of the 2D Ising model

August 1st, 2010


A modern graphics processing unit (GPU) is able to perform massively parallel scientific computations at low cost. We extend our implementation of the checkerboard algorithm for the two-dimensional Ising model in order to overcome the memory limitations of a single GPU, which enables us to simulate significantly larger systems. Using multi-spin coding techniques, we are able to accelerate simulations on a single GPU by factors of up to 35 compared to an optimized single central processing unit (CPU) core implementation which employs multi-spin coding. By combining the Compute Unified Device Architecture (CUDA) with the Message Passing Interface (MPI) on the CPU level, a single Ising lattice can be updated by a cluster of GPUs in parallel. For large systems, the computation time scales nearly linearly with the number of GPUs used. As proof of concept we reproduce the critical temperature of the 2D Ising model using finite size scaling techniques.

(Benjamin Block, Peter Virnau and Tobias Preis: “Multi-GPU accelerated multi-spin Monte Carlo simulations of the 2D Ising model”, Computer Physics Communications 181:9, 1549-1556, Sep. 2010. DOI Link. arXiv link)
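Multi-spin coding stores one spin per bit of a machine word, so a single bitwise operation acts on dozens of spins at once. A small Python sketch of the idea (illustrative; the paper's CUDA code applies this word-wide to whole lattices, which is where the speed-up comes from):

```python
def pack_spins(spins):
    """Pack a list of +/-1 Ising spins into an integer, one bit per spin
    (multi-spin coding: bit = 1 for spin up, bit = 0 for spin down)."""
    bits = 0
    for i, s in enumerate(spins):
        if s == 1:
            bits |= 1 << i
    return bits

def unpack_spins(bits, n):
    """Inverse of pack_spins: recover the list of +/-1 spins."""
    return [1 if (bits >> i) & 1 else -1 for i in range(n)]

def antiparallel_bonds(bits_a, bits_b, n):
    """Count antiparallel neighbour pairs between two packed rows:
    XOR marks the differing bits, popcount totals them -- one word-wide
    operation instead of n per-spin comparisons."""
    return bin((bits_a ^ bits_b) & ((1 << n) - 1)).count("1")
```

Bond counts of exactly this form feed the energy differences in the Metropolis acceptance step, so the entire inner loop can stay in bitwise arithmetic.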

Pseudo-random number generators for Monte Carlo simulations on Graphics Processing Units

March 14th, 2010


Basic uniform pseudo-random number generators are implemented on ATI Graphics Processing Units (GPU). The performance results of the realized generators (multiplicative linear congruential (GGL), XOR-shift (XOR128), RANECU, RANMAR, RANLUX and Mersenne Twister (MT19937)) on CPU and GPU are discussed. The obtained speed-up factors reach hundreds of times in comparison with the CPU. The RANLUX generator is found to be the most appropriate for use on the GPU in Monte Carlo simulations. A brief review of the pseudo-random number generators used in modern software packages for Monte Carlo simulations in high-energy physics is presented.

(Vadim Demchik, “Pseudo-random number generators for Monte Carlo simulations on Graphics Processing Units”, Mar. 2010, arXiv:1003.1898 [hep-lat])
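Of the generators listed, XOR128 (Marsaglia's xorshift128) is the easiest to show in full. A pure-Python sketch with Marsaglia's original seed constants; on a GPU each thread would carry its own 128-bit state so the streams stay independent:

```python
class Xor128:
    """Marsaglia's 32-bit xorshift128 ("XOR128") generator: four 32-bit
    state words updated with shifts and XORs, period 2**128 - 1."""
    MASK = 0xFFFFFFFF

    def __init__(self, seed=123456789):
        # the three fixed constants are from Marsaglia's original paper
        self.x = seed & self.MASK
        self.y, self.z, self.w = 362436069, 521288629, 88675123

    def next_u32(self):
        """Next raw 32-bit unsigned integer."""
        t = (self.x ^ (self.x << 11)) & self.MASK
        self.x, self.y, self.z = self.y, self.z, self.w
        self.w = (self.w ^ (self.w >> 19) ^ t ^ (t >> 8)) & self.MASK
        return self.w

    def next_float(self):
        """Uniform variate in [0, 1)."""
        return self.next_u32() / 4294967296.0
```

The small state and branch-free update are exactly what make XOR-shift generators attractive on GPUs, though as the paper notes, RANLUX offers stronger statistical guarantees for production Monte Carlo work.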

Monte Carlo Simulation of Photon Migration in 3D Turbid Media Accelerated by Graphics Processing Units

November 23rd, 2009


We report a parallel Monte Carlo algorithm accelerated by graphics processing units (GPU) for modeling time-resolved photon migration in arbitrary 3D turbid media. By taking advantage of the massively parallel threads and low-memory latency, this algorithm allows many photons to be simulated simultaneously in a GPU. To further improve the computational efficiency, we explored two parallel random number generators (RNG), including a floating-point-only RNG based on a chaotic lattice. An efficient scheme for boundary reflection was implemented, along with the functions for time-resolved imaging. For a homogeneous semi-infinite medium, good agreement was observed between the simulation output and the analytical solution from the diffusion theory. The code was implemented with CUDA programming language, and benchmarked under various parameters, such as thread number, selection of RNG and memory access pattern. With a low-cost graphics card, this algorithm has demonstrated an acceleration ratio above 300 when using 1792 parallel threads over conventional CPU computation. The acceleration ratio drops to 75 when using atomic operations. These results render the GPU-based Monte Carlo simulation a practical solution for data analysis in a wide range of diffuse optical imaging applications, such as human brain or small-animal imaging.

(Qianqian Fang and David A. Boas, “Monte Carlo Simulation of Photon Migration in 3D Turbid Media Accelerated by Graphics Processing Units,” Opt. Express, vol. 17, issue 22, pp. 20178-20190 (2009), doi:10.1364/OE.17.020178, link to full-text PDF)

The free software Monte Carlo eXtreme (MCX) is also available.
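The core of such a photon-migration code is a weighted random walk: exponentially distributed free paths, scattering into a new direction, and weight attenuation by the albedo instead of discrete absorption events. A scalar Python sketch for an infinite homogeneous medium with isotropic scattering (my own simplification; the paper handles arbitrary 3D media, anisotropic phase functions, boundary reflection and time-resolved tallies):

```python
import math
import random

def photon_walk(mu_a, mu_s, max_steps=1000, rng=None):
    """Trace one photon through an infinite homogeneous turbid medium.
    Free paths are drawn from exp(-mu_t * s); absorption is handled by
    scaling the photon weight by the albedo mu_s/mu_t at each scattering
    event, with Russian roulette to terminate low-weight photons.
    Returns (total path length, final position)."""
    if rng is None:
        rng = random.Random(0)
    mu_t = mu_a + mu_s
    x = y = z = 0.0
    w, path = 1.0, 0.0
    for _ in range(max_steps):
        s = -math.log(1.0 - rng.random()) / mu_t   # free path ~ Exp(mu_t)
        cos_t = 2.0 * rng.random() - 1.0           # isotropic new direction
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        phi = 2.0 * math.pi * rng.random()
        x += s * sin_t * math.cos(phi)
        y += s * sin_t * math.sin(phi)
        z += s * cos_t
        path += s
        w *= mu_s / mu_t                           # survival albedo
        if w < 1e-4:                               # Russian roulette
            if rng.random() < 0.1:
                w *= 10.0                          # survive with boosted weight
            else:
                break
    return path, (x, y, z)
```

Each photon's walk is independent of every other's, which is why thousands of them can run as concurrent GPU threads; the atomic-operation slowdown mentioned in the abstract comes from many threads tallying into the same output voxels.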

GPU-accelerated Monte Carlo simulation of the 2D and 3D Ising model

May 12th, 2009


The compute unified device architecture (CUDA) is a programming approach for performing scientific calculations on a graphics processing unit (GPU) as a data-parallel computing device. The programming interface allows algorithms to be implemented using extensions to the standard C language. With a continuously increasing number of cores in combination with a high memory bandwidth, a recent GPU offers formidable resources for general purpose computing. First, we apply this new technology to Monte Carlo simulations of the two-dimensional ferromagnetic square lattice Ising model. By implementing a variant of the checkerboard algorithm, results are obtained up to 60 times faster on the GPU than on a current CPU core. An implementation of the three-dimensional ferromagnetic cubic lattice Ising model on a GPU is able to generate results up to 35 times faster than on a current CPU core. As proof of concept we calculate the critical temperature of the 2D and 3D Ising model using finite size scaling techniques. Theoretical results for the 2D Ising model and previous simulation results for the 3D Ising model can be reproduced.

The paper is available, as well as CUDA source code for the 2D Ising model.

[Tobias Preis, Peter Virnau, Wolfgang Paul, and Johannes J. Schneider. “GPU accelerated Monte Carlo simulation of the 2D and 3D Ising model“. Journal of Computational Physics 228, 4468-4477 (2009)]
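The checkerboard decomposition works because on a square lattice the neighbours of a "black" site are all "white", so every site of one colour can be updated simultaneously without data races; that is exactly the parallelism a GPU exploits with one thread per site. A NumPy sketch of one Metropolis sweep (illustrative; the paper's CUDA kernels additionally tile the lattice through shared memory):

```python
import numpy as np

def checkerboard_sweep(spins, beta, rng):
    """One Metropolis sweep of the 2D Ising model (J = 1, periodic
    boundaries) using the checkerboard decomposition: the black and
    white half-lattices are updated in two conflict-free passes."""
    ii, jj = np.indices(spins.shape)
    for colour in (0, 1):
        mask = (ii + jj) % 2 == colour
        # sum of the four nearest neighbours with periodic wrap-around
        nn = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0)
              + np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
        dE = 2.0 * spins * nn                     # energy cost of flipping
        # Metropolis rule: exp(-beta*dE) >= 1 when dE <= 0, so this
        # single comparison also accepts every energy-lowering flip
        accept = rng.random(spins.shape) < np.exp(-beta * dE)
        spins = np.where(mask & accept, -spins, spins)
    return spins
```

Sweeping at a temperature below (above) the critical point drives an ordered (disordered) magnetisation, which is the behaviour the finite-size-scaling analysis in the paper builds on.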

Monte Carlo simulations on Graphics Processing Units

April 13th, 2009


Implementation of basic local Monte Carlo algorithms on ATI Graphics Processing Units (GPU) is investigated. The Ising model and pure SU(2) gluodynamics simulations are realized with the Compute Abstraction Layer (CAL) of the ATI Stream environment using the Metropolis and the heat-bath algorithms, respectively. We present an analysis of both the CAL programming model and the efficiency of the corresponding simulation algorithms on GPU. In particular, a significant performance speed-up of these algorithms in comparison with serial execution is observed.

(Vadim Demchik, Alexei Strelchenko. Monte Carlo simulations on Graphics Processing Units. arXiv:0903.3053 [hep-lat].)
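Where Metropolis proposes a change and accepts or rejects it, a heat-bath update draws the new value directly from the local conditional Boltzmann distribution, ignoring the old value entirely. The paper applies heat-bath to SU(2) link variables; the Ising analogue is a one-liner, sketched here in Python for illustration:

```python
import math
import random

def heat_bath_site(spins, i, j, beta, rng):
    """Heat-bath update of one Ising spin on an L x L periodic lattice:
    the new value is drawn from its conditional Boltzmann distribution
    given the neighbour field h, p(s = +1) = 1 / (1 + exp(-2*beta*h)),
    independently of the spin's previous value."""
    L = len(spins)
    h = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
         + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
    p_up = 1.0 / (1.0 + math.exp(-2.0 * beta * h))
    spins[i][j] = 1 if rng.random() < p_up else -1
```

Because the update needs only the neighbour field and one uniform variate, it combines with the checkerboard decomposition in exactly the same way as Metropolis for parallel execution.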

Lattice QCD as a video game (GPGPU for quantum field theory)

July 14th, 2007

This paper outlines how GPGPU techniques can be used for Monte Carlo simulations of quantum field theories such as QCD. The speed-up is around a factor of 4-10 relative to SSE-optimized code on a Pentium 4, depending on the GPU model. Sample code is also given. (Lattice QCD as a video game)