BROCCOLI: Software for fast fMRI analysis on many-core CPUs and GPUs

May 27th, 2014


Analysis of functional magnetic resonance imaging (fMRI) data is becoming ever more computationally demanding as temporal and spatial resolutions improve, and large, publicly available data sets proliferate. Moreover, methodological improvements in the neuroimaging pipeline, such as non-linear spatial normalization, non-parametric permutation tests and Bayesian Markov Chain Monte Carlo approaches, can dramatically increase the computational burden. Despite these challenges, there do not yet exist any fMRI software packages which leverage inexpensive and powerful GPUs to perform these analyses. Here, we therefore present BROCCOLI, a free software package written in OpenCL that can be used for parallel analysis of fMRI data on a large variety of hardware configurations. BROCCOLI has, for example, been tested with an Intel CPU, an Nvidia GPU, and an AMD GPU. These tests show that parallel processing of fMRI data can lead to significantly faster analysis pipelines. This speedup can be achieved on relatively standard hardware, but further speed improvements require only a modest investment in GPU hardware. BROCCOLI (running on a GPU) can perform non-linear spatial normalization to a 1 mm³ brain template in 4–6 s, and run a second-level permutation test with 10,000 permutations in about a minute. These non-parametric tests are generally more robust than their parametric counterparts, and can also enable more sophisticated analyses by estimating complicated null distributions. Additionally, BROCCOLI includes support for Bayesian first-level fMRI analysis using a Gibbs sampler. The new software is freely available under GNU GPL3 and can be downloaded from GitHub.
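The abstract mentions Bayesian first-level analysis via a Gibbs sampler. As a rough illustration of the idea only (this is not BROCCOLI's OpenCL implementation, and the prior values below are placeholder assumptions), here is a minimal Gibbs sampler in Python/NumPy for a single voxel's linear model with conjugate priors:

```python
import numpy as np

def gibbs_linear_model(y, X, n_iter=2000, burn_in=500, seed=0):
    """Gibbs sampler for y = X @ beta + e, e ~ N(0, sigma^2 I),
    with priors beta ~ N(0, tau2 I) and sigma^2 ~ InvGamma(a0, b0)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    tau2, a0, b0 = 10.0, 1.0, 1.0          # weak conjugate priors (placeholders)
    beta, sigma2 = np.zeros(p), 1.0
    XtX, Xty = X.T @ X, X.T @ y
    draws = []
    for it in range(n_iter):
        # beta | sigma2, y  ~  N(mu, Sigma)
        Sigma = np.linalg.inv(XtX / sigma2 + np.eye(p) / tau2)
        mu = Sigma @ (Xty / sigma2)
        beta = rng.multivariate_normal(mu, Sigma)
        # sigma2 | beta, y  ~  InvGamma(a0 + n/2, b0 + RSS/2)
        resid = y - X @ beta
        sigma2 = 1.0 / rng.gamma(a0 + n / 2.0,
                                 1.0 / (b0 + 0.5 * resid @ resid))
        if it >= burn_in:
            draws.append(beta)
    return np.array(draws)
```

In a first-level fMRI setting this sampler would run independently for every voxel, which is exactly the kind of embarrassingly parallel workload that maps well to a GPU.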

(A. Eklund, P. Dufort, M. Villani and S. LaConte: “BROCCOLI: Software for fast fMRI analysis on many-core CPUs and GPUs”. Front. Neuroinform. 8:24, 2014. [DOI])

Medical Image Processing on the GPU – Past, Present and Future

June 6th, 2013


Graphics processing units (GPUs) are used today in a wide range of applications, mainly because they can dramatically accelerate parallel computing, are affordable and energy efficient. In the field of medical imaging, GPUs are in some cases crucial for enabling practical use of computationally demanding algorithms. This review presents the past and present work on GPU accelerated medical image processing, and is meant to serve as an overview and introduction to existing GPU implementations. The review covers GPU acceleration of basic image processing operations (filtering, interpolation, histogram estimation and distance transforms), the most commonly used algorithms in medical imaging (image registration, image segmentation and image denoising) and algorithms that are specific to individual modalities (CT, PET, SPECT, MRI, fMRI, DTI, ultrasound, optical imaging and microscopy). The review ends by highlighting some future possibilities and challenges.

(Eklund, A., Dufort, P., Forsberg, D., LaConte, S.M., Medical Image Processing on the GPU – Past, Present and Future, Medical Image Analysis. [DOI])

fMRI Analysis on the GPU – Possibilities and Challenges

July 17th, 2011


Functional magnetic resonance imaging (fMRI) makes it possible to non-invasively measure brain activity with high spatial resolution. There are however a number of issues that have to be addressed. One is the large amount of spatio-temporal data that needs to be processed. In addition to the statistical analysis itself, several preprocessing steps, such as slice timing correction and motion compensation, are normally applied. The high computational power of modern graphics cards has already successfully been used for MRI and fMRI. Going beyond the first published demonstration of GPU-based analysis of fMRI data, all the preprocessing steps and two statistical approaches, the general linear model (GLM) and canonical correlation analysis (CCA), have been implemented on a GPU. For an fMRI dataset of typical size (80 volumes with 64 x 64 x 22 voxels), all the preprocessing takes about 0.5 s on the GPU, compared to 5 s with an optimized CPU implementation and 120 s with the commonly used statistical parametric mapping (SPM) software. A random permutation test with 10 000 permutations, with smoothing in each permutation, takes about 50 s if three GPUs are used, compared to 0.5–2.5 h with an optimized CPU implementation. The presented work will save time for researchers and clinicians in their daily work and enable the use of more advanced analyses, such as non-parametric statistics, both for conventional fMRI and for real-time fMRI.
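To illustrate the kind of per-voxel computation that parallelizes so well, here is a textbook voxel-wise GLM producing a t-map, written as a minimal Python/NumPy sketch (not the paper's GPU code; the function name and layout are my own):

```python
import numpy as np

def glm_t_map(data, X, contrast):
    """Voxel-wise GLM: data is (time, voxels), X is (time, regressors).
    Returns one t-statistic per voxel for the given contrast vector."""
    n, p = X.shape
    beta = np.linalg.pinv(X) @ data            # (regressors, voxels)
    resid = data - X @ beta
    dof = n - p
    mse = (resid ** 2).sum(axis=0) / dof       # residual variance per voxel
    c = np.asarray(contrast, float)
    var_c = c @ np.linalg.pinv(X.T @ X) @ c    # contrast variance factor
    return (c @ beta) / np.sqrt(mse * var_c)
```

Every voxel's fit is independent of every other voxel's, so on a GPU each thread (or work-item) can simply own one voxel.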

(Anders Eklund, Mats Andersson, Hans Knutsson: “fMRI Analysis on the GPU – Possibilities and Challenges”, Computer Methods and Programs in Biomedicine, 2011 [DOI])

Fast Random Permutation Tests Enable Objective Evaluation of Methods for Single Subject fMRI Analysis

July 17th, 2011


Parametric statistical methods, such as Z-, t-, and F-tests, are traditionally employed in functional magnetic resonance imaging (fMRI) for identifying areas in the brain that are active with a certain degree of statistical significance. These parametric methods, however, have two major drawbacks. First, it is assumed that the observed data are Gaussian distributed and independent; assumptions that are generally not valid for fMRI data. Second, the statistical test distribution can be derived theoretically only for very simple linear detection statistics. With non-parametric statistical methods, the two limitations described above can be overcome. The major drawback of non-parametric methods is the computational burden, with processing times ranging from hours to days, which has so far made them impractical for routine use in single subject fMRI analysis. In this work, it is shown how the computational power of cost-efficient Graphics Processing Units (GPUs) can be used to speed up random permutation tests. A test with 10 000 permutations takes less than a minute, making statistical analysis of advanced detection methods in fMRI practically feasible. To exemplify the permutation based approach, brain activity maps generated by the General Linear Model (GLM) and Canonical Correlation Analysis (CCA) are compared at the same significance level. During the development of the routines and writing of the paper, 3–4 years of processing time has been saved by using the GPU.
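The core of a random permutation test fits in a few lines. The snippet below is a deliberately simplified Python/NumPy illustration (not the GPU code from the paper, and real fMRI data would first need whitening to break temporal autocorrelation): it builds an empirical null distribution for a correlation statistic by permuting the time course, then reads off a significance threshold. On a GPU, it is the permutations that get parallelized.

```python
import numpy as np

def permutation_threshold(y, x, n_perm=1000, alpha=0.05, seed=0):
    """Empirical alpha-level threshold for |corr(y, x)| obtained by
    randomly permuting the time course y to destroy any true relation."""
    rng = np.random.default_rng(seed)
    null = np.empty(n_perm)
    for i in range(n_perm):
        null[i] = abs(np.corrcoef(rng.permutation(y), x)[0, 1])
    return np.quantile(null, 1.0 - alpha)
```

Because the null distribution is estimated empirically, the same machinery works unchanged for detection statistics (such as CCA) whose null distribution has no closed form.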

(Anders Eklund, Mats Andersson, Hans Knutsson: “Fast Random Permutation Tests Enable Objective Evaluation of Methods for Single Subject fMRI Analysis”, International Journal of Biomedical Imaging, Article ID 627947, 2011 [Youtube Video] [PDF])

True 4D Image Denoising on the GPU

July 17th, 2011


The use of image denoising techniques is an important part of many medical imaging applications. One common application is to improve the image quality of low-dose, i.e. noisy, computed tomography (CT) data. The medical imaging domain has seen a tremendous development during the last decades. It is now possible to collect time resolved volumes, i.e. 4D data, with a number of modalities (e.g. ultrasound (US), CT, magnetic resonance imaging (MRI)). While 3D image denoising has previously been applied to several volumes independently, there has not been much work done on true 4D image denoising, where the algorithm considers several volumes at the same time (and not a single volume at a time). By using all the dimensions, it is for example possible to remove some of the time varying reconstruction artefacts that exist in CT volumes. The problem with 4D image denoising, compared to 2D and 3D denoising, is that the computational complexity increases exponentially. In this paper we describe a novel algorithm for true 4D image denoising, based on local adaptive filtering, and how to implement it on the graphics processing unit (GPU). The algorithm was applied to a 4D CT heart dataset with a resolution of 512 x 512 x 445 x 20. The GPU completes the denoising in about 25 minutes with spatial filtering and in about 8 minutes with FFT-based filtering. The CPU implementation requires several days of processing time for spatial filtering and about 50 minutes for FFT-based filtering. Fast spatial filtering makes it possible to apply the denoising algorithm to larger datasets than FFT-based filtering allows. The short processing time increases the clinical value of true 4D image denoising significantly.
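The FFT-based variant amounts to pointwise multiplication in the 4D frequency domain. A minimal Python/NumPy sketch of that building block follows; it ignores the locally adaptive, tensor-controlled filter design of the actual algorithm and just applies one fixed 4D kernel by circular convolution:

```python
import numpy as np

def fft_filter_4d(data, kernel):
    """Filter a 4D (x, y, z, t) array by multiplying its 4D FFT with the
    FFT of a small kernel zero-padded to the data shape (circular conv)."""
    K = np.fft.fftn(kernel, s=data.shape)
    return np.real(np.fft.ifftn(np.fft.fftn(data) * K))
```

The cost of the transform grows with the total number of samples, which is why a 512 x 512 x 445 x 20 dataset pushes even this "fast" route into the minutes range, and why a well-optimized spatial convolution can be preferable for data too large to transform whole.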

(Anders Eklund, Mats Andersson, Hans Knutsson: “True 4D Image Denoising on the GPU”, International Journal of Biomedical Imaging, Article ID 952819, 2011 [Youtube Video] [PDF])

WaveTomography v1.0: 2D waveform tomography reconstruction

February 21st, 2010

WaveTomography is a 2D time-domain waveform tomography reconstruction algorithm that can be run on graphics processing units. It features:

  • Wave propagation using leapfrog and ONADM schemes.
  • First order absorbing boundary conditions.
  • CPU only and CPU/GPU implementations.
  • Flexible reconstruction strategy (choice of emitters and receivers at each iteration).
  • Flexible imaging setup (choice of transducers’ positions).

The WaveTomography package also includes a standalone simulator for wave propagation. The source code can be freely downloaded.
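For readers unfamiliar with the leapfrog scheme listed above, the update for the 2D scalar wave equation can be sketched as follows. This is a plain Python/NumPy illustration with periodic boundaries, not the package's implementation (which uses first-order absorbing boundary conditions and runs on the GPU):

```python
import numpy as np

def leapfrog_step(u_prev, u_curr, courant2):
    """One leapfrog time step of the 2D scalar wave equation on a unit
    grid: u_next = 2 u - u_prev + (c dt/dx)^2 * Laplacian(u).
    Stability requires courant2 = (c * dt / dx) ** 2 <= 0.5 in 2D."""
    lap = (np.roll(u_curr, 1, 0) + np.roll(u_curr, -1, 0) +
           np.roll(u_curr, 1, 1) + np.roll(u_curr, -1, 1) - 4.0 * u_curr)
    return 2.0 * u_curr - u_prev + courant2 * lap
```

Each grid point is updated from its four neighbours only, so the stencil maps naturally onto GPU threads; the forward solver is run once per emitter in every tomographic iteration, which is where the bulk of the reconstruction time goes.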

(Roy, O., Jovanovic, I., Hormati, A., Parhizkar, R., and Vetterli, M., “Sound speed estimation using wave-based ultrasound tomography: Theory and GPU implementation”, in Proc. SPIE Medical Imaging, 2010.)

High performance computing for deformable image registration: towards a new paradigm in adaptive radiotherapy

August 11th, 2008

This paper describes an implementation of fast deformable image registration for radiation therapy using GPUs and CUDA. On lung and prostate volumetric imaging, the GPU implementation is 40–66 times faster than a single-threaded CPU implementation and 25–41 times faster than a multithreaded one. The paradigm of GPU-based near-real-time deformable image registration opens up a host of clinical applications for medical imaging. (Sanjiv S. Samant, Junyi Xia, Pınar Muyan-Özçelik, John D. Owens: “High performance computing for deformable image registration: Towards a new paradigm in adaptive radiotherapy”, Medical Physics, 2008.)

GPU-Accelerated Computed Tomography

May 6th, 2005

The task of reconstructing an object from its projections via tomographic methods is a time-consuming process due to the vast complexity of the data. GPUs offer an affordable alternative to proprietary ASICs and FPGAs. Fang Xu and Klaus Mueller at Stony Brook University have shown that the latest generation of GPUs can be exploited to perform both analytical and iterative reconstruction from X-ray and functional imaging data at clinical rates and high quality. Visualization of the reconstructed object is easily achieved since the object already resides in the graphics hardware, allowing one to run a visualization module at any time to view the reconstruction results. Their implementation allows speedups of 1-2 orders of magnitude over software implementations, at comparable image quality. (Link to the project page)
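The central operation in analytical reconstruction is backprojection: smearing each measured projection back across the image along its acquisition angle. A deliberately simple, unfiltered parallel-beam sketch in Python/NumPy is shown below (the GPU implementations map the same per-pixel work onto graphics hardware, and a real FBP would ramp-filter each projection first):

```python
import numpy as np

def backproject(sinogram, angles, size):
    """Unfiltered parallel-beam backprojection.
    sinogram: (n_angles, n_detectors) array; returns a (size, size) image."""
    recon = np.zeros((size, size))
    c = (size - 1) / 2.0
    ys, xs = np.mgrid[0:size, 0:size]
    xs, ys = xs - c, ys - c                      # pixel coords around centre
    for proj, theta in zip(sinogram, angles):
        # detector coordinate hit by every pixel at this angle
        t = xs * np.cos(theta) + ys * np.sin(theta) + c
        idx = np.clip(np.round(t).astype(int), 0, size - 1)
        recon += proj[idx]                       # nearest-neighbour smear
    return recon / len(angles)
```

Every output pixel is computed independently from the sinogram, which is exactly why the operation fit so naturally onto the texture-fetch hardware of early GPUs.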

Interactive, GPU-Based Level Sets for 3D Brain Tumor Segmentation

August 29th, 2003

This Medical Image Computing and Computer Assisted Intervention (MICCAI) 2003 paper by Lefohn et al. describes a brain tumor segmentation study performed with a new GPU-based level-set solver. This paper demonstrates that the ability to interact with a level-set computation in real time enables users to quickly produce segmentations from MRI data that qualitatively and quantitatively compare favorably with expert hand-segmentations. (Interactive, GPU-Based Level Sets for 3D Brain Tumor Segmentation. Aaron E. Lefohn, Joshua E. Cates and Ross T. Whitaker. To appear at “Medical Image Computing and Computer Assisted Intervention” (MICCAI), 2003.)
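A level-set solver evolves an implicit surface, represented as the zero crossing of a function phi, under a speed term. Stripped of the curvature and intensity-based data terms used in the paper (and of the sparse GPU streaming that makes it interactive), one explicit update step can be sketched as follows in Python/NumPy:

```python
import numpy as np

def level_set_step(phi, speed, dt=0.5):
    """One explicit level-set update  phi <- phi - dt * F * |grad phi|.
    Central differences are adequate here because phi is assumed to be a
    smooth signed-distance function; production solvers use upwinding."""
    gy, gx = np.gradient(phi)
    return phi - dt * speed * np.sqrt(gx ** 2 + gy ** 2)
```

With a positive speed the zero level set moves outward, growing the segmented region; in the tumor-segmentation setting the speed would be driven by image intensities and user parameters instead of a constant.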