Back Testing of HFT Strategies with Xcelerit and GPUs

July 26th, 2013

Algorithmic trading has become ever more popular in recent years, accounting for approximately half of all European and American stock trades placed in 2012. Trading strategies need to be back-tested regularly against historical market data, both to calibrate their parameters and to check the expected return and risk. This is a computationally demanding process that can take hours to complete; yet back-testing strategies frequently intra-day can significantly increase a trading institution's profits.

Read the rest of this entry »
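To give a feel for the workload described above, here is a minimal, hypothetical back-test in NumPy: a moving-average crossover strategy run over synthetic price data. This is an illustrative sketch only, not Xcelerit's API or any real trading strategy; a production back-test would run a loop like this over years of historical tick data, which is what makes the problem a good fit for GPUs.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic daily prices: geometric random walk (stands in for historical data)
returns = rng.normal(loc=0.0002, scale=0.01, size=1000)
prices = 100.0 * np.exp(np.cumsum(returns))

def moving_average(x, window):
    # Trailing moving average; output has len(x) - window + 1 entries
    return np.convolve(x, np.ones(window) / window, mode="valid")

fast = moving_average(prices, 10)
slow = moving_average(prices, 50)

# Align the two series: fast[j] ends at price index j+9, slow[j] at j+49,
# so dropping the first 40 entries of `fast` lines both up.
fast = fast[40:]
px = prices[49:]

# Long when the fast MA is above the slow MA, flat otherwise;
# the position is applied to the *next* bar's return to avoid look-ahead bias.
position = (fast > slow).astype(float)[:-1]
strategy_returns = position * np.diff(np.log(px))

total_return = np.exp(strategy_returns.sum()) - 1.0
print(f"total return over the test window: {total_return:.2%}")
```

A calibration run would repeat this over a grid of window lengths and other parameters, which is trivially parallel and maps well onto GPU hardware.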

SciComp Speeds Derivatives Performance with Support for New NVIDIA® Hardware and Software

November 17th, 2010

From a press release:

AUSTIN, Texas — Financial institutions are turning to graphics processing unit (GPU) computing for real economic and performance benefits. Fast and accurate derivatives pricing model development and accelerated execution speeds are crucial for today’s derivatives marketplace. SciComp Inc. has enhanced SciFinance®, its flagship derivatives pricing software, to help quantitative developers further shorten Monte Carlo derivatives pricing model development time and create models with faster execution speeds. SciFinance® now features support for NVIDIA® Tesla™ 20-series GPUs and CUDA™ 3.0.

“The mathematical problems of pricing derivatives are tailor-made for GPU computing, and Monte Carlo simulations enjoy some of the fastest speed-ups on GPUs: from 50 to over 300 times faster compared to serial code,” said Curt Randall, executive vice president of SciComp. “This execution speed increase makes it feasible to replace grid solutions (CPUs and interconnects) with a GPU system. GPU costs are a tiny percentage of the cost of a grid solution and offer radical reductions in both footprint and power consumption.”

SciFinance takes advantage of new GPU hardware and software from NVIDIA.

Read the rest of this entry »
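To illustrate the kind of workload the quote refers to, here is a minimal Monte Carlo pricer for a European call under geometric Brownian motion, with the Black-Scholes closed form as a sanity check. All parameter values are illustrative assumptions, not taken from the release; this NumPy version corresponds to the serial CPU baseline, and a GPU port would generate the paths in parallel (e.g. in CUDA).

```python
import numpy as np
from math import log, sqrt, exp, erf

# Illustrative contract parameters (assumptions, not from SciComp)
S0, K, r, sigma, T = 100.0, 105.0, 0.05, 0.2, 1.0
n_paths = 1_000_000

rng = np.random.default_rng(0)
z = rng.standard_normal(n_paths)

# Terminal stock price under geometric Brownian motion
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
payoff = np.maximum(ST - K, 0.0)
mc_price = exp(-r * T) * payoff.mean()

# Black-Scholes closed form for comparison
def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
d2 = d1 - sigma * sqrt(T)
bs_price = S0 * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

print(f"Monte Carlo: {mc_price:.4f}, Black-Scholes: {bs_price:.4f}")
```

Each path is independent, which is why Monte Carlo pricing sees some of the largest GPU speed-ups: the path generation and payoff evaluation map one-to-one onto thousands of GPU threads.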

CfP: Workshop on High-performance computing applied to Finance (HPCF 2010)

April 12th, 2010

This workshop focuses on computational issues in the evaluation of financial instruments using advanced architectures. The workshop is intended to bring together academics from finance, statistics, numerical analysis and computer science, as well as decision makers and strategists from the financial industry and regulators from supervisory authorities, in order to discuss recent challenges and results in using high-performance technologies for the evaluation of financial instruments. Accepted papers presented at the workshop will be published in a special Euro-Par 2010 workshop volume in the Lecture Notes in Computer Science (LNCS) series after the Euro-Par 2010 conference.

This workshop will be held in conjunction with Euro-Par 2010, Ischia, Naples, Italy, on August 30, 2010. More information and the full call for papers are available on the workshop homepage.

Accelerated Fluctuation Analysis by Graphic Cards and Complex Pattern Formation in Financial Markets

September 22nd, 2009


The compute unified device architecture is an almost conventional programming approach for managing computations on a graphics processing unit (GPU) as a data-parallel computing device. With a maximum number of 240 cores in combination with a high memory bandwidth, a recent GPU offers resources for computational physics. We apply this technology to methods of fluctuation analysis, which includes determination of the scaling behavior of a stochastic process and the equilibrium autocorrelation function. Additionally, the recently introduced pattern formation conformity (Preis T et al 2008 Europhys. Lett. 82 68005), which quantifies pattern-based complex short-time correlations of a time series, is calculated on a GPU and analyzed in detail. Results are obtained up to 84 times faster than on a current central processing unit core. When we apply this method to high-frequency time series of the German BUND future, we find significant pattern-based correlations on short time scales. Furthermore, an anti-persistent behavior can be found on short time scales. Additionally, we compare the recent GPU generation, which provides a theoretical peak performance of up to roughly 10^12 (one trillion) floating point operations per second, with the previous one.

(Tobias Preis et al., Accelerated fluctuation analysis by graphic cards and complex pattern formation in financial markets, New J. Phys. 11 093024 (21pp) doi: 10.1088/1367-2630/11/9/093024)
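The equilibrium autocorrelation function mentioned in the abstract can be sketched on the CPU as follows. Synthetic AR(1) data stands in for the high-frequency BUND-future series (an assumption, chosen because its true autocorrelation at lag k is known to be rho**k); the paper's contribution is running kernels of exactly this shape on the GPU.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic AR(1) series: x[t] = rho * x[t-1] + noise
rho = 0.5
n = 100_000
x = np.empty(n)
x[0] = rng.standard_normal()
for t in range(1, n):
    x[t] = rho * x[t - 1] + rng.standard_normal()

def autocorrelation(series, max_lag):
    """Sample autocorrelation for lags 0..max_lag."""
    s = series - series.mean()
    var = np.dot(s, s) / len(s)
    return np.array([np.dot(s[:len(s) - k], s[k:]) / (len(s) * var)
                     for k in range(max_lag + 1)])

acf = autocorrelation(x, 5)
print(acf)  # decays roughly as rho**lag for an AR(1) process
```

Each lag is an independent dot product over the whole series, so the computation parallelizes naturally across GPU threads, which is how the paper obtains its reported speed-ups.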

Performance Evaluation of GPUs Using the RapidMind Development Platform

November 4th, 2006

This white paper from RapidMind and HP compares the performance of dense linear algebra (BLAS) operations, the FFT, and European option pricing on the GPU against highly tuned implementations on the fastest available CPUs. All of the GPU implementations were written using the RapidMind Development Platform, which allows standard C++ code to be used to create high-performance parallel applications that run on the GPU. The full source for the samples is available in conjunction with a new beta version of the RapidMind Development Platform. The results will also be presented as a poster at SC06.