Parallelization and Characterization of Pattern Matching using GPUs

October 29th, 2011

Abstract:

Pattern matching is a highly computationally intensive operation used in a plethora of applications. Unfortunately, due to ever-increasing storage capacity and link speeds, the amount of data that needs to be matched against a given set of patterns is growing rapidly. In this paper, we explore how the highly parallel computational capabilities of commodity graphics processing units (GPUs) can be exploited for high-speed pattern matching. We present the design, implementation, and evaluation of a pattern matching library running on the GPU, which can be used transparently by a wide range of applications to increase their overall performance. The library supports both string searching and regular expression matching on the NVIDIA CUDA architecture. We have also explored the performance impact of different types of memory hierarchies, and present solutions to alleviate memory congestion problems. The results of our performance evaluation using off-the-shelf graphics processors demonstrate that GPU-based pattern matching can reach tens of gigabits per second on different workloads.
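
The paper's library is not reproduced here, but the core data-parallel idea behind GPU string searching can be sketched in a few lines of CUDA. In this illustrative kernel (the names and the flat one-thread-per-offset mapping are assumptions, not the authors' implementation), each thread tests whether the pattern occurs at one starting offset of the input buffer; placing the pattern in constant memory is the kind of memory-hierarchy choice the abstract alludes to.

    // Illustrative sketch, not the authors' library: one thread per
    // starting offset of the input text. For small patterns, __constant__
    // memory is a natural home, since every thread reads the same bytes.
    __constant__ char c_pattern[256];

    __global__ void naive_match(const char *text, int text_len,
                                int pat_len, int *match_flags)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i + pat_len > text_len) return;
        int j = 0;
        while (j < pat_len && text[i + j] == c_pattern[j])
            ++j;
        match_flags[i] = (j == pat_len);  // 1 if the pattern starts at offset i
    }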

(Giorgos Vasiliadis, Michalis Polychronakis and Sotiris Ioannidis: “Parallelization and Characterization of Pattern Matching using GPUs”, Proceedings of the IEEE International Symposium on Workload Characterization (IISWC). November 2011. [PDF])

Physically based lighting for volumetric data with Exposure Render

October 27th, 2011

Exposure Render is a direct volume rendering application that applies progressive Monte Carlo ray tracing, coupled with physically based light transport, to heterogeneous volumetric data. Exposure Render enables the configuration of any number of arbitrarily shaped area lights, models a real-world camera, including its lens and aperture, and incorporates complex materials, whilst still maintaining interactive display updates. It features both surface and volumetric scattering, and applies noise reduction to remove the unwanted startup noise associated with progressive Monte Carlo rendering. The complete implementation is available in source and binary forms under a permissive free software license.
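
As a taste of how progressive rendering of this kind typically works (a generic sketch, not Exposure Render's actual code), each low-sample frame produced by the ray tracer is blended into a running average, so the display stays interactive while the Monte Carlo estimate converges over time:

    // Generic progressive Monte Carlo accumulation: blend frame number
    // frame_count into the running average with weight 1/frame_count.
    __global__ void accumulate(float3 *accum, const float3 *frame,
                               int num_pixels, int frame_count)
    {
        int p = blockIdx.x * blockDim.x + threadIdx.x;
        if (p >= num_pixels) return;
        float t = 1.0f / (float)frame_count;        // weight of the new frame
        accum[p].x = accum[p].x * (1.0f - t) + frame[p].x * t;
        accum[p].y = accum[p].y * (1.0f - t) + frame[p].y * t;
        accum[p].z = accum[p].z * (1.0f - t) + frame[p].z * t;
    }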

SIMD Re-convergence at Thread Frontiers: A new method for handling branch divergence on GPUs

October 24th, 2011

Abstract:

Hardware and compiler techniques for mapping data-parallel programs with divergent control flow to SIMD architectures have recently enabled the emergence of new GPGPU programming models such as CUDA, OpenCL, and DirectX Compute. The impact of branch divergence can be quite different depending upon whether the program’s control flow is structured or unstructured. In this paper, we show that unstructured control flow occurs frequently in applications and can lead to significant code expansion when executed using existing approaches for handling branch divergence. This paper proposes a new technique for automatically mapping arbitrary control flow onto SIMD processors that relies on the concept of a “Thread Frontier”: a statically bounded region of the program containing all threads that have branched away from the current warp. This technique is evaluated on a GPU emulator configured to model (i) a commodity GPU (Intel Sandy Bridge) and (ii) custom hardware support not realized in current GPU architectures. It is shown that this new technique performs identically to the best existing method for structured control flow, and re-converges at the earliest possible point when executing unstructured control flow. This leads to (i) 1.5–633.2% reductions in dynamic instruction counts for several real applications, (ii) simplification of the compilation process, and (iii) the ability to efficiently add high-level unstructured programming constructs (e.g., exceptions) to existing data-parallel languages.
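
As a toy illustration of the problem (not an example from the paper), short-circuit operators in C-like languages are one common source of unstructured control flow: failing the first test jumps past the second, so the control-flow graph is not a clean if/else diamond. In the hypothetical kernel below, conventional immediate-post-dominator re-convergence can make diverged threads wait longer than necessary, whereas re-converging at the thread frontier lets them rejoin as early as correctness allows.

    // Hypothetical kernel: the short-circuit && below typically compiles
    // to unstructured control flow, since failing the first test skips
    // the second one entirely.
    __device__ bool costly_test(float x)   // stand-in for an expensive predicate
    {
        return __sinf(x) > 0.5f;
    }

    __global__ void filter(const float *a, const float *b, float *out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        // Threads diverge at both tests; where they re-converge determines
        // how long the fall-through threads sit idle.
        if (a[i] > 0.0f && costly_test(b[i]))
            out[i] = a[i];
        else
            out[i] = 0.0f;
    }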

(Gregory Diamos, Benjamin Ashbaugh, Subramaniam Maiyuran, Andrew Kerr, Haicheng Wu and Sudhakar Yalamanchili: “SIMD Re-convergence at Thread Frontiers”. 44th International Symposium on Microarchitecture (MICRO 44), 2011. [WWW])

Efficient Synchronization Primitives for GPUs

October 22nd, 2011

Abstract:

In this paper, we revisit the design of synchronization primitives—specifically barriers, mutexes, and semaphores—and how they apply to the GPU. Previous implementations are insufficient due to the discrepancies between the hardware and programming models of the GPU and the CPU. We create new implementations in CUDA and analyze the performance of spinning on the GPU, as well as a method of sleeping on the GPU, by running a set of memory-system benchmarks on two of the most common GPUs in use, the Tesla- and Fermi-class GPUs from NVIDIA. From our results we define higher-level principles that are valid for generic many-core processors, the most important of which is to limit the number of atomic accesses required for a synchronization operation, because atomic accesses are slower than regular memory accesses. We use the benchmark results to critique existing synchronization algorithms and guide our new implementations, and then define an abstraction of GPUs that classifies any GPU based on the behavior of its memory system. We use this abstraction to create suitable implementations of the primitives specifically targeting the GPU, and analyze their performance on Tesla and Fermi. We then predict performance on future GPUs based on characteristics of the abstraction. We also examine the roles of spin waiting and sleep waiting in each primitive and how their performance varies with the machine abstraction, and give a set of guidelines for when each strategy is useful based on the characteristics of the GPU and the expected contention.
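
To make the atomic-access principle concrete, here is a minimal GPU spin mutex (an illustrative sketch, not the paper's implementation): each acquire attempt costs one atomic compare-and-swap, exactly the kind of traffic the authors advise minimizing. On NVIDIA hardware of this era it would typically be taken by one representative thread per block, to avoid intra-warp deadlock under lockstep execution.

    // Minimal GPU spin mutex sketch using atomicCAS; not the paper's code.
    // Each failed acquire attempt costs one atomic access, the operation
    // the paper identifies as the main expense.
    __device__ void mutex_lock(int *mutex)
    {
        while (atomicCAS(mutex, 0, 1) != 0)
            ;                      // spin until the lock is observed free
        __threadfence();           // make the previous owner's writes visible
    }

    __device__ void mutex_unlock(int *mutex)
    {
        __threadfence();           // publish this thread's critical-section writes
        atomicExch(mutex, 0);      // release the lock
    }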

(Jeff A. Stuart and John D. Owens: “Efficient Synchronization Primitives for GPUs”, submitted October 2011. [ARXIV])

rCUDA 3.1 Released

October 20th, 2011

The new version 3.1 of rCUDA (Remote CUDA), the open-source package that allows CUDA calls to be performed on remote GPUs, is now available. Release highlights:

  • API fully updated to CUDA 4.0 (added support for the “Peer Device Memory Access” and “Unified Addressing” modules).
  • Fixed low-level surface reference management functions.

For further information, please visit the rCUDA webpage at http://www.gap.upv.es/rCUDA.

Symscape Releases Caedium v3.0 with GPU Support

October 20th, 2011

The latest release of Symscape’s Caedium (v3.0) adds support for CFD simulations on NVIDIA CUDA GPU devices under Windows and Linux. Caedium is an integrated simulation environment for Computational Fluid Dynamics (CFD). The GPU support is provided by Symscape’s ofgpu linear solver library for OpenFOAM®. For more details see:
http://www.symscape.com/news/hybrid-cfd-modeling-cloud-computing

Extending MPI to Accelerators

October 19th, 2011

A paper detailing several possible avenues for extending MPI to accelerators has just been presented at “Architectures and Systems for Big Data (ASBD) 2011”, a workshop at PACT 2011. The abstract and a link to the paper are below. We (the authors) are looking for feedback on which options seem attractive to GPU programmers and developers, and welcome any comments, thoughts, or critiques you might have.

Current trends in computing and system architecture point towards a need for accelerators such as GPUs to have inherent communication capabilities. We review previous and current software libraries that provide pseudo-communication abilities through direct message passing. We show how these libraries are beneficial to the HPC community, but are not forward-thinking enough. We give motivation as to why MPI should be extended to support these accelerators, and provide a road map of achievable milestones to complete such an extension, some of which require advances in hardware and device drivers.
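
For context, the conventional pattern the paper wants to move beyond looks roughly like the following (a sketch; buffer names and sizes are illustrative): device data must be staged through host memory before MPI can see it. An accelerator-aware MPI could accept the device pointer directly and elide the copy.

    #include <mpi.h>
    #include <cuda_runtime.h>

    // Staging a device buffer through the host before an MPI send; this
    // explicit copy is what accelerator-aware MPI extensions would remove.
    void send_device_buffer(const float *d_buf, int count, int dest, int tag)
    {
        float *h_buf;
        cudaMallocHost((void **)&h_buf, count * sizeof(float));  // pinned staging buffer
        cudaMemcpy(h_buf, d_buf, count * sizeof(float), cudaMemcpyDeviceToHost);
        MPI_Send(h_buf, count, MPI_FLOAT, dest, tag, MPI_COMM_WORLD);
        cudaFreeHost(h_buf);
    }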

(Jeff A. Stuart, Pavan Balaji and John D. Owens, “Extending MPI to Accelerators”, PACT 2011 Workshop Series: Architectures and Systems for Big Data, October 2011. [WWW])

OpenCL Compiler Tools

October 19th, 2011

OCLTools is a powerful yet compact suite of open-source tools that gives OpenCL developers more alternatives for kernel compilation. With OCLTools, developers can embed their kernels, either as source code (clear text or encrypted) or as precompiled binaries, directly into their program binaries. This eliminates the need to distribute kernel source code in the open while maintaining the flexibility of runtime compilation, and removes costly kernel compilation overhead from an application’s run time.
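
The underlying pattern OCLTools automates can be sketched with the standard OpenCL API alone (an illustrative example, not OCLTools' own interface): the kernel source lives in a string constant inside the binary and is JIT-compiled on first use, so no .cl file ships with the application.

    #include <CL/cl.h>

    // Kernel source embedded directly in the program binary; nothing to
    // distribute alongside the executable.
    static const char *kernel_src =
        "__kernel void scale(__global float *v, float a) {"
        "    int i = get_global_id(0);"
        "    v[i] *= a;"
        "}";

    cl_program build_embedded(cl_context ctx, cl_device_id dev)
    {
        cl_int err;
        cl_program prog =
            clCreateProgramWithSource(ctx, 1, &kernel_src, NULL, &err);
        clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);  // JIT compile at run time
        return prog;
    }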

For more information go to http://www.clusterchimps.org

CfP: 20th High Performance Computing Symposium 2012

October 7th, 2011

The 2012 Spring Simulation Multi-conference will feature the 20th High Performance Computing Symposium (HPC 2012), devoted to the impact of high performance computing and communications on computer simulations. Topics of interest include:

  • high performance/large scale application case studies,
  • GPUs for general purpose computations (GPGPU),
  • multicore and many-core computing,
  • power aware computing,
  • large scale visualization and data management,
  • tools and environments for coupling parallel codes,
  • parallel algorithms and architectures,
  • high performance software tools,
  • component technologies for high performance computing.

Important dates: Paper submission due: December 2, 2011; Notification of acceptance: January 13, 2012; Revised manuscript due: January 27, 2012; Symposium: March 26–29, 2012.

Sequence Homology Search using Fine-Grained Cycle Sharing of Idle GPUs

October 2nd, 2011

Abstract:

In this paper, we propose a fine-grained cycle sharing (FGCS) system capable of exploiting idle graphics processing units (GPUs) to accelerate sequence homology search in local area network environments. Our system exploits short idle periods on GPUs by running small parts of guest programs such that each part completes within hundreds of milliseconds. To detect such short idle periods across the pool of registered resources, our system continuously monitors keyboard and mouse activity via event handlers, rather than waiting for a screensaver as existing systems typically do. Our system also divides guest tasks into small parts according to a performance model that estimates the execution time of each part. This task division strategy minimizes disruption to the owners of the GPU resources. Experimental results show that our FGCS system running on two non-dedicated GPUs achieves 111–116% of the throughput of a single dedicated GPU, and more than twice the throughput of a screensaver-based system. We also show that the idle periods detected by our system constitute half of the system uptime. We believe that the GPUs hidden and often unused in office environments provide a powerful platform for sequence homology search.
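
The task-division idea reduces to sizing each piece of guest work so it finishes within a target idle slice. A hedged sketch (the parameter names are assumptions, not from the paper):

    // Choose how many queries to batch so one guest chunk finishes within
    // the target idle slice (hundreds of milliseconds, per the abstract).
    // ms_per_query would come from the paper's performance model.
    int queries_per_chunk(double ms_per_query, double target_slice_ms)
    {
        int n = (int)(target_slice_ms / ms_per_query);
        return n > 0 ? n : 1;   // always make some forward progress
    }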

(Fumihiko Ino, Yuma Munekawa, and Kenichi Hagihara, “Sequence Homology Search using Fine-Grained Cycle Sharing of Idle GPUs”, accepted for publication in IEEE Transactions on Parallel and Distributed Systems, Sep. 2011. [DOI])
