Intel Releases SDK with OpenCL* 1.2 support for Intel® Xeon Phi™ Coprocessors

May 11th, 2013

The new Intel® SDK for OpenCL* Applications XE 2013 includes certified OpenCL 1.2 support for Intel® Xeon® processors and Intel® Xeon Phi™ coprocessors running Linux* operating systems. The SDK targets developers of highly parallel applications in areas such as high-performance computing (HPC), workstations, and data analytics. OpenCL broadens the parallel programming options on Intel® architecture and allows developers to maximize data-parallel application performance on Intel Xeon Phi coprocessors.
The Intel SDK for OpenCL Applications XE 2013 provides developers with an OpenCL runtime and compiler, development tools, optimization guides, code samples, and training materials. More information: www.intel.com/software/opencl-xe
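
For orientation, the following minimal host-side sketch (an illustration, not one of the SDK's samples) shows how an OpenCL program discovers the coprocessor: Intel Xeon Phi coprocessors are exposed as OpenCL accelerator devices, alongside the host Xeon CPU device.

    #include <CL/cl.h>
    #include <cstdio>
    #include <vector>

    int main() {
        // Enumerate every OpenCL platform installed on the system.
        cl_uint num_platforms = 0;
        clGetPlatformIDs(0, NULL, &num_platforms);
        std::vector<cl_platform_id> platforms(num_platforms);
        clGetPlatformIDs(num_platforms, platforms.data(), NULL);

        for (cl_platform_id p : platforms) {
            // Xeon Phi coprocessors appear as CL_DEVICE_TYPE_ACCELERATOR;
            // the host Xeon CPU would be CL_DEVICE_TYPE_CPU instead.
            cl_uint num_devices = 0;
            if (clGetDeviceIDs(p, CL_DEVICE_TYPE_ACCELERATOR, 0, NULL,
                               &num_devices) != CL_SUCCESS)
                continue;  // this platform has no accelerator devices
            std::vector<cl_device_id> devices(num_devices);
            clGetDeviceIDs(p, CL_DEVICE_TYPE_ACCELERATOR, num_devices,
                           devices.data(), NULL);
            for (cl_device_id d : devices) {
                char name[256] = {0};
                clGetDeviceInfo(d, CL_DEVICE_NAME, sizeof(name), name, NULL);
                printf("Found accelerator: %s\n", name);
            }
        }
        return 0;
    }

A context and command queue created on such a device then work exactly as they would on any other OpenCL device.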

New Three-part Webinar Series: OpenCL Programming on Intel® Processors

June 12th, 2012

Register now for a three-part webinar series on how to use the Intel® SDK for OpenCL Applications to best utilize the CPU and Intel® HD Graphics of 3rd generation Intel® Core™ processors when developing OpenCL applications.

OpenCL SDK for new Intel Core Processors

April 27th, 2012

The Intel® SDK for OpenCL Applications now supports the OpenCL 1.1 full profile on 3rd generation Intel® Core™ processors with Intel® HD Graphics 4000/2500. For the first time, OpenCL developers on Intel® architecture can utilize compute resources across both the Intel® CPU and Intel® HD Graphics. More information: http://software.intel.com/en-us/articles/vcsource-tools-opencl-sdk
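
In practice, using both the CPU and Intel HD Graphics means the two appear as separate devices under the Intel platform and can share a single OpenCL context. A minimal sketch (assuming the Intel platform is the first one reported):

    #include <CL/cl.h>
    #include <cstdio>

    int main() {
        // Take the first platform; assumed here to be the Intel OpenCL platform.
        cl_platform_id platform;
        if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS) return 1;

        // Ask for the CPU and the integrated GPU in a single query.
        cl_device_id devices[2];
        cl_uint num_devices = 0;
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU | CL_DEVICE_TYPE_GPU,
                       2, devices, &num_devices);

        // One context spanning both devices lets the application share
        // buffers between them and queue kernels on either device.
        cl_context_properties props[] = {
            CL_CONTEXT_PLATFORM, (cl_context_properties)platform, 0};
        cl_context ctx =
            clCreateContext(props, num_devices, devices, NULL, NULL, NULL);
        printf("Context created with %u device(s)\n", num_devices);
        clReleaseContext(ctx);
        return 0;
    }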

Intel SPMD Compiler Version 1.1 Released

December 7th, 2011

A major new release of the Intel SPMD Program Compiler (ispc) was posted on December 5, 2011. ispc compiles an extended version of the C programming language with support for “single program, multiple data” (SPMD) programming on the CPU; the SPMD model makes it easy to harness the full power of both the SIMD vector units and the multiple cores of modern CPUs. The major features added in the 1.1 release include:

  • Full support for pointers, including pointer arithmetic, function pointers, and all other features of pointers in C.
  • A new parallel “foreach” statement, for more easily mapping computation to data.
  • Substantially revised documentation, including a new Performance Guide.
  • Many other small bug fixes and improvements.

ispc is open-source and is licensed under the BSD license. Source and binaries are available from http://ispc.github.com.

Intel announces a high-performance SPMD compiler for the CPU

June 26th, 2011

Intel has announced ispc, the Intel SPMD Program Compiler, now available in source and binary form from http://ispc.github.com.

ispc is a new compiler for “single program, multiple data” (SPMD) programs: the same model that is used for (GP)GPU programming, but here targeted at CPUs. ispc compiles a C-based SPMD programming language to run on the SIMD units of CPUs; it frequently provides a 3x or greater speedup on CPUs with 4-wide SSE units, without any of the difficulty of writing intrinsics code. A few principles and goals guided the design of ispc:

  • To build a small C-like language that would deliver excellent performance to performance-oriented programmers who want to run SPMD programs on the CPU.
  • To provide a thin abstraction layer between the programmer and the hardware—in particular, to have an execution and data model where the programmer can cleanly reason about the mapping of their source program to compiled assembly language and the underlying hardware.
  • To make it possible to harness the computational power of the SIMD vector units without the severe loss of programmer productivity that comes with writing intrinsics directly.
  • To explore opportunities from close coupling between C/C++ application code and SPMD ispc code running on the same processor—to have lightweight function calls between the two languages, to share data directly via pointers without copying or reformatting, and so forth (see the sketch after this list).
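
To make the last point concrete, here is a sketch of the C++ side of such a call. The kernel name scale_ispc and its signature are hypothetical; in a real build, the declaration comes from the header ispc generates for an export-qualified function, and the program links against the object file ispc produces.

    #include <cstdio>

    // Hypothetical ispc kernel, compiled separately by ispc from something like:
    //   export void scale_ispc(uniform float data[], uniform float s,
    //                          uniform int count);
    extern "C" void scale_ispc(float data[], float s, int count);

    int main() {
        float data[1024];
        for (int i = 0; i < 1024; ++i) data[i] = float(i);

        // A plain function call: no driver, no copies. The ispc code runs on
        // the same cores and reads the array directly through the pointer.
        scale_ispc(data, 0.5f, 1024);

        printf("data[2] = %f\n", data[2]);  // prints 1.000000
        return 0;
    }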

ispc is an open-source compiler with a BSD license. It uses the LLVM Compiler Infrastructure for back-end code generation and optimization and is hosted on GitHub. It supports Windows, Mac, and Linux, with both x86 and x86-64 targets. It currently supports the SSE2 and SSE4 instruction sets, with AVX support expected soon.

“Can CPUs Match GPUs on Performance with Productivity?: Experiences with Optimizing a FLOP-intensive Application on CPUs and GPU”

October 27th, 2010

Abstract:

In this work, we evaluate the performance of a real-world image processing application that uses a cross-correlation algorithm to compare a given image with a reference one. The algorithm processes individual images represented as 2-dimensional matrices of single-precision floating-point values using operations involving dot products and additions. We implement this algorithm on an NVIDIA Fermi GPU (Tesla 2050) using CUDA, and also manually parallelize it for the Intel Xeon X5680 (Westmere) and IBM Power7 multi-core processors. Pthreads and OpenMP with SSE and VSX vector intrinsics are used for the manually parallelized versions on the multi-core CPUs. A number of optimizations were performed for the GPU implementation on the Fermi, including blocking for Fermi’s configurable on-chip memory architecture. Experimental results illustrate that on a single multi-core processor, the manually parallelized versions of the correlation application perform only a small factor slower than the CUDA version executing on the Fermi – 1.005s on Power7 and 3.49s on the Intel X5680, versus 465ms on the Fermi. On a two-processor Power7 system, performance approaches that of the Fermi (650ms), while the Intel version runs in 1.78s. These results conclusively demonstrate that the performance of the GPU memory subsystem is critical to effectively harnessing its computational capabilities. For the correlation application, a significantly higher amount of effort was put into developing the GPU version than the CPU ones (several days versus a few hours). Our experience presents compelling evidence that performance comparable to that of GPUs can be achieved with much greater productivity on modern multi-core CPUs.

(R. Bordawekar and U. Bondhugula and R. Rao: “Can CPUs Match GPUs on Performance with Productivity?: Experiences with Optimizing a FLOP-intensive Application on CPUs and GPU”, Technical Report, IBM T. J. Watson Research Center, 2010 [PDF])
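
As a rough illustration of the CPU side (a sketch, not the authors' code, and without the SSE/VSX intrinsics they used), the heart of such a correlation is a dot-product loop that parallelizes naturally with OpenMP across image rows:

    // Correlate an image against a same-sized reference by summing per-row
    // dot products of single-precision values. Compile with OpenMP enabled
    // (e.g. -fopenmp); this sketches the approach, not the optimized kernel.
    float correlate(const float* img, const float* ref, int rows, int cols) {
        float sum = 0.0f;
        #pragma omp parallel for reduction(+ : sum)
        for (int r = 0; r < rows; ++r) {
            const float* a = img + r * cols;
            const float* b = ref + r * cols;
            float row_sum = 0.0f;  // per-row dot product
            for (int c = 0; c < cols; ++c)
                row_sum += a[c] * b[c];
            sum += row_sum;
        }
        return sum;
    }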

Debunking the 100X GPU vs. CPU myth: An evaluation of throughput computing on CPU and GPU

July 4th, 2010

Abstract:

Recent advances in computing have led to an explosion in the amount of data being generated. Processing the ever-growing data in a timely manner has made throughput computing an important aspect of emerging applications. Our analysis of a set of important throughput computing kernels shows that there is an ample amount of parallelism in these kernels, which makes them suitable for today’s multi-core CPUs and GPUs. In the past few years there have been many studies claiming GPUs deliver substantial speedups (between 10X and 1000X) over multi-core CPUs on these kernels. To understand where such a large performance difference comes from, we perform a rigorous performance analysis and find that after applying optimizations appropriate for both CPUs and GPUs, the performance gap between an NVIDIA GTX280 processor and the Intel Core i7-960 processor narrows to only 2.5x on average. In this paper, we discuss optimization techniques for both CPU and GPU, analyze which architectural features contributed to the performance differences between the two architectures, and recommend a set of architectural features which provide significant improvement in architectural efficiency for throughput kernels.

(Victor W. Lee, Changkyu Kim, Jatin Chhugani, Michael Deisher, Daehyun Kim, Anthony D. Nguyen, Nadathur Satish, Mikhail Smelyanskiy, Srinivas Chennupaty, Per Hammarlund, Ronak Singhal and Pradeep Dubey: “Debunking the 100X GPU vs. CPU myth: an evaluation of throughput computing on CPU and GPU”, SIGARCH Computer Architecture News 38(3), pp. 451-460, June 2010. DOI Link.)

Intel Demonstrates Knights Corner

June 2nd, 2010

At ISC’10, Intel demonstrated its co-processor approach to HPC (formerly known as Larrabee, now codenamed Knights Corner). A prototype of the Intel Many Integrated Core (MIC) architecture with 32 in-order cores, each equipped with a 512-bit-wide vector unit and connected via an on-chip coherent cache, delivered more than half a teraflop of performance for LU decomposition in a live demonstration during a keynote by Kirk Skaugen.

The full press release from ISC’10 is available here.

NVIDIA Announces Performance Primitives (NVPP) Library

June 8th, 2009

NVIDIA NVPP is a library of functions for CUDA-accelerated processing. The initial set of functionality focuses on imaging and video processing and is widely applicable for developers in these areas. NVPP will evolve over time to encompass more of the compute-heavy tasks in a variety of problem domains. The NVPP library is written to maximize flexibility while maintaining high performance.

NVPP can be used in one of two ways:

  • A stand-alone library for adding GPU acceleration to an application with minimal effort. Using this route allows developers to add GPU acceleration to their applications in a matter of hours.
  • A cooperative library for interoperating with a developer’s GPU code efficiently.

Either route allows developers to harness the massive compute resources of NVIDIA GPUs while simultaneously reducing development time. The NVPP API matches the Intel Performance Primitives (IPP) library API, so porting existing IPP code to the GPU is straightforward, as the sketch below illustrates. For more information and to sign up for access to the beta release of NVPP, visit the NVPP website.
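
As an illustration of that porting story, the sketch below mirrors IPP's ippsAdd_32f (elementwise addition of two float signals). The nppsAdd_32f name is assumed from the shared IPP naming convention the announcement describes, not taken from the NVPP beta headers; the practical difference in a port is that NVPP routines consume GPU device memory.

    #include <cuda_runtime.h>

    // Hypothetical NVPP counterpart of IPP's ippsAdd_32f, assumed from the
    // shared naming convention (not verified against the NVPP beta headers):
    //
    //   CPU (IPP):   ippsAdd_32f(srcA, srcB, dst, len);  // host pointers
    //   GPU (NVPP):  nppsAdd_32f(srcA, srcB, dst, len);  // device pointers
    //
    // The call shape is identical; what changes is where the buffers live.
    int main() {
        const int len = 1 << 20;
        float *srcA, *srcB, *dst;
        cudaMalloc(&srcA, len * sizeof(float));  // NVPP operates on GPU
        cudaMalloc(&srcB, len * sizeof(float));  // memory, so buffers are
        cudaMalloc(&dst,  len * sizeof(float));  // allocated via CUDA
        // ...upload inputs with cudaMemcpy, call the npps routine, download...
        cudaFree(srcA);
        cudaFree(srcB);
        cudaFree(dst);
        return 0;
    }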