“Acceleware offers industry leading training courses for software developers looking to increase their skills in writing or optimizing applications for highly parallel processing. The training focuses on using GPUs for computing and the associated popular programming languages.
The courses are all taught by experienced programmers who provide real world experience, derived from Acceleware’s 9 years of building commercial GPU applications.
Clients will access our top rated training techniques for parallel programming.
We offer public and private courses (private courses require a minimum of 6 students). The 2015 training schedule for public courses is posted on the Acceleware website.”
Acceleware’s Training Courses 2015
General-purpose multiprocessors (in our case, Intel Ivy Bridge and Intel Haswell) increasingly add GPU computing power to their former multicore architectures. When used for embedded applications with intensive signal processing requirements (for us, synthetic aperture radar), they must constantly compute convolution algorithms, such as the famous Fast Fourier Transform (FFT). Due to its "fractal" nature (the typical butterfly shape, with larger FFTs defined as combinations of smaller ones plus auxiliary data-array transpose functions), one can hope to compute analytically the size of the largest FFT that can be performed locally on
an elementary GPU compute block. The full application must then be organized around this given building block size. However, due to the phenomena involved in data transfers between the various memory levels across CPUs and GPUs, the optimality of such a scheme is only loosely predictable (as communication time tends to dominate the complexity of computation). Therefore a mix of a (theoretical) analytic approach and (practical) runtime validation is needed here. As we shall illustrate, this occurs at both stages: first when deciding on a given elementary FFT block size, then at the full application level.
Mohamed Amine Bergach, Emilien Kofman, Robert de Simone, Serge Tissot, Michel Syska. Efficient FFT mapping on GPU for radar processing application: modeling and implementation. arXiv:1505.08067 [cs.MS]
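The "fractal" decomposition the abstract describes can be sketched in a few lines: a radix-2 Cooley-Tukey FFT builds a size-n transform from two size-n/2 transforms, and recursion stops at a chosen block size — the analogue of the largest FFT that fits on one GPU compute block. This is an illustrative sketch, not the paper's implementation; the `block` parameter and function names are ours.

```python
import cmath

def dft(x):
    # Naive O(n^2) DFT, used as the base case for small blocks.
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def fft(x, block=4):
    # Radix-2 Cooley-Tukey: an FFT of size n is built from two FFTs of
    # size n/2 plus a twiddle-and-combine step. 'block' is the size at
    # which recursion stops -- the stand-in for the largest FFT computed
    # locally on one elementary GPU compute block.
    n = len(x)
    if n <= block:
        return dft(x)
    even = fft(x[0::2], block)
    odd = fft(x[1::2], block)
    twiddled = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return [even[k] + twiddled[k] for k in range(n // 2)] + \
           [even[k] - twiddled[k] for k in range(n // 2)]
```

In the paper's setting, picking `block` analytically and validating it at runtime is exactly the mixed approach the authors advocate.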
The use of GPU computing in FEA is today an active research field, primarily because current GPU sparse solvers are only partially parallelizable and can hardly exploit the Data-Level Parallelism (DLP) for which GPU architectures are designed. This paper proposes a fine-grained implementation of a matrix-free Conjugate Gradient (CG) solver for Finite Element Analysis (FEA) on Graphics Processing Unit (GPU) architectures. The proposed GPU implementation takes advantage of Massively Parallel Processing (MPP) architectures, performing well-balanced parallel calculations at the Degree-of-Freedom (DoF) level of the finite elements. Numerical experiments evaluate and analyze the performance of diverse GPU instances of the matrix-free CG solver.
Jesús Martínez-Frutos, Pedro J. Martínez-Castejón, David Herrero-Pérez, Fine-grained GPU implementation of assembly-free iterative solver for finite element problems, Computers & Structures, Volume 157, September 2015, Pages 9-18, ISSN 0045-7949, http://dx.doi.org/10.1016/j.compstruc.2015.05.010.
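The key idea of a matrix-free (assembly-free) CG solver is that the system matrix A is never stored; the solver only needs a routine that applies A to a vector. A minimal sketch, assuming nothing about the paper's GPU kernels — here `apply_A` is any Python callable, where the paper would gather element contributions per DoF on the GPU:

```python
def cg_matrix_free(apply_A, b, tol=1e-10, max_iter=1000):
    # Conjugate Gradient where A is never assembled: apply_A(v) returns A @ v.
    n = len(b)
    x = [0.0] * n
    r = list(b)                      # r = b - A x0, with x0 = 0
    p = list(r)
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs_old / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs_old) * pi for ri, pi in zip(r, p)]
        rs_old = rs_new
    return x
```

For example, with `apply_A` encoding the SPD matrix [[4, 1], [1, 3]] and b = [1, 2], the solver returns approximately [1/11, 7/11].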
The University of the West of England announces a new Ph.D. studentship, entitled "Building a heterogeneous future: lock, load and fire", under advisor Dr. Benedict R. Gaster. The studentship is fully funded, with the aim of working on the foundations of heterogeneous computing. More details on request; please feel free to contact benedict.gaster at uwe.ac.uk for further information.
At the moment Google does not support OpenCL™ as part of the Android platform. However, many newer-generation devices do support it, although not all devices ship with the right drivers.
More and more device manufacturers include these drivers, as OpenCL™ can be very useful for accelerating specific workloads. The goal of this tool is to build a database of all OpenCL™-capable devices and their properties, so developers and users can search through this data. This lets them see how many devices have OpenCL™ support and which features are implemented, and helps a developer decide whether it makes sense to use OpenCL™ to accelerate their application.
With the tool it is possible to browse through the database and see all devices that support OpenCL. In addition, you can view all the OpenCL capabilities of your current device and of all the devices in the on-line database.
As the number of cores on a chip increases and key applications become even more data-intensive, memory systems in modern processors have to deal with increasingly large amounts of data. In the face of such challenges, data compression is a promising approach to increase effective memory system capacity while also providing performance and energy advantages. This paper presents a survey of techniques for using compression in cache and main memory systems. It also classifies the techniques based on key parameters to highlight their similarities and differences. It discusses compression in CPUs and GPUs, in conventional and non-volatile memory (NVM) systems, and in 2D and 3D memory systems. We hope that this survey will help researchers gain insight into the potential role of compression in the memory components of future extreme-scale systems.
Sparsh Mittal and Jeffrey Vetter, “A Survey Of Architectural Approaches for Data Compression in Cache and Main Memory Systems”, IEEE TPDS 2015. WWW
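One family of techniques in this design space exploits the observation that values within a cache line often cluster near a common base, so the line can be stored as one base word plus small deltas. The sketch below is our own illustration of that base+delta idea, not code from the survey; function names and parameters are ours:

```python
def compress_base_delta(words, delta_bytes=1):
    # Base+Delta compression: store one full base word plus narrow deltas
    # if every value in the cache line lies close to the base.
    # Returns (base, deltas) on success, or None if the line is incompressible.
    base = words[0]
    limit = 1 << (8 * delta_bytes - 1)   # signed range of a delta
    deltas = [w - base for w in words]
    if all(-limit <= d < limit for d in deltas):
        return (base, deltas)
    return None

def decompress_base_delta(compressed):
    # Lossless inverse: add each delta back onto the base.
    base, deltas = compressed
    return [base + d for d in deltas]
```

A line of nearby pointers such as [1000, 1003, 998, 1001] compresses to one base plus four 1-byte deltas, while widely scattered values fall back to uncompressed storage.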
As both CPUs and GPUs become employed in a wide range of applications, it has been acknowledged that each of these processing units (PUs) has unique features and strengths, and hence CPU-GPU collaboration is inevitable for achieving high-performance computing. This has motivated a significant amount of research on heterogeneous computing techniques, along with the design of CPU-GPU fused chips and petascale heterogeneous supercomputers. In this paper, we survey heterogeneous computing techniques (HCTs), such as workload partitioning, which enable using both the CPU and GPU to improve performance and/or energy efficiency. We review heterogeneous computing approaches at the runtime, algorithm, programming, compiler and application levels. Further, we review both discrete and fused CPU-GPU systems, and discuss benchmark suites designed for evaluating heterogeneous computing systems (HCSs). We believe that this paper will provide insight into the working and scope of applications of HCTs, and motivate researchers to further harness the computational power of CPUs and GPUs to achieve the goal of exascale performance.
Sparsh Mittal and Jeffrey Vetter, “A Survey of CPU-GPU Heterogeneous Computing Techniques”, accepted in ACM Computing Surveys, 2015. WWW
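The workload-partitioning idea mentioned in the abstract can be illustrated with a toy static scheme: split the work items between the CPU and GPU in proportion to their measured throughputs, so that both finish at roughly the same time. This is our own minimal sketch, not a technique lifted from the survey:

```python
def partition_workload(n_items, cpu_throughput, gpu_throughput):
    # Static CPU-GPU partitioning: give each processing unit a share of the
    # n_items proportional to its throughput (items per second), so both
    # sides ideally finish simultaneously.
    total = cpu_throughput + gpu_throughput
    cpu_share = round(n_items * cpu_throughput / total)
    return cpu_share, n_items - cpu_share
```

With 1000 items and a GPU three times faster than the CPU, the CPU gets 250 items and the GPU 750. Dynamic schemes surveyed in the paper refine this by re-measuring throughput as the computation runs.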
Geometric Algebra is a new, geometrically intuitive mathematical system. It provides very easy algorithms for many application areas such as computer graphics, computer vision, robotics and computer simulations. The HSA Foundation (Heterogeneous System Architecture Foundation) is a not-for-profit industry standards body founded by companies such as AMD, ARM, Samsung and Texas Instruments, focused on making it dramatically easier to program heterogeneous computing devices such as GPUs.
Since Gaalop (Geometric Algebra Algorithms Optimizer) focuses exactly on the optimization and integration of Geometric Algebra on these kinds of new parallel computing architectures, this technology, together with the new Kalmar C++ AMP compiler, provides a Math, Science & Engineering solution for HSA.
Developers have been using utility tools such as CPU-Z, GPU-Z, CUDA-Z and OpenCL-Z for a long time. These tools provide detailed platform and hardware information and help developers quickly understand the hardware's capabilities.
Recently, OpenCL has become supported on most of the latest mobile phones and tablets, as mobile GPUs gain more compute power. OpenCL-Z Android can help developers quickly detect the availability of OpenCL on a device and get information about OpenCL-capable platforms and devices.
In addition to detecting OpenCL capability and getting device information, OpenCL-Z Android can also measure raw compute power in terms of peak ALU GFLOPS and memory bandwidth. These numbers are useful for developers who want to take advantage of the compute capability of modern GPUs: they can roughly predict the performance of a certain algorithm on a specific platform, or compare raw compute performance across platforms.
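For context, theoretical peak numbers of the kind such tools benchmark against follow from simple arithmetic. The helper below is our own illustration (the formulas are standard; the parameter values in the example are hypothetical, not taken from any device OpenCL-Z reports):

```python
def peak_gflops(num_alus, clock_ghz, ops_per_cycle=2):
    # Theoretical peak = ALUs x clock x FLOPs issued per cycle per ALU.
    # ops_per_cycle=2 counts a fused multiply-add as two FLOPs.
    return num_alus * clock_ghz * ops_per_cycle

def peak_bandwidth_gbps(bus_width_bits, memory_clock_ghz, transfers_per_clock=2):
    # Peak memory bandwidth in GB/s for a DDR-style interface:
    # bytes per transfer x clock x transfers per clock.
    return bus_width_bits / 8 * memory_clock_ghz * transfers_per_clock
```

A hypothetical mobile GPU with 128 ALUs at 0.5 GHz would peak at 128 GFLOPS; measured throughput below such a ceiling usually points at memory-bound kernels, which is where the bandwidth number matters.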
OpenCL-Z Android is free software and is now available on Google Play:
Download link at Google Play
The major features of OpenCL-Z Android:
– detect OpenCL availability;
– detect OpenCL driver library;
– display detailed OpenCL platform information;
– display detailed OpenCL device information;
– measure the raw compute performance and memory system bandwidth;
– export OpenCL information to sdcard;
– share OpenCL information with other applications, such as e-mail clients, note applications, social media and so on.
The OpenCL-Z Android has been tested on mobile devices with Qualcomm Snapdragon 8064, 8974, 8084, 8994 chipsets (with Adreno 305, 320, 330, 420, 430 GPUs), Samsung Exynos 5420, 5433 chipsets (with Mali T628, T760 GPUs), MediaTek MT6752 chipset (with Mali T760 GPU), Rockchip RK3288 (with Mali T764 GPU).
OpenCL-Z Android should be able to support other chipsets as well. If your device is known to have OpenCL support but this tool fails to detect it, please contact the developer of OpenCL-Z.
The author of OpenCL-Z is also trying to maintain a reasonably complete list of mobile devices that support OpenCL; the list can be found on the OpenCL-Z official website. If you know of any OpenCL-supporting device not on that list, please send the author an email and help the list grow.
Recent trends of aggressive technology scaling have greatly exacerbated the occurrence and impact of faults in computing systems. This has made 'reliability' a first-order design constraint. To address the challenges of reliability, several techniques have been proposed. This paper provides a survey of architectural techniques for improving the resilience of computing systems. We especially focus on techniques proposed for microarchitectural components such as processor registers, functional units, caches and main memory. In addition, we discuss techniques proposed for non-volatile memory (NVM), GPUs and 3D-stacked processors. To underscore the similarities and differences among the techniques, we classify them based on their key characteristics. We also review the metrics proposed to quantify the vulnerability of processor structures. We believe that this survey will help researchers, system architects and processor designers gain insight into techniques for improving the reliability of computing systems.
Sparsh Mittal, Jeffrey S Vetter, “A Survey of Techniques for Modeling and Improving Reliability of Computing Systems”, in IEEE TPDS, 2015. WWW
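A classic building block in the family of memory-resilience techniques such surveys cover is single-error-correcting ECC. As a self-contained illustration (our own sketch, not code from the paper), a Hamming(7,4) code stores 4 data bits with 3 parity bits and can locate and correct any single flipped bit:

```python
def hamming74_encode(d):
    # Encode 4 data bits as a 7-bit Hamming codeword; each parity bit
    # covers the codeword positions whose index has that parity bit set.
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4          # covers positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4          # covers positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(codeword):
    # Recompute the parity checks; the syndrome is the 1-based position
    # of a single flipped bit (0 means no error). Returns the 4 data bits.
    c = list(codeword)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1   # flip the faulty bit back
    return [c[2], c[4], c[5], c[6]]
```

Real DRAM ECC uses wider SEC-DED codes over 64-bit words, but the syndrome-decoding mechanism is the same.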