Webinar: Learn How GPU-Accelerated Applications Benefit Academic Research

October 26th, 2012

GPUs have become a cornerstone of computational research in high performance computing, with over 200 commonly used applications already GPU-enabled. Researchers across many domains, such as Computational Chemistry, Biology, Weather & Climate, and Engineering, are using GPU-accelerated applications to greatly reduce time to discovery and to achieve results that were simply not possible before.

Join Devang Sachdev, Sr. Product Manager at NVIDIA, for an overview of the most popular applications used in academic research and an account of success stories enabled by GPUs. Also learn about a complimentary program which allows researchers to easily try GPU-accelerated applications on a remotely hosted cluster or the Amazon AWS cloud.

Register at http://www.gputechconf.com/page/gtc-express-webinar.html.

CfP: GPU-Cloud 2012

August 6th, 2012

The 2012 International Workshop on GPU Computing in Clouds (GPU-Cloud 2012) will be held December 3-6, 2012 in Taipei, Taiwan, in conjunction with the 4th International Conference on Cloud Computing Technology and Science. Important Dates:

  • Submission Deadline: August 17, 2012
  • Authors Notification: September 11, 2012
  • Final Manuscript Due: September 28, 2012
  • Workshop: December 04, 2012

Submission site: http://www.easychair.org/conferences/?conf=gpucloud2012

GPU Virtualization for Dynamic GPU Provisioning

November 18th, 2011

From a recent press release:

Taipei, November 18, 2011: Zillians, a leading cloud solution provider specializing in high performance computing, GPU virtualization middleware and massive multi-player online game (MMOG) platforms, today announced the availability of vGPU – the world’s first commercial virtualization solution for decoupling GPU hardware from software. Traditionally, physical GPUs must reside on the same machine that runs the GPU code. This severely hampers GPU cloud deployment due to the difficulty of dynamic GPU provisioning. With vGPU technology, bulky hardware is no longer a limiting factor. vGPU introduces a thin, transparent RPC layer between the local application and the remote GPU, enabling existing GPU software to run without any modification on a remote GPU resource.
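The “thin RPC layer” described here is what the literature calls API remoting: a shim with the same shape as the vendor runtime intercepts each call, marshals its arguments, and forwards them to a daemon on the machine that physically hosts the GPU. The sketch below illustrates the general idea only; the transport stub (rpc_call) and the interposed entry point (my_cudaMalloc) are hypothetical names, not Zillians’ API.

    /* Illustrative sketch of API remoting, the general technique behind
     * vGPU-style middleware. All names here are hypothetical; a real
     * implementation would interpose the actual CUDA runtime symbols
     * (e.g. via LD_PRELOAD) and use a TCP or InfiniBand transport. */
    #include <stdio.h>
    #include <stdint.h>
    #include <stddef.h>

    typedef enum { RPC_CUDA_MALLOC, RPC_CUDA_MEMCPY } rpc_op_t;

    typedef struct {
        rpc_op_t op;    /* which runtime call is being forwarded */
        uint64_t arg0;  /* e.g. allocation size in bytes */
    } rpc_request_t;

    /* Transport stub: a real version would send the request to a daemon
     * on the GPU host and block until the reply arrives. */
    static int rpc_call(const rpc_request_t *req, uint64_t *result)
    {
        printf("forwarding op %d (arg0=%llu) to remote GPU host\n",
               (int)req->op, (unsigned long long)req->arg0);
        *result = 0x1000;  /* placeholder handle returned by the daemon */
        return 0;
    }

    /* Interposed allocation call: same shape as cudaMalloc, but it never
     * touches local hardware; the returned pointer is an opaque handle
     * that only the remote daemon can dereference. */
    static int my_cudaMalloc(void **devPtr, size_t size)
    {
        rpc_request_t req = { RPC_CUDA_MALLOC, (uint64_t)size };
        uint64_t remote;
        if (rpc_call(&req, &remote) != 0)
            return 1;
        *devPtr = (void *)(uintptr_t)remote;
        return 0;
    }

    int main(void)
    {
        void *d = NULL;
        my_cudaMalloc(&d, 1 << 20);  /* "allocate" 1 MiB on the remote GPU */
        printf("got remote handle %p\n", d);
        return 0;
    }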

PEER 1 Hosting: Large-Scale Hosted NVIDIA GPU Cloud

February 10th, 2011

Press release (submitted to gpgpu.org very late…):

LOS ANGELES, CA – July 26, 2010 – PEER 1 Hosting (TSX:PIX), a global online IT hosting provider, today announced the availability of the industry’s first large-scale, hosted graphics processing unit (GPU) Cloud at the 37th Annual SIGGRAPH International Conference.

The system runs the RealityServer® 3D web application service platform, developed by mental images, a wholly owned subsidiary of NVIDIA. The RealityServer platform is a powerful combination of NVIDIA Tesla GPUs and 3D web services software. It delivers interactive and photorealistic applications over the web using the iray® renderer, which enables animators, product designers, architects and consumers to easily visualize 3D scenes with remarkable realism.

Amazon announces GPUs for Cloud Computing

November 22nd, 2010

From a recent announcement:

We are excited to announce the immediate availability of Cluster GPU Instances for Amazon EC2, a new instance type designed to deliver the power of GPU processing in the cloud. GPUs are increasingly being used to accelerate the performance of many general purpose computing problems. However, for many organizations, GPU processing has been out of reach due to the unique infrastructural challenges and high cost of the technology. Amazon Cluster GPU Instances remove this barrier by providing developers and businesses immediate access to the highly tuned compute performance of GPUs with no upfront investment or long-term commitment.

Learn more about the new Cluster GPU instances for Amazon EC2 and their use in running HPC applications.
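As a first smoke test after launching one of these instances, a generic CUDA device query (standard CUDA runtime API, nothing Amazon-specific) confirms that the GPUs are visible to applications:

    // Generic CUDA device query; compile with nvcc and run on the
    // instance to confirm GPU visibility. Standard runtime API only.
    #include <cstdio>
    #include <cuda_runtime.h>

    int main()
    {
        int count = 0;
        cudaError_t err = cudaGetDeviceCount(&count);
        if (err != cudaSuccess) {
            std::printf("CUDA error: %s\n", cudaGetErrorString(err));
            return 1;
        }
        for (int i = 0; i < count; ++i) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            std::printf("Device %d: %s, %d SMs, %.1f GiB global memory\n",
                        i, prop.name, prop.multiProcessorCount,
                        prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
        }
        return 0;
    }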

Also, community support is becoming available; see, for instance, this blog post about SCG-Ruby on EC2 instances.

A GPGPU transparent virtualization component for high performance computing clouds

October 4th, 2010

Abstract:

The promise of exascale computing power rests on many-core technology, involving both general-purpose CPUs and specialized computing devices such as FPGAs, DSPs and GPUs. GPUs in particular, due partly to their wide market footprint, currently achieve one of the best core-per-cost ratios in that category. Relying on APIs provided by GPU vendors, the use of GPUs as general-purpose massively parallel computing devices (GPGPUs) is now routine in the scientific community. The increasing number of CPU cores on a chip has driven the development and spread of cloud computing, which leverages consolidated technologies such as, but not limited to, grid computing and virtualization. In recent years the use of grid computing for demanding high-performance applications in e-science has become common practice. The elastic compute power and storage provided by a cloud infrastructure may be attractive, but it is still limited by poor communication performance and a lack of support for using GPGPUs within a virtual machine instance. The GPU Virtualization Service (gVirtuS) presented in this work tries to fill the gap between in-house computing clusters equipped with GPGPU devices and pay-per-use high-performance virtual clusters deployed via public or private computing clouds. gVirtuS allows an instanced virtual machine to access GPGPUs in a transparent way, with an overhead only slightly greater than a real machine/GPGPU setup. gVirtuS is hypervisor-independent and, even though it currently virtualizes NVIDIA CUDA-based GPUs, it is not limited to a specific vendor’s technology. The performance of the components of gVirtuS is assessed through a suite of tests in different deployment scenarios, such as providing GPGPU power to cloud-based HPC clusters and sharing remotely hosted GPGPUs among HPC nodes.

(Giunta G., R. Montella, G. Agrillo, and G. Coviello: “A GPGPU transparent virtualization component for high performance computing clouds”. In P. D’Ambra, M. Guarracino, and D. Talia, editors, Euro-Par 2010 – Parallel Processing, volume 6271 of Lecture Notes in Computer Science, chapter 37, pages 379-391. Springer Berlin / Heidelberg, 2010. DOI. Link to project webpage with source code.)
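The overhead figure quoted in the abstract is the kind of thing a transfer micro-benchmark makes visible: run once on bare metal and once inside a gVirtuS-backed virtual machine, the difference in measured bandwidth approximates the virtualization cost. A minimal sketch using plain CUDA events follows; this is a generic probe, not the paper’s actual test suite.

    // Host-to-device bandwidth probe using CUDA events. Comparing its
    // output on bare metal vs. inside a virtual machine gives a rough
    // measure of virtualization overhead. Not the gVirtuS test suite.
    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    int main()
    {
        const size_t bytes = 64 << 20;  // 64 MiB transfer
        void *host = std::malloc(bytes);
        void *dev = nullptr;
        cudaMalloc(&dev, bytes);

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        cudaEventRecord(start);
        cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        std::printf("H2D bandwidth: %.2f GiB/s\n",
                    (bytes / (1024.0 * 1024.0 * 1024.0)) / (ms / 1000.0));

        cudaEventDestroy(start);
        cudaEventDestroy(stop);
        cudaFree(dev);
        std::free(host);
        return 0;
    }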

CFP: First International Workshop on Accelerating Data Management Systems Using Modern Processor and Storage Architectures (ADMS’10), Colocated with VLDB 2010

June 1st, 2010

The objective of this one-day workshop is to investigate opportunities in accelerating data management systems and workloads (which include traditional OLTP, data warehousing/OLAP, ETL, Streaming/Realtime, and XML/RDF Processing) using various processor architectures (e.g., commodity and specialized Multi-core CPUs, Many-core GPUs, and FPGAs), storage systems (e.g., Storage-class Memories like SSDs and Phase-change Memory), and multicore programming strategies like OpenCL.

More information and the full call can be found here: http://www.adms-conf.org/


Penguin Computing Launches HPC Cloud Computing with GPUs

August 17th, 2009

Penguin Computing has launched a new service that enables high-performance computing within a cloud-computing infrastructure, including support for GPU computing with NVIDIA Tesla GPUs. From HPCWire:

SAN FRANCISCO, Aug. 11 — Penguin Computing, experts in high performance computing solutions, today announced the immediate availability of “Penguin on Demand” — or POD — a new service that delivers, for the first time, a complete high performance computing (HPC) solution in the cloud. POD extends the concept of cloud computing by making optimized compute resources designed specifically for HPC available on demand. POD is targeted at researchers, scientists and engineers who require surge capacity for time-critical analyses or organizations that need HPC capabilities without the expense and effort required to acquire HPC clusters.

POD provides a computing infrastructure of highly optimized Linux clusters with specialized hardware interconnects and software configurations tuned specifically for HPC. Rather than utilizing machine virtualization, as is typical in traditional cloud computing, POD allows users to access a server’s full resources at one time for maximum performance and I/O for massive HPC workloads.

Comprising high-density Xeon-based compute nodes coupled with high-speed storage, POD provides a persistent compute environment that runs on a head node and executes directly on the compute nodes’ physical cores. Both GigE and DDR high-performance InfiniBand network fabrics are available. POD customers also get access to state-of-the-art GPU supercomputing with NVIDIA Tesla processor technology. Jobs typically run over a localized network topology to optimize inter-process communication, maximizing bandwidth and minimizing latency.