October 11th, 2012
April 13th, 2011
Seeing speedups of an accelerated application is great, but what does it take to build a codebase that will last for years and across architectures? In this webinar, John Stratton will cover some of the insights gained at the University of Illinois at Urbana-Champaign from experience with computer architecture, programming languages, and application development.
The webinar will present three main conclusions:
- Performance portability should be more achievable than many people think.
- The number one performance-limiting factor now and in the future will be parallel scalability.
- As much as we care about performance, general libraries that will last have to be reliable as well as fast.
Register at http://www.gputechconf.com/page/gtc-express-webinar.html
March 1st, 2011
We are pleased to announce the 2011 Innovative Parallel Computing: Foundations & Applications of GPU, Manycore, and Heterogeneous Systems (InPar’11). This new conference provides a first-tier academic venue for peer-reviewed publications in the emerging fields of parallel computing, encompassing the topics of GPU computing, manycore computing, and heterogeneous computing.
InPar has a dual focus on “Foundations”—the fundamental advances in parallel computing itself—and “Applications”—case studies and lessons learned from the application of commodity parallel computing in domains across science and engineering. The goal of InPar is to bring together researchers in the myriad fields being revolutionized by GPUs to share experiences, discover commonalities, and both inform and learn from the computer scientists working on the foundations of parallel computing.
Topics: InPar encourages papers involving current GPU/manycore architectures, new or emerging commodity parallel architectures (such as Intel “MIC” products), and hybrid or heterogeneous systems. Possible topics include, but are not limited to: Read the rest of this entry »
May 20th, 2010
Today NVIDIA announced the upcoming 4.0 release of CUDA. While most major CUDA releases have accompanied a new GPU architecture, 4.0 is a software-only release, but that doesn’t mean it lacks new features. With this release, NVIDIA aims to lower the barrier to entry for parallel programming on GPUs, with new features including easier multi-GPU programming, a unified virtual memory address space, the powerful Thrust C++ template library, and automatic performance analysis in the Visual Profiler tool. Full details follow in the quoted press release below.
Read the rest of this entry »
May 13th, 2010
Abstracts due…24 September 2010
Papers due…1 October 2010
Anchorage, home to moose, bears, birds and whales, is strategically located at almost equal flying distance from Europe, Asia and the Eastern USA. Embraced by six mountain ranges, with views of Mount McKinley in Denali National Park, and warmed by a maritime climate, the area offers year-round adventure, recreation, and sporting events. It is a fitting destination for IPDPS to mark a quarter century of tracking developments in computer science. IPDPS serves as a forum for engineers and scientists from around the world to present their latest research findings in the fields of parallel processing and distributed computing. The five-day program will follow the usual format of contributed papers, invited speakers, and panels midweek, framed by workshops held on the first and last days. To celebrate the 25th year of IPDPS, plan to come early and stay late and also enjoy a modern city surrounded by spectacular wilderness. For updates on IPDPS 2011, visit the Web at www.ipdps.org.
December 20th, 2009
Imaging translates information into and out of the visual system with today’s computation engine of choice: digital electronic systems. While scalar architectures are no longer scaling at historical rates, the total number of connected computation devices is exploding, along with the ways that hardware architectures and parallel programming environments make these devices work in concert. From the computing cloud, to map-reduce programming models and systems, to multi-core CPUs, to the regular layout of graphics processing units (GPUs), to the increasing capacity of FPGA fabrics, a range of parallel architectures and parallel programming environments is available to designers and researchers for solving computationally complex problems in efficient (and often real-time) imaging applications.
Read the rest of this entry »
February 27th, 2009
Second USENIX Workshop on Hot Topics in Parallelism (HotPar ’10)
June 14-15, 2010, Berkeley, CA
Following the tremendous success of HotPar ’09, the Second USENIX Workshop on Hot Topics in Parallelism (HotPar ’10) will once again bring together researchers and practitioners doing innovative work in the area of parallel computing. Multicore processors are the pervasive computing platform of the future. This trend is driven by limits on energy consumption in computer systems and the poor energy performance of conventional microprocessors. Parallel architectures can potentially mitigate these problems, but this new computer architecture will only be successful if languages, systems, and applications can take advantage of parallel hardware. Navigating this change will require new concurrency-friendly programming paradigms, new methods of application design, new structures for system software, and new models of interaction between applications, compilers, operating systems, and hardware.
We request submissions of position papers that propose new directions for research and products in these areas, advocate non-traditional approaches to the problems engendered by parallelism, or potentially generate controversy and discussion. We encourage submissions from practitioners as well as from researchers. Read the rest of this entry »
March 7th, 2007
To be held March 30-31, 2009 in Berkeley, California, HotPar ’09 will bring together researchers and practitioners doing innovative work in the area of parallel computing. HotPar recognizes the broad impact of multicore computing and seeks relevant contributions from all fields, including application design, languages and compilers, systems, and architecture. (http://www.usenix.org/events/hotpar09/)
Data-parallel programming is emerging as an extremely attractive model for parallel programming, driven by several factors. Through deterministic semantics and constrained synchronization mechanisms, data-parallel models provide race-free parallel-programming semantics. Furthermore, they free programmers from reasoning about the details of the underlying hardware and software mechanisms for achieving parallel execution, and they facilitate effective compilation. Finally, efforts in the GPGPU movement and elsewhere have matured implementation technologies for streaming and data-parallel programming models to the point where high performance can be reliably achieved.
This workshop gathers commercial and academic researchers, vendors, and users of data-parallel programming platforms to discuss implementation experience for a broad range of many-core architectures and to speculate on future programming-model directions. Participating institutions include AMD, Electronic Arts, Intel, Microsoft, NVIDIA, PeakStream, RapidMind, and The University of New South Wales. (Link to Call for Participation, Data-Parallel Programming Models for Many-Core Architectures)