The Need for Speed Seminar Series: David Kirk Keynote

February 3rd, 2009

The University of Illinois at Urbana-Champaign is launching a 13-week seminar series that will focus on emerging applications for parallel computing. The Need for Speed Seminar Series will feature world-class applications experts and researchers who will discuss what increased computing performance means for their fields. The series will bring together hardware engineers and software developers who require parallel processing to create faster, better applications. Speakers will help forecast breakthroughs enabled by the rapid advances in computing performance per dollar, performance per watt, and storage capacity provided by Moore’s Law.

David Kirk, NVIDIA Fellow, will kick off the series with a special keynote on January 28. Following that, the Need for Speed series will be held at 4pm CT every Wednesday until April 29 at the UI’s Coordinated Science Laboratory. Seminars will also stream live over the internet and speakers will take questions from both in-house and online audience members. To learn more about the series, or to view the live seminars, please visit the Need for Speed seminar web page.

(Editor’s Note: this news was submitted after the talk occurred.)

Webinar: Jacket: Accelerating MATLAB using CUDA-Enabled GPUs

February 3rd, 2009

February 5, 2009, 11am PST / 2pm EST

Are you looking for ways to improve your productivity by accelerating MATLAB functions? Now you can with the unprecedented performance of GPU computing.

By attending this webinar, you will learn:

  • What GPU computing is
  • What the NVIDIA CUDA parallel computing architecture is
  • What AccelerEyes’ Jacket engine for MATLAB is
  • How to get 10x to 50x speed-ups on several MATLAB functions (a rough sketch of the underlying data-parallel pattern follows this announcement)

Date: Thursday, February 5, 2009
Time: 11:00am PST / 2:00pm EST
Duration: 45 Minute Presentation, 15 Minute Q&A
Register Here
Presented By: Sumit Gupta, Ph.D., Senior Product Manager of Tesla GPU Computing at NVIDIA, and John Melonakos, Ph.D., CEO of AccelerEyes LLC
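
The MATLAB functions that benefit most from this kind of GPU acceleration are, broadly, element-wise and data-parallel array operations. As a rough illustration only (this is not Jacket’s implementation; the kernel below is a hypothetical stand-in), a minimal CUDA program that applies such an operation to every element of a large vector might look like this:

    // Illustrative sketch only: an element-wise operation (y = a*x + b) applied
    // across a large array -- the data-parallel pattern that GPU back ends for
    // MATLAB map whole-array expressions onto. Not Jacket's actual code.
    #include <cuda_runtime.h>
    #include <stdio.h>
    #include <stdlib.h>

    __global__ void saxpb(const float *x, float *y, float a, float b, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
        if (i < n)
            y[i] = a * x[i] + b;
    }

    int main(void)
    {
        const int n = 1 << 20;
        const size_t bytes = n * sizeof(float);

        float *hx = (float *)malloc(bytes);
        float *hy = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i)
            hx[i] = (float)i;

        float *dx, *dy;
        cudaMalloc(&dx, bytes);
        cudaMalloc(&dy, bytes);
        cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);

        // Launch enough 256-thread blocks to cover all n elements.
        const int threads = 256;
        const int blocks = (n + threads - 1) / threads;
        saxpb<<<blocks, threads>>>(dx, dy, 2.0f, 1.0f, n);

        cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
        printf("y[123] = %f\n", hy[123]);  // expect 2*123 + 1 = 247

        cudaFree(dx); cudaFree(dy);
        free(hx); free(hy);
        return 0;
    }

Each GPU thread handles a single array element; it is this one-thread-per-element mapping that lets whole-array MATLAB expressions run in parallel across the many cores of a GPU.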

National Taiwan University Becomes World’s First Asia-Pacific CUDA Center of Excellence

January 22nd, 2009

NVIDIA announced that National Taiwan University has been named Asia’s first CUDA Center of Excellence (press release below). The university earned this title by formally adopting NVIDIA GPU Computing solutions across its research facilities and integrating a class on parallel computing with the CUDA architecture into its curriculum. As the computing industry rapidly moves toward parallel processing and many-core architectures, NVIDIA has worked over the past year to educate tomorrow’s developers and engineers on the best tools and methodologies for parallel computing. In addition to working with over 50 universities worldwide that are actively using CUDA in their courses, NVIDIA developed the CUDA Center of Excellence Program to further assist universities devoted to educating tomorrow’s software developers about parallel computing. (Press Release)

Wipro to Offer CUDA Software Services to Global Customer Base

January 22nd, 2009

From a press release:

SANTA CLARA, CA—JANUARY 15, 2009—NVIDIA today announced it is now working closely with Wipro to provide CUDA™ professional services to their joint customers worldwide. CUDA, NVIDIA’s parallel computing architecture accessible through an industry standard C language programming environment, has already delivered major leaps in performance across many industries. Wipro’s Product Engineering Services group will accelerate the development efforts of companies with vast software portfolios seeking to exploit parallel computing with the GPU.

(Read More)

Symposium on Application Accelerators in High Performance Computing (SAAHPC’09)

January 22nd, 2009

What do GPUs, FPGAs, vector processors and other special-purpose chips have in common? They are examples of advanced processor architectures that the scientific community is using to accelerate computationally demanding applications. While high-performance computing systems that use application accelerators are still rare, they will be the norm rather than the exception in the near future. The 2009 Symposium on Application Accelerators in High-Performance Computing aims to bring together developers of computing accelerators and end-users of the technology to exchange ideas and learn about the latest developments in the field. The Symposium will focus on the use of application accelerators in high-performance and scientific computing and issues that surround it. Topics of interest include:

  • novel accelerator processors, systems, and architectures
  • integration of accelerators with high-performance computing systems
  • programming models for accelerator-based computing
  • languages and compilers for accelerator-based computing
  • run-time environments, profiling and debugging tools for accelerator-based computing
  • scientific and engineering applications that use application accelerators

Presentations from technology developers and the academic user community are invited. Researchers interested in presenting at the Symposium should submit extended abstracts of 2-3 pages to submit@saahpc.org by April 20, 2009. All submissions will be reviewed by the Technical Program Committee and accepted submissions will be presented as either oral presentations or posters. Presentation materials will be made available online at www.saahpc.org.
(2009 Symposium on Application Accelerators in High Performance Computing (SAAHPC’09). July 27-31, 2009, University of Illinois, Urbana, IL)

gDEBugger for Apple Mac OS X – Beta Program

January 22nd, 2009

Graphic Remedy is proud to announce the upcoming release of gDEBugger for Mac OS X. This new product brings all of gDEBugger’s debugging and profiling capabilities to the Mac OpenGL developer’s world. Using gDEBugger Mac will help OS X OpenGL developers optimize their application performance: find graphics pipeline bottlenecks, improve application graphics memory consumption, locate and remove redundant OpenGL calls and graphics memory leaks, and much more. Visit the gDEBugger Mac home page to join the beta program, see screenshots, and get more details.

gDEBugger, an OpenGL and OpenGL ES debugger and profiler, traces application activity on top of the OpenGL API, and lets programmers see what is happening within the graphics system implementation to find bugs and optimize OpenGL application performance. gDEBugger runs on Windows, Linux and Mac OS X operating systems.

Experience with the GPU and the Cell Processor

January 22nd, 2009

This workshop, to be held at TU Delft on Friday, January 30, 2009, presents state-of-the-art performance results for engineering applications on parallel machines based on either the Cell processor or GPUs. In addition to iterative solvers, finite element applications, and tomography and visualization applications, the workshop will cover background on computing on these platforms and on coupling the processors. Attendance is free, but registration is required. (Workshop: Experience with the GPU and the Cell Processor)

Workshop on Exploiting Parallelism using GPUs and other Hardware-Assisted Methods (EPHAM 2009)

January 11th, 2009

This workshop will focus on compilation techniques for exploiting parallelism in emerging massively multi-threaded and multi-core architectures, with a particular focus on the use of general-purpose GPU computing techniques to overcome traditional barriers to parallelization. Recently, GPUs have evolved to address programming of general-purpose computations, especially those exemplified by data-parallel models. This change will have long-term implications for languages, compilers, and programming models. Development of higher-level programming languages, models, and compilers that exploit such processors will be important. Clearly, the economics and performance of applications are affected by a transition to general-purpose GPU computing. This will require new ideas and directions as well as recasting some older techniques for the new paradigm.

EPHAM 2009 invites papers in this emerging discipline on topics that include, but are not limited to, the following areas of interest.

  • Static and dynamic parallelization for hybrid CPU/GPU systems
  • Compiler optimizations for GPU computing
  • Language constructs and extensions to enable parallel programming with GPUs
  • Run-time techniques to off-load computation to the GPU
  • Language, programming model, or compiler techniques for mapping irregular computations to GPUs
  • Debugging support for GPU programs
  • Performance analysis tools related to GPU computing
  • Other hardware-assisted methods for extracting and exploiting parallelism

Please find more information at the EPHAM 2009 workshop website.

“Parallel Computing for Graphics: Beyond Programmable Shading” SIGGRAPH Asia 2008 Course

December 23rd, 2008

The complete course notes from the “Parallel Computing for Graphics: Beyond Programmable Shading” SIGGRAPH Asia 2008 course are available online. The course gives an introduction to parallel programming architectures and environments for interactive graphics and explores case studies of combining traditional rendering API usage with advanced parallel computation from game developers, researchers, and graphics hardware vendors. There are strong indications that the future of interactive graphics involves a programming model more flexible than today’s OpenGL and Direct3D pipelines. As such, graphics developers need a basic understanding of how to combine emerging parallel programming techniques with the traditional interactive rendering pipeline. This course gives an introduction to several parallel graphics architectures and programming environments, and introduces the new types of graphics algorithms that will be possible. The case studies in the class discuss the mix of parallel programming constructs used, details of the graphics algorithms, and how the rendering pipeline and computation interact to achieve the technical goals. The course speakers are Jason Yang and Justin Hensley (AMD), Tim Foley (Intel), Mark Harris (NVIDIA), Kun Zhou (Zhejiang University), Anjul Patney (UC Davis), Pedro Sander (HKUST), and Christopher Oat (AMD). (Complete course notes)

NVIDIA Releases Version 2.1 Beta of the CUDA Toolkit and SDK

December 23rd, 2008

DECEMBER 19, 2008 - NVIDIA has announced the availability of version 2.1 beta of its CUDA Toolkit and SDK. This is the latest version of the C compiler and software development tools for accessing the massively parallel CUDA compute architecture of NVIDIA GPUs. In response to overwhelming demand from the developer community, this latest version of the CUDA software suite includes support for NVIDIA® Tesla™ GPUs on Windows Vista and 32-bit debugger support for CUDA on Red Hat Enterprise Linux 5.x (separate download).

The CUDA Toolkit and SDK 2.1 beta includes support for Visual Studio 2008 on Windows XP and Vista, as well as Just-In-Time (JIT) compilation for applications that dynamically generate CUDA kernels. Several new interoperability APIs have been added for Direct3D 9 and Direct3D 10 that accelerate communication with DirectX applications, along with a series of improvements to OpenGL interoperability.
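
As a rough sketch of what this JIT path enables (assuming the CUDA driver API; the PTX string and the kernel name “noop” below are placeholders for code an application would generate at run time, and error handling is abbreviated), an application can hand freshly generated PTX to the driver for compilation like this:

    /* Illustrative sketch only: loading a dynamically generated kernel through
       the CUDA driver API's JIT path (cuModuleLoadDataEx). The PTX string and
       the kernel name "noop" are placeholders; error handling is abbreviated. */
    #include <cuda.h>
    #include <stdio.h>

    int main(void)
    {
        /* PTX produced at run time; a real application would generate this
           string itself, e.g. from a template or an expression tree. */
        const char *ptx =
            ".version 6.0\n"
            ".target sm_50\n"
            ".address_size 64\n"
            ".visible .entry noop()\n"
            "{\n"
            "    ret;\n"
            "}\n";

        cuInit(0);
        CUdevice dev;
        CUcontext ctx;
        cuDeviceGet(&dev, 0);
        cuCtxCreate(&ctx, 0, dev);

        /* Ask the driver to JIT-compile the PTX and capture its build log. */
        char log[4096] = {0};
        CUjit_option opt[] = { CU_JIT_INFO_LOG_BUFFER,
                               CU_JIT_INFO_LOG_BUFFER_SIZE_BYTES };
        void *val[]        = { log, (void *)(size_t)sizeof(log) };

        CUmodule mod;
        CUresult rc = cuModuleLoadDataEx(&mod, ptx, 2, opt, val);
        if (rc != CUDA_SUCCESS) {
            fprintf(stderr, "JIT compilation failed (error %d)\n%s\n", (int)rc, log);
            return 1;
        }

        /* Retrieve the freshly compiled kernel; it can now be launched as usual. */
        CUfunction fn;
        cuModuleGetFunction(&fn, mod, "noop");
        printf("JIT succeeded.\n%s\n", log);

        cuModuleUnload(mod);
        cuCtxDestroy(ctx);
        return 0;
    }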

CUDA Toolkit and SDK 2.1 beta also features support for using a GPU that is not driving a display on Vista, a beta of the Linux Profiler 1.1 (separate download), and support for recent Linux releases including Fedora 9, OpenSUSE 11, and Ubuntu 8.04.

CUDA Toolkit and SDK 2.1 beta is available today for free download from www.nvidia.com/object/cuda_get.
