The new gDEBugger V4.5 adds the ability to view texture MIP-map levels. Each MIP-map level’s parameters and data (as an image or as raw data) can be displayed in the gDEBugger Textures and Buffers viewer, and the new Texture MIP-map Level slider lets you browse the different levels. gDEBugger V4.5 also introduces support for 1D and 2D texture arrays: the new Textures and Buffers viewer Texture Layer slider enables viewing the contents of the different texture layers. This version also brings notable performance and stability improvements.
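As background on MIP-map levels: each level of a MIP chain halves the texture’s dimensions (clamped to 1) until a 1×1 level is reached, so the number of levels a viewer like this must expose follows directly from the base dimensions. A minimal sketch in Python (the `mip_chain` helper is illustrative, not part of gDEBugger or OpenGL):

```python
def mip_chain(width, height):
    """Return the (width, height) of each MIP-map level,
    halving each dimension (minimum 1) down to the 1x1 level."""
    levels = [(width, height)]
    while width > 1 or height > 1:
        width = max(1, width // 2)
        height = max(1, height // 2)
        levels.append((width, height))
    return levels

print(len(mip_chain(1024, 1024)))  # 11 levels: 1024x1024 down to 1x1
```

For a 1024×1024 texture this yields 11 levels, which is the range the Texture MIP-map Level slider would cover for such a texture.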
gDEBugger, an OpenGL and OpenGL ES debugger and profiler, traces application activity on top of the OpenGL API and lets programmers see what is happening within the graphics system implementation to find bugs and optimize OpenGL application performance. gDEBugger runs on Windows and Linux operating systems, and is currently in Beta phase on Mac OS X.
OpenMM is a freely downloadable, high-performance, extensible library that allows molecular dynamics (MD) simulations to run on high-performance computer architectures, such as graphics processing units (GPUs). Speedups of up to 100 times over CPU-only execution have been achieved in some cases by running OpenMM on GPUs in desktop PCs. The new release includes a version of the widely used MD package GROMACS that integrates the OpenMM library, enabling acceleration on high-end NVIDIA and AMD/ATI GPUs. OpenMM is a collaborative project between Vijay Pande’s lab at Stanford University and Simbios, the National Center for Physics-based Simulation of Biological Structures at Stanford, which is supported by the National Institutes of Health. For more information on OpenMM, go to http://simtk.org/home/openmm. (Full press release.)
CUDA.NET 2.1 has been released with support for the NVIDIA CUDA 2.1 API. This version supports DirectX 10 interoperability and the new JIT compilation API. The library is supported on Windows and Linux operating systems. (CUDA.NET)
The first NTU workshop on GPU supercomputing was held on January 16, 2009. Organized by the Center for Quantum Science and Engineering (CQSE) at National Taiwan University, the workshop consisted of seminars on applications of GPU/CUDA to high-performance computation in science and engineering, as well as other fields. Slides from the presentations are now online.
Scott Sherman from Bjorn3D is holding a “Fold for Stephanie” month in support of his 13-year-old daughter, who has stage 4B Hodgkin’s lymphoma. He is even giving away an XFX NVIDIA GeForce GTX 285 GPU to the top folder for Stephanie. For more information, see the Bjorn 3D Forums.
The University of Illinois at Urbana-Champaign is launching a 13-week seminar series that will focus on emerging applications for parallel computing. The Need for Speed Seminar Series will feature world-class applications experts and researchers who will discuss what increased computing performance means for their fields. The series will bring together hardware engineers and software developers who require parallel processing to create faster and superior applications. Speakers will help forecast breakthroughs enabled by the rapid advances in computing performance per dollar, performance per watt, or storage capacity provided by Moore’s Law.
David Kirk, NVIDIA Fellow, will kick off the series with a special keynote on January 28. Following that, the Need for Speed series will be held at 4pm CT every Wednesday until April 29 at the UI’s Coordinated Science Laboratory. Seminars will also stream live over the internet and speakers will take questions from both in-house and online audience members. To learn more about the series, or to view the live seminars, please visit the Need for Speed seminar web page.
(Editor’s Note: this news was submitted after the talk occurred.)
February 5, 2009, 11am PST / 2pm EST
Are you looking for ways to improve your productivity by accelerating MATLAB functions? Now you can with the unprecedented performance of GPU computing.
By attending this webinar, you will learn:
- What GPU computing is
- What the NVIDIA CUDA parallel computing architecture is
- What the Jacket engine for MATLAB from AccelerEyes is
- How to get 10x to 50x speed-ups for several MATLAB functions
Date: Thursday, February 5, 2009
Time: 11:00am PST / 2:00pm EST
Duration: 45 Minute Presentation, 15 Minute Q&A
Presented By: Sumit Gupta, Ph.D., Sr Product Manager of Tesla GPU Computing at NVIDIA and John Melonakos, Ph.D., CEO at AccelerEyes LLC
NVIDIA announced that National Taiwan University has been named Asia’s first CUDA Center of Excellence (press release below). The university earned this title by formally adopting NVIDIA GPU Computing solutions across its research facilities and integrating into its curriculum a course on parallel computing based on the CUDA architecture. As the computing industry moves rapidly toward parallel processing and many-core architectures, NVIDIA has spent the past year working to offer tomorrow’s developers and engineers education on the best tools and methodologies for parallel computing. In addition to working with over 50 universities worldwide that actively use CUDA in their courses, NVIDIA developed the CUDA Center of Excellence Program to further assist universities devoted to educating tomorrow’s software developers about parallel computing. (Press Release)
From a press release:
SANTA CLARA, CA—JANUARY 15, 2009—NVIDIA today announced it is now working closely with Wipro to provide CUDA™ professional services to their joint customers worldwide. CUDA, NVIDIA’s parallel computing architecture accessible through an industry standard C language programming environment, has already delivered major leaps in performance across many industries. Wipro’s Product Engineering Services group will accelerate the development efforts of companies with vast software portfolios seeking to exploit parallel computing with the GPU.
What do GPUs, FPGAs, vector processors and other special-purpose chips have in common? They are examples of advanced processor architectures that the scientific community is using to accelerate computationally demanding applications. While high-performance computing systems that use application accelerators are still rare, they will be the norm rather than the exception in the near future. The 2009 Symposium on Application Accelerators in High-Performance Computing aims to bring together developers of computing accelerators and end-users of the technology to exchange ideas and learn about the latest developments in the field. The Symposium will focus on the use of application accelerators in high-performance and scientific computing and the issues surrounding it. Topics of interest include:
- novel accelerator processors, systems, and architectures
- integration of accelerators with high-performance computing systems
- programming models for accelerator-based computing
- languages and compilers for accelerator-based computing
- run-time environments, profiling and debugging tools for accelerator-based computing
- scientific and engineering applications that use application accelerators
Presentations from technology developers and the academic user community are invited. Researchers interested in presenting at the Symposium should submit extended abstracts of 2-3 pages to firstname.lastname@example.org by April 20, 2009. All submissions will be reviewed by the Technical Program Committee and accepted submissions will be presented as either oral presentations or posters. Presentation materials will be made available online at www.saahpc.org.
(2009 Symposium on Application Accelerators in High Performance Computing (SAAHPC’09). July 27-31, 2009, University of Illinois, Urbana, IL)