Using NVIDIA GPUs and PyCUDA, MIT and Harvard researchers demonstrate a better way for computers to ‘see’

December 8th, 2009

From: http://web.mit.edu/press/2009/visual-systems.html

Taking inspiration from genetic screening techniques, researchers from MIT and Harvard have demonstrated a way to build better artificial visual systems with the help of low-cost, high-performance gaming hardware.

The neural processing involved in visually recognizing even the simplest object in a natural environment is profound — and profoundly difficult to mimic. Neuroscientists have made broad advances in understanding the visual system, but much of the inner workings of biologically based systems remain a mystery.

Using Graphics Processing Units (GPUs) — the same technology video game designers use to render life-like graphics — MIT and Harvard researchers are now making progress faster than ever before. “We made a powerful computing system that delivers speed-ups of more than a hundredfold relative to conventional methods,” said Nicolas Pinto, a PhD candidate in James DiCarlo’s lab at the McGovern Institute for Brain Research at MIT. “With this extra computational power, we can discover new vision models that traditional methods miss.” Pinto co-authored the PLoS study with David Cox of the Visual Neuroscience Group at the Rowland Institute at Harvard.

Video: “Finding a better way for computers to ‘see’” from the Cox Lab @ Rowland Institute, on Vimeo.

How they did it: Harnessing the processing power of dozens of high-performance NVIDIA graphics cards and PlayStation 3 gaming consoles, the team designed a high-throughput screening process to tease out the best parameters for visual object recognition tasks. The resulting model outperformed a crop of state-of-the-art vision systems across a range of tests, more accurately identifying a range of objects on random natural backgrounds with variation in position, scale, and rotation. Had the team used conventional computational tools, the one-week screening phase would have taken over two years to complete.
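The screening idea described above — generate many candidate vision models with randomly chosen parameters, evaluate each on a recognition task, and keep the best — can be sketched in a few lines of plain Python. This is a minimal illustration of the search strategy only, not the authors’ actual GPU code: the parameter names and the synthetic scoring function below are hypothetical stand-ins for the real model components and benchmarks in the paper.

```python
import numpy as np

def random_model_params(rng):
    # Hypothetical parameter space for one model layer:
    # filter size, filter count, and a nonlinearity threshold.
    return {
        "filter_size": int(rng.choice([3, 5, 7, 9])),
        "n_filters": int(rng.choice([16, 32, 64])),
        "threshold": float(rng.uniform(0.0, 1.0)),
    }

def evaluate(params, rng):
    # Stand-in for a real benchmark: in the study, each candidate
    # model was scored on object recognition accuracy. Here a
    # synthetic score rewards mid-sized filters, purely to give the
    # screening loop something to optimize.
    base = 1.0 - abs(params["filter_size"] - 5) * 0.1
    return base + rng.normal(0.0, 0.05)

def screen(n_candidates, rng):
    # High-throughput screening loop: sample many random candidate
    # models, evaluate each, and retain the best performer. On GPU
    # hardware, the evaluations run in parallel; here they run serially.
    best_params, best_score = None, -np.inf
    for _ in range(n_candidates):
        params = random_model_params(rng)
        score = evaluate(params, rng)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

best, score = screen(500, np.random.default_rng(0))
print(best)
```

The point of the GPU hardware is that each candidate evaluation is independent, so thousands of models can be screened in parallel — turning a two-year serial search into a one-week one.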

Next steps: The researchers say that their high-throughput approach could be applied to other areas of computer vision, such as face identification, object tracking, pedestrian detection for automotive applications, and gesture and action recognition. Moreover, as scientists better understand what components make a good artificial vision system, they can use these hints to better understand the human brain as well.

Funding: National Institutes of Health, McKnight Endowment for Neuroscience, Jerry and Marge Burnett, the McGovern Institute for Brain Research at MIT, and the Rowland Institute at Harvard. Hardware support provided by the NVIDIA Corporation.

Also covered by:
http://www.eurekalert.org/pub_releases/2009-12/hu-rda120209.php
http://slashdot.org/story/09/12/05/1410231/MIT-amp-Harvard-On-Brain-Inspired-AI-Vision
http://hardware.slashdot.org/article.pl?sid=08/07/27/0721222
http://www.ddj.com/hpc-high-performance-computing/222000481
http://www.engadget.com/2009/12/04/harvard-and-mit-researchers-working-to-simulate-the-visual-corte/

Authors’ websites:
http://web.mit.edu/pinto
http://www.rowland.org/rjf/cox/index.html
http://web.mit.edu/dicarlo-lab/

Citation:
Pinto N, Doukhan D, DiCarlo JJ, Cox DD (2009) A High-Throughput Screening Approach to Discovering Good Forms of Biologically Inspired Visual Representation. PLoS Comput Biol 5(11): e1000579. doi:10.1371/journal.pcbi.1000579