The deadline for submissions to the “GPUs in Databases” (GID2011) workshop has been extended [ed: again...] to April 22nd, 2011. The workshop is devoted to sharing knowledge related to applying GPUs in database environments and to discussing possible future developments of this application domain. See our previous post for details.
The High Performance Computational Systems Biology (www.hibi.it) special session of CMSB 2011 (http://contraintes.inria.fr/CMSB11/) establishes a forum linking researchers in the areas of parallel computing and computational systems biology. Experts from around the world will present their current work and discuss challenges, new ideas, results, applications and their experience relating to key aspects of high performance computing in biology. Topics of interest include:
- Workload partitioning strategies
- Parallel stochastic simulation
- Biological and numerical parallel computing
- Parallel and distributed architectures
- General-purpose computation on graphics hardware
- Emerging processing architectures (Cell processors, FPGAs, PlayStation 3, etc.)
- Parallel model checking techniques
- Parallel parameter estimation
- Parallel sensitivity analysis
- Parallel algorithms for biological network analysis
- Application of concurrency theory to biology
- Parallel visualization algorithms
- Web services and Internet computing for e-Science
- Grid/Cloud/P2P/high performance computing for biology
- Multicore and cluster computing for biology
- Tools and applications
The call for papers is now open; please refer to www.hibi.it for details.
Please consider submitting your work to the 2011 Emerging Applications and Many-core Architectures workshop, co-located with ISCA. The deadline for submissions is April 15th; the workshop takes place on June 4th in San Jose, California, USA. For more details, refer to the workshop page: http://sites.google.com/site/eamaworkshop/home
HOOMD-blue performs general-purpose particle dynamics simulations on a single workstation, taking advantage of NVIDIA GPUs to attain a level of performance equivalent to many cores on a fast cluster. Flexible and configurable, HOOMD-blue is currently being used for coarse-grained molecular dynamics simulations of nano-materials, glasses, and surfactants, dissipative particle dynamics simulations (DPD) of polymers, and crystallization of metals.
HOOMD-blue 0.9.2 adds many new features. Highlights include:
- Long-ranged electrostatics via PPPM
- Support for CUDA 3.2 and 4.0
- New neighbor list option to exclude by particle diameter (for pair.slj)
- New syntax to specify multiple pair coefficients at once
- Improved documentation
- Significant performance boosts for small simulations
- RPM and .deb packaging for CentOS, Fedora, and Ubuntu
- and more
The North Carolina Renaissance Computing Institute (RENCI) is running Amber PMEMD on the Open Science Grid, the high throughput computing (HTC) fabric used by the Large Hadron Collider (LHC). This approach is likely to help researchers facing any of these challenges:
- Constrained by limited computing resources including access to GPGPUs
- Manually executing the same simulation repeatedly with different parameters
- Making simulations easier to understand, share, scale and re-use across compute resources
For more information see these two blog posts: High Throughput Parallel Molecular Dynamics and CUDA/Tesla Accelerated PMEMD on OSG. Contact Steve Cox (firstname.lastname@example.org) if you’d like to discuss further and determine if your application is a fit. If it is, RENCI can provide access to the grid as well as tools for executing and managing simulations.
The fourth international workshop and tutorial on Computational Intelligence on Consumer Games and Graphics Hardware (CIGPU 2011) will be held in Dublin on 13 July 2011. Submissions are invited on topics including (but not limited to): parallel genetic algorithms, GP, EP, ES, PSO, ACO, DE, computational biology, and EC on video game platforms and mobile devices. Papers that discuss novel implementations and the practicalities of writing software for these hardware platforms are especially welcome.
Papers should be submitted by 7 April 2011 in PDF format via email to email@example.com, with the subject line “GECCO Workshop”.
Diffraction, particularly of X-rays, is a powerful technique for investigating the structure, microstructure and dynamical properties of matter. To link theoretical methods, such as Molecular Dynamics and other atomistic approaches, with diffraction experiments, we have developed new software for calculating the powder diffraction pattern of nano-sized objects on GPUs. The software, soon to be made available under the GPL license, allows GPUs on different hosts to be used for a direct (brute-force) computation of the Debye scattering equation.
(L. Gelisio, C. L. Azanza Ricardo, M. Leoni and P. Scardi: “Real-space calculation of powder diffraction patterns on graphics processing units”, Journal of Applied Crystallography 43:647-653, 2010. [DOI])
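The Debye scattering equation sums sin(q·r_ij)/(q·r_ij) over all atom pairs, an O(N²) computation that is a natural fit for brute-force GPU parallelism. As a rough illustration of what is being computed (this is not the authors' code, and the function name and interface are assumptions), a minimal CPU sketch for identical atoms with a constant form factor f might look like this:

```python
import math

def debye_intensity(positions, q_values, f=1.0):
    """Brute-force Debye scattering equation for N identical atoms:
        I(q) = f^2 * sum_i sum_j sin(q * r_ij) / (q * r_ij)
    The self terms (i == j) each contribute f^2, since
    sin(x)/x -> 1 as x -> 0."""
    n = len(positions)
    # precompute all pairwise distances r_ij
    dist = [[math.dist(positions[i], positions[j]) for j in range(n)]
            for i in range(n)]
    result = []
    for q in q_values:
        total = 0.0
        for i in range(n):
            for j in range(n):
                x = q * dist[i][j]
                total += f * f * (math.sin(x) / x if x > 0.0 else 1.0)
        result.append(total)
    return result

# two identical atoms a distance 1.0 apart: I(q) = 2 f^2 (1 + sin(q)/q)
positions = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
intensities = debye_intensity(positions, [0.5, 1.0, 2.0])
```

On a GPU, the double loop over pairs is what gets parallelized across threads; the paper evaluates exactly this kind of direct sum for nano-sized objects.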
The deadline for submissions to the “GPUs in Databases” (GID2011) workshop has been extended to April 12th, 2011. The workshop is devoted to sharing knowledge related to applying GPUs in database environments and to discussing possible future developments of this application domain.
Data stream processing applications such as stock exchange data analysis, VoIP streaming, and sensor data processing pose two conflicting challenges: short per-stream latency — to satisfy the milliseconds-long, hard real-time constraints of each stream, and high throughput — to enable efficient processing of as many streams as possible. High-throughput programmable accelerators such as modern GPUs hold high potential to speed up the computations. However, their use for hard real-time stream processing is complicated by slow communications with CPUs, variable throughput changing non-linearly with the input size, and weak consistency of their local memory with respect to CPU accesses. Furthermore, their coarse grain hardware scheduler renders them unsuitable for unbalanced multi-stream workloads.
We present a general, efficient and practical algorithm for hard real-time stream scheduling in heterogeneous systems. The algorithm assigns incoming streams of different rates and deadlines to CPUs and accelerators. By employing novel stream schedulability criteria for accelerators, the algorithm finds the assignment which simultaneously satisfies the aggregate throughput requirements of all the streams and the deadline constraint of each stream alone.
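To give a flavor of the assignment problem (this toy sketch is NOT the paper's algorithm — the processor model, the fixed per-batch latency, and the schedulability test are all simplified stand-ins invented for illustration), a first-fit placement that checks both an aggregate throughput condition and a per-stream deadline condition might look like:

```python
def first_fit_assign(streams, processors):
    """Toy deadline-aware stream placement (illustration only).
    streams: list of (rate_items_per_s, deadline_s) tuples.
    processors: list of dicts with 'capacity' (items/s) and
    'batch_latency' (a fixed per-batch latency in seconds, e.g. a
    CPU-GPU transfer cost). Returns {stream index: processor index},
    or None if some stream cannot be placed."""
    load = [0.0] * len(processors)
    assignment = {}
    for i, (rate, deadline) in enumerate(streams):
        placed = False
        for p, proc in enumerate(processors):
            util = (load[p] + rate) / proc['capacity']
            # throughput: aggregate stream rate must fit the processor;
            # deadline: the fixed latency must leave room before the deadline
            if util <= 1.0 and proc['batch_latency'] < deadline:
                load[p] += rate
                assignment[i] = p
                placed = True
                break
        if not placed:
            return None
    return assignment

# a fast low-latency "CPU" and a high-throughput, higher-latency "GPU"
procs = [{'capacity': 100.0, 'batch_latency': 0.001},
         {'capacity': 1000.0, 'batch_latency': 0.020}]
# one tight-deadline stream plus two high-rate, loose-deadline streams
streams = [(50.0, 0.005), (400.0, 0.100), (400.0, 0.100)]
plan = first_fit_assign(streams, procs)
```

The point the example makes is the trade-off the paper exploits: the tight-deadline stream can only go to the low-latency processor, while the high-rate streams exceed the CPU's capacity and must go to the accelerator; the real algorithm uses much more refined, accelerator-specific schedulability criteria.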
Using the AES-CBC encryption kernel, we experimented extensively on thousands of streams with realistic rate and deadline distributions. Our framework outperformed the alternative methods by allowing 50% more streams to be processed with provably deadline-compliant execution, even for deadlines as short as tens of milliseconds. Overall, combined GPU-CPU execution allows up to a 4-fold throughput increase over highly optimized multi-threaded CPU-only implementations.
(Uri Verner, Assaf Schuster and Mark Silberstein: “Processing data streams with hard real-time constraints on heterogeneous systems”, ICS’11, to appear)
The 4th workshop on UnConventional High Performance Computing 2011 (UCHPC 2011), August 29th, 2011, Bordeaux, France, will be held in conjunction with Euro-Par 2011. This workshop is organized by Anders Hast, Josef Weidendorfer and Jan-Philipp Weiss.
As the word “UnConventional” in the title suggests, the workshop focuses on hardware or platforms used for HPC that were not intended for HPC in the first place. Reasons for using them include raw computing power, good performance per watt, or low cost in general. Thus, UCHPC tries to capture HPC solutions that are unconventional today but perhaps conventional tomorrow. For example, the computing power of gaming platforms has risen rapidly in recent years, which motivated the use of GPUs for computing (GPGPU) and even the building of computational grids from game consoles. The recent trend of integrating GPUs onto processor chips seems very beneficial for using both parts for HPC. Other examples of “unconventional” hardware are embedded low-power processors, upcoming many-core architectures, FPGAs and DSPs. Thus, interesting devices for research in unconventional HPC are not only standard server or desktop systems, but also devices that are relatively cheap because they are mass-market products, such as smartphones, netbooks, tablets and small NAS servers. For example, smartphones seem to become more performance-hungry every day. Only imagination sets the limit.
The full call for papers including detailed submission instructions is available at http://www.lrr.in.tum.de/~weidendo/uchpc11.