The International Workshop on OpenMP is an annual series of workshops dedicated to the promotion and advancement of all aspects of parallel programming with OpenMP. OpenMP is now a major programming model for shared memory systems, from multi-core machines to large-scale servers. Recently, new ideas and challenges have been proposed to extend the OpenMP framework to support accelerators and to exploit parallelism beyond the loop level. The workshop serves as a forum to present the latest research ideas and results related to this shared memory programming model. It also offers the opportunity to interact with OpenMP users, developers, and the people working on the next release of the standard. This year, the International Workshop on OpenMP (IWOMP 2010) was held in the high-tech city of Tsukuba, Japan.
Although the meeting has passed, the papers presented at the workshop are now available as a book published by Springer-Verlag: Beyond Loop Level Parallelism in OpenMP: Accelerators, Tasking and More. The papers are organized in topical sections on Runtime and Optimization; Proposed Extensions to OpenMP; Scheduling and Performance; and Hybrid Programming and Accelerators with OpenMP.
Table of Contents:
Runtime and Optimization
- Enabling Low-Overhead Hybrid MPI/OpenMP Parallelism with MPC
- A ROSE-Based OpenMP 3.0 Research Compiler Supporting Multiple Runtime Libraries
- Binding Nested OpenMP Programs on Hierarchical Memory Architectures
Proposed Extensions to OpenMP
- A Proposal for User-Defined Reductions in OpenMP
- An Extension to Improve OpenMP Tasking Control
- Towards an Error Model for OpenMP
Scheduling and Performance
- How OpenMP Applications Get More Benefit from Many-Core Era
- Topology-Aware OpenMP Process Scheduling
- How to Reconcile Event-Based Performance Analysis with Tasking in OpenMP
- Fuzzy Application Parallelization Using OpenMP
Hybrid Programming and Accelerators with OpenMP
- Hybrid Parallel Programming on SMP Clusters using XPFortran and OpenMP
- A Case for Including Transactions in OpenMP
- OMPCUDA: OpenMP Execution Framework for CUDA Based on Omni OpenMP Compiler
The last chapter is of special interest to the GPU community. The arithmetic performance of GPGPU computing attracts attention, but the difficulty of the programming poses a problem. The authors propose doing GPGPU programming with an existing parallel programming technique: they are developing an OpenMP framework for GPUs, based on the Omni OpenMP Compiler and named “OMPCUDA”. In this paper the authors describe the design and implementation of OMPCUDA. They evaluated it with test programs and validated that a parallel speedup could be obtained easily using the same code as existing OpenMP programs.
For those who have institutional access to Springer books, it can be accessed at [DOI: 10.1007/978-3-642-13217-9]. Otherwise, it is available on Amazon for under $72. Please help us to cover hardware expenses and support GPU Science by buying this book via our affiliated store!