Search results for: openmp

Number of results: 2294

2002
Marc González Eduard Ayguadé Xavier Martorell Jesús Labarta Phu V. Luong

Two alternative dual-level parallel implementations of the Multiblock Grid Princeton Ocean Model (MGPOM) are compared in this paper. The first combines two programming paradigms: message passing with the Message Passing Interface (MPI) and shared memory with OpenMP (the version called MPI-OpenMP); the second uses only OpenMP (the version called OpenMP-Only). MGPOM is a multiblock grid co...
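As an illustration of the dual-level pattern this abstract refers to (and not the MGPOM code itself), the sketch below distributes blocks across MPI ranks and parallelizes the work inside each block with OpenMP threads; the array name, size, and computation are placeholders.

/* Minimal sketch of the dual-level (MPI + OpenMP) pattern: MPI distributes
 * blocks across processes, OpenMP parallelizes the loop inside each block.
 * NPOINTS and the update are illustrative, not taken from MGPOM. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

#define NPOINTS 1000000

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    static double block[NPOINTS];        /* this process's grid block */
    double local_sum = 0.0, global_sum = 0.0;

    /* Coarse level: each MPI rank owns one block.
     * Fine level: OpenMP threads share the work inside the block. */
    #pragma omp parallel for reduction(+:local_sum)
    for (int i = 0; i < NPOINTS; i++) {
        block[i] = (double)(rank + i);
        local_sum += block[i];
    }

    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("global sum = %f (across %d ranks)\n", global_sum, size);

    MPI_Finalize();
    return 0;
}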

2004
J. Balart A. Duran

In this paper we present a technique based on code templates, oriented to source-to-source code transformations for OpenMP parallelization. Our goal is to provide an OpenMP compilation infrastructure that includes a reconfigurable code generation phase, targeting different OpenMP runtime systems or exploring different translation strategies for OpenMP constructs. We describe the main OpenMP tran...
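To make the idea of such a source-to-source translation concrete, here is a hedged sketch of how a parallel region can be outlined into a function and replaced by runtime calls. The rt_parallel_start/rt_parallel_end entry points are hypothetical stand-ins for whatever runtime a template-based generator targets, not the API of any real OpenMP runtime.

/* Toy source-to-source translation of an OpenMP parallel region.
 * The "runtime" below is a serial stub so the example is self-contained. */
#include <stdio.h>

/* hypothetical runtime entry points (placeholders) */
static void rt_parallel_start(void (*fn)(void *), void *args) { fn(args); }
static void rt_parallel_end(void) { /* join / barrier would go here */ }

/* Original user code:
 *     #pragma omp parallel
 *     printf("hello from the region\n");
 *
 * Translated form: the region body is outlined into a microtask and the
 * directive becomes calls into the target runtime. */
static void outlined_region(void *args) {
    (void)args;
    printf("hello from the region\n");
}

int main(void) {
    rt_parallel_start(outlined_region, 0);  /* code emitted for the directive */
    rt_parallel_end();
    return 0;
}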

2011
Barbara Chapman

Asynchronous tasks make it easy to express the parallelism in a broad variety of computations and are especially useful for writing parallel applications with irregular and/or dynamic workloads. Their introduction into the OpenMP specification has greatly extended the scope of this API. Yet the body of benchmarks using OpenMP tasks remains minimal. The EPCC OpenMP Microbenchmarks provide measur...
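A minimal example of the asynchronous tasks discussed here, assuming the standard OpenMP task and taskwait constructs; it is an illustrative recursive workload, not one of the EPCC microbenchmarks.

/* Recursive Fibonacci with OpenMP tasks: each call spawns two child tasks,
 * a pattern typical of irregular/dynamic workloads. */
#include <stdio.h>
#include <omp.h>

static long fib(int n) {
    if (n < 2) return n;
    long x, y;
    #pragma omp task shared(x)
    x = fib(n - 1);
    #pragma omp task shared(y)
    y = fib(n - 2);
    #pragma omp taskwait          /* wait for the two child tasks */
    return x + y;
}

int main(void) {
    long result;
    #pragma omp parallel
    {
        #pragma omp single        /* one thread creates the root of the task tree */
        result = fib(25);
    }
    printf("fib(25) = %ld\n", result);
    return 0;
}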

2007
Chunhua Liao Christoph Eick Yuriy Fofanov Lei Huang Oscar Hernandez Laksono Adhianto

OpenMP is a de facto API for parallel programming in C/C++ and Fortran on shared memory and distributed shared memory platforms. It is also being increasingly used with MPI to form a hybrid programming model and is expected to be a promising candidate to exploit emerging multicore architectures. An OpenMP cost model is an analytical model that reflects the characteristics of OpenMP applications...
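A cost model of the kind mentioned here can be sketched, under assumed parameters, as fork/join overhead plus the loop work divided among threads; the constants and function names below are illustrative placeholders, not values or formulas from the paper.

/* Toy analytical cost model for an OpenMP parallel-for. */
#include <stdio.h>

/* overhead grows (roughly) with the number of threads in this toy model */
static double fork_join_overhead(int threads) {
    const double per_thread_cost = 1.5e-6;   /* seconds, assumed */
    return per_thread_cost * threads;
}

/* predicted wall time for a parallel-for with `iters` iterations */
static double predict_parallel_for(long iters, double secs_per_iter, int threads) {
    double work = (double)iters * secs_per_iter / threads;
    return fork_join_overhead(threads) + work;
}

int main(void) {
    for (int p = 1; p <= 8; p *= 2)
        printf("threads=%d predicted=%.6f s\n",
               p, predict_parallel_for(1000000, 1e-8, p));
    return 0;
}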

Journal: :CoRR 2017
Simone Atzeni Ganesh Gopalakrishnan

OpenMP is the de facto standard for exploiting on-node parallelism in new-generation supercomputers. Despite its overall ease of use, even expert users are known to create OpenMP programs that harbor concurrency errors, among the most insidious of which are data races. OpenMP is also a rapidly evolving standard, which means that future data races may be introduced within unfamiliar c...
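As a generic illustration (not an example from the paper), the snippet below shows a classic OpenMP data race on a shared counter and the usual fix with a reduction clause.

/* A data race on a shared counter, and the race-free version. */
#include <stdio.h>
#include <omp.h>

int main(void) {
    long racy = 0, safe = 0;
    const int N = 100000;

    /* Data race: every thread reads and writes `racy` without synchronization,
     * so updates can be lost and the final value is nondeterministic. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        racy++;

    /* Fix: let OpenMP combine per-thread partial sums with a reduction. */
    #pragma omp parallel for reduction(+:safe)
    for (int i = 0; i < N; i++)
        safe++;

    printf("racy = %ld (may be < %d), safe = %ld\n", racy, N, safe);
    return 0;
}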

2007
Larry Meadows

The OpenMP 3.0 standard should be released for public comment by the time of this conference. OpenMP 3.0 is the first major upgrade of the OpenMP standard since the merger of the C and Fortran standards in OpenMP 2.5. This talk will give an overview of the new features in the OpenMP standard and show how they help to extend the range of problems for which OpenMP is suitable. Even with multi-cor...

2000
Mitsuhisa Sato Hiroshi Harada Yutaka Ishikawa

In this paper, we present an implementation of OpenMP compiler for a page-based software distributed shared memory system, SCASH on a cluster of PCs. For programming distributed memory multiprocessors such as clusters of PC/WS and MPP, message passing is usually used. A message passing system requires programmers to explicitly code the communication and makes writing parallel programs cumbersom...

2015
Raul Vidal Marc Casas Miquel Moretó Dimitrios Chasapis Roger Ferrer Xavier Martorell Eduard Ayguadé Jesús Labarta Mateo Valero

OpenMP has been for many years the most widely used programming model for shared memory architectures. Periodically, new features are proposed and some of them are finally selected for inclusion in the OpenMP standard. The OmpSs programming model developed at the Barcelona Supercomputing Center (BSC) aims to be an OpenMP forerunner that handles the main OpenMP constructs plus some extra feature...
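One of the features OmpSs explored ahead of the standard is task dependences, which later entered OpenMP as the depend clause in version 4.0; the sketch below uses standard OpenMP syntax (OmpSs's own in/out clause spelling differs) and is illustrative rather than drawn from the paper.

/* Two tasks ordered by a data dependence instead of an explicit barrier. */
#include <stdio.h>
#include <omp.h>

int main(void) {
    int a = 0, b = 0;
    #pragma omp parallel
    #pragma omp single
    {
        #pragma omp task depend(out: a)
        a = 1;                            /* producer of a */

        #pragma omp task depend(in: a) depend(out: b)
        b = a + 1;                        /* runs only after the first task */

        #pragma omp taskwait
        printf("a=%d b=%d\n", a, b);      /* prints a=1 b=2 */
    }
    return 0;
}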

2001
Seung-Jai Min Seon Wook Kim Michael Voss Sang Ik Lee Rudolf Eigenmann

The recent parallel language standard for shared memory multiprocessor (SMP) machines, OpenMP, promises a simple and portable interface for programmers who wish to exploit parallelism explicitly. In this paper, we present our effort to develop portable compilers for the OpenMP parallel directive language. Our compiler consists of two parts. Part one is an OpenMP parallelizer, which transforms s...

2000
Kazuhiro Kusano Shigehisa Satoh Mitsuhisa Sato

We developed an OpenMP compiler, called Omni. This paper describes a performance evaluation of the Omni OpenMP compiler. We take two commercial OpenMP C compilers, the KAI GuideC and the PGI C compiler, for comparison. Microbenchmarks and a program in Parkbench are used for the evaluation. The results using a SUN Enterprise 450 with four processors show the performance of Omni is comparable to ...
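To illustrate what such a microbenchmark measures, here is a hedged sketch that times many nearly empty parallel regions with omp_get_wtime() to estimate fork/join overhead; it is loosely in the spirit of such benchmarks, not the actual Omni evaluation code.

/* Estimate the per-region fork/join cost by averaging over many regions. */
#include <stdio.h>
#include <omp.h>

#define REPS 10000

int main(void) {
    long sink = 0;

    double start = omp_get_wtime();
    for (int r = 0; r < REPS; r++) {
        #pragma omp parallel
        {
            #pragma omp atomic
            sink++;                       /* tiny body so the region is not empty */
        }
    }
    double elapsed = omp_get_wtime() - start;

    printf("sink=%ld, average parallel-region cost: %.3f microseconds\n",
           sink, 1e6 * elapsed / REPS);
    return 0;
}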

Chart: number of search results per year
