Search results for: message passing interface mpi

Number of results: 292891

2007
Stephen F. Siegel

This paper explores a way to apply model checking techniques to parallel programs that use the nonblocking primitives of the Message Passing Interface (MPI). The method has been implemented as an extension to the model checker Spin called Mpi-Spin. It has been applied to 17 examples from a widely-used textbook on MPI. Many correctness properties of these examples were verified and in two cases ...

2006
Rainer Keller George Bosilca Graham E. Fagg Michael M. Resch Jack J. Dongarra

This paper describes the implementation of the MPI performance-revealing extension interface (Peruse) in Open MPI, together with usage and experience. While the PMPI interface allows timing MPI functions through wrappers, it cannot provide MPI-internal information on MPI states and lower-level network performance. We introduce the general design criteria of the interface implementatio...

2000
Jeffrey M. Squyres Andrew Lumsdaine William L. George John G. Hagedorn Judith E. Devaney

Interoperable MPI (IMPI) is a protocol specification to allow multiple MPI implementations to cooperate on a single MPI job. Unlike portable MPI implementations, an IMPI-connected parallel job allows the use of vendor-tuned message passing libraries on given target architectures, thus potentially allowing higher levels of performance than previously possible. Additionally, the IMPI protocol use...

Journal: :Concurrency - Practice and Experience 2004
Anthony Skjellum Arkady Kanevsky Yoginder S. Dandass Jerrell Watts Steve Paavola Dennis Cottel Greg Henley L. Shane Hebert Zhenqian Cui Anna Rounbehler

The MPI/RT standard is the product of many people's work in an open community standards group over a period of more than six years. The purpose of this archival publication is to preserve the significant knowledge and experience in real-time message-passing systems that was developed through the R&D effort as well as through the specification of the standard. Interestingly, sever...

2000
Ian T. Foster David R. Kohr Rakesh Krishnaiyer Alok Choudhary

Data-parallel languages such as High Performance Fortran (HPF) present a simple execution model in which a single thread of control performs high-level operations on distributed arrays. These languages can greatly ease the development of parallel programs. Yet there are large classes of applications for which a mixture of task and data parallelism is most appropriate. Such applications can be ...

2003
William Gropp Ewing L. Lusk Robert B. Ross Rajeev Thakur

The Message Passing Interface (MPI) specification is widely used for solving significant scientific and engineering problems on parallel computers. ...

Journal: :Bioinformatics 2002
Jens Kleinjung Nigel Douglas Jaap Heringa

UNLABELLED Multiple sequence alignment is a frequently used technique for analyzing sequence relationships. Compilation of large alignments is computationally expensive, but processing time can be considerably reduced when the computational load is distributed over many processors. Parallel processing functionality in the form of single-instruction multiple-data (SIMD) technology was implemente...

1996
Ian T. Foster David R. Kohr Rakesh Krishnaiyer Alok Choudhary

Data-parallel languages such as High Performance Fortran (HPF) present a simple execution model in which a single thread of control performs high-level operations on distributed arrays. These languages can greatly ease the development of parallel programs. Yet there are large classes of applications for which a mixture of task and data parallelism is most appropriate. Such applications can be s...

2005
Brian W. Barrett Jeffrey M. Squyres Andrew Lumsdaine Richard L. Graham George Bosilca

Component architectures provide a useful framework for developing an extensible and maintainable code base upon which large-scale software projects can be built. Component methodologies have only recently been incorporated into applications by the High Performance Computing community, in part because of the perception that component architectures necessarily incur an unacceptable performance pen...

2008
Mohamad Chaarawi Jeffrey M. Squyres Edgar Gabriel Saber Feki

Clustered computing environments, although becoming the predominant high-performance computing platform of choice, continue to grow in complexity. It is relatively easy to achieve good performance with real-world MPI applications on such platforms, but obtaining the best possible MPI performance is still an extremely difficult task, requiring painstaking tuning of all levels of the hardware and...
