Search results for: message passing interface mpi

Number of results: 292,891

2007
Alexandre C. Hausen, Silvia R. Vergilio, Simone R. S. Souza, Paulo S. L. Souza, Adenilso S. Simão

Among message passing environments, MPI (Message Passing Interface) has been considered by several authors as the de facto standard for building parallel software. Despite this popularity, there is a lack of tools that support the testing of MPI programs. The existing tools do not support the application of a test criterion; they aim only at visualization and debugging. The use o...

1999
Judith Ellen Devaney

There are many ways to create a distributed system, such as with Parallel Virtual Machine (PVM), PAR-MACS, p4, Message Passing Interface (MPI) and the Common Object Request Broker Architecture (CORBA). This article concentrates on MPI, CORBA and the interface for information (data-type) transfer. We discuss the transfer of complex data types that are compositions of basic predefined data-type...

1994
Purushotham V. Bangalore, Nathan E. Doss, Anthony Skjellum

The draft of the MPI (Message-Passing Interface) standard was released at Supercomputing '93, November 1993. The final MPI document is expected to be released in mid-April of 1994. Language bindings for C and FORTRAN were included in the draft; however, a language binding for C++ was not included. MPI provides support for datatypes and topologies that is needed for developing numerical libraries, ...

1995
Hubertus Franke, Ching-Farn Eric Wu, Michel Riviere, Pratap Pattnaik, Marc Snir

In this paper we discuss an implementation of the Message Passing Interface standard (MPI) for the IBM Scalable POWERparallel 1 and 2 (SP1, SP2). Key to a reliable and efficient implementation of a message passing library on these machines is the careful design of a UNIX-socket-like layer in user space with controlled access to the communication adapters and with adequate recovery and flow con...

Journal: CoRR, 2011
Alaa Ismail Elnashar

Message Passing Interface (MPI) is widely used to implement parallel programs. Although Windows-based architectures provide facilities for parallel execution and multi-threading, little attention has been focused on using MPI on these platforms. In this paper we use a dual-core Windows-based platform to study the effect of the number of parallel processes and the number of cores on the perform...

Journal: CoRR, 2011
Alaa Ismail Elnashar, Sultan Aljahdali, Mosaid Al Sadhan

Message Passing Interface (MPI) is the most commonly used paradigm in writing parallel programs, since it can be employed not only within a single processing node but also across several connected ones. Data flow analysis concepts, techniques and tools are needed to understand and analyze MPI-based programs and to detect bugs that arise in these programs. In this paper we propose two automated techniques...

2001
Rossen Dimitrov, Anthony Skjellum

Efficient Message Passing Interface implementations for emerging cluster interconnects are an important requirement for useful parallel processing on cost-effective clusters of NT workstations. This paper reports on a new implementation of MPI for VI Architecture networks. Support for high bandwidth, low latency, and low overhead are considered, as is the match of the MPI specification to the VI Ar...

2015
Thomas Moreau, Laurent Oudre

Convolutional sparse coding methods focus on building representations of time signals as sparse, linear combinations of shifted patterns. These techniques have proven to be useful when dealing with signals (such as ECG or images) which are composed of several characteristic patterns ([1, 2, 3]). For signals of this type, the shapes and positions of these templates are crucial for their study...

1999
Rossen Dimitrov, Anthony Skjellum

Efficient Message Passing Interface implementations for emerging cluster interconnects are an important requirement for useful parallel processing on cost-effective clusters of NT workstations. This paper reports on a new implementation of MPI for VI Architecture networks. Support for high bandwidth, low latency, and low overhead are considered, as is the match of the MPI specification to the VI A...

2004
Ekaterina Elts

PVM and MPI, two systems for programming clusters, are often compared. Each system has its own unique strengths, and this will remain so for the foreseeable future. This paper compares PVM and MPI features, pointing out the situations where one may be favored over the other; it explains the differences between these systems and the reasons for such differences. PVM – Parallel Virtual Machine; MPI –...

Chart of the number of search results per year
