Search results for: message passing interface mpi

Number of results: 292891

2009
IZZATDIN A. AZIZ

In this paper, we present a parallel implementation of a solution for the Traveling Salesman Problem (TSP). TSP is the problem of finding the shortest route through a given set of points, visiting each point exactly once. Initially, a sequential algorithm is developed from scratch and written in the C language. The sequential algorithm is then converted into a parallel alg...

2007
P. MELAS

The availability of a large number of high-performance computing workstations connected through a network is an attractive option for many applications. The Message-Passing Interface (MPI) software environment is a de-facto message-passing standard which has gained widespread adoption by many organisations. However, the underlying network architecture must be taken into account to exploit efficie...

2004
Bettina Krammer Matthias S. Müller Michael M. Resch

The Message Passing Interface (MPI) is widely used to write parallel programs using message passing. Due to the complexity of parallel programming there is a need for tools supporting the development process. There are many situations where incorrect usage of MPI by the application programmer can automatically be detected. Examples are the introduction of irreproducibility, deadlocks and incorr...

2004
Kathryn Marie Mohror Karen L. Karavanic David McClure

An abstract of the thesis of Kathryn Marie Mohror for the Master of Science in Computer Science presented November 13, 2003. Title: Infrastructure For Performance Tuning MPI Applications Clusters of workstations are becoming increasingly popular as a low-budget alternative for supercomputing power. In these systems, message-passing is often used to allow the separate nodes to act as a single co...

Journal: :Parallel Computing 1994
David W. Walker

This paper presents an overview of MPI, a proposed standard message passing interface for MIMD distributed memory concurrent computers. The design of MPI has been a collective effort involving researchers in the United States and Europe from many organizations and institutions. MPI includes point-to-point and collective communication routines, as well as support for process groups, communication...

Journal: :IJNGC 2012
Chapram Sudhakar T. Ramesh

In this paper, an Improved Distributed Shared Memory (IDSM) system, a hybrid of shared memory and message passing, is proposed. This hybrid combines the benefits of shared memory, in terms of ease of programming, with those of message passing, in terms of efficiency. Further, it is designed to effectively utilize state-of-the-art multicore-based networks of workstations and supports sta...

1997
William Saphir

The Message Passing Interface (MPI) standard has enabled the creation of portable and efficient programs for distributed memory parallel computers. Since the first version of the standard was completed in 1994, a large number of MPI implementations have become available. These include several portable implementations as well as optimized implementations from every major parallel computer manufa...

Journal: :Concurrency and Computation: Practice and Experience 2002
Glenn R. Luecke Yan Zou James Coyle Jim Hoekstra Marina Kraeva

The Message Passing Interface (MPI) is commonly used to write parallel programs for distributed memory parallel computers. MPI-CHECK is a tool developed to aid in the debugging of MPI programs that are written in free or fixed format Fortran 90 and Fortran 77. This paper presents the methods used in MPI-CHECK 2.0 to detect many situations where actual and potential deadlocks occur when using bl...

2011
Mehnaz Hafeez Sajjad Asghar Usman A. Malik Naveed Riaz

MPI is a de facto standard for message passing in high-performance parallel as well as distributed computing environments. The static and homogeneous model of MPI is not compatible with the dynamic and heterogeneous Grid environment, and few implementations offer message passing over the Internet and Grids. P2P-MPI and A-JUMP are MPI implementations which provide both point-t...

2011
John-Nicholas Furst Ayse K. Coskun

The number of cores integrated on a single chip increases with each generation of computers. Traditionally, a single operating system (OS) manages all the cores and resource allocation on a multicore chip. Intel’s Single-chip Cloud Computer (SCC), a manycore processor built for research use with 48 cores, is an implementation of a “cluster-on-chip” architecture. That is, the SCC can be configur...
