Search results for: message passing interface mpi

Number of results: 292891

2000
Zhenqian Cui Anthony Skjellum Arkady Kanevsky

Parallel applications, emerging in the last several years, require high-performance communication and computation systems, and have additionally placed stringent quality of service (QoS) requirements on programming environments. Especially with the introduction of new communication protocols for cluster computing such as Virtual Interface (VI) architecture, the design and implementation of an e...

2002
Marc González Eduard Ayguadé Xavier Martorell Jesús Labarta Phu V. Luong

Two alternative dual-level parallel implementations of the Multiblock Grid Princeton Ocean Model (MGPOM) are compared in this paper. The first one combines the use of two programming paradigms: message passing with the Message Passing Interface (MPI) and shared memory with OpenMP (version called MPI-OpenMP); the second uses only OpenMP (version called OpenMP-Only). MGPOM is a multiblock grid co...

Journal: Journal of structural biology 2007
Chao Yang Pawel A Penczek ArDean Leith Francisco J Asturias Esmond G Ng Robert M Glaeser Joachim Frank

We describe the strategies and implementation details we employed to parallelize the SPIDER software package on distributed-memory parallel computers using the message passing interface (MPI). The MPI-enabled SPIDER preserves the interactive command line and batch interface used in the sequential version of SPIDER, and thus does not require users to modify their existing batch programs. We show the...

Journal: CoRR 2015
Sascha Hunold Alexandra Carpen-Amarie

The Message Passing Interface (MPI) is the prevalent programming model used on today’s supercomputers. Therefore, MPI library developers are looking for the best possible performance (shortest run-time) of individual MPI functions across many different supercomputer architectures. Several MPI benchmark suites have been developed to assess the performance of MPI implementations. Unfortunately, t...

Journal: J. Parallel Distrib. Comput. 2001
Boris V. Protopopov Anthony Skjellum

[Figure 1: MPICH layered software architecture — MPI Application, MPI Implementation, Abstract Device Interface (ADI), Channel Device Interface, low-level device, communication device.] The deficiencies of the MPICH architecture, such as inefficient multi-fabric communication and non-thread-safety, are rooted in the ADI and Device layers. In order to make further discussion specific, we present the ADI and CDI in...

2000
Olivier Aumage Luc Bougé Raymond Namyst

This paper introduces Madeleine II, an adaptive multiprotocol extension of the Madeleine portable communication interface. Madeleine II provides facilities to use multiple network protocols (VIA, SCI, TCP, MPI) and multiple network adapters (Ethernet, Myrinet, SCI) within the same application. Moreover, it can dynamically select the most appropriate transfer method for a given network protocol...

Journal: Algorithms 2023

The paper proposes a parallel algorithm for solving large overdetermined systems of linear algebraic equations with a dense matrix. It is based on a modification of the conjugate gradient method that is able to take into account the rounding errors accumulated during the calculations when deciding whether to terminate the iterative process. The algorithm is constructed in such a way that it exploits the capabilities of the message passing inter...

Journal: Parallel Computing 2011
Brice Goglin

In the last decade, cluster computing has become the most popular high-performance computing architecture. Although numerous technological innovations have been proposed to improve the interconnection of nodes, many clusters still rely on commodity Ethernet hardware to implement message passing within parallel applications. We present Open-MX, an open-source message passing stack over generic E...

1995
Dhabaleswar K. Panda

This paper presents a new approach to implementing global reduction operations in wormhole k-ary n-cubes. The novelty lies in using a multidestination message passing mechanism instead of single-destination (unicast) messages. Using pairwise exchange worms along each dimension, it is shown that complete global reduction and barrier synchronization operations, as defined by the Message Passing Interf...

2003
Jason Digalakis Konstantinos Margaritis

Parallel Evolutionary algorithms have been developed to reduce the running time of serial Evolutionary algorithms. Two major paradigms for parallel programming, Message Passing and Shared Memory, are implemented and their performance observed. Message Passing Interface (MPI) and TreadMarks runtime libraries are chosen to implement parallel Evolutionary algorithms, based on a synchronous master-...
