Paradigms and Strategies for Scientific Computing on Distributed Memory Concurrent Computers
Authors
Abstract
In this work we examine recent advances in parallel languages and abstractions that have the potential for improving the programmability and maintainability of large-scale, parallel, scientific applications running on high performance architectures and networks. This paper focuses on Fortran M, a set of extensions to Fortran 77 that supports the modular design of message passing programs. We describe the Fortran M implementation of a particle-in-cell (PIC) plasma simulation application and discuss issues in the optimization of the code. The use of two other methodologies for parallelizing the PIC application is considered. The first is based on the shared object abstraction as embodied in the Orca language. The second approach is the Split-C language. In Fortran M, Orca, and Split-C, the ability of the programmer to control the granularity of communication is important in designing an efficient implementation.
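The abstract's closing point, that controlling communication granularity is key to an efficient implementation, can be illustrated with a simple cost model. The sketch below is not from the paper; it uses a standard latency-plus-bandwidth model (the `latency`, `per_byte`, and `particle_bytes` parameters are hypothetical values) to compare sending each boundary-crossing particle of a PIC simulation in its own message versus batching them into one message per neighbor per time step.

```python
# Illustrative sketch (not from the paper): why communication granularity
# matters when migrating particles between subdomains in a PIC code.
# Fine-grained: one message per crossing particle -> pay latency each time.
# Coarse-grained: one batched message per step -> pay latency once.

def migration_cost(n_crossing, batched,
                   latency=1e-6,        # per-message startup cost (s), assumed
                   per_byte=1e-9,       # inverse bandwidth (s/byte), assumed
                   particle_bytes=48):  # bytes per particle record, assumed
    """Model the time to migrate n_crossing particles to a neighbor."""
    payload = per_byte * particle_bytes * n_crossing
    if batched:
        # Single message carrying every crossing particle.
        return latency + payload
    # One message per particle: latency is paid n_crossing times.
    return n_crossing * latency + payload

coarse = migration_cost(10_000, batched=True)
fine = migration_cost(10_000, batched=False)
print(f"coarse-grained: {coarse:.6f} s, fine-grained: {fine:.6f} s")
```

Under these assumed parameters the batched version is roughly twenty times cheaper at 10,000 crossings, because the per-message latency, not bandwidth, dominates the fine-grained case; this is the kind of trade-off the programmer can express when the language exposes message granularity.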
Similar Papers
Mobile Agents, DSM, Coordination, and Self-Migrating Threads: A Common Framework
We compare four paradigms that have recently been the subject of considerable attention: mobile agents, distributed shared memory (DSM), coordination paradigms, and self-migrating threads. We place these paradigms in a common framework consisting of three layers—the computational model, its implementation on a physical architecture, and the interface to the system's environment—to demonstrate t...
NUTS: a Distributed Object-oriented Platform with High Level Communication Functions
An extensible object-oriented platform NUTS for distributed computing is described which is based on an object-oriented programming environment NUT, is built on top of the Parallel Virtual Machine (PVM), and hides all low-level features of the latter. The language of NUTS is a concurrent object-oriented programming language with coarse-grained parallelism and distributed shared memory communicat...
Technical Paper Accepted for Publication in SIAM Review: Software Libraries for Linear Algebra Computations on High Performance Computers
This paper discusses the design of linear algebra libraries for high performance computers. Particular emphasis is placed on the development of scalable algorithms for MIMD distributed memory concurrent computers. A brief description of the EISPACK, LINPACK, and LAPACK libraries is given, followed by an outline of ScaLAPACK, which is a distributed memory version of LAPACK currently under develo...
Combining Shared and Distributed Memory Programming Models on Clusters of Symmetric Multiprocessors: Some Basic Promising Experiments
This note presents some experiments on different clusters of SMPs, where both distributed and shared memory parallel programming paradigms can be naturally combined. Although the platforms exhibit the same macroscopic memory organization, it appears that their individual overall performance is closely dependent on the ability of their hardware to efficiently exploit the local shared memory with...