The Interaction between Memory Allocation and Adaptive Partitioning in Message-Passing Multicomputers
Author
Abstract
Most studies on adaptive partitioning policies for scheduling parallel jobs on distributed-memory parallel computers ignore the constraints imposed by the memory requirements of the jobs. In this paper, we first show that these constraints can have a negative impact on the performance of adaptive partitioning policies. We then evaluate the performance of adaptive partitioning in a system where these minimum processor constraints are eased due to the provision of support for virtual memory. Our primary conclusion is that any performance benefits resulting from the easing of minimum processor constraints imposed by the memory requirements of jobs will be negated by the overhead due to paging.
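To make the interaction concrete, the following sketch (a simplified equipartition-style policy, not the specific policies evaluated in the paper; all job names, sizes, and parameters are illustrative) derives each job's minimum partition size from its memory requirement and shows how that floor can distort an otherwise equal division of the free processors.

```python
# A minimal sketch, assuming a simplified equipartition-style policy; the job
# names, sizes, and parameters below are illustrative and not taken from the paper.
import math
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    memory_mb: int  # total memory footprint of the job

def min_processors(job: Job, node_memory_mb: int) -> int:
    """Without virtual memory, a job must span enough nodes to hold its data set."""
    return math.ceil(job.memory_mb / node_memory_mb)

def adaptive_partition(jobs: list[Job], free_procs: int, node_memory_mb: int) -> dict[str, int]:
    """Divide the free processors roughly equally among waiting jobs, but never
    give a job fewer processors than its memory requirement demands."""
    alloc: dict[str, int] = {}
    waiting = list(jobs)
    while waiting and free_procs > 0:
        share = max(1, free_procs // len(waiting))
        job = waiting.pop(0)
        need = min_processors(job, node_memory_mb)
        give = max(share, need)      # the memory floor can override the fair share
        if give > free_procs:
            continue                 # minimum cannot be met yet; the job keeps waiting
        alloc[job.name] = give
        free_procs -= give
    return alloc

if __name__ == "__main__":
    jobs = [Job("A", memory_mb=4096), Job("B", memory_mb=512)]
    # 16 free nodes with 256 MB each: job A's memory floor is 16 nodes, so it
    # absorbs the whole machine and job B is left waiting -> {'A': 16}
    print(adaptive_partition(jobs, free_procs=16, node_memory_mb=256))
```

With virtual-memory support, min_processors could in principle return 1 for every job; the relaxation of exactly this floor, and the paging overhead it introduces, is what the paper evaluates.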
Similar Resources
A Message-Passing Distributed Memory Parallel Algorithm for a Dual-Code Thin Layer, Parabolized Navier-Stokes Solver
In this study, the results of parallelizing a 3-D dual code (Thin Layer, Parabolized Navier-Stokes solver) for solving supersonic turbulent flow around body and wing-body combinations are presented. As a serial code, the TLNS solver is very time-consuming and requires a large amount of memory due to its iterative and lengthy computations. Also, for complicated geometries, an exceeding number of grid...
Performance and Power Analysis of RCCE Message Passing on the Intel Single-Chip Cloud Computer
The number of cores integrated on a single chip increases with each generation of computers. Traditionally, a single operating system (OS) manages all the cores and resource allocation on a multicore chip. Intel’s Single-chip Cloud Computer (SCC), a manycore processor built for research use with 48 cores, is an implementation of a “cluster-on-chip” architecture. That is, the SCC can be configur...
Reducing Data Communication Overhead for Doacross Loop Nests
If the loop iterations of a loop nest cannot be partitioned into independent sets, data communication for the data dependences is inevitable in order to execute them on parallel machines. Such loop nests are referred to as Doacross loop nests. This paper is concerned with compiler algorithms for parallelizing Doacross loop nests for distributed-memory multicomputers. We present a metho...
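For readers unfamiliar with the term, the short hypothetical example below (not taken from the paper) shows a Doacross loop nest: every iteration reads values produced by earlier iterations, so the iterations cannot be split into independent sets and a distributed execution must communicate partial results between nodes.

```python
# A minimal illustration (hypothetical example) of a Doacross loop nest:
# iteration (i, j) reads values produced by iterations (i-1, j) and (i, j-1),
# so the iterations cannot be partitioned into independent sets.
N = 6
a = [[1.0] * N for _ in range(N)]

for i in range(1, N):
    for j in range(1, N):
        # Loop-carried dependences on row i-1 and column j-1.
        a[i][j] = a[i - 1][j] + a[i][j - 1]

# If the rows were block-distributed over the nodes of a multicomputer, the
# owner of row i would have to receive the updated boundary of row i-1 from
# another node before starting; this is the Doacross communication whose
# overhead such compiler techniques try to reduce.
print(a[N - 1][N - 1])
```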
The performance of fast Givens rotations problem implemented with MPI extensions in multicomputers
In this paper, issues related to implementing an MPI version of the fast Givens rotations problem are investigated. We have chosen this algorithm because it has no predictable communication pattern. Message Passing Interface (MPI) is an attempt to standardise the communication library for distributed-memory computing systems. The message-passing paradigm is attractive beca...
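As background for the kernel being parallelized, here is a small sketch of the classical Givens rotation that zeroes one matrix entry; the fast, square-root-free variant and the MPI distribution studied in the paper are not shown, and the matrix values are illustrative.

```python
# A sketch of the classical Givens rotation (not the fast, square-root-free
# variant or the MPI implementation from the paper).
import math

def givens(a: float, b: float) -> tuple[float, float]:
    """Return (c, s) such that [[c, s], [-s, c]] @ [a, b] = [r, 0]."""
    r = math.hypot(a, b)
    if r == 0.0:
        return 1.0, 0.0
    return a / r, b / r

def apply_givens(row1: list[float], row2: list[float], c: float, s: float) -> None:
    """Rotate two rows in place; applied column by column to zero sub-diagonal entries."""
    for k in range(len(row1)):
        x, y = row1[k], row2[k]
        row1[k] = c * x + s * y
        row2[k] = -s * x + c * y

# Zero out the (2, 1) entry of a small 2x3 matrix.
m = [[3.0, 1.0, 2.0],
     [4.0, 5.0, 6.0]]
c, s = givens(m[0][0], m[1][0])
apply_givens(m[0], m[1], c, s)
print(m)  # m[1][0] is now (numerically) zero
```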
Virtual-Memory-Mapped Network Interfaces
In today’s multicomputers, software overhead dominates the message-passing latency cost. We designed two multicomputer network interfaces that significantly reduce this overhead. Both support virtual-memory-mapped communication, allowing user processes to communicate without expensive buffer management and without making system calls across the protection boundary separating user processes from t...
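As a loose single-machine analogy for this style of communication (a hypothetical sketch, not the network interface designs described in the paper), the example below maps a shared region once and then passes a value between two processes with ordinary stores and loads rather than per-message system calls.

```python
# A loose single-machine analogy (hypothetical, not the paper's network
# interfaces): after a region is mapped once, two processes exchange data
# through plain loads and stores, with no per-message system calls.
import mmap, os, struct, tempfile, time
from multiprocessing import Process

SIZE = 4096

def receiver(path: str) -> None:
    # Map the same region and poll a flag word until the payload is ready.
    with open(path, "r+b") as f, mmap.mmap(f.fileno(), SIZE) as buf:
        while struct.unpack_from("i", buf, 0)[0] == 0:
            time.sleep(0.01)
        print("received:", struct.unpack_from("i", buf, 4)[0])

if __name__ == "__main__":
    fd, path = tempfile.mkstemp()
    os.ftruncate(fd, SIZE)
    child = Process(target=receiver, args=(path,))
    child.start()
    with os.fdopen(fd, "r+b") as f, mmap.mmap(f.fileno(), SIZE) as buf:
        struct.pack_into("i", buf, 4, 42)  # payload written with a plain store
        struct.pack_into("i", buf, 0, 1)   # ready flag
    child.join()
    os.unlink(path)
```

In a real virtual-memory-mapped interface the mapped region would be backed by the network hardware rather than a file, but the user-level load/store communication path is the analogous point.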