Massively Parallel Computing: Data Distribution and Communication
Abstract
We discuss some techniques for preserving locality of reference in index spaces when mapped to memory units in a distributed memory architecture. In particular, we discuss the use of multidimensional address spaces instead of linearized address spaces, partitioning of irregular grids, and placement of partitions among nodes. We also discuss a set of communication primitives we have found very useful on the Connection Machine systems in implementing scientific and engineering applications. We briefly review some of the techniques used to fully utilize the bandwidth of the binary cube network of the CM-2 and CM-200, and give some performance data from implementations of communication primitives.
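As a rough illustration of the locality argument behind multidimensional address spaces, the sketch below compares a 2-D block decomposition of a regular grid with a decomposition of the linearized (row-major) index space, counting how many nearest-neighbour pairs end up on different nodes. This is an assumed, generic example, not the paper's mapping; the grid size, node count, and function names (block_owner, linear_owner, cut_edges) are all hypothetical.

```python
# A minimal sketch (assumed, not from the paper): count how many
# nearest-neighbour pairs of an N x N grid cross partition boundaries under
#   (a) a 2-D block decomposition over a P x P mesh of nodes, and
#   (b) a 1-D decomposition of the linearized (row-major) index space.

N = 64   # grid points per dimension (hypothetical size)
P = 4    # nodes per dimension, P * P nodes in total (hypothetical)

def block_owner(i, j):
    """Owner under a 2-D block decomposition: (N/P) x (N/P) points per node."""
    return (i // (N // P), j // (N // P))

def linear_owner(i, j):
    """Owner under a 1-D decomposition of the row-major linearized index."""
    points_per_node = (N * N) // (P * P)
    return (i * N + j) // points_per_node

def cut_edges(owner):
    """Count nearest-neighbour pairs whose endpoints live on different nodes."""
    cuts = 0
    for i in range(N):
        for j in range(N):
            for di, dj in ((1, 0), (0, 1)):     # down and right neighbours
                ni, nj = i + di, j + dj
                if ni < N and nj < N and owner(i, j) != owner(ni, nj):
                    cuts += 1
    return cuts

print("2-D block decomposition, cut edges:", cut_edges(block_owner))
print("1-D linearized decomposition, cut edges:", cut_edges(linear_owner))
```

With these assumed sizes the block decomposition cuts far fewer neighbour pairs than the linearized one, which is the kind of communication saving the abstract refers to when it argues for multidimensional address spaces.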
Similar Papers
Application of Big Data Analytics in Power Distribution Network
The smart grid enhances optimization in the generation, distribution, and consumption of electricity by integrating information and communication technologies into the grid. Today, utilities are moving towards smart grid applications, the most common being the deployment of smart meters in advanced metering infrastructure, and the first technical challenge they face is the huge volume of data generated ...
Parallel Execution Time Analysis for Least Squares Problems on Distributed Memory Architectures
In this paper we study the parallelization of PCGLS, a basic iterative method whose main idea is to organize the computation of the conjugate gradient method, with a preconditioner, applied to the normal equations. Two important questions are discussed: what is the best possible data distribution, and which communication network topology is most suitable for solving least squares problems on massively paralle...
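The core idea mentioned here, conjugate gradients applied to the normal equations A^T A x = A^T b, can be sketched briefly. The code below is a generic, serial CGLS iteration under assumed names; it is not the paper's PCGLS, since it omits both the preconditioner and the distributed data layout the paper analyses.

```python
import numpy as np

def cgls(A, b, iters=100, tol=1e-10):
    """Plain CGLS: conjugate gradients on the normal equations A^T A x = A^T b,
    without ever forming A^T A explicitly. Serial sketch only."""
    m, n = A.shape
    x = np.zeros(n)
    r = b - A @ x            # residual of the least squares system
    s = A.T @ r              # residual of the normal equations
    p = s.copy()
    gamma = s @ s
    for _ in range(iters):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        if np.sqrt(gamma_new) < tol:
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

# Illustrative use on a small random overdetermined system.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
b = rng.standard_normal(50)
x = cgls(A, b)
print(np.allclose(A.T @ (A @ x), A.T @ b, atol=1e-6))   # expect True
```

In a distributed-memory setting the dominant per-iteration costs are the two matrix-vector products and the inner products, which is why the data distribution and network topology questions raised in this abstract matter.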
An AI-based Approach to Massively Parallel Programming
Although massively parallel architectures are expected to provide the next major advance in computing power, writing applications on parallel distributed computers is considerably more difficult than programming conventional computers. Today's parallel distributed programming requires the programmer to take care of quite complex tasks such as data and load distribution, data communication, p...
Design of scalable optical interconnection networks for massively parallel computers
The increased amount of data handled by current information systems, coupled with the ever-growing need for more processing functionality and system throughput, is putting stringent demands on communication bandwidths and processing speeds. While the design of high-speed processing elements has progressed significantly, the progress on designing high-performance interconnection netwo...
Collective Communication in Wormhole-Routed Massively Parallel Computers
Massively parallel computers (MPC) are characterized by the distribution of memory among an ensemble of nodes. Since memory is physically distributed, MPC nodes communicate by sending data through a network. In order to program an MPC, the user may directly invoke low-level message passing primitives, may use a higher-level communications library, or may write the program in a data parallel lan...
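As a concrete contrast between the first two options listed here, the fragment below shows a global sum done once with low-level point-to-point messages and once with a single collective primitive. It is an assumed example written with mpi4py rather than any MPC-specific library, purely to illustrate the difference in abstraction level.

```python
# Assumed illustration (mpi4py): low-level message passing versus a
# higher-level collective communication primitive for a global sum.
# Run with, e.g.:  mpiexec -n 4 python global_sum.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

local_value = rank + 1                  # each node contributes one number

# (a) Low-level message passing: every node sends its value to node 0,
#     which accumulates the total itself.
if rank == 0:
    total = local_value
    for source in range(1, size):
        total += comm.recv(source=source)
    print("point-to-point sum:", total)
else:
    comm.send(local_value, dest=0)

# (b) Higher-level communication primitive: one collective call.
total = comm.allreduce(local_value, op=MPI.SUM)
if rank == 0:
    print("allreduce sum:", total)
```

The collective version is both shorter and typically faster, because the library is free to schedule the reduction over the network topology (for example as a tree or, on a binary cube, dimension by dimension) instead of funnelling all traffic through one node.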