Search results for: message passing interface mpi
Number of results: 292891. Filter results by year:
The tremendous advance in computer technology over the past decade has made it possible to achieve supercomputer performance on a very small budget. We have built a multi-CPU cluster of Pentium PCs capable of parallel computation using the Message Passing Interface (MPI). We discuss the configuration, performance, and application of the cluster to our work in physics.
Improved parallel, external and parallel-external algorithms for list-ranking and computing the connected components of a graph are presented. These algorithms are implemented and tested on a cluster of workstations using the C programming language and mpich, a portable implementation of the MPI (Message-Passing Interface) standard.
This paper presents a parallel implementation of a new permutation generation method based on starter sets for listing all n! permutations. The sequential algorithm is developed and then parallelized by integrating Message Passing Interface (MPI) libraries. The performance of the parallel algorithm is presented to demonstrat...
A parallel ant colony algorithm for the Travelling Salesman Problem (TSP) is presented. Experiments using an MPI-based framework are run and analyzed. The results show that the parallel TSP implementation is efficient. Key-Words: Travelling Salesman Problem, Ant Colony Optimization, Parallel Algorithms, Framework, Message Passing Interface.
In this paper we propose a parallel memetic algorithm which combines a population-based method with a guided local search (GLS) procedure. In the proposed algorithm, the GLS procedure is applied to each solution generated by genetic operations. The performance of the proposed method is compared with parallel genetic approaches (i.e. the global model and the migration model for GA). The parallel implementation is based...
The Poisson Problem, ∇ · ∇x = b, is a sparse linear system of equations that arises, for example, in scientific computing. For this project, I describe a parallel Successive Over-Relaxation (SOR) algorithm for solving the Poisson problem and implement it in a C library using Message Passing Interface (MPI). I evaluate the performance of my implementation on a single multicore machine and in a c...
In this report we document a software package under development that allows message passing in the MPI model using the computer algebra system Maple. The new software, called maplle, consists of two components: a set of Maple functions and an MPI/C driver. The maplle system allows the user to easily parallelize Maple algorithms and to use message-passing functionality familiar to MPI users in a Maple ...
View-Oriented Parallel Programming (VOPP) is a novel programming style based on Distributed Shared Memory that is friendly and easy for programmers to use. In this paper we compare VOPP with two other systems for parallel programming on clusters: LAM/MPI, a message passing system, and TreadMarks, a software distributed shared memory system. We present results for ten applications implemented a...
With multi-core and many-core architectures becoming the current focus of research and development, and with a vast variety of architectures and programming models emerging from research, the design space for applications is becoming enormous. The number of cores, the memory hierarchy, the interconnect, and even the programming model and language used are all design choices that need to be optim...