The Glasgow Parallel Reduction Machine: Programming Shared-memory Many-core Systems using Parallel Task Composition
Authors
Abstract
We present the Glasgow Parallel Reduction Machine (GPRM), a novel, flexible framework for parallel task-composition based many-core programming. We allow the programmer to structure programs into task code, written as C++ classes, and communication code, written in a restricted subset of C++ with functional semantics and parallel evaluation. In this paper we discuss the GPRM, the virtual machin...
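The abstract describes separating task code (ordinary C++ classes) from composition code with functional semantics, whose independent sub-expressions can be evaluated in parallel. The sketch below is not GPRM code; it only illustrates that split using plain C++17 and std::async, and the class names ScaleTask and SumTask are invented for illustration.

// Minimal sketch only: task code vs. composition code, assuming hypothetical
// classes ScaleTask and SumTask (not the actual GPRM API).
#include <future>
#include <iostream>
#include <numeric>
#include <vector>

// "Task code": ordinary C++ classes exposing a pure operation.
class ScaleTask {
public:
    std::vector<double> run(std::vector<double> v, double k) const {
        for (double &x : v) x *= k;
        return v;
    }
};

class SumTask {
public:
    double run(const std::vector<double> &a, const std::vector<double> &b) const {
        return std::accumulate(a.begin(), a.end(), 0.0)
             + std::accumulate(b.begin(), b.end(), 0.0);
    }
};

int main() {
    std::vector<double> data{1.0, 2.0, 3.0, 4.0};

    // "Composition code": a side-effect-free expression whose independent
    // sub-expressions are evaluated in parallel (here via std::async).
    auto left  = std::async(std::launch::async, [&] { return ScaleTask{}.run(data, 2.0); });
    auto right = std::async(std::launch::async, [&] { return ScaleTask{}.run(data, 3.0); });

    double total = SumTask{}.run(left.get(), right.get());
    std::cout << "total = " << total << "\n";   // 20 + 30 = 50
    return 0;
}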
Similar resources

Static Task Allocation in Distributed Systems Using Parallel Genetic Algorithm
Over the past two decades, PC speeds have increased from a few instructions per second to several million instructions per second. The tremendous speed of today's networks as well as the increasing need for high-performance systems has made researchers interested in parallel and distributed computing. The rapid growth of distributed systems has led to a variety of problems. Task allocation is a...
Parallel Logic Programming on Distributed Shared Memory
This paper presents an implementation of a parallel logic programming system on a distributed shared memory (DSM) system. Firstly, we give a brief introduction of the Andorra-I parallel logic programming system implemented on multi-processors. Secondly, we outline the concurrent programming environment provided by a distributed shared memory system, TreadMarks. Thirdly, we discuss the implementation ...
Parallel Conventional Systems versus Parallel Logic Programming Systems on Distributed Shared Memory Architectures
Distributed shared memory architectures have been an object of research by many computer science groups. Research ranges broadly from hardware-based coherence protocols to DSM software protocols on networks of workstations, passing through high-technology interconnection networks that reduce network latency. In this work we thoroughly investigate how different hardware cache coherence protocols affect ...
Chapter 7 - Troubleshooting, Using OpenMP: Portable Shared Memory Parallel Programming
OpenMP has several safety nets to help avoid this kind of bug. But OpenMP cannot prevent its introduction, since it is typically a result of faulty use of one of the directives. For example, it may arise from the incorrect parallelization of a loop or an unprotected update of shared data. In this section we elaborate on this type of error, commonly known as a data race condition. This is sometim...
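The excerpt describes data races arising from incorrect loop parallelization or unprotected updates of shared data. The fragment below is not taken from the chapter; it is a minimal C++/OpenMP illustration of an unprotected shared update and the usual fix with a reduction clause.

// Illustrative only: a classic OpenMP data race and one standard fix.
// Compile with: g++ -fopenmp race_example.cpp
#include <cstdio>

int main() {
    const int n = 1000000;
    long long sum = 0;

    // BUGGY: 'sum' is shared and updated without protection, so concurrent
    // read-modify-write operations race and the result is nondeterministic.
    #pragma omp parallel for
    for (int i = 0; i < n; ++i) {
        sum += i;                 // data race on 'sum'
    }
    std::printf("racy sum    = %lld (may be wrong)\n", sum);

    // FIXED: the reduction clause gives each thread a private copy of 'sum'
    // and combines the copies safely at the end of the loop.
    sum = 0;
    #pragma omp parallel for reduction(+ : sum)
    for (int i = 0; i < n; ++i) {
        sum += i;
    }
    std::printf("correct sum = %lld\n", sum);
    return 0;
}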
Journal

Journal title: Electronic Proceedings in Theoretical Computer Science
Year: 2013
ISSN: 2075-2180
DOI: 10.4204/eptcs.137.7