Efficient Matrix Multiplication in Hadoop

Authors

  • Song Deng
  • Wenhua Wu
Abstract

In a typical MapReduce job, each map task processes one piece of the input file. If the two input matrices are stored in separate HDFS files, a single map task cannot access both matrices at the same time. To deal with this problem, we propose an efficient matrix multiplication method for Hadoop. For dense matrices, we store the matrices on HDFS in plain row-major order; for sparse matrices, we use a row-major-like strategy. A mapper can therefore obtain the rows and columns it needs by scanning only a consecutive part of a file. We modify the Hadoop MapReduce input format, adding the file paths of the two input matrices, and store the input matrices in row-major order. With the new file split structure, all data are distributed properly to the mappers. Finally, we propose a user feedback method to avoid the overhead of starting multiple map waves. A number of comparative experiments are conducted, and the results show that our method noticeably improves the performance of dense matrix multiplication in MapReduce.
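
To illustrate the underlying issue, the sketch below shows a minimal Hadoop mapper (plain Java, not the authors' code) for the case where matrix A is the job input in row-major text form and matrix B is loaded in full from a second HDFS path passed through the job configuration. The class name RowTimesMatrixMapper, the configuration key matmul.b.path, and the "row index followed by values" line layout are assumptions made for this example; the method described in the abstract instead modifies the input format and file split so that a single map task is handed consecutive parts of both matrix files, rather than reading all of B into memory.

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.util.ArrayList;
    import java.util.List;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // Sketch only: each input record is one row of A ("i a_i0 a_i1 ..."); matrix B is
    // read in full from a side HDFS file, so this does NOT reproduce the paper's
    // modified input format / file split, which avoids loading a whole matrix per task.
    public class RowTimesMatrixMapper extends Mapper<LongWritable, Text, Text, Text> {

        private double[][] b;  // dense matrix B, loaded once per map task in setup()

        @Override
        protected void setup(Context context) throws IOException {
            Configuration conf = context.getConfiguration();
            // "matmul.b.path" is a hypothetical configuration key set by the job driver.
            Path bPath = new Path(conf.get("matmul.b.path"));
            FileSystem fs = FileSystem.get(conf);
            List<double[]> rows = new ArrayList<>();
            try (BufferedReader reader =
                     new BufferedReader(new InputStreamReader(fs.open(bPath)))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    String[] tok = line.trim().split("\\s+");
                    double[] row = new double[tok.length - 1];  // tok[0] is the row index
                    for (int j = 1; j < tok.length; j++) {
                        row[j - 1] = Double.parseDouble(tok[j]);
                    }
                    rows.add(row);  // rows of B are assumed to appear in row-major order
                }
            }
            b = rows.toArray(new double[0][]);
        }

        @Override
        protected void map(LongWritable offset, Text value, Context context)
                throws IOException, InterruptedException {
            String[] tok = value.toString().trim().split("\\s+");
            int i = Integer.parseInt(tok[0]);  // row index of A
            double[] aRow = new double[tok.length - 1];
            for (int j = 1; j < tok.length; j++) {
                aRow[j - 1] = Double.parseDouble(tok[j]);
            }
            // c[i][k] = sum over j of a[i][j] * b[j][k]
            int nCols = b[0].length;
            StringBuilder out = new StringBuilder();
            for (int k = 0; k < nCols; k++) {
                double sum = 0.0;
                for (int j = 0; j < aRow.length && j < b.length; j++) {
                    sum += aRow[j] * b[j][k];
                }
                if (k > 0) {
                    out.append(' ');
                }
                out.append(sum);
            }
            // Emit the finished row i of C; an identity reducer (or a map-only job)
            // can write these rows back to HDFS in row-major order.
            context.write(new Text(Integer.toString(i)), new Text(out.toString()));
        }
    }

Such a side-load only works while B fits in a map task's memory; the approach described in the abstract avoids this by changing the split structure so each mapper scans only the consecutive parts of the two input files it actually needs.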

Related articles

A New Parallel Matrix Multiplication Method Adapted on Fibonacci Hypercube Structure

The objective of this study was to develop a new optimal parallel algorithm for matrix multiplication that can run on a Fibonacci Hypercube structure. Most of the popular algorithms for parallel matrix multiplication cannot run on a Fibonacci Hypercube structure; therefore, a method that can run on all structures, especially the Fibonacci Hypercube structure, is necessary for parallel matr...

Experimental Evaluation of Multi-Round Matrix Multiplication on MapReduce

This paper proposes a Hadoop library, named M3, for performing dense and sparse matrix multiplication in MapReduce. The library features multi-round MapReduce algorithms that allow the number of rounds to be traded off against the amount of data shuffled in each round and the amount of memory required by the reduce functions. We claim that multi-round MapReduce algorithms are preferable in cloud settings to tradi...

Benchmark Hadoop and Mars: MapReduce on cluster versus on GPU

MapReduce[5] is an emerging programming model that utilizes distributed processing elements (PE) on large datasets. With this model, programmers can write highly parallelized code without explicitly dealing with task scheduling and code parallelism in distributed systems. In this paper, we comparatively evaluate the performance of MapReduce model on Hadoop[2] and on Mars[3]. Hadoop is a softwar...

What Makes Affinity-Based Schedulers So Efficient?

The tremendous increase in the size and heterogeneity of supercomputers makes it very difficult to predict the performance of a scheduling algorithm. Therefore, dynamic solutions, where scheduling decisions are made at runtime, have surpassed static allocation strategies. The simplicity and efficiency of dynamic schedulers such as Hadoop's are key to the success of the MapReduce framework. Dyna...

Fast graph mining with HBase

Mining large graphs using distributed platforms has attracted a lot of research interest. In particular, large graph mining on Hadoop has been researched extensively, due to its simplicity and massive scalability. However, the design principle of Hadoop, maximizing scalability, often limits the efficiency of graph algorithms. For this reason, the performance of graph mining algorithms running ...


Journal:
  • IJCSA

Volume 13, Issue 

Pages -

Publication date: 2016