Isoefficiency Analysis of CGLS Algorithms for Parallel Least Squares Problems

Authors

  • Laurence T. Yang
  • Hai-Xiang Lin
Abstract

In this paper we study the parallelization of CGLS, a basic iterative method for large and sparse least squares problems whose main idea is to organize the computation as the conjugate gradient method applied to the normal equations. A performance model based on the isoefficiency concept is used to analyze the behavior of this method implemented on massively parallel distributed-memory computers with a two-dimensional mesh communication scheme. Two different mappings of data to processors, namely simple stripe and cyclic stripe partitionings, are compared by putting their communication times into the isoefficiency model, which captures scalability. Theoretically, the cyclic stripe partitioning is shown to be asymptotically more scalable.
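For readers unfamiliar with CGLS, its recurrences can be sketched as follows. This is a minimal dense NumPy illustration of the standard CGLS iteration, not the paper's distributed sparse implementation; the function name `cgls` and its parameters are illustrative.

```python
import numpy as np

def cgls(A, b, maxiter=100, tol=1e-10):
    """CGLS: CG applied implicitly to the normal equations A^T A x = A^T b,
    without ever forming A^T A explicitly."""
    m, n = A.shape
    x = np.zeros(n)
    r = b - A @ x              # residual of the least squares system
    s = A.T @ r                # residual of the normal equations
    p = s.copy()
    gamma = s @ s
    for _ in range(maxiter):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        if np.sqrt(gamma_new) < tol:   # normal-equation residual small enough
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x
```

Note that each iteration needs one product with A and one with Aᵀ; on a distributed-memory mesh it is exactly these products (plus the inner-product reductions) whose communication costs the paper feeds into the isoefficiency analysis.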


Similar references

Scalability Analysis of the Parallel Implementation of LMS and RLS Algorithms

The parallel implementation of the Least Mean Square (LMS) and Recursive Least Square (RLS) adaptive algorithms was investigated to study the scalability and the isoefficiency of these parallel implementations. The analysis includes deriving theoretical expressions for the computation and communication time for the parallel implementation of the adaptive algorithms. These expressions capture th...
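The LMS recursion whose parallel scalability that paper analyzes is a one-line stochastic-gradient update, w ← w + μ·e[k]·u[k]. A minimal serial NumPy sketch for a real-valued FIR setting (the names `lms`, `n_taps`, and `mu` are illustrative, not from the paper):

```python
import numpy as np

def lms(x, d, n_taps, mu):
    """Serial LMS: adapt FIR weights w so that w @ u[k] tracks the desired d[k]."""
    w = np.zeros(n_taps)
    for k in range(n_taps, len(x)):
        u = x[k - n_taps:k][::-1]   # most recent input samples first
        e = d[k] - w @ u            # a priori estimation error
        w += mu * e * u             # stochastic-gradient weight update
    return w
```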

Full text

Data Distribution Analysis of MCGLS Algorithm for Parallel Least Squares Problems

In this paper we mainly study different data distributions for MCGLS, a modified version of the CGLS algorithm, a basic iterative method that organizes the computation as the conjugate gradient method applied to the normal equations, for solving sparse least squares problems on massively parallel distributed-memory computers. The performance of CGLS on this kind of architecture is always limited because of the gl...

Full text

GMRES Methods for Least Squares Problems

The standard iterative method for solving large sparse least squares problems min_{x∈ℝⁿ} ‖b − Ax‖₂, A ∈ ℝ^{m×n}, is the CGLS method, or its stabilized version LSQR, which applies the (preconditioned) conjugate gradient method to the normal equations AᵀAx = Aᵀb. In this paper, we will consider alternative methods using a matrix B ∈ ℝ^{n×m} and applying the Generalized Minimal Residual (GMRES) method to min_{x∈ℝ...
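As a sanity check on the role of the matrix B in such methods: choosing B = Aᵀ turns the square system BAx = Bb into exactly the normal equations AᵀAx = Aᵀb, whose solution is the least squares solution. The sketch below verifies only this algebraic reduction with a direct dense solve; the GMRES iteration itself is not shown, and the data is illustrative.

```python
import numpy as np

# Illustrative overdetermined system (4 equations, 2 unknowns).
A = np.array([[1., 0.], [1., 1.], [1., 2.], [1., 3.]])
b = np.array([0., 1., 1., 3.])

# With B = A^T, the left-multiplied square system B A x = B b is exactly
# the normal equations A^T A x = A^T b.
B = A.T
x = np.linalg.solve(B @ A, B @ b)
```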

Full text

Scalability Analysis of CGLS Algorithm for Sparse Least Squares Problems on Massively Distributed Memory Computers

In this paper we study the parallelization of CGLS, a basic iterative method for large and sparse least squares problems whose main idea is to organize the computation as the conjugate gradient method applied to the normal equations. A performance model of the computation and communication phases, together with the isoefficiency concept, is used to analyze the qualitative scalability behavior of this method implemented on massive...

Full text

Comparison of Three Algorithms for Solving Linearized Systems of Parallel Excitation RF Waveform Design Equations: Experiments on an Eight-Channel System at 3 Tesla

Three algorithms for solving linearized systems of RF waveform design equations for calculating accelerated spatially-tailored excitations on parallel excitation MRI systems are presented. Their artifact levels, computational speed, and RF peak and root-mean-square (RMS) voltages are analyzed. An SVD-based inversion method is compared with conjugate gradient least squares (CGLS) and least squar...

Full text



Journal title:

Volume   Issue 

Pages  -

Publication date: 1997