On the Subspace Projected Approximate Matrix Method
Authors
Abstract
We provide a comparative study of the Subspace Projected Approximate Matrix method, abbreviated SPAM, a fairly recent iterative method for computing a few eigenvalues of a Hermitian matrix A. It falls into the category of inner-outer iteration methods and aims to reduce the cost of matrix-vector products with A within its inner iteration. This is done by choosing an approximation A0 of A and then, based on both A and A0, defining a sequence (A_k), 0 ≤ k ≤ n, of matrices that approximate A increasingly well as the process progresses. The matrix A_k is then used in the k-th inner iteration instead of A. Although its main idea is refreshingly new and interesting, SPAM has not yet been studied in detail by the numerical linear algebra community. We would like to change this by explaining the method and by showing that, for certain special choices of A0, SPAM turns out to be mathematically equivalent to known eigenvalue methods. More sophisticated approximations A0 turn SPAM into a boosted version of Lanczos, whereas it can also be interpreted as an attempt to enhance a certain instance of the preconditioned Jacobi-Davidson method. Numerical experiments are performed that are specifically tailored to illustrate certain aspects of SPAM and its variations. For experiments that test the practical performance of SPAM in comparison with other methods, we refer to other sources. The main conclusion is that SPAM provides a natural transition between the Lanczos method and one-step preconditioned Jacobi-Davidson.
Similar references
Large-scale Inversion of Magnetic Data Using Golub-Kahan Bidiagonalization with Truncated Generalized Cross Validation for Regularization Parameter Estimation
In this paper, a fast method for large-scale sparse inversion of magnetic data is considered. The L1-norm stabilizer is used to generate models with sharp and distinct interfaces. To deal with the non-linearity introduced by the L1-norm, a model-space iteratively reweighted least squares algorithm is used. The original model matrix is factorized using the Golub-Kahan bidiagonalization that proje...
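The Golub-Kahan bidiagonalization step underlying this approach can be sketched as follows. This is a minimal sketch, without the reorthogonalization, regularization, or L1-reweighting discussed above; the function name and test problem are illustrative, not taken from the paper:

```python
import numpy as np

def golub_kahan(A, b, k):
    """k steps of Golub-Kahan bidiagonalization of A (m x n), started
    from b: produces U (m x (k+1)) and V (n x k) with orthonormal
    columns and a lower-bidiagonal B ((k+1) x k) with A @ V == U @ B."""
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    B = np.zeros((k + 1, k))
    U[:, 0] = b / np.linalg.norm(b)
    v = A.T @ U[:, 0]
    for j in range(k):
        alpha = np.linalg.norm(v)
        V[:, j] = v / alpha
        B[j, j] = alpha
        u = A @ V[:, j] - alpha * U[:, j]
        beta = np.linalg.norm(u)
        U[:, j + 1] = u / beta
        B[j + 1, j] = beta
        v = A.T @ U[:, j + 1] - beta * V[:, j]
    return U, B, V

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))
b = rng.standard_normal(8)
U, B, V = golub_kahan(A, b, 3)
assert np.allclose(A @ V, U @ B)        # the bidiagonalization relation
assert np.allclose(U.T @ U, np.eye(4))  # orthonormal basis vectors
```

Projecting the inverse problem onto the small bidiagonal factor B is what makes the approach attractive at large scale: only matrix-vector products with A and A.T are needed.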
The Subspace Projected Approximate Matrix (SPAM) Modification of the Davidson Method
A modification of the iterative matrix diagonalization method of Davidson is presented that is applicable to the symmetric eigenvalue problem. This method is based on subspace projections of a sequence of one or more approximate matrices. The purpose of these approximate matrices is to improve the efficiency of the solution of the desired eigenpairs by reducing the number of matrix-vector produ...
A Subspace-Projected Approximate Matrix Method for Systems of Linear Equations
Given two n×n matrices A and A0 and a sequence of subspaces {0} = V_0 ⊂ · · · ⊂ V_n = R^n with dim(V_k) = k, the k-th subspace-projected approximate matrix A_k is defined as A_k = A + Π_k(A0 − A)Π_k, where Π_k is the orthogonal projection onto V_k^⊥. Consequently, A_k v = Av and v*A_k = v*A for all v ∈ V_k. Thus (A_k), 0 ≤ k ≤ n, is a sequence of matrices that gradually changes from A0 into A_n = A. In principle,...
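The definition of A_k can be illustrated directly in a few lines of numpy. This is a minimal sketch; the diagonal choice of A0 and the problem sizes are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def spam_matrix(A, A0, Vk):
    """k-th subspace-projected approximate matrix
    A_k = A + Pi_k (A0 - A) Pi_k, where Pi_k is the orthogonal
    projector onto the complement of span(Vk)."""
    n = A.shape[0]
    Pi = np.eye(n) - Vk @ Vk.T  # projector onto the orthogonal complement of Vk
    return A + Pi @ (A0 - A) @ Pi

rng = np.random.default_rng(1)
n, k = 6, 2
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                       # Hermitian (real symmetric) test matrix
A0 = np.diag(np.diag(A))                # a cheap approximation of A (illustrative)
Vk, _ = np.linalg.qr(rng.standard_normal((n, k)))  # orthonormal basis of V_k
Ak = spam_matrix(A, A0, Vk)

v = Vk @ rng.standard_normal(k)         # any vector in span(Vk)
assert np.allclose(Ak @ v, A @ v)       # A_k v = A v  for v in V_k
assert np.allclose(v @ Ak, v @ A)       # v* A_k = v* A for v in V_k
```

Because Π_k annihilates V_k, the correction term Π_k(A0 − A)Π_k vanishes on the current subspace, which is exactly why A_k agrees with A there while staying as cheap as A0 elsewhere.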
Using an Efficient Penalty Method for Solving Linear Least Square Problem with Nonlinear Constraints
In this paper, we use a penalty method for solving the linear least squares problem with nonlinear constraints. In each iteration of penalty methods for solving this problem, the calculation of the projected Hessian matrix is required. Since the objective function is linear least squares, the projected Hessian matrix of the penalty function consists of two parts, and the exact amount of a part of i...
Preconditioned Generalized Minimal Residual Method for Solving Fractional Advection-Diffusion Equation
Introduction Fractional differential equations (FDEs) have attracted much attention and have been widely used in the fields of finance, physics, image processing, and biology. It is not always possible to find an analytical solution for such equations. An approximate solution or numerical scheme may be a good approach, particularly the schemes in numerical linear algebra for solving ...
A Novel Noise Reduction Method Based on Subspace Division
This article presents a new subspace-based technique for reducing the noise of time-series signals. In the proposed approach, the signal is first represented as a data matrix. Then, using Singular Value Decomposition (SVD), the noisy data matrix is divided into a signal subspace and a noise subspace. In this subspace division, the derivative of each singular value with respect to rank order is u...
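The signal/noise subspace split itself can be sketched with a truncated SVD. This is a minimal sketch: the article's derivative-based criterion for choosing the rank is not reproduced here, and a fixed rank r, the function name, and the synthetic test signal are illustrative assumptions:

```python
import numpy as np

def svd_denoise(X, r):
    """Split the data matrix into signal and noise subspaces via the SVD
    and reconstruct from the r leading components (the signal subspace)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

rng = np.random.default_rng(2)
# Synthetic rank-1 "signal" data matrix plus additive noise (illustrative)
clean = np.outer(np.sin(np.linspace(0, 3, 40)), np.cos(np.linspace(0, 2, 30)))
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
denoised = svd_denoise(noisy, r=1)

# Keeping only the signal subspace brings the matrix closer to the clean data
assert np.linalg.norm(denoised - clean) < np.linalg.norm(noisy - clean)
```

The effectiveness of the split hinges on the rank choice, which is exactly what the article's derivative-of-singular-values criterion is meant to automate.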