Accuracy of Computed Eigenvectors via Optimizing a Rayleigh Quotient
Author

Abstract
This paper establishes converses to the well-known result: for any vector ũ such that sin θ(u, ũ) = O(ε), we have ρ(ũ) := ũ∗Aũ/(ũ∗ũ) = λ + O(ε²), where λ is an eigenvalue and u is the corresponding eigenvector of a Hermitian matrix A, and "∗" denotes complex conjugate transpose. It shows that if ρ(ũ) is close to A's largest eigenvalue, then ũ is close to the corresponding eigenvector, with an error proportional to the square root of the error in ρ(ũ) as an approximation to the eigenvalue and inversely proportional to the square root of the gap between A's two largest eigenvalues. A subspace version of this converse is also established. Such results are of interest in applications, for example eigenvector computation for Principal Component Analysis in image processing, where eigenvectors may be computed by optimizing Rayleigh quotients with the Conjugate Gradient method. AMS subject classification (2000): 15A42, 65F15.
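The forward direction of the result quoted above can be checked numerically. The sketch below (the matrix, seed, and perturbation sizes are my own choices, not from the paper) perturbs an exact eigenvector by an angle ε and observes that the Rayleigh quotient drifts from the eigenvalue by only O(ε²):

```python
# Minimal numerical check of the forward result: perturbing an
# eigenvector by an angle eps changes the Rayleigh quotient by O(eps^2).
import numpy as np

rng = np.random.default_rng(0)
n = 6
B = rng.standard_normal((n, n))
A = (B + B.T) / 2                  # real symmetric (hence Hermitian) test matrix

lam, V = np.linalg.eigh(A)
u = V[:, -1]                       # unit eigenvector for the largest eigenvalue

def rayleigh(A, x):
    """rho(x) = x* A x / (x* x)."""
    return (x @ A @ x) / (x @ x)

errs = []
for eps in (1e-2, 1e-4):
    w = V[:, 0]                                    # unit vector orthogonal to u
    u_tilde = np.cos(eps) * u + np.sin(eps) * w    # sin theta(u, u_tilde) = sin(eps)
    errs.append(abs(rayleigh(A, u_tilde) - lam[-1]))

# Shrinking eps by a factor of 100 shrinks the error by about 10^4.
print(errs[0] / errs[1])
```

Here ρ(ũ) = cos²(ε)·λmax + sin²(ε)·λmin, so the error is exactly sin²(ε) times the spectral spread, which makes the quadratic scaling visible.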
Similar resources
Accuracy of Computed Eigenvectors via Optimizing a Rayleigh Quotient
This note gives converses to the well-known result: for any vector ũ such that sin θ(u, ũ) = O(ε), we have ũ∗Aũ/(ũ∗ũ) = λ + O(ε²), where λ is an eigenvalue and u is the corresponding eigenvector of a Hermitian matrix A, and "∗" denotes complex conjugate transpose. It shows that if ũ∗Aũ/(ũ∗ũ) is close to A's largest eigenvalue, then ũ is close to the corresponding eigenvector with an ...
Homotopy Method for the Eigenvalue Problem for Partial Differential Equations
Given a linear self-adjoint partial differential operator L, the smallest few eigenvalues and eigenfunctions of L are computed by the homotopy (continuation) method. The idea of the method is very simple. From some initial operator L0 with known eigenvalues and eigenfunctions, define the homotopy H(t) = (1 − t)L0 + tL, 0 ≤ t ≤ 1. If the eigenfunctions of H(t0) are known, then they are used to deter...
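The continuation idea in this abstract can be illustrated on a toy matrix analogue (my sketch; the paper itself works with differential operators, and the deformation path and seed here are my own choices): deform L0, whose eigenpairs are known, into L along H(t), carrying the smallest eigenvector from step to step.

```python
# Toy matrix analogue of the homotopy H(t) = (1 - t) L0 + t L:
# track the smallest eigenpair from t = 0, where it is known, to t = 1.
import numpy as np

rng = np.random.default_rng(1)
n = 8
B = rng.standard_normal((n, n))
L = (B + B.T) / 2                          # target symmetric operator
L0 = np.diag(np.arange(n, dtype=float))    # initial operator, eigenpairs known

x = np.eye(n)[:, 0]                        # eigenvector of L0 for its smallest eigenvalue
for t in np.linspace(0.0, 1.0, 41):
    H = (1.0 - t) * L0 + t * L             # the homotopy at parameter t
    w, V = np.linalg.eigh(H)
    k = int(np.argmax(np.abs(V.T @ x)))    # branch closest to the previous eigenvector
    x, lam = V[:, k], w[k]

print(lam)   # approximately the smallest eigenvalue of L
```

Selecting the branch by overlap with the previous eigenvector is how the continuation "uses" the eigenfunctions of H(t0) to determine those of the next step; with enough steps the iteration stays on the lowest branch.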
Rayleigh Quotient Based Optimization Methods For Eigenvalue Problems
Four classes of eigenvalue problems that admit min-max principles and Cauchy interlacing inequalities similar to those of the symmetric eigenvalue problem are investigated. These min-max principles pave the way for efficient numerical solution of extreme eigenpairs by optimizing the so-called Rayleigh quotient functions. In fact, scientists and engineers have already been doing that for...
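The simplest instance of this optimization viewpoint can be sketched as follows (the matrix, step size, and iteration count are my own choices): gradient ascent of the Rayleigh quotient over the unit sphere converges to the largest eigenpair of a symmetric matrix.

```python
# Gradient ascent of rho(x) = x^T A x / x^T x on the unit sphere;
# the maximizer is the eigenvector of the largest eigenvalue.
import numpy as np

rng = np.random.default_rng(2)
n = 10
D = np.diag(np.arange(float(n)))                      # known spectrum 0, 1, ..., 9
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ D @ Q.T                                       # symmetric, largest eigenvalue 9

x = rng.standard_normal(n)
x /= np.linalg.norm(x)
for _ in range(1000):
    rho = x @ A @ x                        # Rayleigh quotient (||x|| = 1)
    g = 2.0 * (A @ x - rho * x)            # gradient of rho restricted to the sphere
    x = x + 0.1 * g                        # fixed-step ascent ...
    x /= np.linalg.norm(x)                 # ... followed by renormalization

print(x @ A @ x)   # approximately 9.0
```

Practical solvers replace the fixed-step ascent with (preconditioned) Conjugate Gradient or locally optimal variants, but the objective being optimized is the same Rayleigh quotient.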
Two-sided Grassmann-Rayleigh Quotient Iteration (Mar 2008)
The two-sided Rayleigh quotient iteration proposed by Ostrowski computes a pair of corresponding left-right eigenvectors of a matrix C. We propose a Grassmannian version of this iteration, i.e., its iterates are pairs of p-dimensional subspaces instead of one-dimensional subspaces in the classical case. The new iteration generically converges locally cubically to the pairs of left-right p-dimen...
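For context, the classical one-dimensional (p = 1) iteration of Ostrowski that this paper generalizes can be sketched as below; the test matrix and starting vectors are my own choices, not from the paper.

```python
# Two-sided Rayleigh quotient iteration: refine a right vector x and a
# left vector y simultaneously; convergence is locally cubic.
import numpy as np

rng = np.random.default_rng(3)
n = 6
D = np.diag(np.arange(1.0, n + 1.0))       # eigenvalues 1, 2, ..., 6
Q = rng.standard_normal((n, n))
C = Q @ D @ np.linalg.inv(Q)               # nonsymmetric, known eigenpairs

x = Q[:, -1] + 0.1 * rng.standard_normal(n)                  # rough right eigenvector for 6
y = np.linalg.inv(Q)[-1, :] + 0.1 * rng.standard_normal(n)   # rough left eigenvector for 6

I = np.eye(n)
for _ in range(4):
    rho = (y @ C @ x) / (y @ x)               # two-sided Rayleigh quotient
    x = np.linalg.solve(C - rho * I, x)       # right inverse-iteration step
    x /= np.linalg.norm(x)
    y = np.linalg.solve((C - rho * I).T, y)   # left step uses the transpose
    y /= np.linalg.norm(y)

rho = (y @ C @ x) / (y @ x)
print(abs(rho - 6.0))   # tiny after a handful of iterations
```

The Grassmannian version of the paper replaces the vectors x and y by pairs of p-dimensional subspaces while retaining the generic cubic convergence.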