Low-Rank Tensor Methods with Subspace Correction for Symmetric Eigenvalue Problems
Authors
Daniel Kressner, Michael Steinlechner, André Uschmajew
Abstract
We consider the solution of large-scale symmetric eigenvalue problems for which it is known that the eigenvectors admit a low-rank tensor approximation. Such problems arise, for example, from the discretization of high-dimensional elliptic PDE eigenvalue problems or in strongly correlated spin systems. Our methods are built on imposing low-rank (block) tensor train (TT) structure on the trace minimization characterization of the eigenvalues. The common approach of alternating optimization is combined with an enrichment of the TT cores by (preconditioned) gradients, as recently proposed by Dolgov and Savostyanov for linear systems. This can equivalently be viewed as a subspace correction technique. Several numerical experiments demonstrate the performance gains from using this technique.
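To make the enrichment idea concrete, here is a minimal matrix-level sketch of the subspace correction step, assuming a dense symmetric matrix A and a user-supplied preconditioner prec (both illustrative stand-ins; the method described in the abstract applies this enrichment core-by-core within the block-TT format rather than to full vectors):

```python
import numpy as np

def eig_subspace_correction(A, X0, prec, n_iter=50):
    """Illustrative subspace correction for min trace(X^T A X), X^T X = I:
    enrich the current block X with the preconditioned block residual,
    then perform Rayleigh-Ritz on the enriched subspace."""
    k = X0.shape[1]
    X = np.linalg.qr(X0)[0]
    for _ in range(n_iter):
        R = A @ X - X @ (X.T @ A @ X)                  # block gradient / residual
        S = np.linalg.qr(np.hstack([X, prec(R)]))[0]   # enriched orthonormal basis
        theta, W = np.linalg.eigh(S.T @ A @ S)         # Rayleigh-Ritz step
        X = S @ W[:, :k]                               # keep k smallest Ritz vectors
    return theta[:k], X
```

With prec = lambda R: R this reduces to block steepest descent with Rayleigh-Ritz acceleration; a good preconditioner (e.g., an approximate inverse of A) is what makes the enrichment effective.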
Similar Resources
Subspace Methods with Local Refinements for Eigenvalue Computation Using Low-Rank Tensor-Train Format
Computing a few eigenpairs of large-scale symmetric eigenvalue problems is far beyond the reach of classical eigensolvers when the eigenvectors cannot be stored explicitly. We consider a tractable case in which both the coefficient matrix and its eigenvectors can be represented in the low-rank tensor train (TT) format. We propose a subspace optimization method combined ...
A New Inexact Inverse Subspace Iteration for Generalized Eigenvalue Problems
In this paper, we present an inexact inverse subspace iteration method for computing a few eigenpairs of the generalized eigenvalue problem Ax = λBx [Q. Ye and P. Zhang, Inexact inverse subspace iteration for generalized eigenvalue problems, Linear Algebra and its Applications, 434 (2011) 1697–1715]. In particular, the linear convergence property of the inverse subspace iteration is preserved.
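As a rough illustration of the "inexact" aspect, the sketch below replaces the exact inner solves of inverse subspace iteration with a conjugate gradient loop stopped at a loose tolerance. The function names and the plain CG inner solver are assumptions for this sketch, not the tuned scheme of Ye and Zhang:

```python
import numpy as np

def cg(A, b, rtol, maxiter=200):
    """Plain conjugate gradient; the loose `rtol` makes the solve inexact."""
    x = np.zeros_like(b)
    r = b.copy(); p = r.copy(); rr = r @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) <= rtol * np.linalg.norm(b):
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

def inexact_inverse_subspace_iteration(A, B, X0, n_outer=30, inner_rtol=1e-2):
    """Solve A Y = B X only approximately at each outer step, then
    re-orthonormalize; a final Rayleigh-Ritz step extracts approximate
    eigenpairs of A x = lambda B x (A symmetric positive definite here)."""
    X = np.linalg.qr(X0)[0]
    for _ in range(n_outer):
        BX = B @ X
        Y = np.column_stack([cg(A, BX[:, j], inner_rtol)
                             for j in range(X.shape[1])])
        X = np.linalg.qr(Y)[0]
    M = np.linalg.solve(X.T @ B @ X, X.T @ A @ X)  # small projected pencil
    theta, W = np.linalg.eig(M)
    return theta, X @ W
```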
Fundamental Tensor Operations for Large-Scale Data Analysis in Tensor Train Formats
We discuss extended definitions of linear and multilinear operations such as Kronecker, Hadamard, and contracted products, and establish links between them for tensor calculus. Then we introduce effective low-rank tensor approximation techniques including Candecomp/Parafac (CP), Tucker, and tensor train (TT) decompositions with a number of mathematical and graphical representations. We also pro...
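A compact sketch of the TT decomposition mentioned above, via the standard TT-SVD sweep of truncated SVDs; this follows the usual textbook formulation rather than this particular paper's notation:

```python
import numpy as np

def tt_svd(tensor, eps=1e-10):
    """TT-SVD sketch: a sweep of truncated SVDs turns a full d-way array
    into cores G[k] of shape (r_{k-1}, n_k, r_k) with r_0 = r_d = 1."""
    dims = tensor.shape
    d = len(dims)
    delta = eps * np.linalg.norm(tensor) / max(np.sqrt(d - 1), 1)
    cores, r_prev = [], 1
    C = tensor.reshape(dims[0], -1)
    for k in range(d - 1):
        C = C.reshape(r_prev * dims[k], -1)
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        tail = np.sqrt(np.cumsum(s[::-1] ** 2))[::-1]  # tail[r] = ||s[r:]||
        r = max(1, int(np.sum(tail > delta)))          # smallest rank within delta
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        C = s[:r, None] * Vt[:r]
        r_prev = r
    cores.append(C.reshape(r_prev, dims[-1], 1))
    return cores
```

Contracting the cores back together (e.g., with successive np.einsum calls) recovers the tensor up to the prescribed relative accuracy eps.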
Most Tensor Problems Are NP-Hard
We prove that multilinear (tensor) analogues of many efficiently computable problems in numerical linear algebra are NP-hard. Our list here includes: determining the feasibility of a system of bilinear equations; deciding whether a 3-tensor possesses a given eigenvalue, singular value, or spectral norm; approximating an eigenvalue, eigenvector, singular vector, or the spectral norm; and determi...
A projection method to solve linear systems in tensor format
In this paper we propose a method for the numerical solution of linear systems of equations in low-rank tensor format. Such systems may arise from the discretisation of PDEs in high dimensions, but our method is not limited to this type of application. We present an iterative scheme which is based on the projection of the residual to a low-dimensional subspace. The subspace is spanned by vector...
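The following is a minimal dense-matrix analogue of such a residual-projection scheme, assuming a symmetric positive definite A; in the tensor setting each residual would additionally be truncated back to low rank before extending the subspace:

```python
import numpy as np

def residual_projection_solve(A, b, max_dim=30, tol=1e-10):
    """Grow a subspace from successive residuals and solve the
    Galerkin-projected system at each step (matrix analogue; the
    tensor-format scheme also truncates each residual to low rank)."""
    x = np.zeros_like(b)
    V = np.zeros((b.size, 0))
    for _ in range(max_dim):
        r = b - A @ x
        r -= V @ (V.T @ r)                 # re-orthogonalize against basis
        nrm = np.linalg.norm(r)
        if nrm <= tol * np.linalg.norm(b):
            break
        V = np.hstack([V, (r / nrm)[:, None]])
        y = np.linalg.solve(V.T @ A @ V, V.T @ b)  # small projected system
        x = V @ y
    return x
```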
Journal: SIAM J. Scientific Computing
Volume: 36, Issue: –
Pages: –
Published: 2014