Search results for: krylov subspace

Number of results: 18307

Journal: SIAM J. Scientific Computing 2001
Michael K. Schneider

Computing the linear least-squares estimate of a high-dimensional random quantity given noisy data requires solving a large system of linear equations. In many situations, one can solve this system efficiently using a Krylov subspace method, such as the conjugate gradient (CG) algorithm. Computing the estimation error variances is a more intricate task. It is difficult because the error variances a...
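
For context, the CG algorithm mentioned above solves a symmetric positive definite system Ax = b by minimizing over Krylov subspaces of growing dimension. A minimal NumPy sketch (function name and tolerances are illustrative, not taken from the paper):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Minimal CG iteration for a symmetric positive definite system A x = b."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x           # initial residual
    p = r.copy()            # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # conjugate the next direction
        rs = rs_new
    return x
```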

2002
Zhaojun Bai

In recent years, a great deal of attention has been devoted to Krylov subspace techniques for reduced-order modeling of large-scale dynamical systems. The surge of interest was triggered by the pressing need for efficient numerical techniques for simulations of extremely large-scale dynamical systems arising from circuit simulation, structural dynamics, and microelectromechanical systems. In th...
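
Krylov-based reduced-order modeling typically projects the large system matrices onto an orthonormal basis of a Krylov subspace (for moment matching, that subspace is built from shifted-and-inverted system matrices). A minimal sketch of the underlying Arnoldi basis construction, assuming a generic dense NumPy matrix A and starting vector b:

```python
import numpy as np

def arnoldi(A, b, k):
    """Build an orthonormal basis V of the Krylov subspace K_k(A, b)
    and the projected upper Hessenberg matrix H."""
    n = len(b)
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):            # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:           # breakdown: an exact invariant subspace was found
            return V[:, :j + 1], H[:j + 1, :j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V[:, :k], H[:k, :k]
```

A reduced-order model is then obtained by replacing the full matrices with their projections onto the span of V.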

2017
MING ZHOU

Gradient iterations for the Rayleigh quotient are elemental methods for computing the smallest eigenvalues of a pair of symmetric and positive definite matrices. A considerable convergence acceleration can be achieved by preconditioning and by computing Rayleigh-Ritz approximations from subspaces of increasing dimensions. An example of the resulting Krylov subspace eigensolvers is the generaliz...
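
In its simplest form, a preconditioned gradient iteration for the generalized eigenproblem $Ax = \lambda Bx$ reads (notation assumed here: $T \approx A$ is the preconditioner, $\omega$ a step size, $\rho$ the Rayleigh quotient):

$$
\rho(x) \;=\; \frac{x^{\mathsf T} A x}{x^{\mathsf T} B x},
\qquad
x_{k+1} \;=\; x_k \;-\; \omega\, T^{-1}\!\left(A x_k - \rho(x_k)\, B x_k\right).
$$

Rayleigh-Ritz projections onto subspaces spanned by successive iterates then accelerate convergence toward the smallest eigenvalues, as the abstract describes.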

Journal: Comp. Opt. and Appl. 1996
Ali Bouaricha

In this paper, we describe tensor methods for large systems of nonlinear equations based on Krylov subspace techniques for approximately solving the linear systems that are required in each tensor iteration. We refer to a method in this class as a tensor-Krylov algorithm. We describe comparative testing for a tensor-Krylov implementation versus an analogous implementation based on a Newton-Kryl...
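
The Newton-Krylov approach used as the comparison baseline is available in SciPy as scipy.optimize.newton_krylov; a small sketch on a hypothetical test problem (the tensor-Krylov method itself is not part of SciPy):

```python
import numpy as np
from scipy.optimize import newton_krylov

def F(x):
    # Hypothetical mildly coupled nonlinear system F(x) = 0.
    return x**2 + 0.1 * np.roll(x, 1) - 2.0

x0 = np.ones(50)
# Each Newton step solves the linearized system inexactly with GMRES.
sol = newton_krylov(F, x0, method='gmres', f_tol=1e-10)
print(np.linalg.norm(F(sol)))   # residual norm at the computed root
```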

2005
Valeria Simoncini Daniel B. Szyld

Recent computational and theoretical studies have shown that the matrix-vector product occurring at each step of a Krylov subspace method can be relaxed as the iterations proceed, i.e., it can be computed in a less exact manner, without degradation of the overall performance. In the present paper a general operator treatment of this phenomenon is provided and a new result further explaining its...
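
Schematically (constants and details suppressed), the relaxation strategy allows the perturbation $e_k$ in the computed product $\tilde{A} v_k = A v_k + e_k$ to grow inversely with the most recent residual norm:

$$
\|e_k\| \;\lesssim\; \frac{\varepsilon}{\|r_{k-1}\|},
$$

so that early iterations use accurate products while later ones tolerate increasingly inexact ones, yet the attainable final residual remains at level $\varepsilon$.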

Journal: SIAM J. Matrix Analysis Applications 2004
Christopher A. Beattie Mark Embree John Rossi

The performance of Krylov subspace eigenvalue algorithms for large matrices can be measured by the angle between a desired invariant subspace and the Krylov subspace. We develop general bounds for this convergence that include the effects of polynomial restarting and impose no restrictions concerning the diagonalizability of the matrix or its degree of nonnormality. Associated with a desired se...
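
The convergence measure in question is the largest principal angle between the two subspaces. With orthogonal projectors $P_{\mathcal{U}}$ onto the desired invariant subspace $\mathcal{U}$ and $P_{\mathcal{K}_k}$ onto the Krylov subspace, one common definition reads

$$
\sin \angle(\mathcal{U}, \mathcal{K}_k) \;=\; \bigl\| (I - P_{\mathcal{K}_k})\, P_{\mathcal{U}} \bigr\|_2,
$$

which vanishes exactly when $\mathcal{U} \subseteq \mathcal{K}_k$.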

Journal: SIAM J. Matrix Analysis Applications 2017
Zaiwen Wen Yin Zhang

Iterative algorithms for large-scale eigenpair computation are mostly based on subspace projections consisting of two main steps: a subspace update (SU) step that generates bases for approximate eigenspaces, followed by a Rayleigh-Ritz (RR) projection step that extracts approximate eigenpairs. A predominant methodology for the SU step makes use of Krylov subspaces and builds orthonormal bases pie...
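
The RR step described here is compact enough to sketch directly, assuming a symmetric A and a matrix V with orthonormal columns (names illustrative):

```python
import numpy as np

def rayleigh_ritz(A, V, nev):
    """RR projection: extract nev approximate eigenpairs of symmetric A
    from the subspace spanned by the orthonormal columns of V."""
    H = V.T @ A @ V                   # small projected matrix
    theta, S = np.linalg.eigh(H)      # Ritz values and primitive Ritz vectors
    X = V @ S[:, :nev]                # lift Ritz vectors back to the full space
    return theta[:nev], X
```

Here theta holds the Ritz values and X the Ritz vectors; the SU step's job is to supply a good V.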

2015
Kevin Carlberg Virginia Forstall Ray Tuminaro

This work presents a new Krylov-subspace-recycling method for efficiently solving sequences of linear systems of equations characterized by a non-invariant symmetric-positive-definite matrix. As opposed to typical truncation strategies used in recycling such as deflation, we propose a truncation method inspired by goal-oriented proper orthogonal decomposition (POD) from model reduction. This id...
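
As a rough illustration of POD-style truncation (the goal-oriented weighting the paper actually proposes is omitted here), one keeps only the leading left singular vectors of a snapshot matrix:

```python
import numpy as np

def pod_truncate(S, k):
    """Generic POD truncation: return the k leading left singular
    vectors of the snapshot matrix S as the retained basis."""
    U, _, _ = np.linalg.svd(S, full_matrices=False)
    return U[:, :k]
```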

Journal: Neural Computation 1994
Juha Karhunen

Principal eigenvectors of the data covariance matrix, or the subspace spanned by them, called the PCA subspace, provide optimal solutions to several information representation tasks. Recently, many neural approaches have been proposed for learning them (see, e.g., Hertz et al. 1991; Oja 1992). A well-known algorithm for learning the PCA subspace of the input vectors is the so-called Oja's subspace rule ...
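
Oja's subspace rule updates a weight matrix W toward an orthonormal basis of the m-dimensional PCA subspace; a minimal sketch, assuming the rows of X are input vectors (learning rate and initialization are illustrative):

```python
import numpy as np

def oja_subspace(X, m, eta=0.01, epochs=10, seed=0):
    """Oja's subspace rule: W += eta * (x y^T - W y y^T) with y = W^T x,
    applied one sample at a time."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    W = 0.1 * rng.standard_normal((n, m))
    for _ in range(epochs):
        for x in X:
            y = W.T @ x                                  # project input onto current basis
            W += eta * (np.outer(x, y) - W @ np.outer(y, y))
    return W
```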

2011

1.1 Admissibility. Let G be locally profinite. Recall that a representation π : G → GL(V) is admissible if it is smooth and has the property that V^K is finite-dimensional for all open compact K ⊂ G. If σ : K → GL(W) is a representation of K, let V[σ] be the σ-isotypic subspace of V. This is the sum of the images of all K-equivariant maps W → V. Proposition 1.1. A representation π : G → GL(V...
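
In the notation of the snippet, the two definitions can be restated as

$$
\pi \ \text{admissible} \iff \pi \ \text{smooth and } \dim V^{K} < \infty \ \text{for every compact open } K \subset G,
\qquad
V[\sigma] \;=\; \sum_{\varphi \,\in\, \operatorname{Hom}_K(W,\,V)} \operatorname{im}\varphi .
$$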

[Chart: number of search results per year]