New Fast Algorithms for Structured Linear Least Squares Problems
Authors
Abstract
Similar resources
New Fast Algorithms for Structured Linear Least Squares Problems
We present new fast algorithms for solving the Toeplitz and the Toeplitz-plus-Hankel least squares problems. These algorithms are based on a new fast algorithm for solving the Cauchy-like least squares problem. We perform an error analysis and provide conditions under which these algorithms are numerically stable. We also develop implementation techniques that significantly reduce the execution ...
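The abstract describes fast solvers that exploit Toeplitz structure via a Cauchy-like reduction. As a point of reference only, the same least squares problem can be set up and solved with a dense solver in Python; the sizes and random data below are illustrative and this is not the paper's fast algorithm:

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(0)
m, n = 8, 4
# A Toeplitz matrix is fully determined by its first column and first row.
col = rng.standard_normal(m)
row = rng.standard_normal(n)
row[0] = col[0]
A = toeplitz(col, row)
b = rng.standard_normal(m)

# Dense least squares solve (O(mn^2)); the fast method would exploit structure.
x, *_ = np.linalg.lstsq(A, b, rcond=None)

# At the least squares solution, the residual is orthogonal to range(A).
residual = A @ x - b
assert np.allclose(A.T @ residual, 0, atol=1e-10)
```

A fast structured solver would produce the same `x` in roughly O(mn) or O(m log m) work instead of the dense cost.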
Fast Algorithms for Structured Least Squares and Total Least Squares Problems
We consider the problem of solving least squares problems involving a matrix M of small displacement rank with respect to two matrices Z_1 and Z_2. We develop formulas for the generators of the matrix M^H M in terms of the generators of M and show that the Cholesky factorization of the matrix M^H M can be computed quickly if Z_1 is close to unitary and Z_2 is triangular and nilpotent. These...
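The displacement rank mentioned here can be illustrated concretely. For a Toeplitz matrix T and the down-shift matrix Z (which is triangular and nilpotent, matching the abstract's conditions), the displacement T − Z T Zᵀ has rank at most 2. A small NumPy check of this standard fact (the shift operators are the usual choice, not necessarily the paper's generator formulas):

```python
import numpy as np
from scipy.linalg import toeplitz

n = 6
rng = np.random.default_rng(1)
c = rng.standard_normal(n)
r = rng.standard_normal(n)
r[0] = c[0]
T = toeplitz(c, r)

# Down-shift matrix: ones on the first subdiagonal; triangular and nilpotent.
Z = np.diag(np.ones(n - 1), -1)

# (Z T Z^T)[i, j] = T[i-1, j-1], so the displacement cancels everywhere
# except the first row and first column -- rank at most 2 for Toeplitz T.
D = T - Z @ T @ Z.T
assert np.linalg.matrix_rank(D) <= 2
```

Algorithms of the kind the abstract describes carry only the low-rank generators of such displacements instead of the full matrix.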
Linear Least Squares Problems
A fundamental task in scientific computing is to estimate parameters in a mathematical model from collected data that are subject to errors. The influence of the errors can be reduced by using a greater number of data than the number of unknowns. If the model is linear, the resulting problem is to “solve” a generally inconsistent linear system Ax = b, where A ∈ R^{m×n} and m ≥ n. In other ...
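A minimal example of such an overdetermined, inconsistent solve: with more noisy observations than unknowns, the least squares solution recovers the parameters well (sizes and noise level below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 20, 3  # more equations than unknowns
A = rng.standard_normal((m, n))
x_true = np.array([1.0, -2.0, 0.5])
# Noisy right-hand side makes Ax = b inconsistent: no exact solution exists.
b = A @ x_true + 0.01 * rng.standard_normal(m)

# Minimize ||Ax - b||_2; NumPy's lstsq uses an SVD-based solver, which
# avoids squaring the condition number as the normal equations would.
x, res, rank, sv = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(x, x_true, atol=0.1)
```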
Condition Numbers for Structured Least Squares Problems
This paper studies the normwise perturbation theory for structured least squares problems. The structures under investigation are symmetric, persymmetric, skew-symmetric, Toeplitz, and Hankel. We present the condition numbers for structured least squares. AMS subject classification (2000): 15A18, 65F20, 65F25, 65F50.
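For the unstructured problem, the sensitivity that such perturbation theory refines is governed by the 2-norm condition number, the ratio of the extreme singular values; structured condition numbers like those studied here can be smaller because only structure-preserving perturbations are allowed. A quick check of the unstructured quantity, on an arbitrary small Toeplitz example (not one from the paper):

```python
import numpy as np
from scipy.linalg import toeplitz

# A small symmetric Toeplitz test matrix (values chosen arbitrarily).
A = toeplitz([4.0, 1.0, 0.5])

# kappa_2(A) = sigma_max / sigma_min.
kappa = np.linalg.cond(A, 2)
s = np.linalg.svd(A, compute_uv=False)
assert np.isclose(kappa, s[0] / s[-1])
```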
New Subsampling Algorithms for Fast Least Squares Regression
We address the problem of fast estimation of ordinary least squares (OLS) from large amounts of data (n ≫ p). We propose three methods which solve the big-data problem by subsampling the covariance matrix using either single- or two-stage estimation. All three run in time linear in the input size, i.e., O(np), and our best method, Uluru, gives an error bound of O(√(p/n)) which is independent of the am...
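The simplest version of the idea, fitting OLS on a random subset of rows, can be sketched as follows. This is a one-stage subsampling baseline with hypothetical sizes, not the paper's Uluru estimator:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 100_000, 5  # n >> p regime
X = rng.standard_normal((n, p))
beta = rng.standard_normal(p)
y = X @ beta + 0.1 * rng.standard_normal(n)

# One-stage subsampled OLS: solve least squares on a random row subset.
# Cost is O(m p^2) for subsample size m, instead of O(n p^2) on all rows.
idx = rng.choice(n, size=5_000, replace=False)
beta_hat, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)

# With Gaussian design, the subsampled estimate is close to the truth.
assert np.linalg.norm(beta_hat - beta) < 0.1
```

Two-stage schemes such as the one the abstract names improve on this baseline by using a second pass over the remaining data to correct the first-stage estimate.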
Journal
Journal title: SIAM Journal on Matrix Analysis and Applications
Year: 1998
ISSN: 0895-4798,1095-7162
DOI: 10.1137/s089547989529646x