Noisy Matrix Decomposition via Convex Relaxation: Optimal Rates in High Dimensions, by Alekh Agarwal, Sahand Negahban and Martin J. Wainwright
Authors
Abstract
We analyze a class of estimators based on convex relaxation for solving high-dimensional matrix decomposition problems. The observations are noisy realizations of a linear transformation X of the sum of an (approximately) low rank matrix Θ⋆ with a second matrix Γ⋆ endowed with a complementary form of low-dimensional structure; this set-up includes many statistical models of interest, including factor analysis, multi-task regression and robust covariance estimation. We derive a general theorem that bounds the Frobenius norm error for an estimate of the pair (Θ⋆, Γ⋆) obtained by solving a convex optimization problem that combines the nuclear norm with a general decomposable regularizer. Our results use a “spikiness” condition that is related to, but milder than, singular vector incoherence. We specialize our general result to two cases that have been studied in past work: low rank plus an entrywise sparse matrix, and low rank plus a columnwise sparse matrix. For both models, our theory yields nonasymptotic Frobenius error bounds for both deterministic and stochastic noise matrices, and applies to matrices that can be exactly or approximately low rank, and matrices that can be exactly or approximately sparse. Moreover, for the case of stochastic noise matrices and the identity observation operator, we establish matching lower bounds on the minimax error. The sharpness of our nonasymptotic predictions is confirmed by numerical simulations.
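For the low-rank-plus-entrywise-sparse instance with the identity observation operator, the convex program combining a squared Frobenius loss with the nuclear norm and an ℓ1 regularizer can be sketched with plain proximal gradient steps. This is a minimal illustrative sketch, not the estimator or tuning from the paper; all function names, step sizes, and regularization weights below are assumptions chosen for the demo:

```python
import numpy as np

def soft_threshold(M, tau):
    """Entrywise soft-thresholding: prox of tau * ||.||_1."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def svd_soft_threshold(M, tau):
    """Singular-value soft-thresholding: prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def decompose(Y, lam_nuc, lam_l1, step=0.5, iters=500):
    """Proximal gradient on
    0.5*||Y - Theta - Gamma||_F^2 + lam_nuc*||Theta||_* + lam_l1*||Gamma||_1."""
    Theta = np.zeros_like(Y)
    Gamma = np.zeros_like(Y)
    for _ in range(iters):
        R = Theta + Gamma - Y  # shared gradient of the smooth loss in both blocks
        Theta = svd_soft_threshold(Theta - step * R, step * lam_nuc)
        Gamma = soft_threshold(Gamma - step * R, step * lam_l1)
    return Theta, Gamma
```

The step size 0.5 is used because the joint smooth loss has a block Hessian [[I, I], [I, I]] with operator norm 2, so any step below 1/2 keeps plain proximal gradient stable.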
Similar Papers
Fast global convergence of gradient methods for high-dimensional statistical recovery
Many statistical M-estimators are based on convex optimization problems formed by the combination of a data-dependent loss function with a norm-based regularizer. We analyze the convergence rates of projected gradient and composite gradient methods for solving such problems, working within a high-dimensional framework that allows the data dimension d to grow with (and possibly exceed) the samp...
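A toy example of the composite gradient scheme for one such M-estimator, the ℓ1-regularized least-squares problem, is the classical ISTA iteration: a gradient step on the smooth loss followed by the prox of the regularizer. This is a generic sketch of the method class, not the paper's specific algorithm or step-size choices:

```python
import numpy as np

def ista(X, y, lam, iters=1000):
    """Composite gradient for 0.5*||X w - y||^2 + lam*||w||_1."""
    step = 1.0 / (np.linalg.norm(X, 2) ** 2)  # 1/L for the smooth part
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        w = w - step * (X.T @ (X @ w - y))                        # gradient step
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)  # l1 prox step
    return w
```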
Stochastic optimization and sparse statistical recovery: Optimal algorithms for high dimensions
We develop and analyze stochastic optimization algorithms for problems in which the expected loss is strongly convex, and the optimum is (approximately) sparse. Previous approaches are able to exploit only one of these two structures, yielding an O(d/T) convergence rate for strongly convex objectives in d dimensions and an O(√s(log d)/T) convergence rate when the optimum is s-sparse. Our algori...
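A stochastic variant of the same idea interleaves single-sample gradient steps with an ℓ1 prox, so the iterates stay (approximately) sparse throughout. The sketch below is a generic stochastic proximal-gradient scheme in this spirit, not the algorithm analyzed in the paper; the decaying step-size schedule and all constants are illustrative assumptions:

```python
import numpy as np

def sparse_sgd(X, y, lam, epochs=50, step0=0.05, seed=0):
    """Stochastic proximal gradient for 0.5*E[(x^T w - y)^2] + lam*||w||_1."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            step = step0 / np.sqrt(t)               # decaying step size
            w = w - step * (X[i] @ w - y[i]) * X[i]  # single-sample gradient step
            w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)  # l1 prox
    return w
```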
Restricted Strong Convexity and Weighted Matrix Completion: Optimal Bounds with Noise
We consider the matrix completion problem under a form of row/column weighted entrywise sampling, including the case of uniform entrywise sampling as a special case. We analyze the associated random observation operator, and prove that with high probability, it satisfies a form of restricted strong convexity with respect to a weighted Frobenius norm. Using this property, we obtain as corollaries ...
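For intuition, nuclear-norm-regularized matrix completion under entrywise sampling can be sketched as proximal gradient with singular-value soft-thresholding. This bare-bones sketch uses uniform rather than weighted sampling and hypothetical parameter values; it illustrates the estimator class, not the paper's weighted analysis:

```python
import numpy as np

def complete(Y, mask, lam, step=1.0, iters=300):
    """Proximal gradient for 0.5*||mask*(M - Y)||_F^2 + lam*||M||_*.
    mask is a 0/1 matrix marking observed entries."""
    M = np.zeros_like(Y)
    for _ in range(iters):
        G = mask * (M - Y)  # gradient; its Lipschitz constant is <= 1, so step=1 is safe
        U, s, Vt = np.linalg.svd(M - step * G, full_matrices=False)
        M = U @ np.diag(np.maximum(s - step * lam, 0.0)) @ Vt  # nuclear-norm prox
    return M
```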
Output-only Modal Analysis of a Beam Via Frequency Domain Decomposition Method Using Noisy Data
The output data from a structure is the building block for output-only modal analysis. The structure response in the output data, however, is usually contaminated with noise. Naturally, the success of output-only methods in determining the modal parameters of a structure depends on noise level. In this paper, the possibility and accuracy of identifying the modal parameters of a simply supported...