On Globally Linear Convergence of Dual Gradient Descent Method for Sparse Solutions
Authors
Abstract
In [14], researchers studied the convergence of a dual gradient descent algorithm for sparse solutions of underdetermined linear systems and showed that it converges globally at a linear rate. In this paper we present an alternative analysis. In particular, we remove the assumption of complete full-rankness on the linear system from the convergence result in [14] and provide a different argument that significantly simplifies the proof of linear convergence given in [14].
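The abstract does not spell out the iteration, but in this line of work dual gradient descent is typically applied to the dual of the augmented ℓ1 problem min ||x||_1 + 1/(2α)||x||_2^2 subject to Ax = b, whose primal form is the linearized Bregman iteration. A minimal NumPy sketch under that assumption; the names here (shrink, alpha, step) are ours, not the paper's:

```python
import numpy as np

def shrink(z, lam=1.0):
    """Soft-thresholding, the proximal map of lam * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def dual_gradient_descent(A, b, alpha=5.0, iters=1000):
    """Gradient ascent on the (concave, smooth) dual of
        min_x ||x||_1 + 1/(2*alpha) * ||x||_2^2   s.t.  A x = b,
    which in the primal recovers the linearized Bregman iteration."""
    m, _ = A.shape
    step = 1.0 / (alpha * np.linalg.norm(A, 2) ** 2)  # 1/L, L = Lipschitz constant of the dual gradient
    y = np.zeros(m)
    for _ in range(iters):
        x = alpha * shrink(A.T @ y)   # primal point induced by the dual iterate
        y += step * (b - A @ x)       # dual gradient = primal residual
    return alpha * shrink(A.T @ y)
```

With this fixed step size, the dual objective is smooth even when A lacks full row rank, which is the setting the relaxed assumption concerns.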
Similar Articles
Randomized Sparse Block Kaczmarz as Randomized Dual Block-Coordinate Descent
We show that the Sparse Kaczmarz method is a particular instance of the coordinate gradient method applied to an unconstrained dual problem corresponding to a regularized ℓ1-minimization problem subject to linear constraints. Based on this observation and recent theoretical work concerning the convergence analysis and corresponding convergence rates for the randomized block coordinate gradient ...
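For concreteness, a minimal sketch of the randomized sparse Kaczmarz update this snippet refers to: a Kaczmarz projection step on an auxiliary variable z followed by soft-thresholding, with rows sampled proportionally to their squared norms. Parameter names (lam, iters) are our own placeholders:

```python
import numpy as np

def randomized_sparse_kaczmarz(A, b, lam=1.0, iters=5000, seed=0):
    """Randomized sparse Kaczmarz: project z onto one randomly chosen
    equation, then soft-threshold z to obtain the sparse iterate x."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_sq = np.sum(A ** 2, axis=1)
    probs = row_sq / row_sq.sum()            # rows sampled prop. to ||a_i||^2
    z = np.zeros(n)
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        z += (b[i] - A[i] @ x) / row_sq[i] * A[i]            # Kaczmarz step on z
        x = np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)    # soft-thresholding
    return x
```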
Momentum and Stochastic Momentum for Stochastic Gradient, Newton, Proximal Point and Subspace Descent Methods
In this paper we study several classes of stochastic optimization algorithms enriched with heavy ball momentum. Among the methods studied are: stochastic gradient descent, stochastic Newton, stochastic proximal point and stochastic dual subspace ascent. This is the first time momentum variants of several of these methods are studied. We choose to perform our analysis in a setting in which all o...
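As a sketch of the heavy ball template these variants share, here is plain SGD with heavy ball momentum; the stochastic gradient oracle grad_fn is a placeholder we introduce, not the paper's notation:

```python
import numpy as np

def sgd_heavy_ball(grad_fn, x0, lr=0.01, beta=0.9, iters=1000, seed=0):
    """SGD with heavy ball momentum:
        x_{k+1} = x_k - lr * g_k + beta * (x_k - x_{k-1}),
    where g_k is a stochastic gradient at x_k supplied by grad_fn."""
    rng = np.random.default_rng(seed)
    x, x_prev = x0.copy(), x0.copy()
    for _ in range(iters):
        g = grad_fn(x, rng)  # stochastic gradient oracle, e.g. a sampled row of a least-squares problem
        x, x_prev = x - lr * g + beta * (x - x_prev), x
    return x
```

The other methods in the snippet (stochastic Newton, proximal point, dual subspace ascent) replace the gradient step while keeping the same beta * (x - x_prev) momentum term.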
A Voted Regularized Dual Averaging Method for Large-Scale Discriminative Training in Natural Language Processing
We propose a new algorithm based on the dual averaging method for large-scale discriminative training in natural language processing (NLP), as an alternative to the perceptron algorithms or stochastic gradient descent (SGD). The new algorithm estimates parameters of linear models by minimizing L1-regularized objectives and is effective in obtaining sparse solutions, which is particularly desir...
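The voting step is not reproduced here, but the core L1-regularized dual averaging (RDA) update thresholds the running average of all past stochastic gradients, which is what produces sparse iterates. A minimal sketch, with grad_fn, gamma, and lam our own placeholder names:

```python
import numpy as np

def rda_l1(grad_fn, dim, lam=0.1, gamma=1.0, iters=1000, seed=0):
    """L1-regularized dual averaging: each step solves a simple
    subproblem over the running mean of all past gradients."""
    rng = np.random.default_rng(seed)
    w = np.zeros(dim)
    g_bar = np.zeros(dim)
    for t in range(1, iters + 1):
        g = grad_fn(w, rng)          # stochastic gradient at the current w
        g_bar += (g - g_bar) / t     # running mean of all past gradients
        # closed-form minimizer of <g_bar, w> + lam*||w||_1 + (gamma/sqrt(t))*||w||^2/2
        w = -(np.sqrt(t) / gamma) * np.sign(g_bar) * np.maximum(np.abs(g_bar) - lam, 0.0)
    return w
```

Coordinates whose averaged gradient stays below lam in magnitude are set exactly to zero, which is why RDA tends to give sparser models than SGD with the same regularizer.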
Indexed Learning for Large-Scale Linear Classification
Linear classification has achieved complexity linear in the data size. However, in many applications, large-scale data contains only a few samples that can improve the target objective. In this paper, we propose a sublinear-time algorithm that uses a Nearest-Neighbor-based Coordinate Descent method to solve Linear SVM with truncated loss. In particular, we propose a sequential relaxation that sol...
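The nearest-neighbor index and the truncated loss are not shown in this snippet; as a baseline, here is a sketch of the standard dual coordinate descent for the L1-loss (hinge) linear SVM, in the style of LIBLINEAR, which such methods build on. Labels y are assumed to be in {-1, +1}:

```python
import numpy as np

def svm_dual_cd(X, y, C=1.0, epochs=10, seed=0):
    """Dual coordinate descent for the L1-loss linear SVM,
    cycling over training examples in random order."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros(n)
    w = np.zeros(d)
    Qii = np.sum(X ** 2, axis=1)            # diagonal of the Gram matrix
    for _ in range(epochs):
        for i in rng.permutation(n):
            if Qii[i] == 0.0:
                continue
            G = y[i] * (w @ X[i]) - 1.0     # partial gradient of the dual objective
            a_new = np.clip(alpha[i] - G / Qii[i], 0.0, C)
            w += (a_new - alpha[i]) * y[i] * X[i]
            alpha[i] = a_new
    return w
```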
Gradient Hard Thresholding Pursuit for Sparsity-Constrained Optimization
Hard Thresholding Pursuit (HTP) is an iterative greedy selection procedure for finding sparse solutions of underdetermined linear systems. This method has been shown to have strong theoretical guarantees and impressive numerical performance. In this paper, we generalize HTP from compressive sensing to a generic problem setup of sparsity-constrained convex optimization. The proposed algorithm ite...
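A minimal sketch of HTP in its original compressive-sensing form; the snippet's generalization replaces the least-squares debiasing with a restricted minimization of a generic convex objective. The names step and iters are our placeholder parameters:

```python
import numpy as np

def htp(A, b, k, step=1.0, iters=50):
    """Hard Thresholding Pursuit for  min ||Ax - b||_2  s.t.  ||x||_0 <= k:
    a gradient step, hard thresholding to the k largest entries,
    then a least-squares debias on the selected support."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        z = x + step * (A.T @ (b - A @ x))   # gradient step on 0.5*||Ax - b||^2
        S = np.argsort(np.abs(z))[-k:]       # keep the k largest magnitudes
        x = np.zeros(n)
        x[S] = np.linalg.lstsq(A[:, S], b, rcond=None)[0]  # debias on the support
    return x
```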