On the Suboptimality of Proximal Gradient Descent for $\ell^{0}$ Sparse Approximation
Authors
Abstract
We study the proximal gradient descent (PGD) method for the $\ell^{0}$ sparse approximation problem, as well as its acceleration by randomized algorithms. We first offer a theoretical analysis of PGD, showing a bounded gap between the sub-optimal solution produced by PGD and the globally optimal solution of the $\ell^{0}$ sparse approximation problem, under conditions weaker than the Restricted Isometry Property widely used in the compressive sensing literature. Moreover, we propose randomized algorithms that accelerate PGD using randomized low-rank matrix approximation (PGD-RMA) and randomized dimension reduction (PGD-RDR). Our randomized algorithms substantially reduce the computational cost of the original PGD for the $\ell^{0}$ sparse approximation problem, and the resultant sub-optimal solution still enjoys provable suboptimality; namely, the sub-optimal solution to the reduced problem still has a bounded gap to the globally optimal solution of the original problem.
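To make the setting concrete, below is a minimal sketch of PGD for the $\ell^{0}$-regularized least squares problem $\min_x \frac{1}{2}\|Ax - b\|_2^2 + \lambda \|x\|_0$, whose proximal step is hard thresholding, together with a dimension-reduced variant in the spirit of PGD-RDR. The function names, the plain Gaussian sketching matrix, and all parameter choices are illustrative assumptions, not the paper's exact algorithms.

```python
import numpy as np

def hard_threshold(v, tau):
    """Proximal operator of tau * ||x||_0: zero out entries with |v_i| <= sqrt(2*tau)."""
    out = v.copy()
    out[np.abs(v) <= np.sqrt(2.0 * tau)] = 0.0
    return out

def pgd_l0(A, b, lam, n_iter=500):
    """PGD for min_x 0.5 * ||Ax - b||^2 + lam * ||x||_0 (illustrative, not the paper's code)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)           # gradient of the least-squares term
        x = hard_threshold(x - grad / L, lam / L)
    return x

def pgd_l0_sketched(A, b, lam, sketch_size, n_iter=500, seed=0):
    """Hypothetical dimension-reduced variant: run PGD on (SA, Sb) for a Gaussian sketch S."""
    rng = np.random.default_rng(seed)
    S = rng.standard_normal((sketch_size, A.shape[0])) / np.sqrt(sketch_size)
    return pgd_l0(S @ A, S @ b, lam, n_iter)
```

The sketched variant trades accuracy for speed: the reduced problem has only sketch_size rows, which is the sense in which the paper's randomized variants cut the cost of PGD; the specific reductions used there (low-rank approximation for PGD-RMA) differ from this plain Gaussian example.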
Similar Resources
Complexity of Inexact Proximal Newton methods
Recently, several so-called proximal Newton methods were proposed for sparse optimization [6, 11, 8, 3]. These methods construct a composite quadratic approximation using Hessian information, optimize this approximation using a first-order method such as coordinate descent, and employ a line search to ensure sufficient descent. Here we propose a general framework, which includes slightly modif...
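For orientation, the composite quadratic subproblem described here typically takes the following form (a standard proximal Newton formulation, written under the assumption of a composite objective $F = f + g$ with smooth $f$ and nonsmooth $g$; the cited works' exact variants may differ):

$$d_k = \operatorname*{arg\,min}_{d}\; \nabla f(x_k)^\top d + \tfrac{1}{2}\, d^\top H_k d + g(x_k + d), \qquad x_{k+1} = x_k + t_k d_k,$$

where $H_k$ approximates $\nabla^2 f(x_k)$ and the step size $t_k$ is chosen by the line search to ensure sufficient descent.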
Nonconvex Sparse Logistic Regression with Weakly Convex Regularization
In this work we propose to fit a sparse logistic regression model by a weakly convex regularized nonconvex optimization problem. The idea is based on the finding that a weakly convex function, as an approximation of the $\ell^0$ pseudo-norm, is able to better induce sparsity than the commonly used $\ell^1$ norm. For a class of weakly convex sparsity-inducing functions, we prove the nonconvexity of the corres...
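As a concrete instance (my example; the snippet does not name the specific function class), the minimax concave penalty (MCP) is a weakly convex approximation of the $\ell^0$ pseudo-norm:

$$\rho_{\lambda,\theta}(t) = \begin{cases} \lambda |t| - \dfrac{t^2}{2\theta}, & |t| \le \lambda\theta, \\[2pt] \dfrac{\lambda^2 \theta}{2}, & |t| > \lambda\theta. \end{cases}$$

Since $\rho_{\lambda,\theta}(t) + \frac{t^2}{2\theta}$ is convex, $\rho_{\lambda,\theta}$ is $\frac{1}{\theta}$-weakly convex.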
Sparse Q-learning with Mirror Descent
This paper explores a new framework for reinforcement learning based on online convex optimization, in particular mirror descent and related algorithms. Mirror descent can be viewed as an enhanced gradient method, particularly suited to minimization of convex functions in high-dimensional spaces. Unlike traditional gradient methods, mirror descent undertakes gradient updates of weights in both t...
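For reference, the mirror descent update alluded to here can be written in its standard form (assuming a mirror map $\psi$ with Bregman divergence $D_\psi$; the paper's sparse Q-learning variant adds structure on top of this):

$$x_{t+1} = \operatorname*{arg\,min}_{x \in \mathcal{X}} \Big\{ \eta_t \langle g_t, x \rangle + D_\psi(x, x_t) \Big\}, \qquad D_\psi(x, y) = \psi(x) - \psi(y) - \langle \nabla \psi(y),\, x - y \rangle,$$

where $g_t$ is a (sub)gradient at $x_t$; choosing $\psi(x) = \frac{1}{2}\|x\|_2^2$ recovers projected gradient descent.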
Learning with $\ell^{0}$-Graph: $\ell^{0}$-Induced Sparse Subspace Clustering
The $\ell^1$-graph [19, 4], a sparse graph built by reconstructing each datum with all the other data using sparse representation, has been demonstrated to be effective in clustering high-dimensional data and recovering independent subspaces from which the data are drawn. It is well known that the $\ell^1$ norm used in the $\ell^1$-graph is a convex relaxation of the $\ell^0$ norm for enforcing sparsity. In order to handle g...
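The sparse representation step described here is commonly written as follows (a generic noiseless formulation in my notation; [19, 4] may use a noise-tolerant variant): each datum $x_i$ is coded over the remaining data $X = [x_1, \ldots, x_n]$,

$$\min_{\alpha^i \in \mathbb{R}^n} \|\alpha^i\|_1 \quad \text{s.t.} \quad x_i = X \alpha^i, \; \alpha^i_i = 0,$$

and the nonzero coefficients $\alpha^i_j$ define the edge weights from $x_i$ to $x_j$ in the graph.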
A New Analysis of Compressive Sensing by Stochastic Proximal Gradient Descent
In this manuscript, we analyze the sparse signal recovery (compressive sensing) problem from the perspective of convex optimization by stochastic proximal gradient descent. This view allows us to significantly simplify the recovery analysis of compressive sensing. More importantly, it leads to an efficient optimization algorithm for solving the regularized optimization problem related to the sp...
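As a point of comparison with the hard-thresholding iteration sketched above, a generic stochastic proximal gradient iteration for $\ell^1$-regularized least squares (an illustrative form in my notation, not necessarily the manuscript's exact scheme) samples one row $(a_{i_t}, b_{i_t})$ per step:

$$x_{t+1} = \operatorname{prox}_{\eta_t \lambda \|\cdot\|_1}\!\big(x_t - \eta_t\, a_{i_t}(a_{i_t}^\top x_t - b_{i_t})\big), \qquad \operatorname{prox}_{\tau\|\cdot\|_1}(v)_j = \operatorname{sign}(v_j)\,\max(|v_j| - \tau,\, 0).$$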
Journal: CoRR
Volume: abs/1709.01230
Issue: -
Pages: -
Publication date: 2017