Nonconvex Sorted l1 Minimization for Sparse Approximation
Authors
X. Huang, Department of Electrical Engineering, KU Leuven, B-3001 Leuven, Belgium. E-mail: [email protected]
L. Shi, Shanghai Key Laboratory for Contemporary Applied Mathematics, School of Mathematical Sciences, Fudan University, Shanghai, 200433, P.R. China. E-mail: [email protected]
M. Yan, Department of Mathematics, University of California, Los Angeles, CA 90095, USA. E-mail: [email protected]
This work is partially supported by ERC AdG A-DATADRIVE-B, IUAP-DYSCO, GOAMANET and OPTEC, the National Natural Science Foundation of China (11201079), the Fundamental Research Funds for the Central Universities of China (20520133238, 20520131169), and NSF grants DMS-0748839 and DMS-1317602.
Abstract
The l1 norm is the tightest convex relaxation of the l0 “norm” and has been applied successfully to recover sparse signals. However, for problems with fewer samples than accurate l1 recovery requires, one needs to apply nonconvex penalties such as the lp “norm”. Iteratively reweighted l1 minimization, one method for solving lp minimization problems, updates the weight of each component based on the value of that component at the previous iteration: it assigns large weights to components that are small in magnitude and small weights to components that are large in magnitude. The set of weights is not fixed, which makes the analysis of this method difficult. In this paper, we consider a weighted l1 penalty whose set of weights is fixed and whose weights are assigned according to the rank of each component when all components are sorted by magnitude; the smallest weight is assigned to the largest component in magnitude. We call this new penalty the nonconvex sorted l1 penalty. We then propose two methods for solving nonconvex sorted l1 minimization problems, iteratively reweighted l1 minimization and iterative sorted thresholding, and prove that both methods converge to a local minimizer of the nonconvex sorted l1 minimization problem. We also show that the two methods are generalizations of iterative support detection and iterative hard thresholding, respectively. Numerical experiments demonstrate that assigning weights by sort performs better than assigning them by value.
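To make the sorted penalty and the thresholding method concrete, below is a minimal NumPy sketch, assuming a least-squares data term 0.5·||Ax − b||², a fixed step size, and a given nondecreasing weight vector; the function names and the fixed iteration count are illustrative choices, not the authors' reference implementation.

```python
import numpy as np

def sorted_l1_penalty(x, w):
    # Pair the i-th largest |x_j| with the i-th smallest weight and sum the
    # products: the smallest weight goes to the largest component in magnitude.
    mags = np.sort(np.abs(x))[::-1]      # magnitudes, largest first
    w = np.sort(np.asarray(w, dtype=float))  # weights, smallest first
    return float(np.dot(w, mags))

def iterative_sorted_thresholding(A, b, w, step, n_iter=200):
    # Gradient step on 0.5*||Ax - b||^2, followed by soft-thresholding in which
    # each component's threshold is the weight matched to its magnitude rank
    # (small weight -> large component), i.e. a prox-like step for the penalty above.
    x = np.zeros(A.shape[1])
    w = np.sort(np.asarray(w, dtype=float))
    for _ in range(n_iter):
        z = x - step * A.T @ (A @ x - b)
        order = np.argsort(-np.abs(z))   # indices from largest to smallest |z_i|
        thresh = np.empty_like(z)
        thresh[order] = step * w         # assign weights by rank
        x = np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)
    return x
```

At each iteration the weights keep their sorted order while the components they apply to are re-matched by magnitude rank, which is exactly the difference from reweighting by value; the first method in the paper, iteratively reweighted l1 minimization, would instead re-solve a weighted l1 subproblem with the same rank-based weight assignment.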
Similar References
Sparse-View CT Reconstruction Based on Nonconvex L1 − L2 Regularizations
The reconstruction from sparse-view projections is one of the important problems in computed tomography (CT), limited by the availability or feasibility of obtaining a large number of projections. Traditionally, convex regularizers have been exploited to improve the reconstruction quality in sparse-view CT, and the convex constraint in those problems leads to an easy optimization process. However...
Computing Sparse Representation in a Highly Coherent Dictionary Based on Difference of L1 and L2
We study analytical and numerical properties of the L1−L2 minimization problem for sparse representation of a signal over a highly coherent dictionary. Though the L1−L2 metric is nonconvex, it is Lipschitz continuous. The difference of convex algorithm (DCA) is readily applicable for computing the sparse representation coefficients. The L1 minimization appears as an initialization step of DCA...
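As a rough illustration of how DCA applies to the L1−L2 metric (a sketch under assumptions, not the formulation used in the paper above), the snippet below treats the unconstrained model 0.5·||Ax − b||² + lam·(||x||_1 − ||x||_2), linearizes the subtracted ||x||_2 term at the current iterate, and solves each convex subproblem with a plain proximal-gradient loop; the function names, the penalty parameter lam, and the iteration counts are hypothetical.

```python
import numpy as np

def soft(z, t):
    # Component-wise soft-thresholding.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def dca_l1_minus_l2(A, b, lam, step, outer=30, inner=200):
    # Sketch of DCA for min 0.5*||Ax-b||^2 + lam*(||x||_1 - ||x||_2):
    # split the objective as (least squares + lam*||x||_1) minus (lam*||x||_2),
    # linearize the subtracted ||x||_2 at the current iterate, and solve each
    # convex L1 subproblem with a plain proximal-gradient (ISTA) loop.
    x = np.zeros(A.shape[1])
    for _ in range(outer):
        nrm = np.linalg.norm(x)
        v = x / nrm if nrm > 0 else np.zeros_like(x)   # subgradient of ||x||_2
        y = x.copy()
        for _ in range(inner):
            grad = A.T @ (A @ y - b) - lam * v         # gradient of the smooth part
            y = soft(y - step * grad, step * lam)
        x = y
    return x
```

At the first outer iteration x = 0, so the linearization vanishes and the subproblem reduces to plain L1 minimization, which matches the remark above that L1 minimization appears as an initialization step of DCA.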
Sparse signal recovery by $\ell_q$ minimization under restricted isometry property
In the context of compressed sensing, the nonconvex lq minimization with 0 < q < 1 has been studied in recent years. In this paper, by generalizing the sharp bound for l1 minimization of Cai and Zhang, we show that the condition δ(sq+1)k < 1 ...
DC approximation approaches for sparse optimization
Sparse optimization refers to an optimization problem involving the zero-norm in the objective or constraints. In this paper, nonconvex approximation approaches for sparse optimization are studied from a unifying point of view within the DC (Difference of Convex functions) programming framework. Considering a common DC approximation of the zero-norm including all standard sparse-inducing penalty func...
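As one concrete instance of a DC approximation of the zero-norm (a standard example, not necessarily the specific surrogate studied in this paper), the capped-l1 penalty min(|t|/theta, 1) can be written as a difference of two convex functions; the short NumPy check below verifies the decomposition, with illustrative function names.

```python
import numpy as np

def capped_l1(t, theta):
    # Capped-l1 approximation of the zero-norm of a scalar: min(|t|/theta, 1).
    return np.minimum(np.abs(t) / theta, 1.0)

def capped_l1_dc_parts(t, theta):
    # One standard DC decomposition: min(|t|/theta, 1)
    #   = |t|/theta - max(|t|/theta - 1, 0),
    # i.e. the difference of two convex functions of t.
    g = np.abs(t) / theta
    h = np.maximum(np.abs(t) / theta - 1.0, 0.0)
    return g, h

t = np.linspace(-3, 3, 7)
g, h = capped_l1_dc_parts(t, theta=1.0)
assert np.allclose(capped_l1(t, 1.0), g - h)   # the decomposition matches
```

Writing the approximation in this form is what makes DCA applicable: each iteration linearizes the subtracted convex part and minimizes the remaining convex function.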
Computational Aspects of Constrained L1-L2 Minimization for Compressive Sensing
We study the computational properties of solving a constrained L1-L2 minimization via a difference of convex algorithm (DCA), which was proposed in our previous work [13, 19] to recover sparse signals from an underdetermined linear system. We prove that the DCA converges to a stationary point of the nonconvex L1-L2 model. We clarify the relationship of DCA to a convex method, Bregman iteration ...