Hardness of Approximation for Sparse Optimization with L0 Norm
Abstract
In this paper, we consider sparse optimization problems with an L0-norm penalty or constraint. We prove that it is strongly NP-hard to find an approximate optimal solution within a certain error bound, unless P = NP. This provides a lower bound on the approximation error of any deterministic polynomial-time algorithm. Applying the complexity result to sparse linear regression reveals a gap between computational accuracy and statistical accuracy: it is intractable to approximate the estimator within constant factors of its statistical error. We also show that distinguishing the best k-sparse solution from the best (k+1)-sparse solution is computationally hard, which suggests that tuning the sparsity level is itself hard.
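For concreteness, the penalized and constrained formulations referred to above can be written as follows; the least-squares loss is one illustrative instance (sparse linear regression), not the only loss the results cover:

$$\min_{x \in \mathbb{R}^n} \; \|Ax - b\|_2^2 + \lambda \|x\|_0 \quad \text{(penalized)}, \qquad \min_{x \,:\, \|x\|_0 \le k} \; \|Ax - b\|_2^2 \quad \text{(constrained)},$$

where $\|x\|_0$ counts the nonzero entries of $x$ and $\lambda > 0$ trades data fit against sparsity.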
Similar Papers
Worst-Case Hardness of Approximation for Sparse Optimization with L0 Norm
In this paper, we consider sparse optimization problems with an L0-norm penalty or constraint. We prove that it is strongly NP-hard to find an approximate optimal solution within a certain error bound, unless P = NP. This provides a lower bound on the approximation error of any deterministic polynomial-time algorithm. Applying the complexity result to sparse linear regression reveals a gap between c...
Homotopic l0 minimization technique applied to dynamic cardiac MR imaging
Introduction: The l1 minimization technique has been empirically demonstrated to exactly recover an S-sparse signal from about 3S-5S measurements [1]. In order to obtain exact reconstruction from a smaller number of measurements, Trzasko [2] recently proposed the homotopic l0 minimization technique for static images. Instead of minimizing the l0 norm, which achieves the best possible theoretical bound ...
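A minimal sketch of the homotopic idea, assuming an iteratively reweighted least-squares surrogate for the l0 norm; the surrogate, the continuation schedule, and all parameter values below are illustrative assumptions, not the exact formulation of [2]:

```python
import numpy as np

def homotopic_l0(A, b, lam=0.1, sigma0=1.0, decay=0.5,
                 n_outer=8, n_inner=15, tol=1e-6):
    """Homotopic L0 sketch: approximately minimize
    ||Ax - b||^2 + lam * sum(x_i^2 / (x_i^2 + sigma)),
    shrinking sigma so the surrogate approaches lam * ||x||_0."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # least-squares warm start
    sigma = sigma0
    for _ in range(n_outer):
        for _ in range(n_inner):
            # IRLS step: the weight grows as x_i -> 0, pulling small
            # coefficients toward exact zero as sigma shrinks.
            w = lam / (x ** 2 + sigma)
            x = np.linalg.solve(A.T @ A + np.diag(w), A.T @ b)
        sigma *= decay                          # tighten the surrogate
    x[np.abs(x) < tol] = 0.0                    # report a clean support
    return x
```

The key design choice is the continuation on sigma: each outer pass tightens the smooth surrogate toward the true l0 norm, so coefficients that stay small are driven to exact zero.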
A Genuine $\ell_0$ Approach to Dictionary Learning (Une véritable approche $\ell_0$ pour l'apprentissage de dictionnaire)
Sparse representation learning has recently achieved great success in signal and image processing, thanks to recent advances in dictionary learning. To this end, the l0-norm is often used to control the sparsity level. Nevertheless, optimization problems based on the l0-norm are non-convex and NP-hard. For these reasons, relaxation techniques have been attracting much attention from researchers, ...
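A minimal sketch of l0-constrained dictionary learning by alternating minimization; the hard-thresholding sparse-coding step and the MOD-style dictionary update below are common illustrative choices, not the algorithm of the paper above:

```python
import numpy as np

def dict_learn_l0(Y, n_atoms, k, n_iter=30, seed=0):
    """Alternate between k-sparse coding (hard thresholding of a
    least-squares code) and a MOD-style dictionary update."""
    rng = np.random.default_rng(seed)
    d, n = Y.shape
    D = rng.standard_normal((d, n_atoms))
    D /= np.linalg.norm(D, axis=0)              # unit-norm atoms
    for _ in range(n_iter):
        # Sparse coding: keep only the k largest-magnitude coefficients
        # in each column (a crude projection onto the l0 ball).
        C = np.linalg.lstsq(D, Y, rcond=None)[0]
        small = np.argsort(np.abs(C), axis=0)[:-k, :]
        np.put_along_axis(C, small, 0.0, axis=0)
        # Dictionary update (MOD): least-squares fit to the fixed codes.
        D = Y @ C.T @ np.linalg.pinv(C @ C.T)
        D /= np.linalg.norm(D, axis=0) + 1e-12  # renormalize atoms
    return D, C
```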
Quantitative susceptibility imaging with homotopic L0 minimization programming: preliminary study of brain
INTRODUCTION: The susceptibility-weighted imaging (SWI) technique is used in neuroimaging to improve the visibility of iron deposits, veins, and hemorrhage [1]. Quantitative susceptibility imaging (QSI) improves upon SWI by measuring iron in tissue, which can be useful for molecular/cellular imaging to analyze brain function, diagnose neurological diseases, and quantify contrast-agent concentrations. ...
Sparsity-Aware and Noise-Robust Subband Adaptive Filter
This paper presents a subband adaptive filter (SAF) for system identification where the impulse response is sparse and the measurements are disturbed by impulsive noise. Benefiting from the use of l1-norm optimization and an l0-norm penalty on the weight vector in the cost function, the proposed l0-norm sign SAF (l0-SSAF) achieves both robustness against impulsive noise and much improved convergence behavior th...
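To illustrate the two ingredients named above, here is a minimal fullband sketch of a sign update with an l0-norm zero-attractor; the subband decomposition of the actual l0-SSAF is omitted, and the l0 approximation, step sizes, and parameter values are illustrative assumptions:

```python
import numpy as np

def l0_sign_lms(x, d, n_taps=16, mu=0.005, kappa=1e-4, beta=5.0):
    """Fullband sketch: sign update (robust to impulsive noise, since
    only sign(e) enters) plus a zero-attractor from the gradient of the
    l0 approximation sum(1 - exp(-beta * |w_i|))."""
    w = np.zeros(n_taps)
    for k in range(n_taps, len(x)):
        u = x[k - n_taps:k][::-1]              # regressor, most recent first
        e = d[k] - w @ u                       # a-priori estimation error
        w += mu * np.sign(e) * u / (np.linalg.norm(u) + 1e-12)
        # Zero-attractor: pulls small taps toward zero, leaves large ones.
        w -= kappa * beta * np.sign(w) * np.exp(-beta * np.abs(w))
    return w
```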