Search results for: sparse optimization

Number of results: 371252

2016
Yichen Chen, Mengdi Wang

In this paper, we consider sparse optimization problems with an L0-norm penalty or constraint. We prove that it is strongly NP-hard to find an approximate optimal solution within a certain error bound, unless P = NP. This provides a lower bound on the approximation error of any deterministic polynomial-time algorithm. Applying the complexity result to sparse linear regression reveals a gap between c...
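The hardness result is easy to appreciate in code: the only generally exact approach to L0-constrained least squares is exhaustive search over supports, which is exponential in the dimension. A minimal sketch on a hypothetical toy instance (all sizes and data below are illustrative, not from the paper):

```python
# Brute-force solver for  min ||A x - b||^2  s.t.  ||x||_0 <= k.
# Exhaustive search over supports is exponential in n, consistent with
# the hardness result: no known polynomial-time algorithm is exact.
import itertools
import numpy as np

def l0_least_squares(A, b, k):
    n = A.shape[1]
    best_x, best_err = np.zeros(n), float(np.sum(b ** 2))  # x = 0 baseline
    for support in itertools.combinations(range(n), k):
        cols = list(support)
        coef, *_ = np.linalg.lstsq(A[:, cols], b, rcond=None)
        x = np.zeros(n)
        x[cols] = coef
        err = float(np.sum((A @ x - b) ** 2))
        if err < best_err:
            best_x, best_err = x, err
    return best_x, best_err

# Noiseless toy instance with a known 2-sparse solution.
rng = np.random.default_rng(0)
A = rng.standard_normal((10, 6))
x_true = np.zeros(6)
x_true[[1, 4]] = [2.0, -3.0]
b = A @ x_true
x_hat, err = l0_least_squares(A, b, k=2)
```

On this noiseless instance the search recovers the planted support exactly; the point of the paper is that this cost cannot be avoided in general.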

Journal: Journal of Machine Learning Research, 2013
Xiao-Tong Yuan, Tong Zhang

This paper considers the sparse eigenvalue problem, which is to extract dominant (largest) sparse eigenvectors with at most k non-zero components. We propose a simple yet effective solution called truncated power method that can approximately solve the underlying nonconvex optimization problem. A strong sparse recovery result is proved for the truncated power method, and this theory is our key ...
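A minimal sketch of the truncated-power idea described above, assuming a symmetric input matrix (the toy matrix and the deterministic initialization below are illustrative choices, not from the paper):

```python
# Truncated power method for the sparse eigenvalue problem
#   max x^T M x  s.t.  ||x||_2 = 1, ||x||_0 <= k.
import numpy as np

def truncated_power_method(M, k, iters=100):
    n = M.shape[0]
    x = np.ones(n) / np.sqrt(n)       # deterministic init for reproducibility
    for _ in range(iters):
        y = M @ x                      # power step
        drop = np.argsort(np.abs(y))[:-k]
        y[drop] = 0.0                  # truncate to the k largest magnitudes
        x = y / np.linalg.norm(y)      # renormalize
    return x

# Toy symmetric matrix whose dominant eigenvector is itself 2-sparse.
v = np.zeros(8)
v[[2, 5]] = [0.6, 0.8]
M = 5.0 * np.outer(v, v) + 0.1 * np.eye(8)
x_hat = truncated_power_method(M, k=2)
```

Each iteration is an ordinary power step followed by hard truncation, so the iterate always stays k-sparse and unit-norm.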

2017
Roberto M. Tumolo, Michele D’Urso, Giancarlo Prisco, Aniello Buonanno

In this paper, a method for the fast synthesis of planar, maximally thinned, and steerable arrays is proposed and tested on several benchmarks available in the literature. The method simultaneously optimizes the weight coefficients and sensor positions of a planar array without using global optimization schemes, properly exploiting convex-optimization-based algorithms. The resulting arrays are able to ...

Journal: Journal of Machine Learning Research, 2010
Julien Mairal, Francis R. Bach, Jean Ponce, Guillermo Sapiro

Sparse coding—that is, modelling data vectors as sparse linear combinations of basis elements—is widely used in machine learning, neuroscience, signal processing, and statistics. This paper focuses on the large-scale matrix factorization problem that consists of learning the basis set in order to adapt it to specific data. Variations of this problem include dictionary learning in signal process...
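As a rough illustration of the matrix-factorization problem this abstract describes, here is a hedged batch sketch using alternating minimization: ISTA (iterative soft thresholding) for the sparse codes, then a least-squares dictionary update. The paper's actual algorithm is an online method and differs from this simplification; all sizes and parameters below are illustrative:

```python
# Batch dictionary learning sketch:  min_{D,A} 0.5||X - D A||_F^2 + lam ||A||_1
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def dictionary_learning(X, n_atoms, lam=0.1, outer=20, inner=50, seed=0):
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((X.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    A = np.zeros((n_atoms, X.shape[1]))
    for _ in range(outer):
        step = 1.0 / np.linalg.norm(D, 2) ** 2          # ISTA step size 1/L
        for _ in range(inner):                           # sparse coding (lasso)
            A = soft_threshold(A - step * (D.T @ (D @ A - X)), step * lam)
        D = X @ np.linalg.pinv(A)                        # least-squares dictionary update
        norms = np.linalg.norm(D, axis=0)
        alive = norms > 1e-12                            # skip unused (dead) atoms
        D[:, alive] /= norms[alive]                      # unit-norm atoms...
        A[alive, :] *= norms[alive][:, None]             # ...compensated in the codes
    return D, A

# Synthetic data generated from a hidden dictionary with ~30%-dense codes.
rng = np.random.default_rng(1)
D_true = rng.standard_normal((20, 5))
codes = rng.standard_normal((5, 100)) * (rng.random((5, 100)) < 0.3)
X = D_true @ codes
D, A = dictionary_learning(X, n_atoms=5)
recon_err = np.linalg.norm(X - D @ A) / np.linalg.norm(X)
```

The alternating structure (codes given the dictionary, dictionary given the codes) is the common skeleton shared by batch and online variants of this problem.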

Journal: Neural Networks: the official journal of the International Neural Network Society, 2015
Jiachen Yang, Zhiyong Ding, Fei Guo, Huogen Wang, Nick Hughes

In this paper, we investigate the problem of optimizing multivariate performance measures and propose a novel algorithm for it. Unlike traditional machine learning methods, which optimize simple loss functions to learn a prediction function, the problem studied in this paper is how to learn an effective hyper-predictor for a tuple of data points, so that a complex loss function correspo...

Spectral unmixing of hyperspectral images is one of the most important research fields in remote sensing. Recently, the direct use of spectral libraries in spectral unmixing has been on the increase. In this approach, called sparse unmixing, neither an endmember extraction algorithm nor a priori determination of the number of endmembers is needed. Since spectral libraries usually contain highly correlated s...

2017
Luca Baldassarre, Massimiliano Pontil, Janaina Mourão-Miranda

Structured sparse methods have received significant attention in neuroimaging. These methods allow the incorporation of domain knowledge through additional spatial and temporal constraints in the predictive model and carry the promise of being more interpretable than non-structured sparse methods such as the LASSO or Elastic Net. However, although sparsity has often been advocated as leadi...

1994
Louis H. Ziantz, Boleslaw K. Szymanski

Sparse matrix-vector multiplication forms the heart of iterative linear solvers used widely in scientific computations (e.g., finite element methods). In such solvers, the matrix-vector product is computed repeatedly, often thousands of times, with updated values of the vector until convergence is achieved. In an SIMD architecture, each processor has to fetch the updated off-processor vector elemen...
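The repeated matrix-vector product described above is typically computed from a compressed sparse row (CSR) layout, which stores only the non-zero entries. A minimal sketch (the toy matrix is illustrative):

```python
# Sparse matrix-vector multiplication over CSR arrays:
#   data    - non-zero values, row by row
#   indices - column index of each value
#   indptr  - start offset of each row within data/indices
import numpy as np

def csr_matvec(data, indices, indptr, x):
    """Compute y = A @ x for A stored in CSR form."""
    n_rows = len(indptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        for j in range(indptr[i], indptr[i + 1]):
            y[i] += data[j] * x[indices[j]]
    return y

# A = [[4, 0, 1],
#      [0, 2, 0],
#      [3, 0, 5]] in CSR form:
data    = np.array([4.0, 1.0, 2.0, 3.0, 5.0])
indices = np.array([0, 2, 1, 0, 2])
indptr  = np.array([0, 2, 3, 5])
y = csr_matvec(data, indices, indptr, np.array([1.0, 1.0, 1.0]))  # -> [5, 2, 8]
```

Because each row touches only its own non-zeros, the cost per product is proportional to the number of stored entries rather than to the full matrix size, which is what makes thousands of repeated products affordable.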

Journal: CoRR, 2017
Santosh Nannuru, Kay L. Gemba, Peter Gerstoft, William S. Hodgkiss, Christoph F. Mecklenbräuker

Sparse Bayesian learning (SBL) has emerged as a fast and competitive method to perform sparse processing. The SBL algorithm, which is developed using a Bayesian framework, approximately solves a non-convex optimization problem using fixed point updates. It provides comparable performance and is significantly faster than convex optimization techniques used in sparse processing. We propose a sign...
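As a hedged sketch of the kind of fixed-point iteration SBL relies on, here are the standard EM updates for the per-coefficient prior variances in the model y = A x + n with x_i ~ N(0, gamma_i). The paper's exact update rule may differ; the problem sizes and noise level below are illustrative:

```python
# EM iterations for sparse Bayesian learning: large gamma_i marks an
# active coefficient, gamma_i -> 0 prunes it from the model.
import numpy as np

def sbl_em(A, y, sigma2=1e-3, iters=500):
    m, n = A.shape
    gamma = np.ones(n)
    mu = np.zeros(n)
    for _ in range(iters):
        AG = A * gamma                               # A @ diag(gamma)
        Sigma_y = sigma2 * np.eye(m) + AG @ A.T      # marginal covariance of y
        K = np.linalg.solve(Sigma_y, AG)             # Sigma_y^{-1} A Gamma
        mu = K.T @ y                                 # posterior mean of x
        post_var = gamma - np.sum(AG * K, axis=0)    # posterior variances of x
        gamma = mu ** 2 + post_var                   # EM fixed-point update
    return gamma, mu

# Underdetermined toy problem: 15 measurements, 30 unknowns, 2 active.
rng = np.random.default_rng(2)
A = rng.standard_normal((15, 30))
x_true = np.zeros(30)
x_true[[3, 17]] = [1.5, -2.0]
y = A @ x_true
gamma, mu = sbl_em(A, y)
```

Each iteration costs one m-by-m solve rather than a full convex program, which is the source of the speed advantage the abstract mentions.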

2011
Logan Grosenick

Methods that use an ℓ1-norm to encourage model sparsity are now widely applied across many disciplines. However, aggregating such sparse models across fits to resampled data remains an open problem. Because resampling approaches have been shown to be of great utility in reducing model variance and improving variable selection, a method able to generate a single sparse solution from multiple fit...

[Chart: number of search results per year]