Search results for: l1 norm
Number of results: 74,840
In this paper, we consider the partial inverse assignment problem under the l1 norm without bound constraints. We show that the partial inverse problem can be solved by a strongly polynomial algorithm. The technique for solving this problem can be extended to handle a special type of partial inverse 0-1 combinatorial optimization problems. © 2006 Elsevier B.V. All rights reserved.
An l1-norm penalized orthogonal forward regression (l1-POFR) algorithm is proposed based on the concept of leave-one-out mean square error (LOOMSE). Firstly, a new l1-norm penalized cost function is defined in the constructed orthogonal space, and each orthogonal basis is associated with an individually tunable regularization parameter. Secondly, due to orthogonal computation, the LOOMSE can be anal...
We present an algorithm that computes exactly (optimally) the S-sparse (1 ≤ S < D) maximum-L1-norm-projection principal component of a real-valued data matrix X ∈ R^{D×N} that contains N samples of dimension D. For fixed sample support N, the optimal L1-sparse algorithm has linear complexity in data dimension, O(D). For fixed dimension D (thus, fixed sparsity S), the optimal L1-sparse algorithm has ...
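The exact L1-principal-component idea described above can be illustrated with a known structural fact: the maximizer of the L1 projection lies in the finite set of directions X·b / ||X·b||_2 with b ∈ {−1, +1}^N, so for small N it can be found by exhaustive sign enumeration. The following is a minimal sketch under that assumption (the toy matrix and helper names are illustrative, not from the paper):

```python
# Sketch: exact L1-norm principal component of a small D x N data matrix
# by enumerating the 2^N candidate sign vectors b in {-1, +1}^N.
from itertools import product
import math

def l1_pc(X):
    """X: list of D rows of N entries. Returns the unit direction w
    maximizing sum_n |w . x_n| (the L1-norm projection) and that value."""
    D, N = len(X), len(X[0])
    best_w, best_val = None, -1.0
    for b in product((-1.0, 1.0), repeat=N):
        v = [sum(X[d][n] * b[n] for n in range(N)) for d in range(D)]  # X b
        nrm = math.sqrt(sum(c * c for c in v))
        if nrm == 0:
            continue
        w = [c / nrm for c in v]
        val = sum(abs(sum(w[d] * X[d][n] for d in range(D))) for n in range(N))
        if val > best_val:
            best_val, best_w = val, w
    return best_w, best_val

# Toy 2 x 4 data whose samples lie mostly along the first coordinate axis.
X = [[3.0, 2.0, -3.0, 0.1],
     [0.1, -0.2, 0.0, 0.1]]
w, val = l1_pc(X)
```

The enumeration costs O(2^N), which is exactly why the paper's fixed-N, O(D) complexity result is of interest.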
The notion of an L1-norm density estimator process indexed by a class of kernels is introduced. Then a functional central limit theorem and a Glivenko-Cantelli theorem are established for this process. While assembling the necessary machinery to prove these results, a body of Poissonization techniques and restricted chaining methods is developed, which is useful for studying weak convergence of...
Graph cuts have become an increasingly important tool for solving a number of energy minimization problems in computer vision and other fields. In this paper, the graph cut problem is reformulated as an unconstrained l1 norm minimization that can be solved effectively using interior point methods. This reformulation exposes connections between the graph cuts and other related continuous optimiz...
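The reformulation described above rests on a standard identity: for binary vertex labels x with x_s = 1 and x_t = 0, the objective Σ w_ij |x_i − x_j| equals the weight of the s-t cut induced by x, so minimizing this l1-type objective recovers the min cut. A minimal brute-force illustration on a tiny graph (the graph and its weights are assumptions for the demo, not from the paper):

```python
# Sketch: on binary labelings x (x_s = 1, x_t = 0), the l1 objective
# sum_{(i,j)} w_ij * |x_i - x_j| is exactly the s-t cut weight.
from itertools import product

def cut_value(weights, x):
    return sum(w * abs(x[i] - x[j]) for (i, j), w in weights.items())

def min_cut_bruteforce(weights, n, s, t):
    best = float("inf")
    for bits in product((0, 1), repeat=n):
        x = list(bits)
        if x[s] != 1 or x[t] != 0:
            continue
        best = min(best, cut_value(weights, x))
    return best

# Tiny 4-node example, s = 0, t = 3; weights chosen for illustration.
weights = {(0, 1): 3, (0, 2): 2, (1, 3): 2, (2, 3): 3, (1, 2): 1}
mc = min_cut_bruteforce(weights, 4, 0, 3)  # minimum cut weight is 5 here
```

The paper's contribution is then to relax x to continuous values and solve the resulting unconstrained l1 minimization with interior point methods rather than by enumeration.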
Expressing data vectors as sparse linear combinations of basis elements (dictionary) is widely used in machine learning, signal processing, and statistics. It has been found that dictionaries learned from data are more effective than off-the-shelf ones. Dictionary learning has become an important tool for computer vision. Traditional dictionary learning methods use quadratic loss function which...
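The sparse-coding step behind dictionary learning, i.e. finding a code a that minimizes 0.5·||x − D a||² + λ·||a||₁ for a fixed dictionary D, is commonly solved by iterative soft-thresholding (ISTA). A minimal sketch under that assumption (the tiny dictionary, step size, and λ below are illustrative choices, not the paper's method):

```python
# Sketch: ISTA for the l1-penalized sparse-coding subproblem
#   min_a 0.5 * ||x - D a||_2^2 + lam * ||a||_1
# with a fixed dictionary D.

def soft(v, t):
    # soft-thresholding, the proximal operator of the l1 norm
    return max(v - t, 0.0) - max(-v - t, 0.0)

def ista(D, x, lam, step, iters=200):
    m, k = len(D), len(D[0])
    a = [0.0] * k
    for _ in range(iters):
        r = [sum(D[i][j] * a[j] for j in range(k)) - x[i] for i in range(m)]  # D a - x
        g = [sum(D[i][j] * r[i] for i in range(m)) for j in range(k)]         # D^T r
        a = [soft(a[j] - step * g[j], step * lam) for j in range(k)]
    return a

# Orthonormal 2x2 dictionary; the signal lies along the first atom, so the
# l1 penalty should zero out the negligible second coefficient.
D = [[1.0, 0.0], [0.0, 1.0]]
x = [2.0, 0.05]
a = ista(D, x, lam=0.1, step=1.0)
```

With an orthonormal dictionary and unit step, ISTA reduces to a single soft-thresholding of the coefficients, which makes the sparsifying effect of the l1 term easy to see.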
The Johnson-Lindenstrauss Lemma shows that any set of n points in Euclidean space can be mapped linearly down to O((log n)/ε²) dimensions such that all pairwise distances are distorted by at most 1 + ε. We study the following basic question: Does there exist an analogue of the Johnson-Lindenstrauss Lemma for the l1 norm? Note that the Johnson-Lindenstrauss Lemma gives a linear embedding which is inde...
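The Euclidean guarantee that the question above takes as its starting point is easy to see empirically: a random Gaussian projection to k = O((log n)/ε²) dimensions approximately preserves all pairwise l2 distances. A seeded sketch (the dimensions and tolerance are assumptions chosen for a quick demo):

```python
# Sketch: random Gaussian projection approximately preserving pairwise
# Euclidean distances, as in the Johnson-Lindenstrauss Lemma.
import math
import random

rng = random.Random(0)
d, k, n = 200, 60, 8   # original dim, target dim, number of points
pts = [[rng.gauss(0, 1) for _ in range(d)] for _ in range(n)]

# Projection with N(0, 1/k) entries, so E||A x||^2 = ||x||^2.
A = [[rng.gauss(0, 1) / math.sqrt(k) for _ in range(d)] for _ in range(k)]

def project(p):
    return [sum(A[i][j] * p[j] for j in range(d)) for i in range(k)]

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

proj = [project(p) for p in pts]
ratios = [dist(proj[i], proj[j]) / dist(pts[i], pts[j])
          for i in range(n) for j in range(i + 1, n)]
```

All distance ratios concentrate near 1. The abstract's question is precisely whether any such dimension-independent linear map can exist for the l1 norm in place of l2.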
This project surveys and examines optimization approaches proposed for parameter estimation in Least Squares linear regression models with an L1 penalty on the regression coefficients. We first review linear regression and regularization, and both motivate and formalize this problem. We then give a detailed analysis of 8 of the varied approaches that have been proposed for optimizing this objec...
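One of the simplest members of the family of approaches such a survey covers is coordinate descent on the L1-penalized least-squares (lasso) objective 0.5·||y − X w||² + λ·||w||₁, where each coordinate update is a closed-form soft-thresholding. A minimal sketch under that assumption (the toy design matrix and λ are illustrative, not from the project):

```python
# Sketch: coordinate descent for the lasso objective
#   min_w 0.5 * ||y - X w||_2^2 + lam * ||w||_1

def soft(v, t):
    # soft-thresholding operator
    return max(v - t, 0.0) - max(-v - t, 0.0)

def lasso_cd(X, y, lam, iters=100):
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # partial residual leaving out feature j
            r = [y[i] - sum(X[i][k] * w[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            w[j] = soft(rho, lam) / z
    return w

# y depends only on the first feature; the l1 penalty should zero out
# the second, weakly correlated coefficient.
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]]
y = [2.0, 0.1, 2.0, 0.1]
w = lasso_cd(X, y, lam=0.5)
```

Because the columns here are orthogonal, each update is exact and the method converges in one sweep; with correlated features, repeated sweeps are needed, which is where the more elaborate optimization approaches the project analyzes come in.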
We prove additivity of the minimal conditional entropy associated with a quantum channel Φ, represented by a completely positive (CP), trace-preserving map, when the infimum of S(γ12) − S(γ1) is restricted to states of the form (I ⊗ Φ)(|ψ⟩⟨ψ|). We show that this follows from multiplicativity of the completely bounded norm of Φ considered as a map from L1 → Lp for Lp spaces defined by the Schatten ...
Given an arbitrary measure μ, this study shows that the set of norm attaining multilinear forms is not dense in the space of all continuous multilinear forms on L1(μ). However, we have the density if and only if μ is purely atomic. Furthermore, the study presents an example of a Banach space X in which the set of norm attaining operators from X into X∗ is dense in the space of all bounded linea...