Search results for: sparse non

Number of results: 1367563

2009
Sotiris E. Nikoletseas Christoforos Raptopoulos Paul G. Spirakis

An intersection graph of n vertices assumes that each vertex is equipped with a subset of a global label set. Two vertices share an edge when their label sets intersect. Random Intersection Graphs (RIGs) (as defined in [18, 31]) consider label sets formed by the following experiment: each vertex, independently and uniformly, examines all the labels (m in total) one by one. Each examination is i...
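The sampling experiment described above can be sketched directly. The helper below follows the standard G(n, m, p) parameterization of random intersection graphs, where p is the per-label inclusion probability (the truncated abstract cuts off before naming it); the function name and defaults are illustrative, not from the paper:

```python
import random

def random_intersection_graph(n, m, p, seed=0):
    """Sample a random intersection graph G(n, m, p): each of n vertices
    independently keeps each of the m labels with probability p; two
    vertices are adjacent iff their label sets intersect."""
    rng = random.Random(seed)
    labels = [{j for j in range(m) if rng.random() < p}
              for _ in range(n)]
    edges = {(u, v) for u in range(n) for v in range(u + 1, n)
             if labels[u] & labels[v]}
    return labels, edges

labels, edges = random_intersection_graph(n=20, m=10, p=0.3)
```

Note that edges are not independent here: two edges sharing a vertex are correlated through that vertex's label set, which is what distinguishes RIGs from Erdős–Rényi graphs.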

Journal: Statistical Analysis and Data Mining 2016
Yuting Ma Tian Zheng

This paper proposes a boosting-based solution addressing metric learning problems for high-dimensional data. Distance measures have been used as natural measures of (dis)similarity and served as the foundation of various learning methods. The efficiency of distance-based learning methods heavily depends on the chosen distance metric. With increasing dimensionality and complexity of data, howeve...

2010
Marius Kloft Ulf Brefeld Soeren Sonnenburg Alexander Zien Pavel Laskov Motoaki Kawanabe Vojtech Franc Peter Gehler Gunnar Raetsch Peter Bartlett

Security issues are crucial in a number of machine learning applications, especially in scenarios dealing with human activity rather than natural phenomena (e.g., information ranking, spam detection, malware detection, etc.). It is to be expected in such cases that learning algorithms will have to deal with manipulated data aimed at hampering decision making. Although some previous work address...

2008
ÖZGÜR YILMAZ

In this note, we address the theoretical properties of ∆p, a class of compressed sensing decoders that rely on ℓp minimization with p ∈ (0, 1) to recover estimates of sparse and compressible signals from incomplete and inaccurate measurements. In particular, we extend the results of Candès, Romberg and Tao [3] and Wojtaszczyk [30] regarding the decoder ∆1, based on ℓ1 minimization, to ∆p wi...
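The baseline ∆1 decoder referenced here (basis pursuit) can be phrased as a linear program; the sketch below is a generic SciPy illustration, not code from the paper, and the ∆p case with p < 1 is non-convex and needs different machinery:

```python
import numpy as np
from scipy.optimize import linprog

def bp_decode(A, b):
    """Basis pursuit, i.e. the Delta_1 decoder: min ||x||_1 s.t. Ax = b.
    Rewritten as an LP over the split x = x_pos - x_neg, x_pos, x_neg >= 0."""
    m, n = A.shape
    c = np.ones(2 * n)                 # minimize sum(x_pos) + sum(x_neg)
    A_eq = np.hstack([A, -A])          # enforce A (x_pos - x_neg) = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
    z = res.x
    return z[:n] - z[n:]

rng = np.random.default_rng(0)
A = rng.standard_normal((15, 30))      # Gaussian measurement matrix
x_true = np.zeros(30)
x_true[[2, 7, 11]] = [1.5, -2.0, 0.8]  # 3-sparse signal
x_hat = bp_decode(A, A @ x_true)       # decode from exact measurements
```

With enough Gaussian measurements relative to the sparsity level, the LP solution typically coincides with the sparsest feasible signal.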

2010
Marius Kloft Ulf Brefeld Soeren Sonnenburg Alexander Zien

Learning linear combinations of multiple kernels is an appealing strategy when the right choice of features is unknown. Previous approaches to multiple kernel learning (MKL) promote sparse kernel combinations to support interpretability and scalability. Unfortunately, this ℓ1-norm MKL is rarely observed to outperform trivial baselines in practical applications. To allow for robust kernel mixtur...

Journal: Journal of Advances in Computer Research 2016
Atefe Aghaei Sajjad Tavassoli

Human action recognition is an important problem in computer vision. One of the methods recently applied to it is sparse coding. Conventional sparse coding algorithms learn dictionaries and codes in an unsupervised manner and neglect the class information available in the training set. To address this, in this paper we use a discriminative sparse code based on multi-manifolds. ...
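The unsupervised sparse-coding step the abstract contrasts itself with can be illustrated with plain ISTA (iterative soft-thresholding); this is a generic lasso solver, not the discriminative multi-manifold method proposed in the paper:

```python
import numpy as np

def ista_sparse_code(D, x, lam=0.1, iters=300):
    """Solve argmin_a 0.5*||x - D a||^2 + lam*||a||_1 by ISTA:
    a gradient step on the quadratic term followed by soft-thresholding."""
    L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        g = a - (D.T @ (D @ a - x)) / L  # gradient step
        a = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)           # unit-norm dictionary atoms
a_true = np.zeros(50)
a_true[[3, 17]] = [1.0, -0.5]
x = D @ a_true
a_hat = ista_sparse_code(D, x, lam=0.05)
```

Supervised variants like the one described above typically add class-dependent terms to this objective so that codes from different classes become separable.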

Journal: TACL 2016
Joris Pelemans Noam Shazeer Ciprian Chelba

We present Sparse Non-negative Matrix (SNM) estimation, a novel probability estimation technique for language modeling that can efficiently incorporate arbitrary features. We evaluate SNM language models on two corpora: the One Billion Word Benchmark and a subset of the LDC English Gigaword corpus. Results show that SNM language models trained with n-gram features are a close match for the well...

Journal: Journal of Machine Learning Research 2012
Fei Yan Josef Kittler Krystian Mikolajczyk Muhammad Atif Tahir

Sparsity-inducing multiple kernel Fisher discriminant analysis (MK-FDA) has been studied in the literature. Building on recent advances in non-sparse multiple kernel learning (MKL), we propose a non-sparse version of MK-FDA, which imposes a general lp norm regularisation on the kernel weights. We formulate the associated optimisation problem as a semi-infinite program (SIP), and adapt an iterat...

2012
Dong Wang Javier Tejedor

Convolutive non-negative matrix factorization (CNMF) and its sparse version, convolutive non-negative sparse coding (CNSC), exhibit great success in speech processing. A particular limitation of the current CNMF/CNSC approaches is that the convolution ranges of the bases in learning are identical, resulting in patterns covering the same time span. This is obviously not ideal, as most sequential s...

2005
Matthias Heiler Christoph Schnörr

Reverse-convex programming (RCP) concerns global optimization of a specific class of non-convex optimization problems. We show that a recently proposed model for sparse non-negative matrix factorization (NMF) belongs to this class. Based on this result, we design two algorithms for sparse NMF that solve sequences of convex second-order cone programs (SOCPs). We work out some well-defined modifica...
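For the sparse NMF model itself, a common baseline is multiplicative updates with an ℓ1 penalty on the coefficients; the sketch below is that simple heuristic, not the SOCP-based algorithms developed in the paper:

```python
import numpy as np

def sparse_nmf(V, r, lam=0.1, iters=200, seed=0):
    """Toy sparse NMF: heuristically minimize ||V - W H||_F^2 + lam*sum(H)
    subject to W, H >= 0, via multiplicative updates (no global-optimality
    guarantee, unlike the RCP/SOCP approach)."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 0.1
    H = rng.random((r, n)) + 0.1
    eps = 1e-9
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + lam + eps)  # l1-penalized H update
        W *= (V @ H.T) / (W @ H @ H.T + eps)        # plain W update
    return W, H

rng = np.random.default_rng(1)
V = rng.random((30, 3)) @ rng.random((3, 40))       # rank-3, non-negative
W, H = sparse_nmf(V, r=3, lam=0.01)
```

Because the updates multiply by non-negative factors, W and H stay non-negative throughout; the lam term in the denominator is what shrinks small entries of H toward zero.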
