Search results for: empirical matrix

Number of results: 563469

Estimation procedures for nonstationary Markov chains appear to be relatively sparse. This work introduces empirical Bayes estimators for the transition probability matrix of a finite nonstationary Markov chain. The data are assumed to be of a panel study type, in which each data set consists of a sequence of observations on N ≥ 2 independent and identically dis...
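
The snippet describes empirical Bayes estimation of time-varying transition matrices from panel data. Purely as an illustration, here is a minimal sketch of one common empirical-Bayes-flavoured estimator for this setting: per-time-step transition counts are shrunk toward the pooled (time-homogeneous) transition frequencies via Dirichlet smoothing. The function name, the shrinkage weight `alpha`, and the choice of the pooled frequencies as the prior mean are assumptions made here, not the estimator proposed in the paper.

```python
import numpy as np

def eb_transition_matrices(panel, n_states, alpha=1.0):
    """Empirical-Bayes style estimate of time-varying transition matrices.

    panel : array of shape (N, T) with integer states in {0, ..., n_states-1};
            N independent chains observed at T common time points (panel data).
    alpha : strength of shrinkage toward the pooled (time-homogeneous) estimate.
    Returns an array of shape (T-1, n_states, n_states); row i of matrix t
    estimates P(X_{t+1} = j | X_t = i).
    """
    panel = np.asarray(panel)
    N, T = panel.shape

    # Pooled transition counts over all time steps -> data-driven "prior" mean.
    pooled = np.zeros((n_states, n_states))
    for t in range(T - 1):
        np.add.at(pooled, (panel[:, t], panel[:, t + 1]), 1.0)
    prior = (pooled + 1e-12) / (pooled.sum(axis=1, keepdims=True) + n_states * 1e-12)

    # Per-time-step counts shrunk toward the pooled prior (Dirichlet smoothing).
    out = np.zeros((T - 1, n_states, n_states))
    for t in range(T - 1):
        counts = np.zeros((n_states, n_states))
        np.add.at(counts, (panel[:, t], panel[:, t + 1]), 1.0)
        post = counts + alpha * prior
        out[t] = post / post.sum(axis=1, keepdims=True)
    return out
```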

2009
Rodolphe Jenatton Jean-Yves Audibert Francis Bach

We consider the empirical risk minimization problem for linear supervised learning, with regularization by structured sparsity-inducing norms. These are defined as sums of Euclidean norms on certain subsets of variables, extending the usual l1-norm and the group l1-norm by allowing the subsets to overlap. This leads to a specific set of allowed nonzero patterns for the solutions of such problem...
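
The structured sparsity-inducing norms described here are sums of Euclidean norms over (possibly overlapping) groups of variables. A minimal sketch, assuming nothing beyond what the snippet states, of evaluating such a penalty and of how it specializes to the usual l1-norm and the group l1-norm; the third call shows an overlapping grouping, which is the extension the abstract emphasizes.

```python
import numpy as np

def overlapping_group_norm(w, groups):
    """Structured sparsity penalty: sum of Euclidean norms over groups of variables.

    groups is a list of index lists; groups may overlap. With singleton groups
    this is the l1-norm; with a disjoint partition it is the group l1-norm.
    """
    w = np.asarray(w, dtype=float)
    return sum(np.linalg.norm(w[g]) for g in groups)

w = np.array([1.0, -2.0, 0.0, 3.0])
print(overlapping_group_norm(w, [[0], [1], [2], [3]]))    # l1-norm: 6.0
print(overlapping_group_norm(w, [[0, 1], [2, 3]]))        # group l1: sqrt(5) + 3
print(overlapping_group_norm(w, [[0, 1], [1, 2, 3]]))     # overlapping groups
```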

Journal: CoRR 2017
Sohail Bahmani Justin Romberg

We consider the problem of estimating a solution to (random) systems of equations that involve convex nonlinearities which has applications in machine learning and signal processing. Conventional estimators based on empirical risk minimization generally lead to non-convex programs that are often computationally intractable. We propose anchored regression, a new approach that utilizes an anchor ...
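
The snippet is truncated, so the exact program is not visible; the following is only a plausible reading of the anchored idea for the quadratic case: maximize alignment with an anchor vector over the convex feasible set induced by the nonlinear equations, which yields a convex program instead of a non-convex empirical risk minimization. The quadratic nonlinearity, the way the anchor is formed, and the use of cvxpy are illustrative assumptions, not the paper's formulation.

```python
import numpy as np
import cvxpy as cp

# Illustrative data: equations y_i = f(<a_i, x*>) with the convex nonlinearity f(u) = u**2.
rng = np.random.default_rng(0)
n, m = 20, 200
x_true = rng.standard_normal(n)
A = rng.standard_normal((m, n))
y = (A @ x_true) ** 2

# A crude anchor: a noisy vector correlated with x_true.  In practice the anchor
# would itself have to be estimated from the data; how is not specified here.
anchor = x_true + 0.5 * rng.standard_normal(n)

# Convex surrogate: maximize alignment with the anchor over the convex set
# {x : f(<a_i, x>) <= y_i}, rather than minimizing a non-convex empirical risk.
x = cp.Variable(n)
prob = cp.Problem(cp.Maximize(anchor @ x), [cp.square(A @ x) <= y])
prob.solve()
print("relative error:", np.linalg.norm(x.value - x_true) / np.linalg.norm(x_true))
```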

1997
Imran A. Pirwani

An experimental comparison and evaluation of two different matrix covers is presented. We begin by presenting a motivation for using matrix covers on sparse matrices as a method of computing a matrix-vector product in parallel. Two different matrix covers are introduced and discussed: a stripe cover proposed by Melhem and a staircase cover proposed by Heath and Pemmaraju. A report is presented o...
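
The stripe and staircase covers themselves are not reproduced here; the sketch below only illustrates the generic idea the abstract motivates, namely splitting a sparse matrix into stripes whose matrix-vector contributions touch disjoint parts of the output and can therefore be computed independently (and hence in parallel). The contiguous row-block partition and the scipy-based setup are illustrative assumptions.

```python
import numpy as np
from scipy.sparse import random as sparse_random

def stripe_matvec(A_csr, x, n_stripes=4):
    """Sparse matrix-vector product computed stripe by stripe.

    Rows are split into contiguous stripes; each stripe writes a disjoint slice
    of the output vector, so the stripes could be handed to separate workers
    without synchronization.  (This is only the generic row-stripe idea, not the
    specific stripe/staircase covers compared in the paper.)
    """
    m = A_csr.shape[0]
    y = np.zeros(m)
    bounds = np.linspace(0, m, n_stripes + 1, dtype=int)
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        y[lo:hi] = A_csr[lo:hi, :] @ x   # independent work per stripe
    return y

A = sparse_random(1000, 1000, density=0.01, format="csr", random_state=0)
x = np.ones(1000)
assert np.allclose(stripe_matvec(A, x), A @ x)
```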

2016
RYAN MURRAY

This work considers the problem of binary classification: given training data x1, ..., xn from a certain population, together with associated labels y1, ..., yn ∈ {0, 1}, determine the best label for an element x not among the training data. More specifically, this work considers a variant of the regularized empirical risk functional which is defined intrinsically to the observed data and d...
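
One standard example of a regularized empirical risk functional that is defined intrinsically on the observed points is graph-Laplacian regularization; the sketch below uses it purely as an illustration, since the specific variant studied in this work (and its treatment of the new element x) may differ. The graph construction, `lam`, and `sigma` are assumptions made here.

```python
import numpy as np

def classify_new_points(X_train, y_train, X_new, lam=0.1, sigma=0.5):
    """Label new points with a regularized functional built on the data themselves.

    Builds a Gaussian similarity graph over training AND new points, minimizes
        sum_{labeled i} (u_i - y_i)^2 + lam * u^T L u,
    a common graph-Laplacian regularization of the empirical risk, and then
    thresholds u at 1/2 on the new points.
    """
    X = np.vstack([np.asarray(X_train, float), np.asarray(X_new, float)])
    n_lab = len(X_train)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma**2))
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(1)) - W
    M = np.zeros(len(X))
    M[:n_lab] = 1.0                                   # data fidelity only on labeled points
    rhs = np.concatenate([np.asarray(y_train, float), np.zeros(len(X_new))])
    u = np.linalg.solve(np.diag(M) + lam * L, M * rhs)
    return (u[n_lab:] > 0.5).astype(int)

X_tr = np.array([[0.0], [0.2], [2.0], [2.2]])
y_tr = np.array([0, 0, 1, 1])
print(classify_new_points(X_tr, y_tr, np.array([[0.1], [2.1]])))  # expected: [0 1]
```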

2013
Robert J. Durrant Ata Kabán

We derive sharp bounds on the generalization error of a generic linear classifier trained by empirical risk minimization on randomly projected data. We make no restrictive assumptions (such as sparsity or separability) on the data: instead we use the fact that, in a classification setting, the question of interest is really ‘what is the effect of random projection on the predicted class labels?’...
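
A small, self-contained experiment in the spirit of the question quoted above (not taken from the paper): project the data with a Gaussian random matrix, train a linear classifier on the projected data, and measure how often its predicted labels agree with those of a classifier trained on the original data. Logistic regression stands in for the generic ERM-trained linear classifier; the dimensions and the synthetic data are made up.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d, k = 500, 100, 20
X = rng.standard_normal((n, d))
w = rng.standard_normal(d)
y = (X @ w > 0).astype(int)                      # linearly separable synthetic labels

R = rng.standard_normal((d, k)) / np.sqrt(k)     # Gaussian random projection to k dims
clf_full = LogisticRegression(max_iter=1000).fit(X, y)
clf_proj = LogisticRegression(max_iter=1000).fit(X @ R, y)

agreement = np.mean(clf_full.predict(X) == clf_proj.predict(X @ R))
print(f"label agreement between original and projected classifiers: {agreement:.3f}")
```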

2008
Vladimir Koltchinskii Ming Yuan

A problem of learning a prediction rule that is approximated in a linear span of a large number of reproducing kernel Hilbert spaces is considered. The method is based on penalized empirical risk minimization with an ℓ1-type complexity penalty. Oracle inequalities on the excess risk of such estimators are proved, showing that the method is adaptive to the unknown degree of “sparsity” of the target function.
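
A minimal sketch of penalized empirical risk minimization over a span of several reproducing kernel Hilbert spaces with an ℓ1-type penalty (here, the sum over kernels of the RKHS norms of the components, which drives unneeded components to zero). The kernels, data, regularization weight, and the exact form of the penalty are illustrative assumptions; the penalty analyzed in the paper may differ.

```python
import numpy as np
import cvxpy as cp
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)
n = 80
x = np.sort(rng.uniform(-2, 2, n))
y = np.sin(2 * x) + 0.1 * rng.standard_normal(n)

def gauss_kernel(x, s):
    return np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * s**2))

kernels = [gauss_kernel(x, s) for s in (0.1, 0.5, 2.0)]          # three candidate RKHSs
roots = [np.real(sqrtm(K + 1e-8 * np.eye(n))) for K in kernels]  # K^{1/2} for RKHS norms

alphas = [cp.Variable(n) for _ in kernels]
f = sum(K @ a for K, a in zip(kernels, alphas))                  # f = sum_j f_j, f_j in H_j
penalty = sum(cp.norm(R @ a) for R, a in zip(roots, alphas))     # sum_j ||f_j||_{H_j}
prob = cp.Problem(cp.Minimize(cp.sum_squares(y - f) / n + 0.05 * penalty))
prob.solve()
print([float(np.linalg.norm(a.value)) for a in alphas])          # near-zero => kernel dropped
```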

2011
Ulrich Rückert Marius Kloft

The success of regularized risk minimization approaches to classification with linear models depends crucially on the selection of a regularization term that matches with the learning task at hand. If the necessary domain expertise is rare or hard to formalize, it may be difficult to find a good regularizer. On the other hand, if plenty of related or similar data is available, it is a natural a...

2015
Barry Haddow Matthias Huck Alexandra Birch Nikolay Bogoychev Philipp Koehn

This paper describes the submission of the University of Edinburgh and the Johns Hopkins University for the shared translation task of the EMNLP 2015 Tenth Workshop on Statistical Machine Translation (WMT 2015). We set up phrase-based statistical machine translation systems for all ten language pairs of this year’s evaluation campaign, which are English paired with Czech, Finnish, French, Germa...

2010
Vladimir Koltchinskii Stas Minsker

Let S be an arbitrary measurable space, T ⊂ R, and (X, Y) be a random couple in S × T with unknown distribution P. Let (X1, Y1), ..., (Xn, Yn) be i.i.d. copies of (X, Y). Denote by Pn the empirical distribution based on the sample (Xi, Yi), i = 1, ..., n. Let H be a set of uniformly bounded functions on S. Suppose that H is equipped with a σ-algebra and with a finite measure μ. Let D be a ...
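
For readability, the empirical distribution Pn mentioned in the snippet is, in the standard notation,

```latex
P_n \;=\; \frac{1}{n}\sum_{i=1}^{n}\delta_{(X_i,\,Y_i)},
\qquad\text{so that}\qquad
P_n g \;=\; \frac{1}{n}\sum_{i=1}^{n} g(X_i, Y_i)
\quad\text{for measurable } g \text{ on } S\times T.
```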

Chart of the number of search results per year
