Search results for: loss minimization

Number of results: 475131

2010
Vladimir Koltchinskii Stas Minsker

Let S be an arbitrary measurable space, T ⊂ R and (X,Y ) be a random couple in S × T with unknown distribution P. Let (X1, Y1), . . . , (Xn, Yn) be i.i.d. copies of (X,Y ). Denote by Pn the empirical distribution based on the sample (Xi, Yi), i = 1, . . . , n. Let H be a set of uniformly bounded functions on S. Suppose that H is equipped with a σ-algebra and with a finite measure μ. Let D be a ...

Journal: Journal of Machine Learning Research, 2009
John C. Duchi Yoram Singer

We describe, analyze, and experiment with a framework for empirical loss minimization with regularization. Our algorithmic framework alternates between two phases. On each iteration we first perform an unconstrained gradient descent step. We then cast and solve an instantaneous optimization problem that trades off minimization of a regularization term while keeping close proximity to the result...
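The two-phase iteration described above — an unconstrained gradient step followed by a trade-off between the regularizer and proximity to that step's result — can be sketched as a proximal gradient update. The sketch below uses an l1 regularizer, for which the second phase has a closed form (soft-thresholding); the function name and parameterization are our own illustration, not the paper's API.

```python
import numpy as np

def prox_step(w, grad, eta, lam):
    """One iteration of the two-phase scheme: a gradient descent step,
    then the proximal operator of the regularizer. Illustrated with an
    l1 regularizer, whose proximal map is soft-thresholding."""
    # Phase 1: unconstrained gradient descent step
    w_half = w - eta * grad
    # Phase 2: closed-form minimizer of lam*||v||_1 + ||v - w_half||^2 / (2*eta)
    return np.sign(w_half) * np.maximum(np.abs(w_half) - eta * lam, 0.0)
```

With `eta * lam = 0.1`, a coordinate at 1.0 shrinks to 0.9 and one at -0.05 is clipped to exactly zero, which is how the regularization term produces sparsity.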

2015
Carla E. Brodley

We describe an experimental study of pruning methods for decision tree classifiers in two learning situations: minimizing loss and probability estimation. In addition to the two most common methods for error minimization, CART's cost-complexity pruning and C4.5's error-based pruning, we study the extension of cost-complexity pruning to loss and two pruning variants based on Laplace corrections. W...

2006
Samuel S. Gross Olga Russakovsky Chuong B. Do Serafim Batzoglou

We consider the problem of training a conditional random field (CRF) to maximize per-label predictive accuracy on a training set, an approach motivated by the principle of empirical risk minimization. We give a gradient-based procedure for minimizing an arbitrarily accurate approximation of the empirical risk under a Hamming loss function. In experiments with both simulated and real data, our o...
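The Hamming loss targeted above is the fraction of sequence positions where the predicted labeling disagrees with the gold labeling, so minimizing its empirical risk directly targets per-label accuracy rather than whole-sequence correctness. A minimal sketch of the loss itself (not the paper's smoothed approximation or its gradient procedure):

```python
def hamming_loss(pred, gold):
    """Per-label Hamming loss between two label sequences of equal
    length: the fraction of positions where they disagree."""
    assert len(pred) == len(gold)
    return sum(p != g for p, g in zip(pred, gold)) / len(pred)
```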

Journal: Journal of Machine Learning Research, 2015
Sivan Sabato Shai Shalev-Shwartz Nathan Srebro Daniel J. Hsu Tong Zhang

We consider the problem of learning a non-negative linear classifier with an ℓ1-norm of at most k, and a fixed threshold, under the hinge loss. This problem generalizes the problem of learning a k-monotone disjunction. We prove that we can learn efficiently in this setting, at a rate which is linear in both k and the size of the threshold, and that this is the best possible rate. We provide an e...
