Search results for: loss minimization

Number of results: 475,131

2012
Yaoliang Yu, James Neufeld, Ryan Kiros, Xinhua Zhang, Dale Schuurmans

We demonstrate that almost all nonparametric dimensionality reduction methods can be expressed by a simple procedure: regularized loss minimization plus singular value truncation. By distinguishing the role of the loss and regularizer in such a process, we recover a factored perspective that reveals some gaps in the current literature. Beyond identifying a useful new loss for manifold unfolding...
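
As a hedged illustration of that two-stage recipe, a minimal NumPy sketch: the closed-form minimizer of a ridge-regularized squared loss, followed by singular value truncation to rank k. The function name, the choice of squared loss and ridge regularizer, and the data are assumptions for illustration (this particular choice collapses to a PCA-like embedding), not the paper's construction.

import numpy as np

def svt_embed(X, k, lam=0.1):
    # Step 1 (regularized loss minimization): the minimizer of
    # ||X - Z||_F^2 + lam * ||Z||_F^2 has the closed form Z = X / (1 + lam).
    Z = X / (1.0 + lam)
    # Step 2 (singular value truncation): keep only the top-k singular values.
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    s[k:] = 0.0
    return (U * s)[:, :k]  # k-dimensional coordinates

X = np.random.randn(100, 20)
Y = svt_embed(X - X.mean(axis=0), k=2)  # 2-D embedding of centered data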

2012
M. Manasa, N. Narendar Reddy

Minimization of line losses and improvement of power quality in distribution systems are among the most challenging problems, particularly when it is not economical to upgrade the entire feeder system. This paper presents a new method to achieve the minimum line-loss condition and improve power quality in radial and loop distribution systems by using Unified Power Flow Controllers (UPFC), one of the...
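
As a numeric aside on the quantity being minimized: total resistive line loss on a feeder is the sum of I^2 * R over its branches, so any compensation that lowers branch currents lowers the loss. The sketch below only illustrates that arithmetic; the branch values and the post-compensation currents are invented, and a real UPFC study requires a power-flow solver.

import numpy as np

# Toy radial feeder: three branches with assumed resistances (ohm)
# and assumed branch current magnitudes (A) before/after compensation.
R = np.array([0.20, 0.15, 0.10])
I_base = np.array([80.0, 60.0, 40.0])   # without compensation (assumed)
I_comp = np.array([70.0, 55.0, 38.0])   # with compensation (assumed)

line_loss = lambda I: float(np.sum(I**2 * R))  # total I^2 R loss in watts
print(line_loss(I_base), line_loss(I_comp))    # loss drops with lower currents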

Journal: CoRR 2016
Hongyang Xue, Deng Cai

In [7], Mozerov et al. propose to perform stereo matching as a two-step energy minimization problem. They formulate cost filtering as a local energy minimization model, and solve the fully connected MRF model and the locally connected MRF model sequentially. In this paper we intend to combine the two steps of energy minimization in order to improve stereo matching results. We propose to jointly ...
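
For orientation, a minimal sketch of the kind of energy such an MRF formulation assigns to a disparity map: a unary matching-cost term plus a pairwise smoothness term. The absolute-difference cost, truncated-linear smoothness, weight lam, and restriction to horizontal neighbors are all simplifying assumptions, and no solver (cost filtering or otherwise) is included.

import numpy as np

def stereo_energy(d, left, right, lam=0.5, trunc=3):
    """Energy of an integer disparity map d (h x w) for a rectified
    grayscale pair (left, right): unary absolute-difference matching
    cost plus a truncated-linear penalty on horizontal neighbors."""
    h, w = left.shape
    cols = np.arange(w)
    shifted = np.clip(cols[None, :] - d, 0, w - 1)        # matched columns
    data = np.abs(left - right[np.arange(h)[:, None], shifted]).sum()
    smooth = np.minimum(np.abs(np.diff(d, axis=1)), trunc).sum()
    return data + lam * smooth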

Journal: CoRR 2017
Tianbao Yang, Zhe Li, Lijun Zhang

In this paper, we present a simple analysis establishing high-probability fast rates of empirical minimization for stochastic composite optimization over a finite-dimensional bounded convex set with exponentially concave loss functions and an arbitrary convex regularizer. To the best of our knowledge, this result is the first of its kind. As a byproduct, we can directly obtain the fast rate with ...
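
For context, the standard definition and the shape such fast-rate statements typically take (a hedged note, with constants and logarithmic factors omitted; d is the dimension and n the sample size): a loss $f$ is $\eta$-exp-concave if $x \mapsto \exp(-\eta f(x))$ is concave, and for such losses fast-rate results bound the excess risk as
\[
  \mathbb{E}\big[F(\hat{x})\big] \;-\; \min_{x \in \mathcal{X}} \mathbb{E}\big[F(x)\big]
  \;=\; O\!\left(\frac{d}{\eta n}\right),
\]
in contrast with the slow rate $O(1/\sqrt{n})$ available for generic convex Lipschitz losses.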

Journal: Journal of Machine Learning Research 2014
Xiaolin Huang, Lei Shi, Johan A. K. Suykens

The ramp loss is a robust but non-convex loss for classification. Compared with other non-convex losses, a local minimum of the ramp loss can be found effectively; the effectiveness of local search comes from the piecewise linearity of the ramp loss. Motivated by the fact that the ℓ1-penalty is piecewise linear as well, the ℓ1-penalty is applied to the ramp loss, resulting in a ramp loss linea...
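
A minimal sketch of the loss itself, assuming the common parameterization of the ramp as a clipped hinge; the clip level s and the ℓ1-penalized objective below are illustrative, not the paper's exact formulation.

import numpy as np

def ramp_loss(margins, s=0.0):
    # Ramp R_s(z) = min(1 - s, max(0, 1 - z)): a hinge loss clipped so
    # badly misclassified points (z << 0) contribute at most a constant.
    # Piecewise linear in z, which is what makes local search effective.
    return np.clip(1.0 - margins, 0.0, 1.0 - s)

def objective(w, X, y, C=1.0):
    # Ramp loss + l1 penalty: both terms are piecewise linear in w.
    return C * ramp_loss(y * (X @ w)).sum() + np.abs(w).sum()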

Journal: CoRR 2015
Dzmitry Bahdanau, Dmitriy Serdyuk, Philemon Brakel, Nan Rosemary Ke, Jan Chorowski, Aaron C. Courville, Yoshua Bengio

Often, the performance on a supervised machine learning task is evaluated with a task loss function that cannot be optimized directly. Examples of such loss functions include the classification error, the edit distance and the BLEU score. A common workaround for this problem is to instead optimize a surrogate loss function, such as cross-entropy or hinge loss. In order for this rem...
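
A toy contrast between the two kinds of loss, assuming binary labels in {-1, +1}: the 0-1 task loss is piecewise constant (zero gradient almost everywhere), so training minimizes a smooth surrogate such as the logistic loss instead. Names and data below are illustrative.

import numpy as np

def task_loss(scores, y):
    # 0-1 classification error: what we care about, but not usefully
    # differentiable with respect to the scores.
    return np.mean(np.sign(scores) != y)

def surrogate_loss(scores, y):
    # Logistic (cross-entropy) surrogate: a smooth stand-in for the 0-1 loss.
    return np.mean(np.log1p(np.exp(-y * scores)))

y = np.array([1, -1, 1])
scores = np.array([0.3, -0.2, -0.1])
print(task_loss(scores, y), surrogate_loss(scores, y))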

Journal: CoRR 2015
Martin Takác, Peter Richtárik, Nathan Srebro

We present an improved analysis of mini-batched stochastic dual coordinate ascent for regularized empirical loss minimization (i.e. SVM and SVM-type objectives). Our analysis allows for flexible sampling schemes, including schemes where the data is distributed across machines, and combines a dependence on the smoothness of the loss with a dependence on the spread of the data (measured through the spectral norm).
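
For orientation, a minimal serial SDCA sketch for the hinge-loss SVM objective (1/n) * sum_i max(0, 1 - y_i w.x_i) + (lam/2) * ||w||^2, using the standard closed-form coordinate update from the SDCA literature. The paper's actual contributions (mini-batching, flexible sampling schemes, distribution across machines) are deliberately not reproduced here.

import numpy as np

def sdca_hinge(X, y, lam=0.1, epochs=20, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros(n)   # dual variables, each constrained to [0, 1]
    w = np.zeros(d)       # primal iterate: w = (1/(lam*n)) * sum_i alpha_i y_i x_i
    for _ in range(epochs):
        for i in rng.permutation(n):
            # Closed-form maximization of the dual over coordinate i.
            step = lam * n * (1.0 - y[i] * (X[i] @ w)) / (X[i] @ X[i])
            delta = np.clip(alpha[i] + step, 0.0, 1.0) - alpha[i]
            alpha[i] += delta
            w += delta * y[i] * X[i] / (lam * n)
    return w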

2007
Bernadetta Tarigan, Sara A. van de Geer, Leslie Pack Kaelbling

The success of support vector machines in binary classification relies on the fact that the hinge loss used in risk minimization targets the Bayes rule. Recent research explores extensions of this large-margin method to the multicategory case. We obtain a moment inequality for multicategory support vector machine loss minimizers with a fast rate of convergence.
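
One common multicategory extension of the hinge loss is the Crammer-Singer form sketched below; the abstract does not pin down which extension it analyzes, so this is only a representative instance.

import numpy as np

def multiclass_hinge(scores, y):
    """Crammer-Singer multicategory hinge loss.
    scores: (n, K) array of class scores; y: (n,) integer labels."""
    n = scores.shape[0]
    correct = scores[np.arange(n), y]
    margins = scores - correct[:, None] + 1.0  # margin violation per class
    margins[np.arange(n), y] = 0.0             # the true class is not penalized
    return np.maximum(margins, 0.0).max(axis=1).mean()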

Journal: CoRR 2016
Gábor Balázs, András György, Csaba Szepesvári

This paper extends the standard chaining technique to prove excess risk upper bounds for empirical risk minimization in the random design setting, even if the magnitude of the noise and the estimates is unbounded. The bound applies to many loss functions besides the squared loss, and scales only with the sub-Gaussian or sub-exponential parameters without further statistical assumptions such as the...
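
In standard notation, the quantity such bounds control is the excess risk of the empirical risk minimizer $\hat f$ over a class $\mathcal{F}$,
\[
  \mathcal{E}(\hat f) \;=\; \mathbb{E}\,\ell\big(\hat f(X), Y\big) \;-\; \inf_{f \in \mathcal{F}} \mathbb{E}\,\ell\big(f(X), Y\big),
\]
which chaining arguments bound through the covering-number structure of $\mathcal{F}$ scaled by the sub-Gaussian or sub-exponential parameters, rather than through a uniform bound on the noise.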

[Chart: number of search results per year]