Search results for: log error loss function

Number of results: 1,829,298

1998
Chris R. Rose Matthew W. Stettler

This paper describes progress in the design and testing of the log-ratio-based beam-position/intensity measurement module being built for the Low Energy Demonstration Accelerator (LEDA) and Accelerator Production of Tritium (APT) projects at Los Alamos National Laboratory. The VU-based module uses four 2-MHz IF inputs to perform two-axis position measurements and one intensity measurement. To c...

2014
Chia-Hao Wu

In dual-hop multi-relaying wireless systems, the lack of a closed-form expression for log-normal probability distributions and the highly varying standard deviations make it impossible to effectively analyze outage probability and bit-error-rate (BER) performance; thus, an analysis framework was proposed for use in a composite (Rayleigh plus log-normal) fading channel. Develo...

Journal: Journal of Machine Learning Research, 2012
José Hernández-Orallo Peter A. Flach César Ferri

Many performance metrics have been introduced in the literature for the evaluation of classification performance, each of them with different origins and areas of application. These metrics include accuracy, unweighted accuracy, the area under the ROC curve or the ROC convex hull, the mean absolute error and the Brier score or mean squared error (with its decomposition into refinement and calib...
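The metrics named in this abstract are straightforward to compute. As a minimal sketch (the labels and predicted probabilities below are made-up toy data, not from the paper), accuracy, the Brier score, and the mean absolute error for a binary classifier can be written as:

```python
import numpy as np

# Toy binary task: true labels and predicted positive-class probabilities.
y_true = np.array([1, 0, 1, 1, 0])
p_pred = np.array([0.9, 0.2, 0.6, 0.8, 0.4])

# Accuracy: fraction of correct predictions at a 0.5 threshold.
accuracy = np.mean((p_pred >= 0.5) == y_true)

# Brier score: mean squared error between probability and outcome.
brier = np.mean((p_pred - y_true) ** 2)

# Mean absolute error of the probabilistic predictions.
mae = np.mean(np.abs(p_pred - y_true))

print(accuracy, brier, mae)  # → 1.0 0.082 0.26
```

Note that the Brier score is exactly the mean squared error applied to probability estimates, which is what makes the refinement/calibration decomposition mentioned in the abstract possible.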

2016
Prahladh Harsha Srikanth Srinivasan

We make progress on some questions related to polynomial approximations of AC0. It is known, by works of Tarui (Theoret. Comput. Sci. 1993) and Beigel, Reingold, and Spielman (Proc. 6th CCC 1991), that any AC0 circuit of size s and depth d has an ε-error probabilistic polynomial over the reals of degree (log(s/ε))^{O(d)}. We improve this upper bound to (log s)^{O(d)} · log(1/ε), which is much better ...

2018
Jian Zhang

A loss function is a mapping l : Y × Y → R (sometimes R × R → R). For example, in binary classification the 0/1 loss function l(y, p) = I(y ≠ p) is often used, and in regression the squared error loss function l(y, p) = (y − p)² is often used. Other loss functions include the following: absolute loss, Huber loss, ε-insensitive loss, hinge loss, logistic loss, exponential loss, modified least squares...
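The losses listed above are all simple scalar functions. A minimal sketch of a few of them (function names here are illustrative, not from the abstract; the margin-based losses assume labels y ∈ {−1, +1} and a real-valued score f):

```python
import numpy as np

def zero_one_loss(y, p):
    # 0/1 loss: 1 if prediction differs from label, else 0.
    return float(y != p)

def squared_error_loss(y, p):
    # Squared error loss for regression: (y - p)^2.
    return (y - p) ** 2

def hinge_loss(y, f):
    # Hinge loss for margin-based classification, y in {-1, +1}.
    return max(0.0, 1.0 - y * f)

def logistic_loss(y, f):
    # Logistic loss, y in {-1, +1}: log(1 + exp(-y * f)).
    return np.log1p(np.exp(-y * f))

print(zero_one_loss(1, 0))           # → 1.0
print(squared_error_loss(3.0, 2.5))  # → 0.25
print(hinge_loss(1, 0.3))            # → 0.7
```

The hinge and logistic losses are both convex upper bounds on the 0/1 loss in the margin y·f, which is why they are preferred for optimization.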

2000
Vaibhava Goel Shankar Kumar William J. Byrne

ROVER [1] and its successor voting procedures have been shown to be quite effective in reducing the recognition word error rate (WER). The success of these methods has been attributed to their minimum Bayes-risk (MBR) nature: they produce the hypothesis with the least expected word error. In this paper we develop a general procedure within the MBR framework, called segmental MBR recognition, th...

Journal: CoRR, 2011
José Hernández-Orallo Peter A. Flach César Ferri

Many performance metrics have been introduced in the literature for the evaluation of classification performance, each of them with different origins and areas of application. These metrics include accuracy, macro-accuracy, area under the ROC curve or the ROC convex hull, the mean absolute error and the Brier score or mean squared error (with its decomposition into refinement and calibration). ...

2004
Ercan Balaban Asli Bayar

This paper evaluates the out-of-sample forecasting accuracy of eleven models for monthly volatility in fifteen stock markets. Volatility is defined as within-month standard deviation of continuously compounded daily returns on the stock market index of each country for the ten-year period 1988 to 1997. The first half of the sample is retained for the estimation of parameters while the second ha...

2017

In the main paper, we have reviewed variants of AIPs according to the loss functions and the optimisation algorithms. Algorithms FGV, FGS, BI, and GA use the softmax-log loss −log f̂. The DeepFool (DF) and our GAMAN variants use the difference of two scores (e.g. f⋆ − f). This section includes an auxiliary analysis for the effect of the loss type: softmax-log loss −log f̂ versus score loss −f ...

2018
Adam Klivans Pravesh K. Kothari Raghu Meka

We give the first polynomial-time algorithm for performing linear or polynomial regression resilient to adversarial corruptions in both examples and labels. Given a sufficiently large (polynomial-size) training set drawn i.i.d. from distribution D and subsequently corrupted on some fraction of points, our algorithm outputs a linear function whose squared error is close to the squared error of t...
