Search results for: log error loss function

Number of results: 1829298

Journal: CoRR 2017
Syed Ali Asad Rizvi, Stephen J. Roberts, Michael A. Osborne, Favour Nyikosa

In this paper we use Gaussian Process (GP) regression to propose a novel approach for predicting volatility of financial returns by forecasting the envelopes of the time series. We provide a direct comparison of their performance to traditional approaches such as GARCH. We compare the forecasting power of three approaches: GP regression on the absolute and squared returns; regression on the env...
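The proxy idea in this abstract, regressing on absolute returns as a volatility proxy, can be sketched with a minimal plain-NumPy GP regressor using a squared-exponential kernel. This is an illustrative sketch, not the authors' envelope method; all function names, kernel hyperparameters, and the synthetic return series are assumptions:

```python
import numpy as np

def rbf_kernel(a, b, length_scale=5.0, variance=1.0):
    """Squared-exponential kernel between two 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_predict(t_train, y_train, t_test, noise=1e-2):
    """Posterior mean of a zero-mean GP conditioned on (t_train, y_train)."""
    K = rbf_kernel(t_train, t_train) + noise * np.eye(len(t_train))
    K_star = rbf_kernel(t_test, t_train)
    return K_star @ np.linalg.solve(K, y_train)

# Synthetic returns with time-varying scale (illustrative data).
rng = np.random.default_rng(0)
t = np.arange(100.0)
returns = rng.standard_normal(100) * (1.0 + 0.5 * np.sin(t / 10.0))
vol_proxy = np.abs(returns)          # absolute returns as a volatility proxy
t_future = np.arange(100.0, 110.0)
forecast = gp_predict(t, vol_proxy, t_future)
print(forecast.shape)  # (10,)
```

The same scaffold applies to squared returns by replacing `np.abs(returns)` with `returns ** 2`.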

Journal: The Journal of Japan Institute of Navigation 1978

2005
Ralf Schlüter, T. Scharrenbach, Volker Steinbiss, Hermann Ney

In this work, fundamental properties of Bayes decision rule using general loss functions are derived analytically and are verified experimentally for automatic speech recognition. It is shown that, for maximum posterior probabilities larger than 1/2, Bayes decision rule with a metric loss function always decides on the posterior maximizing class independent of the specific choice of (metric) lo...
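The stated property, that for a maximum posterior above 1/2 the Bayes decision under a metric loss coincides with the posterior-maximizing class, can be checked numerically on a toy example. The absolute-difference loss and the posterior vector below are illustrative choices, not taken from the paper:

```python
import numpy as np

def bayes_decision(posteriors, loss):
    """Class minimizing expected loss: argmin_c sum_k loss[c, k] * p(k)."""
    return int(np.argmin(loss @ posteriors))

# A metric loss on classes {0, 1, 2}: absolute-difference distance
# (symmetric, zero on the diagonal, satisfies the triangle inequality).
classes = np.arange(3)
loss = np.abs(classes[:, None] - classes[None, :]).astype(float)

p = np.array([0.6, 0.3, 0.1])   # maximum posterior exceeds 1/2
print(bayes_decision(p, loss) == int(np.argmax(p)))  # True
```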

2001
Min Chu, Hu Peng

This paper proposes an average concatenative cost function as the objective measure for naturalness of synthesized speech. All its seven component-costs can be derived directly from the input text and the scripts of speech database. A formal Mean Opinion Score (MOS) experiment shows that the average concatenative cost and its seven components are all highly correlated with MOS obtained subjecti...

2007
George P. Yanev, Chris P. Tsokos

The aim of the present paper is to obtain Bayes estimators for the offspring mean of a simple branching process with a power series offspring probability distribution. We study the sensitivity behavior of the obtained estimators with respect to the choice of the loss function. We propose a minimax criterion using the Bayes risk for ranking the effectiveness (in the sense of robustness) of the loss...

2010
Jinyu Li, Yu Tsao, Chin-Hui Lee

We propose a parameter shrinkage adaptation framework to estimate models with only a limited set of adaptation data to improve accuracy for automatic speech recognition, by regularizing an objective function with a sum of parameterwise power q constraint. For the first attempt, we formulate ridge maximum likelihood linear regression (MLLR) and ridge constraint MLLR (CMLLR) with an element-wise ...
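A minimal sketch of the power q = 2 (ridge) shrinkage case on a generic linear transform, not the authors' MLLR/CMLLR formulation; the function name, regularization weight, and synthetic data are all illustrative assumptions:

```python
import numpy as np

def ridge_transform(X, Y, lam=1.0):
    """Shrinkage estimate of a linear transform W:
    argmin_W ||Y - X W||^2 + lam * ||W||^2 (Frobenius norm)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

# Limited "adaptation data": few samples relative to what robust
# estimation would normally need (illustrative sizes).
rng = np.random.default_rng(1)
X = rng.standard_normal((20, 4))
W_true = rng.standard_normal((4, 3))
Y = X @ W_true + 0.1 * rng.standard_normal((20, 3))

W_ridge = ridge_transform(X, Y, lam=5.0)
W_ols = ridge_transform(X, Y, lam=0.0)
# The penalty shrinks the estimate toward zero relative to least squares.
print(np.linalg.norm(W_ridge) < np.linalg.norm(W_ols))  # True
```

Other values of q in the parameterwise power constraint change only the penalty term; q = 2 is the case with this closed-form solution.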

2015
Matthias Paulik

This paper examines two strategies that improve the beam pruning behavior of DNN acoustic models with only a negligible increase in model complexity. By augmenting the boosted MMI loss function used in sequence training with the weighted cross-entropy error, we achieve a real time factor (RTF) reduction of more than 13%. By directly incorporating a transition model into the DNN, which leads to ...

2002
Shankar Kumar, William J. Byrne

Minimum Bayes Risk (MBR) decoders improve upon MAP decoders by directly optimizing the loss function of interest: Word Error Rate. MBR decoding is expensive when the search spaces are large. Segmental MBR (SMBR) decoding breaks the single utterance-level MBR decoder into a sequence of simpler search problems; to do this, the N-best lists or lattices need to be segmented. We present a new lattice se...
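Utterance-level MBR over an N-best list can be sketched directly from its definition: pick the hypothesis with minimum expected word edit distance under the posterior. This is the generic N-best formulation, not the paper's segmental lattice method; the hypotheses and posteriors below are illustrative:

```python
def edit_distance(a, b):
    """Word-level Levenshtein distance between two token sequences."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))  # sub
    return d[m][n]

def mbr_decode(nbest, posteriors):
    """Hypothesis minimizing expected word edit distance to the evidence."""
    def risk(h):
        return sum(p * edit_distance(h, e) for e, p in zip(nbest, posteriors))
    return min(nbest, key=risk)

nbest = [["a", "b", "c"], ["a", "b", "d"], ["a", "x", "c"]]
post = [0.5, 0.3, 0.2]
print(mbr_decode(nbest, post))  # ['a', 'b', 'c']
```

The cost that makes full MBR expensive is visible here: risk evaluation is quadratic in the number of hypotheses, which is what segmenting into simpler search problems addresses.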

2007
Eyke Hüllermeier

We study the problem of label ranking, a machine learning task that consists of inducing a mapping from instances to rankings over a finite number of labels. Our learning method, referred to as ranking by pairwise comparison (RPC), first induces pairwise order relations (preferences) from suitable training data, using a natural extension of so-called pairwise classification. A ranking is then d...
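The aggregation step of ranking by pairwise comparison can be sketched as vote counting over estimated pairwise preference probabilities. This is a generic aggregation scheme under stated assumptions, not necessarily the paper's exact procedure; the labels and probabilities below are illustrative:

```python
def rank_by_pairwise(prefs, labels):
    """Aggregate pairwise preference probabilities into a ranking
    by soft vote counting: each pair (a, b) contributes P(a > b)
    to a's score and 1 - P(a > b) to b's score."""
    votes = {label: 0.0 for label in labels}
    for (a, b), p_ab in prefs.items():   # p_ab = estimated P(a preferred to b)
        votes[a] += p_ab
        votes[b] += 1.0 - p_ab
    return sorted(labels, key=votes.get, reverse=True)

labels = ["x", "y", "z"]
prefs = {("x", "y"): 0.9, ("x", "z"): 0.8, ("y", "z"): 0.6}
print(rank_by_pairwise(prefs, labels))  # ['x', 'y', 'z']
```

In the full RPC method the probabilities would come from binary classifiers trained per label pair; here they are given directly to isolate the ranking step.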
