Search results for: squared log error loss function

Number of results: 1,839,436

Journal: CoRR 2016
Gábor Balázs, András György, Csaba Szepesvári

This paper extends the standard chaining technique to prove excess risk upper bounds for empirical risk minimization in random-design settings, even when the magnitude of the noise and of the estimates is unbounded. The bound applies to many loss functions besides the squared loss, and scales only with the sub-Gaussian or sub-exponential parameters, without further statistical assumptions such as the...
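
For orientation, the excess-risk quantity that such chaining bounds control is the standard one below; this is a generic textbook formulation (the class F and loss ℓ are placeholders), not the paper's specific statement.

```latex
% Excess risk of the empirical risk minimizer \hat{f}_n over a class \mathcal{F}
% (generic definition; the paper bounds this quantity under
%  sub-Gaussian / sub-exponential moment conditions):
\hat{f}_n = \arg\min_{f \in \mathcal{F}} \frac{1}{n}\sum_{i=1}^{n} \ell\big(f(X_i), Y_i\big),
\qquad
\mathcal{E}(\hat{f}_n) = \mathbb{E}\big[\ell(\hat{f}_n(X), Y)\big]
  - \inf_{f \in \mathcal{F}} \mathbb{E}\big[\ell(f(X), Y)\big].
```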

Journal: Neurocomputing 2007
Zhenwei Shi, Changshui Zhang

Fetal electrocardiogram (FECG) extraction is a vital issue in biomedical signal processing and analysis. A promising approach is blind (semi-blind) source extraction. In this paper, we develop an objective function for extraction of temporally correlated sources. The objective function is based on the non-Gaussianity and the autocorrelations of source signals, and it contains the well-known mea...
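
As a rough sketch of the kind of criterion described (not the paper's actual objective), one can combine a non-Gaussianity measure with a lagged autocorrelation of the extracted signal; the kurtosis-based term, the single lag, and the weight `alpha` below are illustrative assumptions.

```python
import numpy as np

def extraction_objective(w, X, lag=1, alpha=1.0):
    """Toy criterion for semi-blind source extraction: non-Gaussianity
    (excess kurtosis) plus a lagged autocorrelation of the extracted signal
    y = w^T X. Illustrative stand-in only, not the objective from the paper."""
    y = w @ X                        # extracted source, shape (n_samples,)
    y = (y - y.mean()) / y.std()     # normalize to zero mean, unit variance
    kurt = np.mean(y**4) - 3.0       # excess kurtosis as a non-Gaussianity measure
    autocorr = np.mean(y[lag:] * y[:-lag])   # lag-`lag` autocorrelation
    return kurt**2 + alpha * autocorr**2
```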

Journal: Int. J. Systems Science 2017
Xia Hong, Sheng Chen, Yi Guo, Junbin Gao

An l1-norm penalized orthogonal forward regression (l1-POFR) algorithm is proposed based on the concept of leave-one-out mean square error (LOOMSE). Firstly, a new l1-norm penalized cost function is defined in the constructed orthogonal space, and each orthogonal basis is associated with an individually tunable regularization parameter. Secondly, due to orthogonal computation, the LOOMSE can be anal...
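
For illustration, the leave-one-out MSE of a regularized least-squares fit has a well-known closed form via the hat matrix; the sketch below shows that generic identity (the function name `loo_mse` and the single shared regularization parameter `lam` are assumptions, whereas l1-POFR attaches an individual parameter to each orthogonal basis).

```python
import numpy as np

def loo_mse(Phi, y, lam=0.0):
    """Closed-form leave-one-out MSE for (ridge-)regularized least squares.
    Phi: (n, m) design matrix, y: (n,) targets, lam: regularization parameter.
    Generic identity, not the paper's orthogonal-space recursion."""
    n, m = Phi.shape
    A = Phi.T @ Phi + lam * np.eye(m)
    H = Phi @ np.linalg.solve(A, Phi.T)       # hat (smoother) matrix
    resid = y - H @ y                         # training residuals
    loo_resid = resid / (1.0 - np.diag(H))    # leave-one-out residuals
    return np.mean(loo_resid**2)
```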

Journal: J. Inform. and Commun. Convergence Engineering 2011
Ju-Phil Cho, Bong-Man Ahn, Jee-Won Hwang

In this paper, we propose an equivalent Wiener-Hopf equation. The proposed algorithm can obtain the weight vector of a TDL (tapped-delay-line) filter and the error simultaneously if the inputs are orthogonal to each other. The equivalent Wiener-Hopf equation was analyzed theoretically based on the MMSE (minimum mean square error) method. The results show that the proposed algorithm is equiva...
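
For context, the classical Wiener-Hopf/MMSE solution for a TDL filter solves R w = p, with R the input autocorrelation matrix and p the cross-correlation with the desired signal; the sketch below shows that standard formulation, not the paper's equivalent equation.

```python
import numpy as np

def wiener_tdl_weights(x, d, num_taps):
    """Classical Wiener-Hopf solution for a tapped-delay-line (TDL) filter:
    solve R w = p from sample statistics and report the resulting MSE.
    Standard MMSE formulation, not the 'equivalent' variant from the paper."""
    n = len(x) - num_taps + 1
    # TDL data matrix: each row is a reversed window of the input signal.
    X = np.array([x[i:i + num_taps][::-1] for i in range(n)])
    dv = np.asarray(d)[num_taps - 1:]
    R = X.T @ X / n          # sample autocorrelation matrix of the filter input
    p = X.T @ dv / n         # sample cross-correlation with the desired signal
    w = np.linalg.solve(R, p)
    mse = np.mean((dv - X @ w) ** 2)
    return w, mse
```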

2000
Virginia R. Young

In credibility ratemaking, one seeks to estimate the conditional mean of a given risk. The most accurate estimator (as measured by squared error loss) is the predictive mean. To calculate the predictive mean one needs the conditional distribution of losses given the parameter of interest (often the conditional mean) and the prior distribution of the parameter of interest. Young (1997. ASTIN Bul...
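
The claim that the predictive mean is the most accurate estimator under squared error loss is the standard Bayes-estimator fact, shown schematically below (generic notation, not the credibility-specific model).

```latex
% Under squared error loss, the Bayes estimate of the conditional mean \mu(\Theta)
% given the data X is the predictive (posterior) mean:
\hat{\mu}(x) = \mathbb{E}\big[\mu(\Theta) \mid X = x\big]
  = \arg\min_{a}\; \mathbb{E}\big[(\mu(\Theta) - a)^2 \mid X = x\big].
```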

2000
Pedro M. Domingos

The bias-variance decomposition is a very useful and widely-used tool for understanding machine-learning algorithms. It was originally developed for squared loss. In recent years, several authors have proposed decompositions for zero-one loss, but each has significant shortcomings. In particular, all of these decompositions have only an intuitive relationship to the original squared-loss one. I...
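
The original squared-loss decomposition that the zero-one variants are compared against is the classical one below (standard form at a fixed input x, with the prediction averaged over training sets D; this is the textbook statement, not the unified notation developed in the paper).

```latex
% Classical bias-variance decomposition for squared loss at a fixed input x,
% with \bar{h}(x) = \mathbb{E}_{D}[h_D(x)] the average prediction over training sets D:
\mathbb{E}_{D,\,y}\big[(y - h_D(x))^2\big]
  = \underbrace{\mathbb{E}\big[(y - \mathbb{E}[y \mid x])^2\big]}_{\text{noise}}
  + \underbrace{\big(\mathbb{E}[y \mid x] - \bar{h}(x)\big)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}_{D}\big[(h_D(x) - \bar{h}(x))^2\big]}_{\text{variance}}.
```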

Journal: IJSSMET 2014
Suvojit Acharjee, Sayan Chakraborty, Wahiba Ben Abdessalem Karaa, Ahmad Taher Azar, Nilanjan Dey

Video is an important medium for information sharing in the present era. The tremendous growth of video use can be seen in traditional multimedia applications as well as in many other applications, such as medical and surveillance video. Raw video data is usually large in size, which demands video compression. In different video compression schemes, the motion vector is a very i...
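
As a toy illustration of motion-vector estimation (not the compression scheme discussed in the paper), full-search block matching picks the displacement that minimizes a squared-error criterion between blocks; the block size, search range, and function name below are assumptions.

```python
import numpy as np

def motion_vector(ref, cur, top, left, block=16, search=8):
    """Toy full-search block matching: find the displacement (dy, dx) of the
    block at (top, left) in the current frame `cur` that best matches the
    reference frame `ref` under a sum-of-squared-differences cost.
    Block size and search range are illustrative choices."""
    target = cur[top:top + block, left:left + block].astype(float)
    best, best_cost = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue
            cand = ref[y:y + block, x:x + block].astype(float)
            cost = np.sum((target - cand) ** 2)   # squared-error matching cost
            if cost < best_cost:
                best, best_cost = (dy, dx), cost
    return best, best_cost
```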

Journal: CoRR 2017
K. Pavan Srinath, Ramji Venkataramanan

The problem of estimating a high-dimensional sparse vector θ ∈ R^n from an observation in i.i.d. Gaussian noise is considered. The performance is measured using squared-error loss. An empirical Bayes shrinkage estimator, derived using a Bernoulli-Gaussian prior, is analyzed and compared with the well-known soft-thresholding estimator. We obtain concentration inequalities for Stein's unbiased ...
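
A minimal sketch of the two estimators being compared, assuming their standard forms: coordinatewise soft-thresholding, and the posterior-mean shrinkage rule under a Bernoulli-Gaussian (spike-and-slab) prior with fixed hyperparameters. In the paper the hyperparameters are fitted empirically and the SURE concentration analysis is the main contribution; neither is reproduced here.

```python
import numpy as np

def soft_threshold(y, lam):
    """Soft-thresholding estimate of a sparse vector from y = theta + noise."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def bg_posterior_mean(y, eps, tau2, sigma2=1.0):
    """Posterior mean of theta under a Bernoulli-Gaussian prior
    theta ~ (1 - eps) * delta_0 + eps * N(0, tau2), with y = theta + N(0, sigma2).
    Standard shrinkage rule with hyperparameters (eps, tau2) supplied by the caller."""
    s2 = tau2 + sigma2
    # Posterior probability that each coordinate is non-zero (responsibility);
    # the common 1/sqrt(2*pi) factor cancels in the ratio.
    num = eps * np.exp(-y**2 / (2 * s2)) / np.sqrt(s2)
    den = num + (1 - eps) * np.exp(-y**2 / (2 * sigma2)) / np.sqrt(sigma2)
    post_nonzero = num / den
    return post_nonzero * (tau2 / s2) * y
```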

Journal: MASA 2011
Gyan Prakash

An impressive array of papers has been devoted to the reliability properties and hazard rates of order statistics, along with the IFR (increasing failure rate) and DFR (decreasing failure rate) properties. However, studies relating to Bayesian estimation for repairable systems with the IFR property of the failure time distribution, and for the repair time distributions, have received compa...

2011
Luís A. Alexandre

This paper presents the adaptation of a single-layer complex-valued neural network (NN) to use entropy in the cost function instead of the usual mean squared error (MSE). This network has the advantage of having only one layer, so there is no need to search for the number of hidden-layer neurons: the topology is completely determined by the problem. We extend the existing stochastic MSE...
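
A rough sketch of the contrast described, assuming a minimum-error-entropy style criterion as the entropy cost: the usual MSE on the errors versus the negative log of a kernel "information potential" of the error distribution. The kernel width `sigma` and this specific entropy estimator are assumptions; the complex-valued network and its training rule are not reproduced.

```python
import numpy as np

def mse_cost(err):
    """Usual mean squared error over (possibly complex) prediction errors."""
    return np.mean(np.abs(err) ** 2)

def error_entropy_cost(err, sigma=1.0):
    """Illustrative entropy-type cost: negative log of the kernel information
    potential of the errors (related to quadratic Renyi entropy). Minimizing it
    concentrates the error distribution; an MEE-style stand-in, not the paper's cost."""
    diff = err[:, None] - err[None, :]                          # pairwise error differences
    ip = np.mean(np.exp(-np.abs(diff) ** 2 / (2 * sigma**2)))   # information potential
    return -np.log(ip)
```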
