Search results for: squared error loss

Number of results: 698765

2017
Jonathan Rougier

In fields such as climate science, it is common to compile an ensemble of different simulators for the same underlying process. It is a striking observation that the ensemble mean often out-performs at least half of the ensemble members in mean squared error (measured with respect to observations). In fact, as demonstrated in the most recent IPCC report, the ensemble mean often out-performs all...
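
A minimal sketch with hypothetical data (plain NumPy, not the IPCC ensemble) of the basic mechanism: by Jensen's inequality, the MSE of the ensemble mean never exceeds the average of the member MSEs, so the mean beats the worse members whenever they disagree.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: one "observed" series and an ensemble of noisy, biased simulators.
n_members, n_times = 10, 200
obs = np.sin(np.linspace(0, 4 * np.pi, n_times))
bias = rng.normal(0, 0.3, size=(n_members, 1))         # each member has its own bias
noise = rng.normal(0, 0.5, size=(n_members, n_times))   # and its own noise
ensemble = obs + bias + noise

member_mse = ((ensemble - obs) ** 2).mean(axis=1)        # MSE of each member
ensemble_mean_mse = ((ensemble.mean(axis=0) - obs) ** 2).mean()

# By convexity (Jensen), the ensemble-mean MSE cannot exceed the average member MSE.
print("member MSEs:        ", np.round(member_mse, 3))
print("average member MSE: ", member_mse.mean().round(3))
print("ensemble-mean MSE:  ", ensemble_mean_mse.round(3))
print("members beaten:     ", int((member_mse > ensemble_mean_mse).sum()), "of", n_members)
```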

2003
Yonina C. Eldar

This paper develops and explores applications of a linear shaping transformation that minimizes the mean squared error (MSE) between the original and shaped data, i.e., that results in an output vector with the desired covariance that is as close as possible to the input, in an MSE sense. Three applications of minimum MSE shaping are considered, specifically matched filter detection, multiuser ...
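
A minimal sketch, on hypothetical data, of one covariance-shaping transform (whiten with C_x^{-1/2}, then colour with C_d^{1/2}). This baseline achieves the target covariance but is not claimed to be the MMSE-optimal transform derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical input data with covariance C_x, and a desired output covariance C_d.
X = rng.multivariate_normal(
    np.zeros(3),
    [[2.0, 0.5, 0.0], [0.5, 1.0, 0.3], [0.0, 0.3, 0.5]],
    size=5000,
).T
C_d = np.eye(3)            # e.g. whiten the data
C_x = np.cov(X)

def sqrtm(C):
    """Symmetric matrix square root via the eigendecomposition."""
    w, V = np.linalg.eigh(C)
    return V @ np.diag(np.sqrt(w)) @ V.T

def inv_sqrtm(C):
    """Symmetric inverse matrix square root."""
    w, V = np.linalg.eigh(C)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

# One valid shaping transform: whiten, then colour to the target covariance.
# (The paper's MMSE shaping transform is a particular choice among all transforms
#  that achieve the target covariance; this is only a baseline.)
T = sqrtm(C_d) @ inv_sqrtm(C_x)
Y = T @ X

print("achieved covariance:\n", np.round(np.cov(Y), 2))
print("mean squared change per sample:", ((Y - X) ** 2).sum(axis=0).mean().round(3))
```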

Journal: Neurocomputing 2007
Jong-Hoon Ahn, Jong-Hoon Oh, Seungjin Choi

A common derivation of principal component analysis (PCA) is based on the minimization of the squared error between the centered data and a linear model, corresponding to the reconstruction error. In fact, minimizing the squared error leads to principal subspace analysis, where scaled and rotated principal axes of a set of observed data are estimated. In this paper, we introduce and investigate an al...
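
A minimal sketch of this squared-error view of PCA on hypothetical centered data: the top-k principal axes give the best rank-k linear reconstruction, and the mean squared reconstruction error drops as k grows.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: 500 samples in 10 dimensions with a dominant 3-D subspace.
Z = rng.normal(size=(500, 3)) @ rng.normal(size=(3, 10)) + 0.1 * rng.normal(size=(500, 10))
X = Z - Z.mean(axis=0)                # center the data

# PCA as squared-error minimization: the best rank-k linear reconstruction of the
# centered data is obtained from the top-k right singular vectors.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
for k in (1, 2, 3, 5):
    W = Vt[:k]                        # k principal axes (rows)
    X_hat = X @ W.T @ W               # project onto the principal subspace and back
    err = ((X - X_hat) ** 2).mean()
    print(f"k={k}: mean squared reconstruction error = {err:.4f}")
```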

2006
Andrew J. Patton, Allan Timmermann

Evaluation of forecast optimality in economics and finance has almost exclusively been conducted under the assumption of mean squared error loss. Under this loss function, optimal forecasts should be unbiased and forecast errors serially uncorrelated at the single-period horizon, with variance increasing as the forecast horizon grows. Using analytical results we show that standard properties of o...
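
A hypothetical sketch of the two standard single-horizon checks implied by MSE loss (unbiased errors, no lag-1 autocorrelation); the error series here is simulated, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical one-step-ahead forecast errors (zero-mean white noise under optimality).
e = rng.normal(0.0, 1.0, size=500)

# Check 1: unbiasedness -- the mean error should be indistinguishable from zero.
t_stat = e.mean() / (e.std(ddof=1) / np.sqrt(len(e)))

# Check 2: no serial correlation at lag 1.
rho1 = np.corrcoef(e[:-1], e[1:])[0, 1]

print(f"mean error t-statistic: {t_stat:.2f}")
print(f"lag-1 autocorrelation:  {rho1:.3f}")
```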

Journal: CoRR 2017
Chulhee Yun, Suvrit Sra, Ali Jadbabaie

We study the error landscape of deep linear and nonlinear neural networks with the squared error loss. Minimizing the loss of a deep linear neural network is a nonconvex problem, and despite recent progress, our understanding of this loss surface is still incomplete. For deep linear networks, we present necessary and sufficient conditions for a critical point of the risk function to be a global...
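
A minimal sketch of the objective being studied, on hypothetical data: the squared error loss of a depth-3 linear network depends on the layers only through their product, so it is nonconvex in the individual weight matrices (e.g. rescaling one layer up and another down leaves the loss unchanged).

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical regression data.
X = rng.normal(size=(200, 5))
Y = X @ rng.normal(size=(5, 2)) + 0.05 * rng.normal(size=(200, 2))

def deep_linear_loss(W1, W2, W3):
    """Squared error loss of the depth-3 linear network x -> W3^T W2^T W1^T x."""
    return 0.5 * np.mean((X @ W1 @ W2 @ W3 - Y) ** 2)

W1 = rng.normal(size=(5, 4))
W2 = rng.normal(size=(4, 4))
W3 = rng.normal(size=(4, 2))

print(deep_linear_loss(W1, W2, W3))
print(deep_linear_loss(2.0 * W1, 0.5 * W2, W3))   # identical loss along this direction
```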

2002
Vladimir Cherkassky, Yunqian Ma

This paper addresses selection of the loss function for regression problems with finite data. It is well-known (under the standard regression formulation) that for a known noise density there exists an optimal loss function under an asymptotic setting (large number of samples); i.e., squared loss is optimal for Gaussian noise density. However, in real-life applications the noise density is unknown an...
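
A hypothetical simulation of the asymptotic claim for a location parameter: the squared-loss estimate (sample mean) wins under Gaussian noise, while the absolute-loss estimate (sample median) wins under heavier-tailed Laplace noise.

```python
import numpy as np

rng = np.random.default_rng(5)
n_rep, n = 2000, 100
true_loc = 1.0

for name, sampler in [("gaussian", lambda: rng.normal(true_loc, 1.0, n)),
                      ("laplace",  lambda: rng.laplace(true_loc, 1.0, n))]:
    means = np.array([sampler().mean() for _ in range(n_rep)])
    medians = np.array([np.median(sampler()) for _ in range(n_rep)])
    print(f"{name}: MSE of mean   = {((means - true_loc) ** 2).mean():.4f}")
    print(f"{name}: MSE of median = {((medians - true_loc) ** 2).mean():.4f}")
```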

2004
Michael Woodroofe

The problem considered is sequential estimation of the mean θ of a one-parameter exponential family of distributions, with squared error loss for the estimation error and a cost c > 0 for each of an i.i.d. sequence of potential observations X_1, X_2, .... A Bayesian approach is adopted, and natural conjugate prior distributions are assumed. For this problem, the asymptotically pointwise optimal ...
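
A hedged sketch of the trade-off for the conjugate normal case: under squared error loss the Bayes estimate is the posterior mean, and the risk of stopping after n observations is the posterior variance plus the sampling cost c·n. This only illustrates the fixed-sample-size version of the problem, not the asymptotically pointwise optimal stopping rule analysed in the paper.

```python
import numpy as np

# Hypothetical conjugate setup: X_i ~ N(theta, sigma^2), theta ~ N(mu0, tau0^2).
sigma2, tau02, c = 1.0, 4.0, 0.002

def expected_cost(n):
    # Posterior variance of theta after n observations, plus sampling cost c per draw.
    post_var = 1.0 / (1.0 / tau02 + n / sigma2)
    return post_var + c * n

ns = np.arange(1, 200)
costs = np.array([expected_cost(n) for n in ns])
n_star = ns[costs.argmin()]
print(f"fixed-sample-size minimiser: n = {n_star}, expected cost = {costs.min():.4f}")
```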

Journal: CoRR 2017
Peter van Beek, R. Wayne Oldford

White balancing is a fundamental step in the image processing pipeline. The process involves estimating the chromaticity of the illuminant or light source and using the estimate to correct the image to remove any color cast. Given the importance of the problem, there has been much previous work on illuminant estimation. Recently, an approach based on ensembles of univariate regression trees tha...
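
A minimal sketch of the problem setup using the classic gray-world baseline (not the regression-tree ensemble discussed in the paper): estimate the illuminant from the per-channel means of a hypothetical image and divide it out.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical linear RGB image with a warm colour cast applied.
scene = rng.uniform(0.0, 1.0, size=(64, 64, 3))
illuminant = np.array([1.3, 1.0, 0.7])          # unknown to the algorithm
image = scene * illuminant

# Gray-world estimate: assume the average scene reflectance is achromatic,
# so the per-channel means are proportional to the illuminant.
est = image.reshape(-1, 3).mean(axis=0)
est = est / est.mean()                           # normalise the estimate

corrected = image / est                          # divide out the estimated cast
print("estimated illuminant (normalised):", np.round(est, 3))
print("corrected channel means:", np.round(corrected.reshape(-1, 3).mean(axis=0), 3))
```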

Journal: J. Multivariate Analysis 2015
Tatsuya Kubokawa, Éric Marchand, William E. Strawderman

Tatsuya Kubokawa, Éric Marchand, William E. Strawderman. (a) Department of Economics, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, JAPAN; (b) Université de Sherbrooke, Département de mathématiques, Sherbrooke Qc, CANADA, J1K 2R1; (c) Rutgers University, Department of Statistics and Biostatistics, 501 Hill Center, Bu...

2013
Cedric E. Ginestet

This criterion should be contrasted with the RSS encountered earlier in the course. The RSS pertains to model estimation, since we are already assuming a given model for some particular data set; and it suffices to estimate the specific values of our estimators for the unknown parameters. The MSE combines the previous two criteria, on the unbiasedness and the variance of β̂, through the followin...
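
For reference, the decomposition the snippet is building toward is the standard bias-variance identity for the MSE of an estimator β̂ of β:

```latex
\mathrm{MSE}(\hat\beta)
  = \mathbb{E}\!\left[(\hat\beta - \beta)^2\right]
  = \underbrace{\bigl(\mathbb{E}[\hat\beta] - \beta\bigr)^2}_{\text{Bias}^2(\hat\beta)}
  + \underbrace{\mathbb{E}\!\left[\bigl(\hat\beta - \mathbb{E}[\hat\beta]\bigr)^2\right]}_{\mathrm{Var}(\hat\beta)}
```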

[Chart: number of search results per year]