Search results for: cross validation error

Number of results: 878094

2003
Yoshua Bengio Yves Grandvalet

1 Motivations In machine learning, the standard measure of accuracy for models is the prediction error (PE), i.e. the expected loss on future examples. We consider here the i.i.d. regression or classification setups, where future examples are assumed to be independently sampled from the distribution that generated the training set. When the data distribution is unknown, PE cannot be computed. T...
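The prediction error defined in this abstract is exactly what k-fold cross-validation estimates. A minimal numpy sketch of that estimator, with illustrative function names and a closed-form ridge learner standing in for an arbitrary model:

```python
import numpy as np

def kfold_cv_error(X, y, fit, predict, loss, k=5, seed=0):
    """Estimate prediction error (expected loss) by k-fold cross-validation."""
    n = len(y)
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, k)
    errs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(X[train], y[train])
        errs.append(loss(y[test], predict(model, X[test])))
    return float(np.mean(errs))

# Illustrative learner: ridge regression in closed form, squared loss.
fit = lambda X, y: np.linalg.solve(X.T @ X + 1e-3 * np.eye(X.shape[1]), X.T @ y)
predict = lambda w, X: X @ w
sq_loss = lambda y, yhat: float(np.mean((y - yhat) ** 2))

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = X @ np.array([1.0, 0.0, 2.0, 0.0, -1.0]) + 0.1 * rng.normal(size=200)
cv_pe = kfold_cv_error(X, y, fit, predict, sq_loss)
```

Because the noise standard deviation is 0.1, the CV estimate of PE should land near the irreducible squared error of 0.01.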

2007
Takahiro Shinozaki Tatsuya Kawahara

A Gaussian mixture optimization method is explored using cross-validation likelihood as an objective function instead of the conventional training set likelihood. The optimization is based on reducing the number of mixture components by selecting and merging a pair of Gaussians step by step, based on the objective function, so as to remove redundant components and improve the generality of the mod...
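The merge step this abstract describes is commonly done with a moment-preserving merge of two weighted Gaussian components; a sketch of that operation alone (the cross-validation-likelihood criterion for choosing which pair to merge is omitted, and the function name is illustrative):

```python
import numpy as np

def merge_gaussians(w1, mu1, S1, w2, mu2, S2):
    """Moment-preserving merge of two weighted Gaussian components:
    the result matches the total weight, mean, and covariance of the pair."""
    w = w1 + w2
    mu = (w1 * mu1 + w2 * mu2) / w
    d1, d2 = mu1 - mu, mu2 - mu
    # Covariance = weighted within-component covariances + between-mean spread
    S = (w1 * (S1 + np.outer(d1, d1)) + w2 * (S2 + np.outer(d2, d2))) / w
    return w, mu, S

w, mu, S = merge_gaussians(0.5, np.array([0.0, 0.0]), np.eye(2),
                           0.5, np.array([2.0, 0.0]), np.eye(2))
```

Two unit Gaussians centered at (0,0) and (2,0) merge into one at (1,0) whose covariance is inflated along the axis separating the means.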

2017
Matt Barnes Artur Dubrawski

In this paper, we study the non-IID learning setting where samples exhibit dependency within latent clusters. Our goal is to estimate a learner’s loss on new clusters, an extension of the out-of-bag error. Previously developed cross-validation estimators are well suited to the case where the clustering of observed data is known a priori. However, as is often the case in real world problems, we ...
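When the clustering of observed data is known a priori, the cluster-aware splitter this abstract contrasts against is straightforward; a minimal leave-one-cluster-out sketch (names illustrative):

```python
import numpy as np

def leave_one_cluster_out(clusters):
    """Yield (train_idx, test_idx) pairs, holding out one cluster at a time,
    so the test fold always comes from a cluster unseen during training."""
    clusters = np.asarray(clusters)
    for c in np.unique(clusters):
        test = np.where(clusters == c)[0]
        train = np.where(clusters != c)[0]
        yield train, test

clusters = [0, 0, 1, 1, 1, 2]
splits = list(leave_one_cluster_out(clusters))
```

This keeps within-cluster dependency out of the train/test boundary, which is what makes the resulting CV error an estimate of loss on new clusters.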

Journal: :Applied spectroscopy 2006
S T McCain M E Gehm Y Wang N P Pitsianis D J Brady

Coded aperture spectroscopy allows for sources of large étendue to be efficiently coupled into dispersive spectrometers by replacing the traditional input slit with a patterned mask. We describe a coded aperture spectrometer optimized for Raman spectroscopy of diffuse sources (e.g., tissue). We provide design details of the Raman system, along with quantitative estimation results for ethanol a...

2009
Patrick S. CARMACK William R. SCHUCANY Jeffrey S. SPENCE Richard F. GUNST Qihua LIN Robert W. HALEY

Cross-validation has long been used for choosing tuning parameters and other model selection tasks. It generally performs well provided the data are independent, or nearly so. Improvements have been suggested which address ordinary cross-validation’s (OCV) shortcomings in correlated data. While these techniques have merit, they can still lead to poor model selection in correlated data or are ...

2015
Paul Wohlhart Vincent Lepetit Teresa Klatzer Thomas Pock

In this paper, we address the problem of determining optimal hyper-parameters for support vector machines (SVMs). The standard way of solving the model selection problem is grid search: an exhaustive search that evaluates the cross-validation error over a pre-defined discretized set of possible parameter values until the best is found. We developed a bi-level opt...
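The grid-search baseline this abstract describes can be sketched generically; here a ridge penalty stands in for the SVM hyper-parameters, since an SVM solver is beyond a short sketch (all names illustrative):

```python
import numpy as np

def cv_mse(X, y, lam, k=5, seed=0):
    """k-fold cross-validation MSE of closed-form ridge regression at penalty lam."""
    n = len(y)
    folds = np.array_split(np.random.default_rng(seed).permutation(n), k)
    errs = []
    for i in range(k):
        te = folds[i]
        tr = np.concatenate([folds[j] for j in range(k) if j != i])
        w = np.linalg.solve(X[tr].T @ X[tr] + lam * np.eye(X.shape[1]),
                            X[tr].T @ y[tr])
        errs.append(np.mean((y[te] - X[te] @ w) ** 2))
    return float(np.mean(errs))

rng = np.random.default_rng(2)
X = rng.normal(size=(120, 4))
y = X @ np.array([1.0, -1.0, 0.5, 0.0]) + 0.1 * rng.normal(size=120)

# Exhaustive evaluation of the CV error over a discretized grid
grid = [1e-3, 1e-2, 1e-1, 1.0, 10.0, 100.0]
scores = {lam: cv_mse(X, y, lam) for lam in grid}
best_lam = min(scores, key=scores.get)
```

The cost scales as (grid size) x (number of folds) refits, which is the inefficiency the bi-level formulation in the paper aims to avoid.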

2015
Hongyi Ge Yuying Jiang Feiyu Lian Yuan Zhang Shanhong Xia

Terahertz (THz) spectroscopy and multivariate data analysis were explored to discriminate eight wheat varieties. The absorption spectra were measured using THz time-domain spectroscopy from 0.2 to 2.0 THz. Using partial least squares (PLS), a regression model for discriminating wheat varieties was developed. The coefficient of correlation in cross validation (R) and root-mean-square error of cr...
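The two figures of merit named here, the cross-validation correlation coefficient (R) and the root-mean-square error of cross-validation (RMSECV), are computed from the held-out predictions; a minimal sketch with made-up numbers:

```python
import numpy as np

def cv_metrics(y, y_cv):
    """Correlation coefficient (R) and RMSECV from cross-validated predictions."""
    r = float(np.corrcoef(y, y_cv)[0, 1])
    rmsecv = float(np.sqrt(np.mean((y - y_cv) ** 2)))
    return r, rmsecv

# Illustrative reference values and their cross-validated predictions
y = np.array([1.0, 2.0, 3.0, 4.0])
y_cv = np.array([1.1, 1.9, 3.2, 3.8])
r, rmsecv = cv_metrics(y, y_cv)
```

A model like the PLS regression in the abstract is judged good when R approaches 1 and RMSECV approaches the measurement noise floor.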

2009
Heng Lian

Recent literature provides many computational and modeling approaches for covariance matrix estimation in penalized Gaussian graphical models, but relatively little study has been carried out on the choice of the tuning parameter. This paper tries to fill this gap by focusing on the problem of shrinkage parameter selection when estimating sparse precision matrices using the penalized likelih...
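Tuning-parameter selection by held-out Gaussian likelihood, as studied in this setting, can be sketched with a simple linear shrinkage estimator standing in for the ℓ1 penalty of the abstract (the sparse penalized solver itself is out of scope here; names illustrative):

```python
import numpy as np

def gauss_loglik(S_test, Theta):
    """Held-out Gaussian log-likelihood (up to constants) of a precision matrix."""
    sign, logdet = np.linalg.slogdet(Theta)
    return logdet - np.trace(S_test @ Theta)

rng = np.random.default_rng(4)
d = 5
A = rng.normal(size=(d, d))
Sigma = A @ A.T / d + np.eye(d)
X = rng.multivariate_normal(np.zeros(d), Sigma, size=300)

half = 150
S_tr = np.cov(X[:half].T)   # training-half sample covariance
S_te = np.cov(X[half:].T)   # held-out-half sample covariance

# Pick the shrinkage level (toward identity) maximizing held-out likelihood
grid = [0.0, 0.05, 0.1, 0.2, 0.4, 0.8]
best = max(grid,
           key=lambda r: gauss_loglik(S_te, np.linalg.inv((1 - r) * S_tr
                                                          + r * np.eye(d))))
```

With 150 training samples in 5 dimensions, the sample covariance is already well conditioned, so the selected shrinkage should be small; in high dimensions the same criterion favors heavier regularization.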

2017
Tomoyuki Obuchi Shiro Ikeda Kazunori Akiyama Yoshiyuki Kabashima

We develop an approximation formula for the cross-validation error (CVE) of a sparse linear regression penalized by ℓ1-norm and total variation terms, which is based on a perturbative expansion utilizing the largeness of both the data dimensionality and the model. The developed formula allows us to reduce the necessary computational cost of the CVE evaluation significantly. The practicality of ...

2003
Gavin C. Cawley Nicola L. C. Talbot

Mika et al. [1] introduce a non-linear formulation of the Fisher discriminant based on the well-known “kernel trick”, later shown to be equivalent to the Least-Squares Support Vector Machine [2, 3]. In this paper, we show that the cross-validation error can be computed very efficiently for this class of kernel machine, specifically that leave-one-out cross-validation can be performed with a comput...

[Chart: number of search results per publication year]