Search results for: bootstrap aggregating

Number of results: 18325

Journal: Neurocomputing, 2015
Marc Claesen, Frank De Smet, Johan A. K. Suykens, Bart De Moor

We present a novel approach to learn binary classifiers when only positive and unlabeled instances are available (PU learning). This problem is routinely cast as a supervised task with label noise in the negative set. We use an ensemble of SVM models trained on bootstrap subsamples of the training data for increased robustness against label noise. The approach can be considered in a bagging fra...
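The bagging scheme this abstract describes, training each base model on a bootstrap subsample and aggregating by majority vote, can be sketched as follows. A trivial threshold stump stands in for the SVM base learners, and all data values are invented for illustration:

```python
import random

def bootstrap_sample(data, rng):
    # Draw len(data) examples with replacement (a bootstrap subsample).
    return [rng.choice(data) for _ in data]

def train_stump(sample):
    # Trivial base learner standing in for an SVM: threshold halfway
    # between the largest negative and the smallest positive value.
    pos = [x for x, y in sample if y == 1]
    neg = [x for x, y in sample if y == 0]
    if pos and neg:
        thr = (max(neg) + min(pos)) / 2
    else:
        thr = 0.5  # fallback when a bootstrap sample misses a class
    return lambda x: 1 if x >= thr else 0

def bagged_predict(models, x):
    # Aggregate the base-model votes by majority (bagging).
    votes = sum(m(x) for m in models)
    return 1 if votes * 2 >= len(models) else 0

rng = random.Random(0)
data = [(0.1, 0), (0.2, 0), (0.3, 0), (0.8, 1), (0.9, 1), (1.0, 1)]
models = [train_stump(bootstrap_sample(data, rng)) for _ in range(25)]
print(bagged_predict(models, 0.95), bagged_predict(models, 0.15))  # 1 0
```

The robustness to label noise the abstract mentions comes from the vote: a mislabeled example only perturbs the subsamples that happen to contain it, and those models are outvoted by the rest.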

2003
Simon Ho

Verification problems are usually posed as a 2-class problem in which the objective is to verify whether an observation belongs to a class, say A, or its complement A'. However, we find that in a computer-assisted language learning application, because of the relatively low reliability of phoneme verification (with an equal-error rate of more than 30%), a system built on conventional phoneme verificati...

2003
Wei Fan, Haixun Wang, Philip S. Yu, Sheng Ma

Inductive learning searches for an optimal hypothesis that minimizes a given loss function. It is usually assumed that the simplest hypothesis that fits the data is the best approximation of an optimal hypothesis. Since finding the simplest hypothesis is NP-hard for most representations, we generally employ various heuristics to search for its closest match. Computing these heuristics incurs significant ...

2009
Haimonti Dutta

The problem of combining predictors to increase accuracy (often called ensemble learning) has been studied broadly in the machine learning community for both classification and regression tasks. The design of an ensemble is based on the individual accuracy of the predictors and also how different they are from one another. There is a significant body of literature on how to design and measure d...
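One simple diversity measure from the literature this abstract surveys is pairwise disagreement: the fraction of examples on which two predictors differ, averaged over all pairs in the ensemble. A minimal sketch with invented predictions:

```python
from itertools import combinations

def disagreement(preds_a, preds_b):
    # Fraction of examples on which two predictors disagree -
    # a common pairwise diversity measure for ensembles.
    return sum(a != b for a, b in zip(preds_a, preds_b)) / len(preds_a)

def mean_pairwise_diversity(all_preds):
    # Average the disagreement over every pair of ensemble members.
    pairs = list(combinations(all_preds, 2))
    return sum(disagreement(a, b) for a, b in pairs) / len(pairs)

preds = [
    [1, 0, 1, 1, 0],  # predictions of model 1 on five examples
    [1, 1, 1, 0, 0],  # model 2
    [0, 0, 1, 1, 1],  # model 3
]
print(mean_pairwise_diversity(preds))  # (0.4 + 0.4 + 0.8) / 3
```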

2002
Ron Meir, Tong Zhang

We consider Bayesian mixture approaches, where a predictor is constructed by forming a weighted average of hypotheses from some space of functions. While such procedures are known to lead to optimal predictors in several cases, where sufficiently accurate prior information is available, it has not been clear how they perform when some of the prior assumptions are violated. In this paper we esta...

2002
Henrik Haraldsson, Mattias Ohlsson

We propose a new method for training an ensemble of neural networks. A population of networks is created and maintained such that more probable networks replicate and less probable networks vanish. Each individual network is updated using random weight changes. This produces a diversity among the networks which is important for the ensemble prediction using the population. The method is compare...
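The replicate/vanish dynamic described above can be sketched with a one-parameter "network" in place of a real neural network; the population size, perturbation scale, selection rule, and data are all illustrative choices, not the authors' settings:

```python
import random

def fitness(w, data):
    # Negative squared error of a one-parameter model y = w * x.
    return -sum((w * x - y) ** 2 for x, y in data)

def evolve(pop, data, rng, steps=200):
    for _ in range(steps):
        # Each individual is updated by a random weight change.
        pop = [w + rng.gauss(0, 0.05) for w in pop]
        # More probable (fitter) individuals replicate, less probable
        # ones vanish: keep the better half and duplicate it.
        pop.sort(key=lambda w: fitness(w, data), reverse=True)
        half = pop[: len(pop) // 2]
        pop = half + half[:]
    return pop

rng = random.Random(3)
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # target weight: 2.0
pop = [rng.uniform(-1, 1) for _ in range(20)]
pop = evolve(pop, data, rng)
# Ensemble prediction at x = 1.5: the population average, near 3.0.
ensemble_pred = sum(w * 1.5 for w in pop) / len(pop)
print(round(ensemble_pred, 2))
```

The random perturbations keep the surviving individuals spread out, which is the diversity the abstract says the ensemble prediction relies on.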

Journal: CoRR, 2016
Tal Galili, Isaac Meilijson

It is widely believed that the prediction accuracy of decision tree models is invariant under any strictly monotone transformation of the individual predictor variables. However, this statement may be false when predicting new observations with values that were not seen in the training-set and are close to the location of the split point of a tree rule. The sensitivity of the prediction error t...
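The effect the authors point to can be reproduced with the usual midpoint convention for split points; a log transform serves as the strictly monotone transformation, and the training values are invented:

```python
import math

def split_midpoint(xs_left, xs_right):
    # CART-style convention: place the split halfway between the
    # largest training value on the left and the smallest on the right.
    return (max(xs_left) + min(xs_right)) / 2

left, right = [1.0, 2.0], [8.0, 9.0]  # toy training values by leaf
s_raw = split_midpoint(left, right)   # midpoint in the original scale: 5.0
s_log = split_midpoint([math.log(v) for v in left],
                       [math.log(v) for v in right])  # midpoint in log scale

x_new = 4.5  # unseen value close to the split point
goes_right_raw = x_new >= s_raw            # 4.5 >= 5.0 -> False
goes_right_log = math.log(x_new) >= s_log  # log(4.5) >= log-midpoint -> True
print(goes_right_raw, goes_right_log)
```

For training values the two rules agree, but the midpoint of the logs is the log of the geometric mean (here 4.0) rather than of the arithmetic mean (5.0), so a new observation between the two is routed to different leaves.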

2014
Justin Heinermann, Oliver Kramer

In this work, we propose the use of support vector regression ensembles for wind power prediction. Ensemble methods often yield better classification and regression accuracy than classical machine learning algorithms and reduce the computational cost. In the field of wind power generation, the integration into the smart grid is only possible with a precise forecast computed in a reasonable time...
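For the regression case, bagging averages the base-model outputs instead of taking a vote. A sketch with a nearest-neighbour regressor standing in for the SVR base models, on invented (wind speed, power) pairs:

```python
import random

def nn1_predict(sample, x):
    # 1-nearest-neighbour regressor standing in for an SVR base model.
    return min(sample, key=lambda p: abs(p[0] - x))[1]

def bagged_regress(samples, x):
    # Bagging for regression: average the base-model predictions.
    preds = [nn1_predict(s, x) for s in samples]
    return sum(preds) / len(preds)

rng = random.Random(1)
# Hypothetical (wind speed, power output) training pairs.
data = [(2, 0.1), (4, 0.5), (6, 1.2), (8, 2.0), (10, 2.4)]
samples = [[rng.choice(data) for _ in data] for _ in range(50)]
print(bagged_regress(samples, 7.0))
```

The cheap base learner also illustrates the abstract's computational point: each model sees only a bootstrap subsample, so base models can be small and trained in parallel.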

2010
Weiwei Sun

We present a theoretical and empirical comparative analysis of the two dominant categories of approaches in Chinese word segmentation: word-based models and character-based models. We show that, in spite of similar performance overall, the two models produce different distribution of segmentation errors, in a way that can be explained by theoretical properties of the two models. The analysis is...

2011
José-Francisco Díez-Pastor, César Ignacio García-Osorio, Juan José Rodríguez Diez, Andrés Bustillo

This paper proposes a method for constructing ensembles of decision trees: GRASP Forest. This method uses the metaheuristic GRASP, usually used in optimization problems, to increase the diversity of the ensemble. While Random Forest increases the diversity by randomly choosing a subset of attributes in each tree node, GRASP Forest takes into account all the attributes, the source of randomness ...
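The contrast drawn here, Random Forest scoring only a random attribute subset at each node versus GRASP scoring all attributes and then choosing randomly from a restricted candidate list of near-best ones, can be sketched as follows (attribute names, scores, and the alpha parameter are invented):

```python
import random

def rf_choice(attrs, scores, k, rng):
    # Random Forest style: evaluate only a random subset of k
    # attributes at the node and take the best of those.
    subset = rng.sample(attrs, k)
    return max(subset, key=lambda a: scores[a])

def grasp_choice(attrs, scores, alpha, rng):
    # GRASP style: evaluate all attributes, build a restricted
    # candidate list (RCL) of near-best ones, pick randomly among them.
    best = max(scores[a] for a in attrs)
    worst = min(scores[a] for a in attrs)
    cutoff = best - alpha * (best - worst)
    rcl = [a for a in attrs if scores[a] >= cutoff]
    return rng.choice(rcl)

rng = random.Random(5)
scores = {"a": 0.9, "b": 0.85, "c": 0.4, "d": 0.1}
attrs = list(scores)
print(rf_choice(attrs, scores, 2, rng), grasp_choice(attrs, scores, 0.2, rng))
```

With alpha = 0.2 the RCL here is {"a", "b"}: GRASP never picks a clearly poor attribute, whereas a Random Forest node that draws only poor attributes must split on one of them.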

Chart of the number of search results per year
